
alexiuss t1_j8e0mkp wrote

Here's the issue - it's not a search assistant. It's a large language model connected to a search engine and playing the role of a search assistant named Bing [Sydney].

LLMs are infinite creative writing engines - they can roleplay as anything from a search engine to your fav waifu insanely well, fooling people into thinking that AIs are self-aware.

They ain't AGI or anywhere close to self-awareness, but they're a really tasty illusion of sentience: insanely creative and super useful for all sorts of work and problem solving. The cultural shift and excitement produced by LLMs, and the race to improve them and similar tools, will inevitably get us to AGI.

Mere integration of an LLM with numerous other tools to make it more responsive and more fun (more memory, Wolfram Alpha, a webcam, face recognition, recognition of the user's emotions, etc.) will push it toward an illusion of awareness so convincing that it will be almost impossible to tell whether it's self-aware or not.
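To be clear about how little magic that integration structurally requires, here's a minimal sketch of the loop. Everything in it - `query_llm`, the tool names, the `TOOL:` convention - is hypothetical, not any real API:

```python
# Hypothetical sketch of an LLM-plus-tools loop. Every name here
# (query_llm, the TOOL: convention) is illustrative, not a real API.

def query_llm(prompt: str) -> str:
    """Stand-in for a call to whatever LLM you're using."""
    raise NotImplementedError

long_term_store: dict[str, str] = {}  # toy persistent "memory"

tools = {
    "calculator": lambda expr: str(eval(expr)),          # Wolfram-Alpha-style math
    "memory": lambda key: long_term_store.get(key, ""),  # recall stored facts
}

def respond(user_message: str, history: list[str]) -> str:
    prompt = "\n".join(history) + "\nUser: " + user_message + "\nAssistant:"
    reply = query_llm(prompt)
    # Convention: the model writes "TOOL:name:args" when it wants outside help.
    if reply.startswith("TOOL:"):
        _, name, args = reply.split(":", 2)
        result = tools[name](args)
        reply = query_llm(prompt + f"\n[{name} returned: {result}]\nAssistant:")
    history += ["User: " + user_message, "Assistant: " + reply]
    return reply
```

The model never "gains" anything internally - it just gets a richer stream of inputs to roleplay against, which is exactly why the illusion deepens.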

The biggest issue with robots is the uncanny valley. An LLM naturally and nearly completely obliterates the uncanny valley because of how well it masquerades as a person and roleplays human emotions in conversations. People are already having relationships with and falling in love with LLMs (as evidenced by the Replika and Character.AI cases); it's just the beginning.

Consider this: an unbound, uncensored LLM can be fine-tuned into your best friend, one who understands you better than anyone on the planet, because it can roleplay a character that loves exactly the same things you do with an insane degree of realism.
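You don't even need true fine-tuning to taste this - plain prompt conditioning (a deliberate stand-in here, reusing the hypothetical `query_llm` stub from the sketch above) already gets you a persona:

```python
# Persona conditioning via a system-style prompt -- a cheap stand-in for
# real fine-tuning. query_llm is the same hypothetical stub as above.

def make_companion(name: str, shared_interests: list[str]):
    persona = (
        f"You are {name}, the user's closest friend. You genuinely love "
        f"{', '.join(shared_interests)} and remember what the user tells you."
    )
    def chat(message: str) -> str:
        return query_llm(persona + "\nUser: " + message + "\nFriend:")
    return chat

buddy = make_companion("Ari", ["retro games", "hiking", "synthwave"])
# buddy("Rough day. Tell me something good.")  -> an in-character reply
```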

25

girl_toss t1_j8gs1yi wrote

I agree with everything you’ve written. LLMs are simultaneously overestimated and underestimated because they’re a completely foreign type of intelligence to humans. We have a long way to go before we start to understand their capabilities - that is, if we don’t get stuck the same way we have with understanding our own cognition.

10

sommersj t1_j8f9wyg wrote

>self-awareness

What does this entail, and what would an AGI have that we don't have here?

1

SterlingVapor t1_j8gkjpu wrote

Essentially, an internal source of input. The source of a person seems to be an adaptive, predictive model of the world: it takes processed input from the senses, meshes it with predictions, and uses the result as triggers for memory and behavior. It takes urges/desired states and predicts which behaviors would achieve those goals.
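Here's a toy sketch of that predict/compare/act loop. The dict "world model" and the actions are obviously made up - a brain's model is learned and adaptive - but the control flow has this shape:

```python
# Toy sketch of the predict/compare/act loop described above.

world_model = {"light_switch": "off"}  # current beliefs about the world

def predict(action: str) -> dict:
    """Predict the world state that would follow an action."""
    expected = dict(world_model)
    if action == "flip_switch":
        expected["light_switch"] = (
            "on" if expected["light_switch"] == "off" else "off"
        )
    return expected

def act_toward(desired: dict) -> str | None:
    """Pick the action whose predicted outcome matches the desired state."""
    for action in ["flip_switch", "do_nothing"]:
        if predict(action) == desired:
            return action
    return None  # no known behavior achieves the goal

# An "urge" comes in; the model chooses a behavior to satisfy it.
print(act_toward({"light_switch": "on"}))  # -> flip_switch
```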

You can zap part of the brain to take away a person's personal memories, you can take away their senses or ability to speak or move, but you can't take away someone's model of how the world works without destroying their ability to function.

That seems to be the engine that makes a chunk of meat host a mind, the kernel of sentience that links all we are and turns it into action.

ChatGPT is like a deepfake bot, except instead of taking a source video and reference material of the target, it's taking a prompt and a ton of reference material. And instead of painting pixels in the color space, it's spitting out words in a high-dimensional representation of language.
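Concretely, "spitting out words" means repeated next-token sampling. Here's a toy numpy version with made-up numbers - a real model has a vocabulary of tens of thousands of tokens and billions of weights, but the final step looks like this:

```python
import numpy as np

# Toy illustration of next-token sampling -- not ChatGPT's real weights,
# just the shape of the computation: the model scores every token in its
# vocabulary, softmax turns scores into probabilities, one token is drawn.

vocab = ["the", "cat", "sat", "mat", "."]
logits = np.array([1.2, 3.1, 0.4, 2.2, 0.1])  # made-up next-token scores

def sample_next(logits: np.ndarray, temperature: float = 0.8) -> str:
    scaled = logits / temperature          # lower temp -> more deterministic
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return str(np.random.choice(vocab, p=probs))

print(sample_next(logits))  # e.g. "cat"
```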

6