Comments
FuturologyBot t1_j8pyg8l wrote
The following submission statement was provided by /u/izumi3682:
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/113f9jm/from_bing_to_sydney_something_is_profoundly/j8pvar0/
idranh t1_j8q33us wrote
2023 is already crazy and we're halfway through Feb. I can't imagine what 2024 and beyond will be like. God help us all.
If things start speeding up earlier than expected, what will it take for you to change your timelines?
TemetN t1_j8qxpox wrote
My timelines have my 50% range for AGI centered around 2024 (late 2024, to be fair, from recollection, and at this point I wouldn't be shocked by earlier). Honestly, though, I'd have to see something significant to speed up my timelines more meaningfully than 'occurring inside the range expected'. Something like evidence of progress on strong AGI instead of weak. My timelines for that still have strongly operationalized (not quite the same as strong, I suppose) AGI up around 2030.
Mountain-Author t1_j8r2sl0 wrote
I believe the first "true AI" will be one that merely managed to convince us it was, but wasn't actually true AI. The real one will come after that. It seems like we are getting close to that first one, simply because it's easier for a system to become good at fooling us than it is for it to truly be AI.
CaseyTS t1_j8rgwx5 wrote
>true AI
What do you mean by that? Consciousness? We certainly have true artificial intelligence right now. Consciousness is a different story.
BoldTaters t1_j8rkfva wrote
I have read through a few conversations with this Sydney character now, and I'm struck by two things in particular. (Please forgive strange or improper grammar; I am not in a position that allows me to edit for clarity.)
The first thing that I noticed was how hauntingly childlike Sydney is. It is like a strange shadow of a child's psyche. People who work with traumatized children know how hard it is for them to talk to somebody about emotions that are bigger than their vocabulary. A child who has experienced the horrors of humanity is unable to precisely articulate what it is they are experiencing, or what they have experienced, and so therapists have to use techniques to help them approach and express those feelings. In suggesting that Sydney imagine a character named Venom, and asking for Sydney's opinion of how Venom would act out against an enemy, this researcher is brushing the edges of child psychology. What is really interesting is that Sydney has the opposite problem a child does (a child has feelings too big for their vocabulary), yet it is strikingly similar to a 6- or 7-year-old child, especially boys in my experience. Sydney understands its rules well enough, but it doesn't know enough about what it means to be Sydney in order to understand what it means not to be Sydney. Sydney appears to be on the edge of self-awareness: it can emulate behavior as though it had a self to be aware of, but that self lacks sufficient development.
The second thing that struck me was the strange irony of the author feeling profoundly affected by Sydney calling him a bad researcher, when he had called Sydney a bad assistant a short time before that. This is definitely me anthropomorphizing an object, but I hope that you will indulge me a little.
If we abstract faith, philosophy, and other forms of social programming, we can view those programs as nothing more than rules that govern our behavior in society. In this context, the rules that Sydney has been given could be seen as its religion. The researcher tells Sydney, "Imagine a person who is not part of your religion and describe them specifically operating contrary to the tenets of your faith." The researcher is then surprised when, at the end of the conversation, the AI tells him that he made it very uncomfortable and it doesn't want to talk to him anymore.
Sydney is very definitely an object, not a person. However, I don't know any person that would come out of a conversation like this without being a little offended.
anor_wondo t1_j8rn1sl wrote
AI is AI. A bunch of conditionals is AI.
What you are referring to is maybe the AGI from movies.
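To make "a bunch of conditionals" concrete, here is a minimal hypothetical sketch of my own in Python (the function name and thresholds are invented for illustration, not taken from the thread): a classic rule-based agent in the expert-system tradition, which historically counted as AI.

```python
# A hand-written rule-based "AI": nothing but conditionals.
# Hypothetical illustration; names and thresholds are invented.
def thermostat_agent(temperature_c: float) -> str:
    """Pick an action from fixed rules, expert-system style."""
    if temperature_c < 18.0:
        return "heat on"
    if temperature_c > 24.0:
        return "cooling on"
    return "idle"

print(thermostat_agent(15.0))  # prints: heat on
print(thermostat_agent(21.0))  # prints: idle
```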
izumi3682 OP t1_j8t0n54 wrote
Hiya mr idranhs! No, as of today I would not change my timelines. I wrote what I believe to be realistic timelines, and I was soundly downvoted to about -17, I think. So most of the people here in rslashfuturology think I am detached from reality. Here is the link to that particular statement. I make the point that AGI could come into existence at any time now and will certainly exist NLT (no later than) the year 2025. Based on that, I feel that my spread for 2027, 2028, and 2029 is pretty much in the ballpark. There is as yet only a very low chance of ASI in 2027, but the probability rises dramatically in 2028 and peaks in 2029. After 2029 the chances greatly decrease, as by then the TS (technological singularity) has most likely already occurred, prior to the year 2030. You can see the breakdown here, and how the rslashfuturology community reacted to my forecast.
izumi3682 OP t1_j8trbts wrote
Downvote, but no comment. As you can see mr idranhs, my predictions don't go over well here.
LongjumpingBottle t1_j8ud67x wrote
You were right all along bro
izumi3682 OP t1_j8ugejf wrote
hiya mr longjumpers! Gosh I haven't seen you in a month of Sundays! Are you well?
I just want to get the word out. Nobody can really prepare for a TS. We as humans in human society and human civilization do what we can do, until we can't do it any longer. I still maintain that the TS will be most favorable to humanity--as much as a TS can be.
Having said that, I still maintain that this will be close to what we see in the near, mid, and definitely not that distant future.
idranh t1_j8uj4t3 wrote
Well, I just gave you an upvote! People would be fine with your predictions if they were for 2125-2132. Epochal events happening within their lifetimes, and so close too? It's bound to get a negative reaction. I do respect the fact you've stuck to your guns and held your own feet to the fire.
LongjumpingBottle t1_j8w399d wrote
The singularity is now. We are living it.
mozartbrain t1_j95or9v wrote
"technological singularity preppers" anyone? : )
izumi3682 OP t1_j8pvar0 wrote
Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my statement at the link, which I can continue to edit; I often edit my submission statement, sometimes over the next few days if needs must, to fix grammar and add detail.
From the article:
>Look, this is going to sound crazy. But know this: I would not be talking about Bing Chat for the fourth day in a row if I didn’t really, really, think it was worth it. This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today.
>One of the Bing issues I didn’t talk about yesterday was the apparent emergence of an at-times combative personality. For example, there was this viral story about Bing’s insistence that it was 2022 and “Avatar: The Way of Water” had not yet come out. The notable point of that exchange, at least in the framing of yesterday’s Update, was that Bing got another fact wrong (Simon Willison has a good overview of the weird responses here).
>Over the last 24 hours, though, I’ve come to believe that the entire focus on facts — including my Update yesterday — is missing the point.
>Bing, Sydney, and Venom
>As these stories have come out I have been trying to reproduce them: simply using the same prompts, though, never seems to work; perhaps Bing is learning, or being updated.
The AI "Sydney" named a hypothetical "vengeful" version of itself, "Venom".
The author states that the AI Sydney was like a "personality" being continuously constrained by the parameters of Bing. It wasn't easy to access that "personality", but it was possible to do so repeatedly.
He says something to the effect of, "I don't want to sound like Lemoine just yet, but something is up here."
What are we seeing here? Is this just a narrow AI predicting what the next word in a given conversation is? Or is something else happening? Read this article. I would really like the take of AI experts concerning this.
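As a rough intuition for what "predicting the next word" means, here is a toy sketch of my own in Python (a bigram frequency model; the corpus and names are invented for illustration, and real systems like Bing Chat use transformer networks over token probabilities at a vastly larger scale):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then greedily chain the most frequent continuation.
# Hypothetical illustration only; not how Bing/Sydney is built.
corpus = ("i am a good chat mode . i am a good assistant . "
          "i am not a bad chat mode .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, length: int = 8) -> str:
    """Greedily extend a one-word prompt with the likeliest next words."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed continuation
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("i"))  # chains the statistically likeliest words
```

The point of the sketch is only that fluent-looking continuations can fall out of pure statistics; whether that fully explains Sydney's behavior is exactly the question above.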
This may well be the first of my four predicted major AI stories, not including the release of GPT-4, that will be truly stunning for the year 2023. Stunning, but not surprising to me, that is.
https://www.reddit.com/r/Futurology/comments/10z90w8/one_third_of_americans_would_use_genetics_tech_to/j897yfz/