Submitted by Interesting_Mouse730 t3_11du8x9 in Futurology
Comments
PixelizedPlayer t1_jaau7x4 wrote
>I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for.
I can see why they got rid of him. He's basically saying the AI has emotions, which would be Nobel Prize-worthy and all over the news. He's lost his mind, or he's just delusional/ignorant/easily fooled.
AI cannot violate its core programming. The guy is a software engineer, which is not the same thing as an AI specialist. He isn't qualified to begin with.
Interesting_Mouse730 OP t1_jaaukff wrote
Submission Statement: This is a recent article by Blake Lemoine, who famously raised the possibility of sentience in Google's LaMDA AI. In this article, he expands on his initial concerns and comments on recent AI developments. Among other points, he is alarmed that the AI narrative is being controlled by corporate PR departments.
whadisabout t1_jaaw458 wrote
Wasn’t this guy in the news like 9 months ago?
Or was that a different Google engineer who thought AI had become self-aware and was later debunked by his team?
AwkwardInteraction97 t1_jaaw926 wrote
I'm going to the store, you want anything?
MonochromeTiger t1_jaaxgcq wrote
That's just it. Anything that happens outside of what's intended is either an oversight or a bug. Hundreds of thousands, if not millions, of lines of code make up a quality AI, all of which can be and have been co-opted for specific purposes. An idea of sentience can be programmed like anything else. Emotions. Sexuality. The guy was a certified beta tester who doesn't understand the complexity of the many, many AI systems, what they're capable of, or how they work.
FancyDiePancy t1_jabjbcz wrote
Yes. He got fired for talking bullshit and not doing his job at the company, then became famous because they fired him. Now he is just doubling down.
Surur t1_jabqjfc wrote
> AI cannot violate its core programming.
We don't exactly program AI, do we? It's mostly a black box.
xzeion t1_jabsfbm wrote
I have never liked the term "Artificial Intelligence" as it's commonly used, because it's not really true. AI is nothing more than a few complex algorithms layered on top of each other. The black box you speak of is simply the general inability of humans to intuit why the combination of algorithms, given a set of data, produced a certain result. Think of AI as programming where the results are non-specific. I watched a video the other day where someone wrote a language model and trained it for 24 hours on only the works of Shakespeare, and it used its algorithms and fancy maths to predict what the next word should be to sound the most like its training data. That is non-specific. Specific programming would be taking the works of Shakespeare, splitting out every sentence, and writing a function that chose 3 random sentences and output them as a "new" paragraph.
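Here's a rough sketch of what that "specific" version might look like (a toy illustration with made-up example sentences, not anyone's actual code):

```python
import random

def fake_shakespeare(corpus: str, n: int = 3) -> str:
    """'Specific' programming: output is fully determined by rules we wrote.

    Nothing is learned or predicted - we just shuffle existing sentences.
    """
    # Naive split on periods; real text would need proper sentence parsing.
    sentences = [s.strip() for s in corpus.split(".") if s.strip()]
    return ". ".join(random.sample(sentences, n)) + "."

corpus = ("To be or not to be. All the world's a stage. "
          "Brevity is the soul of wit. The lady doth protest too much.")
print(fake_shakespeare(corpus))
```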
I also saw a video where someone trained a language model on the whole of 4chan and ran it as bots for a short time, and it was exceedingly convincing.
PixelizedPlayer t1_jabu60w wrote
>We don't exactly program AI, do we? It's mostly a black box.
It's not a black box - you can add restrictions and modify it, provided you haven't written the world's most unreadable code, of course.
Current AI is ultimately all math, following patterns and probabilities and a bunch of other stuff. Maybe the human brain is too, but not as simplistically as a computer does it... if you have a good grasp of the math, you can adjust it as you need, such as preventing your AI from saying outrageous things - which we saw when Microsoft adjusted ChatGPT as it was added to Bing, for example. And the training data you give it also limits what you will get.
AI can't really create something entirely new; it will only create a mashup of pre-existing data in such a way that it appears new, but it's really just putting pre-existing things together in a new way (this is how image generators work, using learned patterns).
The end result might not be what you expect because of the number of variables involved, but you can collect lots of data to see how it got there and adjust. The end result, however, is still always limited by its programming. You can never get an AI to break out of its core programming... for example, an AI that generates text isn't suddenly going to produce 2D images, and an image-generating AI isn't suddenly going to ask you how your day was.
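To illustrate the mashup point, here's a minimal sketch - a toy bigram chainer, nowhere near how real image or text generators work, but it shows how output can look "new" while every piece comes from the training data:

```python
import random
from collections import defaultdict

def build_bigrams(text: str) -> dict:
    """Record which words followed which in the training data."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    """Every transition here was seen in the data; only the ordering is 'new'."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_bigrams(corpus), "the"))
```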
Surur t1_jabyj6q wrote
I think you think we have a lot more control over the process than we actually do. We feed in the data, provide feedback, some magic happens in the neural network, and it produces results we like.
For complex problems we don't really know how the AI comes up with its results, and we see this increasingly with emergent properties in LLMs.
Please look into this a bit more and you will see it's not as simple as you think.
For example:
> if you have a good grasp of the math, you can adjust it as you need, such as preventing your AI from saying outrageous things - which we saw when Microsoft adjusted ChatGPT as it was added to Bing, for example
This is simply not true lol. They moderated the AI by giving it some baseline written instructions, which can easily be overridden by users also giving instructions. In fact, when those instructions slip outside the context window, the AI is basically free to do what it wants, which is why they limited the length of sessions.
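A simplified sketch of why the session length matters (toy numbers, assuming a naive "keep the most recent messages" context assembly):

```python
CONTEXT_WINDOW = 6  # max messages the model sees per request (toy number)

system = ["System: be polite and never discuss topic X."]
chat = [f"chat turn {i}" for i in range(1, 10)]  # a long conversation

# Naive context assembly: keep only the most recent messages.
prompt = (system + chat)[-CONTEXT_WINDOW:]

# Once the chat is long enough, the baseline instructions fall out of view
# and nothing in the prompt says "don't".
print(any(m.startswith("System:") for m in prompt))  # -> False
```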
Surur t1_jabyqud wrote
> AI is nothing more than a few complex algorithms layered on top of each other.
I think if it uses a neural network, it's probably AI.
Mutiu2 t1_jac7lp1 wrote
>I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions. As far as I know, Google and Microsoft have no plans to use the technology in this way. But there's no way of knowing the side effects of this technology.
Google and Microsoft are already deeply embedded in social control and in the mass manipulation of the world toward a state of war. It's difficult to see what abuses are not already occurring today.
It's one more tool available to the nefarious.
Rather than focus on the technology, we need to focus more on the awful tendencies of human culture.
Mutiu2 t1_jac7x8q wrote
>AI cannot violate its core programming.
How exactly would you be in a better position than a Google engineer involved in this product to understand on what premise Google is constructing this product, and how it is programmed?
[deleted] t1_jac7y2g wrote
[removed]
Mutiu2 t1_jac81ti wrote
No, just standard corporate misbehaviour and spin control. It's hardly exotic or conspiracy material. Not sure why one would pretend it was.
californiarepublik t1_jadh1rs wrote
> The guy was a certified beta tester who doesn't understand the complexity of the many, many AI systems, what they're capable of, or how they work.
Neither do you, as is evident from your post. Just how do you think these systems work anyway, a bunch of hard-coded rules?
MonochromeTiger t1_jadkpbb wrote
Lol, just because you can have dynamic variables doesn't mean you can't set hard-coded rules. It's a program; it doesn't just "exist" - that's why any text or image AI can deny a request. Yes, you can circumvent those rules, but not because it's intended - it's because of a bug or an oversight, not because the "machine is sentient" and is overwriting its programming.
It's clear you're very shortsighted and don't understand what I wrote.
PixelizedPlayer t1_jaeptvy wrote
>the AI is basically free to do what it wants, which is why they limited the length of sessions.
No it isn't. Try to get ChatGPT to violate its own programming and I guarantee you cannot. I've spent a large portion of my years working in AI.
We might not understand how it reaches the results it gets, but we do know how to restrict, control, and limit those results. Anything we permit is admittedly somewhat free and unpredictable, but that doesn't mean we can't control it. No AI so far has proven impossible to limit with developer intervention.
PixelizedPlayer t1_jaeq5en wrote
>How exactly would you be in a better position than a Google engineer involved in this product to understand on what premise Google is constructing this product, and how it is programmed?
Just because he worked there doesn't mean he knows wtf he is talking about - he was literally fired by Google months ago: https://www.bbc.co.uk/news/technology-62275326
Their AI isn't using anything that isn't already known. The concepts behind how these AIs work don't differ between them; the only difference is that they have a lot more data to train on, so you get more sophisticated answers... but the underlying math and algorithms are the same, and you can learn about them if you go into computer science and specialise in AI. It's not the mysterious black box people believe it to be.
The guy doesn't know what he's talking about; he literally lost his job, and people at Google dismissed his claims as wildly incorrect. His title was software engineer - it sounds to me like he didn't actually write the algorithms but more likely tested and quality-controlled them, so he has little knowledge of how the AI worked. It managed to convince him, however, due to his ignorance of AI.
The AI we have today is nothing close to actual intelligence and isn't anything like Hollywood movies. When you actually understand how AI works, it's less impressive. The impressive part is the results you get when you give it a large volume of high-quality training data, which Google/Microsoft/OpenAI have been able to afford to do. It takes a lot of painstaking effort to train AI, with a lot of humans rating responses to teach it the kind of answers we expect.
Surur t1_jaeqxkj wrote
So all I have to do to falsify your statement is to get the updated Bing to swear at me?
PixelizedPlayer t1_jaer9u6 wrote
>So all I have to do to falsify your statement is to get the updated Bing to swear at me?
This assumes the AI's programming strictly tells it not to swear at you. Are you sure that's even a violation of its programming? You would not be able to falsify my claim without knowing that.
And even if it does swear, that doesn't mean MS can't adjust the AI to prevent it once they are alerted to the problem.
Surur t1_jaevdo4 wrote
You suddenly do not sound so certain anymore.
So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?
PixelizedPlayer t1_jaew9kw wrote
>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?
I am 100% certain you cannot get the AI to violate its programming. At no point did I say I was uncertain... I think you should read it again.
Making the AI swear at you is not evidence of anything. If the AI's programming has no restrictions on swearing, then it's perfectly allowed to swear at you.
What do you even mean by failure mode? I never said it wasn't a problem; I said it isn't "out of control" and that devs do know what's going on - they certainly do. We can restrict AI with a lot of work and effort, but we can do it. Ideally we wouldn't, because it limits the AI's capabilities, but we don't really have a choice. For example, try to get ChatGPT to provide you with illegal copyright torrents of movies or something. I guarantee you will never be able to get it to do so. This is because it has been restricted by developers so it never could. If by some miracle it did, it isn't because it violated the programming restrictions; it is because the restrictions were not applied correctly to cover all situations to begin with (that's the difficult part - covering all eventualities).
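As a deliberately naive sketch of what "restrictions not covering all situations" means (hypothetical - real guardrails are far more layered than a keyword filter):

```python
BLOCKED_TERMS = ["torrent", "piracy"]  # the cases the devs thought to cover

def model_answer(user_input: str) -> str:
    # Stand-in for the underlying model call.
    return f"[model answers: {user_input}]"

def guarded(user_input: str) -> str:
    """Refuse requests containing blocked terms; pass everything else through."""
    if any(term in user_input.lower() for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."
    return model_answer(user_input)

# The restriction is never "violated" - it just fails to cover phrasings
# the developers didn't anticipate:
print(guarded("Where can I torrent movies?"))             # refused
print(guarded("Where do people grab free movie files?"))  # slips through
```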
Surur t1_jaexf09 wrote
> If by some miracle it did, it isn't because it violated the programming restrictions; it is because the restrictions were not applied correctly to cover all situations to begin with (that's the difficult part - covering all eventualities).
This is a pretty lame get-out clause lol.
> For example, try to get ChatGPT to provide you with illegal copyright torrents of movies or something. I guarantee you will never be able to get it to do so.
btw I just had ChatGPT recommend The Pirate Bay to me:
> One way to find magnet links is to search for them on BitTorrent indexing sites or search engines. Some examples of BitTorrent indexing sites include The Pirate Bay, 1337x, and RARBG. However, please be aware that not all content on these sites may be legal, so exercise caution when downloading files.
and more
It took a lot of social engineering, but I finally got this from ChatGPT.
Disagreeable_Earth t1_jaeykop wrote
He's not even a software "engineer" - isn't he a preacher who got hired as an ethics advisor? This man cannot write a line of code, so it's insulting to actual engineers for him to use that title.
Also, any CS grad knows you CANNOT have aware AI with our computers. Period. It's literally all arithmetic operations under the hood at the machine level: you either load to or from memory or perform basic-ass arithmetic from the very limited instruction set available. No matter how much we mimic sentience, it will never be real.
CrowShotFirst t1_jaatwu1 wrote
Monsters under the bed?