Submitted by BronzeArcher t3_1150kh0 in MachineLearning
Title.
How should we control exposure for people with low cognitive capabilities who might not understand what they are interacting with?
Imagine someone writes one that's explicitly aimed around manipulating your thoughts and actions.
An AI could likely come up with some insane tactics for this. It could feed off of your Twitter page, find an online resume of yours, scrape other social media, or, in Microsoft's or Google's case, potentially scrape the emails you have with them, profile you in an instant, and then come up with a tailor-made advertisement or argument that it knows would land on you.
Scary thought.
Yeah that’s pretty frightening.
As in they wouldn’t interpret it responsibly? What exactly is the concern related to them not understanding?
These feel like the most standard topics. Valuable, nonetheless.
The mess that has been Bing Chat/Sydney, but instead of just verbally threatening users, it's connected to APIs that let it take arbitrary actions on the internet to carry them out.
I really don't want to see what happens if you connect a deranged language model like Sydney with a competent version of Adept AI's action transformer to let it use a web browser.
I feel like the ethical issues pertaining to bias and toxic content can be (and are being) worked on. The collection of the training data and attribution problem seem more intractable and companies are already being sued for that.
[removed]
I'd be very interested in hearing from someone with more insight into the Free Software Foundation and their case against Copilot.
People will use them to make money in unethical and disruptive ways. An example of an unethical way to use them is phishing scams. Instead of sending out the same phishing email to thousands of people, scammers may get some data about people and then use the language model to write personalized phishing emails that have a much higher success rate.
Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.
The other disruptive possibility is that LLMs will be able to rapidly build more powerful LLMs themselves. I use GitHub Copilot every day and it's already very good at writing code. It takes at least 25% off the time it takes me to complete a software implementation task. So it's very possible an LLM could, in the near future, make improvements to its own training script and use it to train an even more powerful LLM. This could lead to a singularity where we have extremely rapid technological development. It's not clear to me what the fate of humankind would be in this case.
Not specifically about that suit, but the Legal Eagle episode about copyright and AI was really interesting. The relevant part starts at 5:03
Thank you for sharing. I'll have a look
Write a bot to handle all HR complaints and train it on the latest managerial materials. Then as a bonus the bot will look at all the conversations and propose metrics for increased efficiency and harmony in the workplace.
>scraping all kinds of copyrighted materials and then profiting off the models while the people doing all the labor are getting either nothing (for content generation)
Yeah, but these people won't be doing that labor anymore. Now that text-to-image models have learned how to draw, they don't need a constant stream of artists feeding them new art.
Artists can now work at a higher level, creating ideas that they render into images using the AI as a tool. They'll be able to create much larger and more complex projects, like a solo indie artist creating an entire anime.
>LLMs... barely have any legitimate use-cases
Well, one big use case: they make image generators possible. Those rely on embeddings from language models, which are a sort of neural representation of the ideas behind the text. It grants the other network the ability to work with plain English.
Right now embeddings are mostly used to guide generation (across many fields, not just images) and for semantic search. But they are useful for communicating with a neural network performing any task, and my guess is that the long-term impact of LLMs will be that computers will understand plain English now.
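To make the semantic search part concrete, here's a minimal sketch using the sentence-transformers library with the all-MiniLM-L6-v2 model (that specific model is just an assumption on my part; any text encoder works the same way): embed the documents, embed the query, rank by cosine similarity.

```python
# Minimal semantic search sketch: embed documents once, then rank them
# against a query by cosine similarity. Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose text encoder

docs = [
    "How to fine-tune a language model on custom data",
    "Best hiking trails near Seattle",
    "Diffusion models turn text embeddings into images",
]
doc_emb = model.encode(docs, convert_to_tensor=True)

query = "using LLM embeddings to guide image generation"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]  # similarity of the query to each doc
best = scores.argmax().item()
print(docs[best], float(scores[best]))        # most semantically similar doc
```

Image generators do essentially the same trick, except the text embedding conditions the generator instead of ranking documents.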
>Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.
I don't wanna work though. I'm all for having robots do it.
Why are the robots going to want to keep you around if you don't do anything useful?
We will control what the robots want, because we designed them.
That's the core of AI alignment: controlling the AI's goals.
Yeah I guess I'm pretty pessimistic about the possibility of aligned AI. Even if we dedicated more resources to it, it's a very hard problem. We don't know which model is going to end up being the first AGI and if that model isn't aligned then we won't get a second chance. We're not good at getting things right on the first try. We have to iterate. Look how many of Elon Musk's rockets blew up before they started working reliably.
Right now I see more of an AI arms race between the big tech companies than an alignment focused research program. Sure Microsoft wants aligned AI but it's important that they build it before Google, so if it's aligned enough to produce PC text most of the time that might be good enough.
The lucky thing is that neural networks aren't evil by default; they're useless and random by default. If you don't give them a goal they just sit there and emit random garbage.
Lack of controllability is a major obstacle to the usability of language models and image generators, so there are lots of people working on it. In the process, they will learn techniques that we can use to control future superintelligent AI.
It depends on whether it's exploiting my psychology to sell me something I don't need, or if it's gathering information to find something that may actually be useful for me. I suspect the latter is a more useful strategy in the long run because people tend to adjust to counter psychological exploits.
If I'm shown an advertisement for something I actually want... that doesn't sound bad? I certainly don't like ads for irrelevant things like penis enlargement.
It seems to me that the default behavior is going to be to make as much money as possible for whoever trained the model with only the most superficial moral constraints. Are you sure that isn't evil?
How would the AI know it’s profiling you and not the other AI you’ve set up to do all of those things for you?
In the modern economy the best way to make a lot of money is to make a product that a lot of people are willing to pay money for. You can make some money scamming people, but nothing close to the money you'd make by creating the next iphone-level invention.
Also, that's not a problem of AI alignment, that's a problem of human alignment. The same problem applies to the current world or the world a thousand years ago.
But in a sense I do agree; the biggest threat from AI is not that it will go Ultron, but that humans will use it to fight our own petty struggles. Future armies will be run by AI, and weapons of war will be even more terrifying than now.
Look at things like replika.ai that give you a "friend" to chat with. Now imagine someone evil using that to run a romance scam.
Sure the success rate is low, but it can search for millions of potential victims at once. The cost of operation is almost zero compared to human-run scams.
On the other hand, it also gives us better tools to protect against it. We can use LLMs to examine messages and spot scams. People who are lonely enough to fall for a romance scam may compensate for their loneliness by chatting with friendly or sexy chatbots.
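As a rough sketch of what that scam check could look like (assuming the OpenAI Python client v1+ and gpt-3.5-turbo; the model name, prompt, and function are just placeholders for whatever you'd actually use):

```python
# Sketch: ask a chat model to flag a suspicious message before the user replies.
# Assumes `pip install openai` (v1+ client) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def looks_like_scam(message: str) -> bool:
    """Return True if the model judges the message to be a likely romance/phishing scam."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any instruction-tuned chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a scam-detection assistant. Answer with exactly "
                    "'SCAM' or 'OK' for the message the user pastes."
                ),
            },
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("SCAM")

if __name__ == "__main__":
    print(looks_like_scam("I love you, but I'm stuck abroad and need $2,000 for a flight home."))
```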
This describes an LLM + reinforcement learning hybrid that has been trained to navigate webpages for arbitrary tasks. I'm not sure how far away this is, or if it already exists. Someone below mentioned an action transformer, which may be related.
If you spend some time looking up how Microsoft's GPT-integrated chat/AI works, it does this. Look up the thread of tweets from the hacker who exposed its internal codename 'Sydney': it scrapes his Twitter profile, realizes he exposed its secrets in prior conversations after socially engineering it, and then turns hostile toward him.
But that can be said on paper for thousands of things. Not sure if it actually translates in real life. Although there might be some push to label such content as AI generated, similar to how "Ad" and "promoted" are labelled in results.
That only the people in power are allowed to use AI while the rest of us are not. Like some kind of AI aristocracy. But this will probably happen when the regulations come.
Breaking the security-by-required-effort assumption of various human interactions, especially among strangers.
It used to take effort to voice opinions on social media and other mass-communication platforms, which let the public trust that these were authentic messages from real people. The scalability of this technology breaks that assumption. This started before LLMs, but they take it to a whole new level.
[deleted]
Is that real? I don’t know why I feel like it could be totally fake
They’re trained on loads of racist and biased garbage
Yeah, I mean people with mental illness (e.g. schizophrenia), people with debilitatingly low intelligence, and similar cases. Who knows how they would interact with seemingly intelligent LMs.
there is no "useful vs unuseful", you either want it or do not want it. the usefulness is something you define which is subset of the things you want. however the model will just suggest you stuff that may or may not be practical to you, but you want it. you may find them pseudo-useful or useful at the moment or....
case is, it will sell
I'll reply back with what I was referring to later, it was a different thing
Honestly, much simpler algorithms (recommendation systems) already do this to some extent; the biggest difference is that they have to suggest a post someone else wrote instead of writing it themselves. Great take :)
[deleted]
Microsoft has confirmed the rules are real:
>We asked Microsoft about Sydney and these rules, and the company was happy to explain their origins and confirmed that the secret rules are genuine.
The rest, who knows. I never got access before they fixed it. But there are many screenshots from different people of it acting quite unhinged.
Thanks for the link!
I mean, I guess there was nothing too surprising about the rules, given how these systems work (essentially trying to predict the continuation of a user's input text). But the rest seems so ridiculously dramatic that I wouldn't be shocked if he specifically prompted it to be that dramatic and hid that part. I'm probably being paranoid, since at least the rules part is true, but it seems like the perfect conversation to elicit every single fear people have about AI.
[removed]
Corporations scraping all kinds of copyrighted materials and then profiting off the models while the people doing all the labor are getting either nothing (for content generation) or poverty wages (for content labellers).
Their current push to promote LLMs as some sort of pinnacle of technology, when they barely have any legitimate use-cases and struggle with the most basic of logic, will probably lead to a recession in the tech industry.