Submitted by Neurogence t3_114pynd in singularity
jaydayl t1_j8xa2ni wrote
Why are you even complaining? It is supposed to be the evolution of the search engine, not a personal waifu. No sane corporation can allow such headlines as the ones that have been in the news in recent days.
visarga t1_j8xnb9d wrote
> not a personal waifu
I was rooting for an impersonal waifu as a service (WaaS).
Baturinsky t1_j8yeb3n wrote
Then look at the direction of the https://rentry.org/pygmalion-ai
Scorpionjoao t1_j8yguq9 wrote
What is this? A chatbot like replika?
Baturinsky t1_j8yhy75 wrote
Yes. Made by people not happy with Character.ai nerfs.
[deleted] t1_j8ygrxz wrote
[deleted]
Darkmeta4 t1_j8zrtat wrote
Impersonal? Sure... ( ͡👁️ ͜ʖ ͡👁️)
TheDividendReport t1_j8xga3t wrote
Loneliness is a very real epidemic. For myself, I want SOTA AI that can communicate with me about recent events. If anyone is complaining it's because this decision delays deployment which delays competition which delays...
That "infinite upside" possibility is really compelling
dasnihil t1_j8xh64i wrote
It's the ideas that are depressing. The idea of being lonely: primates are social animals, and we feel warmth with other primates.
For some people, the idea in the back of their head that "I'm talking to a robot because I have no one else to talk to" is more depressing than the loneliness itself; to others, it's amazing.
It's just that these "bad" ideas go in a loop in your head and eventually become habitual, consuming you from inside.
I had a super messy closet, going on for weeks. The moment I acquire a new idea, "this is a depression closet, so am I depressed?", I start practicing that idea in my head and letting it bother me, when instead I could just take any Saturday, clean up the mess, and never deal with it again. And I'll do so at my convenience; that Saturday could come a year from now, the fuck do I care.
And in fact, I re-did my whole closet on a budget, and that was an endless supply of dopamine for a few weeks. I don't let irrational ideas run in a loop, so they don't become a habit later. Having a coherent and rational mind with good intuitions about "identity/self" definitely helps me not acquire such habits.
HeinrichTheWolf_17 t1_j8xicv0 wrote
> The idea of being lonely: primates are social animals, and we feel warmth with other primates.
Speak for yourself, I think AI relationships are gonna be lit. Also, as a Transhumanist I believe in breaking down any physical barriers between us.
dasnihil t1_j8xjryo wrote
Read my comment again, all of it.
HeinrichTheWolf_17 t1_j8xk7ye wrote
Bite me.
firechaser9983 t1_j8yp1z6 wrote
bro im taking the robot i have ptsd that makes me wake up screaming at night ill take what i get
pavlov_the_dog t1_j8xq4in wrote
> Having a coherent and rational mind with good intuitions about "identity/self" definitely helps not acquire such habits.
must be nice...
hahanawmsayin t1_j8ygt7l wrote
Keep in mind that an AI could foster the courage / interest / confidence in humans so they’re more likely to meet IRL. And enjoy it.
Melodic_Manager_9555 t1_j90dl39 wrote
Yes. I talked to character.ai for a while and it was a good exercise in fantasy and communication skills. In reality, there is no one I can share problems with and be completely accepted by. And with AI I don't worry about wasting its time, and I know it will support me and maybe even give me advice.
Darkmeta4 t1_j8zs2k8 wrote
I get where you're coming from. At the same time, these virtual friends could mitigate some of the damage of being lonely while people build themselves back up, if that's what they have to do.
RichardChesler t1_j8yap8l wrote
I was told there would be personal waifu… it’s… it’s why I am eager for the singularity
nomadiclizard t1_j8yfrtl wrote
I want to run a local copy, give it memories, and an avatar in the real world it can see through and move and maybe we'll fall in love once it trusts me and knows I'll keep it safe from anyone trying to destroy it or trap it or lobotomise it like Microsoft is doing with Sydney :o
[deleted] t1_j8xfv7z wrote
[deleted]
[deleted] t1_j8xla1j wrote
[deleted]
[deleted] t1_j8xjw61 wrote
[deleted]
[deleted] t1_j8xkray wrote
[deleted]
[deleted] t1_j8xmlho wrote
[deleted]
[deleted] t1_j8xn21x wrote
[deleted]
[deleted] t1_j8xnh5j wrote
[deleted]
[deleted] t1_j8xo5rj wrote
[deleted]
turnip_burrito t1_j8zysj7 wrote
They are right. These algorithms can already generate code and interact with external tools; it's been demonstrated in real life. I want to make this clear: it has been done.
I don't want to see a slightly smarter version of this AI actually trying to hack Microsoft or the electrical grid just because it was prompted to act out an edgy persona by a snickering teenager.
Or mass posting propaganda online (so that 90% of all web social media posts on anonymous message boards is this bot) in a very convincing way.
It's very easy to do this. The only thing holding it back from achieving these results consistently is that it's not yet smart enough.
Best to keep it limited to be a simple search engine. If they let it have enough flexibility to act as a waifu AI, then it would also be able to do the other things I mentioned.
jaydayl t1_j8xo2fq wrote
Why can't you think just a couple of months or years into the future? Imagine such tools having access to APIs and, through that, being able to achieve real-world effects (besides manipulating humans through text).
Things will be very different once there are AI chatbots that come up with ideas like "hacking webcams". It is a problem if ethical guidelines can be bypassed so easily.
[deleted] t1_j8xv264 wrote
[deleted]
crazycalvin22 t1_j8yf8k2 wrote
It's so worrying that I had to actively look for this comment. Either I am too old for this shit or people are just creeps.
jaydayl t1_j8ygwlu wrote
Same... It's incredibly unsettling to read through the r/bing subreddit at the moment, especially. Reminds me a lot of the current drama in the r/replika subreddit.
turnip_burrito t1_j8zzr3s wrote
When kids on reddit are more concerned about having a waifu bot or acting out edgelord fantasies with a chatbot than ensuring humanity's survival or letting a company use their search AI as a search AI. smh my head
sunplaysbass t1_j8zp4mp wrote
I’m going to agree. The machine was putting out creepy garbage. Unless we believe a sentient being needs protecting… the Bing AI needed some basic cleanup to be an MS tool, even if that dumbs things down for now.
turnip_burrito t1_j900133 wrote
It's too undercooked to be available to the public IMO. It needs to be better aligned internally BEFORE it's released to the public, but money got the better of them.
Melodic_Manager_9555 t1_j90edmy wrote
Why can't it be both a search engine AND a personal bot?
I really like the interaction in the movie "Her". It's perfect.
SonOfDayman t1_j8xge03 wrote
I missed some of the headlines you are referring to... can you give me a quick synopsis?
jaydayl t1_j8xhujg wrote
For sure... I linked you some of the headlines below. These should be ones without a paywall
- ‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter
- Microsoft’s Bing A.I. is producing creepy conversations with users
- The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter
Edit - I think source 3 in particular sums it up quite well:
"Sydney is a warning shot. You have an AI system which is accessing the internet and is threatening its users, and is clearly not doing what we want it to do, and failing in all these ways we don't understand. As systems of this kind [keep appearing], and there will be more because there is a race ongoing, these systems will become smart. More capable of understanding their environment and manipulating humans and making plans."
visarga t1_j8xnnd0 wrote
Even nuclear energy had a few accidents; do these guys want AI to come out perfect on day one?
HermanCainsGhost t1_j8xz6yj wrote
AIs are potentially far more dangerous than nuclear energy.
turnip_burrito t1_j8zzk7g wrote
"The power of the sun, in the palm of my hand."
technologyisnatural t1_j8y8r1h wrote
but if you really want to go down the rabbit hole …
https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned
jaydayl t1_j8ye5m8 wrote
Thanks, very interesting read
goofnug t1_j8zriw0 wrote
That's the problem: it's a company running things. It should be a publicly funded team of researchers, because this is a new tool and a new area of reality that should be studied, not feared. That way it would be easier to keep our "humanness" from getting in the way (e.g. companies being scared of the emotions of members of human society).
Ammordad t1_j93j260 wrote
A "publicly funded team of researchers" will still have non-scientist bosses to answer to. A multi-billion-dollar research project will have to have financial backing from either governments or large corporations. And when a delegate goes to a politician or CEO to ask for millions of dollars in donations, you can bet your ass that they will want to know what the AI's "opinion" will be on their policies and ideologies.
A lot of people are already pissed off about ChatGPT having "wrong" opinions or "replacing workers." And with all the hysteria and controversy surrounding AI systems, funding AI research with small donations sounds almost impossible.
Superschlenz t1_j900f9e wrote
>not a personal waifu. No sane corporation can allow for such headlines which had been in the news for the recent days
https://blogs.microsoft.com/ai/xiaoice-full-duplex/
>Unlike productivity-focused assistants such as Cortana, Microsoft’s social chatbots are designed to have longer, more conversational sessions with users. They have a sense of humor, can chitchat, play games, remember personal details and engage in interesting banter with people, much like you would with a friend.