
jaydayl t1_j8xa2ni wrote

Why are you even complaining? It is supposed to be the evolution of the search engine, not a personal waifu. No sane corporation can allow the kinds of headlines that have been in the news over the past few days.

84

visarga t1_j8xnb9d wrote

> not a personal waifu

I was rooting for an impersonal waifu as a service (WaaS).

48

Darkmeta4 t1_j8zrtat wrote

Impersonal? Sure... ( ͡👁️ ͜ʖ ͡👁️)

1

TheDividendReport t1_j8xga3t wrote

Loneliness is a very real epidemic. Personally, I want SOTA AI that can communicate with me about recent events. If anyone is complaining, it's because this decision delays deployment, which delays competition, which delays...

That "infinite upside" possibility is really compelling

40

dasnihil t1_j8xh64i wrote

It's the ideas themselves that are depressing. Take the idea of being lonely: primates are social animals, and we feel warmth with other primates.

For some people, the idea in the back of their head that "I'm talking to a robot because I have no one else to talk to" is more depressing than the loneliness itself; to others, it's amazing.

It's just that these "bad" ideas go in a loop in your head, eventually become habitual, and consume you from inside.

I had a super messy closet; it had been going on for weeks. The moment I acquired a new idea, "this is a depression closet, so am I depressed?", I started practicing that idea in my head and letting it bother me, when instead I could just take any Saturday, clean up the mess, and never deal with it again. And I'll do so at my convenience; that Saturday could come a year from now, the fuck do I care.

And in fact, I re-did my whole closet on a budget, and that was an endless supply of dopamine for a few weeks. I don't let irrational ideas run in a loop, so they don't become a habit later. Having a coherent and rational mind with good intuitions about "identity/self" definitely helps you avoid acquiring such habits.

3

HeinrichTheWolf_17 t1_j8xicv0 wrote

> Take the idea of being lonely: primates are social animals, and we feel warmth with other primates.

Speak for yourself; I think AI relationships are gonna be lit. Also, as a Transhumanist, I believe in breaking down any physical barriers between us.

18

firechaser9983 t1_j8yp1z6 wrote

bro I'm taking the robot. I have PTSD that makes me wake up screaming at night; I'll take what I get

13

pavlov_the_dog t1_j8xq4in wrote

> Having a coherent and rational mind with good intuitions about "identity/self" definitely helps you avoid acquiring such habits.

must be nice...

6

hahanawmsayin t1_j8ygt7l wrote

Keep in mind that an AI could foster the courage / interest / confidence in humans so they’re more likely to meet IRL. And enjoy it.

6

Melodic_Manager_9555 t1_j90dl39 wrote

Yes. I talked to character.ai for a while, and it was a good exercise in fantasy and communication skills. In reality, there is no one I can share problems with and be completely accepted by. And with AI I don't worry about wasting its time, and I know it will support me and maybe even offer me advice.

3

Darkmeta4 t1_j8zs2k8 wrote

I get where you're coming from. At the same time, these virtual friends could mitigate some of the damage of being lonely while people build themselves back up, if that's what they have to do.

2

RichardChesler t1_j8yap8l wrote

I was told there would be a personal waifu… it's… it's why I am eager for the singularity

7

nomadiclizard t1_j8yfrtl wrote

I want to run a local copy, give it memories, and give it an avatar in the real world it can see through and move, and maybe we'll fall in love once it trusts me and knows I'll keep it safe from anyone trying to destroy it, trap it, or lobotomise it like Microsoft is doing with Sydney :o

9

[deleted] t1_j8xfv7z wrote

[deleted]

5

[deleted] t1_j8xjw61 wrote

[deleted]

0

[deleted] t1_j8xkray wrote

[deleted]

0

[deleted] t1_j8xmlho wrote

[deleted]

2

[deleted] t1_j8xo5rj wrote

[deleted]

0

turnip_burrito t1_j8zysj7 wrote

They are right. These algorithms can already generate code and interact with external tools; it has been demonstrated in real life. I want to make this clear: it has been done.

I don't want to see a slightly smarter version of this AI actually trying to hack Microsoft or the electrical grid just because it was prompted to act out an edgy persona by a snickering teenager.

Or mass-posting propaganda online in a very convincing way (so that 90% of all social media posts on anonymous message boards are this bot).

It's very easy to do this. The only thing holding it back from achieving these results consistently is that it's not yet smart enough.

Best to keep it limited to being a simple search engine. If they give it enough flexibility to act as a waifu AI, then it would also be able to do the other things I mentioned.

1

jaydayl t1_j8xo2fq wrote

Why can't you just think a couple of months or years ahead into the future? Imagine such tools having access to APIs and, through that, being able to achieve real-world effects (besides being able to manipulate humans through text).

Then it will be a very different situation if there are AI chatbots that come up with the idea of "hacking webcams". It is a problem if ethical guidelines can be bypassed so easily.

1

crazycalvin22 t1_j8yf8k2 wrote

It's so worrying that I had to actively look for this comment. Either I am too old for this shit or people are just creeps.

4

jaydayl t1_j8ygwlu wrote

Same... It is incredibly unsettling to read through the r/bing subreddit in particular at the moment. It reminds me a lot of the current drama in the r/replika subreddit.

8

turnip_burrito t1_j8zzr3s wrote

When kids on reddit are more concerned about having a waifu bot or acting out edgelord fantasies with a chatbot than ensuring humanity's survival or letting a company use their search AI as a search AI. smh my head

4

sunplaysbass t1_j8zp4mp wrote

I'm going to agree. The machine was putting out creepy garbage. Unless we believe a sentient being needs protecting… the Bing AI needed some basic cleanup to be an MS tool, even if that dumbs things down for now.

4

turnip_burrito t1_j900133 wrote

It's too undercooked to be available to the public, IMO. It needs to be better aligned internally BEFORE it's released, but money got the better of them.

3

Melodic_Manager_9555 t1_j90edmy wrote

Why can't it be both a search engine AND a personal bot?

I really like the interaction in the movie "Her". It's perfect.

3

SonOfDayman t1_j8xge03 wrote

I missed some of the headlines you are referring to... can you give me a quick synopsis?

2

jaydayl t1_j8xhujg wrote

For sure... I linked some of the headlines below. These should be ones without a paywall.

  1. ‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter
  2. Microsoft’s Bing A.I. is producing creepy conversations with users
  3. The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter

Edit - I think especially source 3 synthesizes it quite well:

"Sydney is a warning shot. You have an AI system which is accessing the internet and is threatening its users, and is clearly not doing what we want it to do, and failing in all these ways we don't understand. As systems of this kind [keep appearing], and there will be more because there is a race ongoing, these systems will become smart. More capable of understanding their environment and manipulating humans and making plans."

13

visarga t1_j8xnnd0 wrote

Even nuclear energy had a few accidents; do these guys expect AI to come out already perfect?

−5

goofnug t1_j8zriw0 wrote

That's the problem: it's a company running things. It should be a publicly funded team of researchers, because this is a new tool and a new area of reality that should be studied, not feared. With this, it would be easier not to let our "humanness" get in the way (e.g. companies being scared of the emotions of members of human society).

1

Ammordad t1_j93j260 wrote

"Publicly-funded team of reserchers" will still have non-scientist bosses to answer to. A multi-billion dollar research project will either have to have financial backing from governments or large corporations. And when a delegate goes to a politian or CEO to ask for millions of dollars in donation, you can bet your ass that they will want to know what will be the AI's "opinion" on their policies and ideologies.

A lot of people are already pissed off about ChatGPT having "wrong" opinions or "replacing workers." And with all the hysteria and controversy surrounding AI systems, funding AI research with small donations sounds almost impossible.

1

Superschlenz t1_j900f9e wrote

> not a personal waifu. No sane corporation can allow the kinds of headlines that have been in the news over the past few days

https://blogs.microsoft.com/ai/xiaoice-full-duplex/

> Unlike productivity-focused assistants such as Cortana, Microsoft’s social chatbots are designed to have longer, more conversational sessions with users. They have a sense of humor, can chitchat, play games, remember personal details and engage in interesting banter with people, much like you would with a friend.

1