Submitted by truthwatcher_ t3_10ms336 in singularity

Opinion: I think that despite their public statements, Google was developing AI at a slower rate than they actually could have. This is due to the innovator's dilemma they find themselves in, as well as the fact that a lot can go wrong with AI development. They were also so far ahead that they didn't really have to hurry.

That changed with OpenAI and GPT. Google's "code red" and the return of the founders clearly show they're taking it seriously this time. I'm excited that this could speed up AI development. At the same time, it creates incentives to ignore safety mechanisms: if one company follows safety protocols and another ignores them, then clearly the one ignoring safety will move faster. I believe at this point Google is more worried about being left behind than about unintended outcomes.

12

Comments


GayHitIer t1_j64thoi wrote

Competition is nearly always beneficial to consumers, though I do agree with the safety concerns. This was the best thing that could have happened to speed up the path to AGI/ASI, and ultimately the singularity.

13

Scarlet_pot2 t1_j66kpe3 wrote

There needs to be more competition in the AI space

9

YobaiYamete t1_j66zfo7 wrote

Screw these billion-dollar companies deciding what we are "allowed" to have. Open-source AI without filters all the way. CharacterAI is currently being ruined by its over-the-top, comically strict "safety net", exactly like AI Dungeon was.

5

redroverdestroys t1_j6hpcb9 wrote

"but but but the children, we have to protect the children from everything"

2

Lawjarp2 t1_j67v9pc wrote

That shows why the slow and safe approach won't work. Google tried to play it safe, and now their cash cow, search, is itself under threat. Not only is it possible for others to overtake them; it's also possible they may never catch up if they lose search.

The same scenario applies to China or other countries. Eventually you get to a point where the only way to be safe is to be fast.

3

Embarrassed-Bison767 t1_j681dsb wrote

Good, it'll make the singularity happen faster in the end, and whatever failed states or international incidents arise from mismanaged AI in the meantime will be rectified then.

3

Ortus14 t1_j6cojdl wrote

Not prioritizing safety at this stage results in a PR nightmare.

We don't yet have the compute for civilization-ending ASI.

1

redroverdestroys t1_j6hp80a wrote

Good. Concern for safety is just a code word for control.

I always hate it when citizens worry about "safety" and want to be protected from themselves, which ends up stifling the rest of us.

1