
Primo2000 t1_j4p9vy3 wrote

Maybe from a technological standpoint, but it will take time for humans to adopt it. You have long-term contracts signed and whole business models centered around people, so this will take some time, and when it comes to biology, medicine, etc., there are blockers such as the FDA that will slow adoption of new medicines a lot. Still, I think we are reaching some kind of threshold point when things will really start to take off.

45

blueSGL t1_j4qrbqg wrote

> Maybe from a technological standpoint, but it will take time for humans to adopt it.

Look at what happened with ChatGPT. There was no adoption on-ramp: it was released to the world, and now educational institutions are scrambling to play catch-up.

You can bet that if a headline reads 'China cures [x]', where x is anything from aging to cancer to any much sought-after medical treatment, timelines will be shortened due to public pressure.

17

Smellz_Of_Elderberry t1_j4sh6om wrote

And it will be China and countries like it that cure xyz diseases... because their regulations aren't quite so... dumb.

1

blueSGL t1_j4smtax wrote

I've heard plenty said about the societal-level benefits of one-party rule, e.g. being able to plan ahead without fear that a project will get stopped or defunded when an opposing party comes to power. This has allowed for a lot more progress in planning and infrastructure than there otherwise would have been.

However, the downside of such a thing is that there is a lack of care for the individual and, at some level, the ends justify the means.

The rules and guidelines for a lot of safety measures are written in blood: ways to make sure that dire mistakes can never happen again.

I feel there is a very real benefit to this at the speed everything moves at: drugs can be prescribed along with a verbose list of side effects and confounding medications.

I do also feel that rules and guidelines need to be updated to reflect reality, e.g. as drug simulation becomes better it should be relied on more; regulations should change in lockstep with how easy it is to verify drugs in silico.

5

Smellz_Of_Elderberry t1_j4sotm9 wrote

I don't like china, just fyi.

> The rules and guidelines for a lot of safety measures are written in blood: ways to make sure that dire mistakes can never happen again.

Admittedly. But when you can get an experimental vaccine in less than a year, yet at the same time have to wait 10+ years to access new cancer therapies (even though cancer will kill you), it upsets me, primarily because of the inability of normal people to make their own decisions and take their own risks.

>I do also feel that rules and guidelines need to be updated to reflect reality.

Often, what happens instead is that the rules and guidelines are set up to dictate reality. Immunotherapy is a fine example: its original pioneers were painted as quacks, and now it has become one of the most groundbreaking developments in cancer treatment.

Also, laws are very, very rarely repealed or removed. There are still laws that say you can't have a pie cooling on your windowsill, in order to avoid attracting bears, even though the bears in said location were eradicated lifetimes ago. Adding sunset clauses to laws would be a great first step: make it so all laws need to be renewed after a set amount of time.

1

maskedpaki t1_j4qo6vh wrote

I keep hearing this "AI will take a long time to blend into civilisation" line.

I don't buy it. We already have capitalist financial markets. If an AI-driven growth engine gets 9% ROI and the market gets 6%, then all the world's capital gets channeled into the 9% growth engine, especially when it's general purpose and can do everything, so to speak.

Capitalism will drive the use of these AIs the moment they are past AGI level. It's just a matter of reaching it.
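
As a rough illustration of why that gap matters, here's a minimal sketch using the hypothetical 9% and 6% figures above with simple annual compounding (not a claim about real returns):

```python
# Minimal sketch: how hypothetical 9% vs 6% annual returns diverge under compounding.
# The rates are the illustrative figures from the comment above, not real data.

def grow(principal: float, rate: float, years: int) -> float:
    """Compound `principal` at `rate` once per year for `years` years."""
    return principal * (1 + rate) ** years

for years in (10, 20, 30):
    ai_engine = grow(1.0, 0.09, years)  # AI-driven growth engine
    market = grow(1.0, 0.06, years)     # rest of the market
    print(f"{years} years: 9% -> {ai_engine:.1f}x, 6% -> {market:.1f}x, "
          f"ratio {ai_engine / market:.2f}")
```

Even a three-point edge more than doubles the relative payoff over 30 years, and that's exactly the kind of gap capital allocators pile into.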

11

korkkis t1_j4s5cy0 wrote

Unhinged capitalism is a disease, toxic

2

maskedpaki t1_j4s6o8j wrote

I don't disagree, but that wasn't the point of my comment.

I was trying to demonstrate that slow-takeoff scenarios post-AGI are unlikely.

2

korkkis t1_j4s9cbp wrote

Sure, fair enough. Anyhow, I think we'll use AI everywhere like electricity (as it helps us automate our daily tasks), without any AGI yet. If AGI ever appears on this planet, it will happen like an explosion, but on the foundation that's already there (like accelerated artificial evolution).

1

TheRidgeAndTheLadder t1_j4r6jfv wrote

I'll just note that this prediction hinges on capitalism being basically unassailable by AI.

Could be a totally fair bet

1

ManasZankhana t1_j4qa7ku wrote

Would companies that don't adapt just end up becoming less profitable and going out of business?

2

Electronic-Jello-633 t1_j4qbmn3 wrote

Yes, but again, over time. It can take years for some businesses to feel the effect of competition, and it can take some companies decades to go out of business because of a lack of adaptability.

6

AlwaysF3sh t1_j4rt99p wrote

A lot of obsolete jobs probably exist because of PDFs.

2