Bismar7

Bismar7 t1_jedq178 wrote

Well, the experts in general are wrong.

One of the few who even predicted this was Kurzweil. Bostrom, Gates, Musk, and many others with only a tiny piece of the field in view don't grasp the larger picture. They often come to unwise conclusions based on emotion.

The data pointing otherwise was published in 2005 in "The Singularity Is Near," and earlier, in 2001, with the essay "The Law of Accelerating Returns": https://www.kurzweilai.net/the-law-of-accelerating-returns

The book is massive, and a huge amount of it is data and plots of that data. Kurzweil's theory of how things will go actually matches your first point: we will achieve higher levels of productivity through the use of external AI, and eventually (likely with BCIs) we will move closer to a synthesis, as beings with combined human/AI intelligence and capabilities. In 10 years, each person may be millions of times more productive than today, at least for those who do not opt to be left behind like the Amish.

Kurzweil discusses this in his book from a few years ago, "How to Create a Mind."

To take this further with my own theories (my college education and life's study is economics, and I've written about the next industrial revolution for years now): employment will adapt to these productivity levels, the owners will be trillionaires or quadrillionaires, and so long as social status remains tied to wealth, inequality will widen its chasm.

There will be some structural unemployment, and there may be changes to tax codes or sentient-rights law to address AI use, but the world will keep spinning, and ultimately those who use AI as an excuse to stop preparing for the future will be left behind in the wake of the singularity.

Ironically, I think these events will in practice result in people spending more time at work, for several reasons:

1. Longevity escape velocity is predicted to happen 2029-2033.
2. Historical evidence, as you pointed out, shows that increased productivity does not significantly reduce hours worked.
3. The owners' greater deterministic control and concentrated wealth give them greater influence over the rest of us.

It's in the wealthy's interest for the rest of us to be productive and busy. Aside from this increasing their quality of life, idle hands might cause mischief. Curing aging along with AGI means there will be little, if any, pressure to increase the human population, and I suspect post-humans will derive meaning from their production. In the 2030s I think we will see 68-80 hour average work weeks (not through mandate or force, but because that's what people will be inclined toward).

The hard question is what happens when each single human+AI becomes 10 billion times as intelligent as the average person today (2035-2040); the exponential gains become increasingly hard to predict from today's vantage point as we move closer to the technological singularity.

0

Bismar7 t1_je3hsby wrote

The Law of Accelerating Returns, given the rate of progress we see now, plus estimates that classified GPT-3 as performing somewhere in the range of a 6-9 year old. They were not saying it was that age; they were saying it could complete general tasks at that level. The current model is excellent at rote context memorization, beyond the average person, but lacks in other areas.

I wouldn't read much into it beyond that, and I wouldn't recommend that you do either.

2

Bismar7 t1_je3hfvu wrote

Because the automation you see around you is still human-inspired: it still caters to human wants and needs, and it requires human input to function.

The real push of the envelope will be when we merge with AI mentally, when humans become AI. The strongest computer known for the last hundred thousand years has been the human brain.

You are confusing AI with humans. Today, AI requires the input we give it, and even AGI will not seek the elimination of people... other people using AGI to do that will.

Have people stopped working just because economies of scale and mass production have increased the efficiency of tasks by 10,000%? No. Unemployment in many places is low, and people are busier than ever...

When we multiply that, and one person can produce in 10 years what the entire world does now, there will still be tasks people need to do. The work will likely never run out, because there is always something more we can spend time doing.

1

Bismar7 t1_je2oa5n wrote

People are surprisingly foolish about this subject.

AI will make us more efficient; it won't replace us.

When it gets to the point where we can augment our minds with it, we will; synthesis is likely the pinnacle moment.

In the meantime, people, programmers included, will be able to do more in less time. Demand for digital goods will keep pace with our ability to design them.

1

Bismar7 t1_je1x098 wrote

Yup, it was quite eye-opening. I've read his stuff since, and a lot of what he has to say is far more evidence-based than what we get from people like Gates, Musk, or even other futurists who just have philosophical theories... many of which are grounded in irrational emotions and fear.

His more recent interviews on StarTalk with Tyson are also really good, and I recommend them.

4

Bismar7 t1_je183xd wrote

73 comments at the time I saw this and not one of them gave much of an answer to your question...

So to start, I think there are a couple of foundational understandings you need in order to know what to look for. The first and most vital is exponential vs. linear growth, and what it is like to experience gradual exponential gains through linear perception.

All of us perceive time and advancement as gradual, despite the actually increasing rate at which knowledge is applied (technology). This means that on a day-to-day, month-to-month basis you won't commonly feel "pinnacle" moments (like the GPT-4 demo), because most of the time advancements are not presented or demonstrated so well. Additionally, the first 30% of an advance takes longer than the last 70%, so it will always feel like nothing is happening for a long period of time, and then like things rapidly happen at the end.
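
To make that concrete, here is a toy calculation (the doubling time and starting point are my own illustrative assumptions, not Kurzweil's figures). With steady doubling, most of the elapsed time is spent below a small fraction of the final capability:

```python
import math

# Toy numbers only: steady doubling toward a normalized target capability.
# How long does the curve spend reaching 30% of the target, versus
# covering the final 70%?

target = 1.0           # the "finished" capability, normalized
start = target / 1024  # begin ten doublings below the target (assumption)
doubling_years = 2.0   # assumed doubling time (illustrative)

def years_to_reach(level):
    """Years of steady doubling needed to grow from `start` to `level`."""
    return doubling_years * math.log2(level / start)

t30 = years_to_reach(0.30 * target)
t100 = years_to_reach(target)
print(f"0%  -> 30%:  {t30:.1f} years")         # ~16.5 years
print(f"30% -> 100%: {t100 - t30:.1f} years")  # ~3.5 years
```

Under these assumptions the curve spends roughly 16.5 years below 30% of the target and only about 3.5 more years finishing, which is exactly why the end feels sudden.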

The next pinnacle moment will likely be AGI: basically adult-human-level AI that does not require a prompt to do a variety of adult-level tasks. Right now, GPT and other LLMs must be prompted and must have a user in order to function. They operate within specific tasks at an adult level, but in practical intelligence they are closer to a 10-13 year old, with some pretty major limitations.

Now to the exponential trends. Moore's law is part of a much larger data set that predicted this back in 2005's "The Singularity Is Near." Here is the precursor article and data (warning: it's long and a lot):

https://www.kurzweilai.net/the-law-of-accelerating-returns

This is the actual data and the projections, and generally they have held true. Kurzweil wrote "How to Create a Mind" a few years ago, and there are some concrete things to look for: hardware in 2025 capable of close to adult-brain simulation (the software will still need to be written, but that's when the hardware is expected); longevity escape velocity, another major metric for transhumanists, currently estimated at around 2029; and superintelligent transhumans, i.e. beings with a synthesis of AI and human capabilities equal to the intelligence of hundreds, thousands, or millions of people today, projected sometime in the mid-to-late 2030s.
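
For flavor, this is the general shape of that kind of extrapolation; a minimal sketch assuming a fixed doubling time for price-performance (the baseline figure and doubling period are hypothetical placeholders, not values from Kurzweil's data set):

```python
# Sketch of price-performance extrapolation under an assumed doubling time.
# BASE_FLOPS_PER_DOLLAR and DOUBLING_YEARS are hypothetical placeholders.

BASE_YEAR = 2023
BASE_FLOPS_PER_DOLLAR = 1e10  # assumed baseline, not a measured value
DOUBLING_YEARS = 1.5          # assumed doubling time for price-performance

def projected_flops_per_dollar(year):
    """Project FLOPS/$ forward from the baseline by steady doubling."""
    return BASE_FLOPS_PER_DOLLAR * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

for year in (2025, 2029, 2035):
    print(f"{year}: ~{projected_flops_per_dollar(year):.1e} FLOPS/$")
```

The point of the exercise isn't the specific numbers; it's that any quantity on a steady doubling schedule ends up dominated by its last few doublings.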

Hardware advancements will happen first, then governments/DARPA will utilize them, then corporations, then everyone else. The runaway effect is the actual exponential aspect of this, so from now until several years before it happens, it will feel like nothing is happening (because that's the nature of exponential gains experienced through gradual, linear perception).

Your best bet, everyone's best bet, would be to read Kurzweil, Michio Kaku, Bostrom, and others who have studied and written about the what, how, and why. I would take most "doomers" like Musk or Gates, and even Bostrom (as philosophy isn't exactly computer science), with a grain of salt. Kurzweil tends to be the one who speaks best to the reality, even if he isn't always correct in the timing of his predictions (though he is close).

20

Bismar7 t1_ja5ypxw wrote

Well, the limit on AI is determined by its hardware, as what we build can host more complex minds. Right now humans are better; over time AI will reach where we are, and from there its hardware will keep advancing and will likely merge with humans into the best we can design: a hybrid of organic and electronic knowledge that is unimaginable today.

However, I would say that during 2027-2028 AI will likely achieve competency, at a commercial level, in the same tasks any 25-year-old adult can do, but we will have to see.

0

Bismar7 t1_ja4d62p wrote

AI is our future, and the advance is exponential, not linear. From 1700 to now, what is the progress toward AI?

How about from 1980 to now, or 2010 to now? The Human Genome Project made almost no visible progress until after half the time spent on it. In the past three years we have seen remarkable AI because we finally have the hardware to support it. Human-adult-level AI will exist in labs in 2025; that's two years away. It will be commercial by 2027, and in the '30s we will reach a level of superintelligent AI with capabilities beyond what we imagine today. Less than 10 years.
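
The genome-project point is just doubling arithmetic. A minimal sketch, assuming the commonly cited ~1% sequenced after the first 7 years and annual doubling of cumulative progress:

```python
# Doubling arithmetic behind the genome-project example: a project that is
# only 1% complete, but doubling its cumulative progress every year, needs
# just a handful of further doublings to finish.

progress = 0.01  # ~1% sequenced after the first 7 years (commonly cited)
years = 0
while progress < 1.0:
    progress *= 2  # assume cumulative progress doubles annually
    years += 1

print(f"Years from 1% to done, at annual doubling: {years}")  # prints 7
```

So a project that looks 99% unfinished at its halfway mark can still land on schedule, which is the whole trap of judging exponentials linearly.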

Scalability is a question of hardware to host their minds, and our process with them will be one of synthesis and cooperation, since all of us are better off working together. This becomes much more time-consuming if we also try to build physical representations of them (compared to billions of existing humans, AI bodies are too much of an expense). So the reality is that by 2035 most remote labor will likely be AI: much of the paralegal, call-center, and managerial work that doesn't require a physical presence, data analytics... hell, the stock market already uses bots.

The danger has been human, and it will continue to be human. These AI will learn from us like adults, but with a ferocity for learning we could never match. Who teaches and guides them determines the foundation they build from; superintelligence can easily equate to super wisdom.

13