Submitted by Beautiful-Cancel6235 t3_11k1uat in singularity

We are all getting whiplash from the breakneck speed of AI development and adoption/integration. This is likely fueled by an AI arms race; that race, coupled with our government's general inability to effectively regulate anything, explains this crazy rate.

What would be something that could slow this down? Someone mentioned the limit of transistors on chips but I don’t understand that and it seems like they’re making super chips anyway.

What about alignment? Is there a scenario where, as AI becomes more and more powerful, scientists find they can't control or align it, and its power can only be used for finite (and hopefully noble) projects like solving climate change?

74

Comments


angus_supreme t1_jb5jlfp wrote

China invading Taiwan, causing loss in chip production (amongst other ill effects)

89

Itchy-mane t1_jb6512v wrote

TSMC is building next-generation chip fabs in South Korea, Japan, and the US, so that'd really only slow it down by a couple of years

41

Savings-Juice-9517 t1_jb6y5l3 wrote

Their fab in Arizona is only a tiny plant compared to the ones in Hsinchu Science Park

15

s2ksuch t1_jb6zhkb wrote

What about South Korea and Japan? All three of them together probably get close to the size of the factory in Taiwan

7

visarga t1_jb78ro9 wrote

AI needs the highest-grade chips, which can only be produced in Taiwan; other countries can only produce lower grades.

7

QuantumPossibilities t1_jb9apct wrote

The first 5nm chip in production was designed by a US company, Marvell. They are a fabless designer and, yes, rely on TSMC and others as they diversify production. TSMC has understood that their advantage is in production and has gone out of their way not to compete with the companies they manufacture for. This manufacturing advantage will lessen as companies like Intel invest in the high-end lithography machines able to produce these specialized AI chips. I wouldn't count on chips being the limiting factor in the speed of AI adoption. As per usual, we'd have to anticipate they will continue to become more capable, more available, and more affordable.

6

CypherLH t1_jbddriq wrote

A real war in Taiwan would likely disrupt sea trade routes to South Korea and Japan as well. If nothing else, insurance costs will soar for shipping companies, increasing transportation costs. Worst case, if the war is wide enough, the broader western Pacific could become a maritime war zone and cut transportation links even more deeply.

3

nillouise t1_jb8h30n wrote

China throwing a bomb into DeepMind's office could maybe do that.

1

CertainMiddle2382 t1_jb61410 wrote

We are nearing self improving code IMO.

Once we get past that, we have crossed the threshold.

Seeing the large variance in the hardware cost/performance of current models, I'd think the margin for progress from software optimization alone is huge.

I believe we already have the hardware required for one ASI.

Things will soon accelerate, the box has been opened already :-)

48

blueSGL t1_jb6h9jc wrote

>Seeing the large variance in the hardware cost/performance of current models, I'd think the margin for progress from software optimization alone is huge.

>I believe we already have the hardware required for one ASI.

Yep. How many computational "ah-ha moment" tricks are we away from running much better models on the same hardware?

Look at Stable Diffusion and how its memory requirement fell through the floor. We're already seeing something similar with LLaMA now getting into public hands (via links from pull requests on Facebook's GitHub, lol); there are already tricks being implemented in LLM front ends that allow for lower VRAM usage.

13

Baturinsky t1_jb6v5pm wrote

I haven't noticed any improvement in memory requirements for Stable Diffusion in five months... My RTX 2060 is still enough for 1024x640, but not more.

LLaMA does well in tests with small models, but the small size could make it less of a fit for RLHF.

There is also miniaturisation for inference by reducing precision to int8 or even int4 (sketched below). But that does not work for training, and I believe AGI requires real-time training.

So, in theory, AGI could be achieved even without big "a-ha"s. Take existing training methods, train on many different domains and data architectures, add tree search from AlphaGo and real-time training, and we will probably be close. But it would require pretty big hardware. And it would be "only" superhuman in some specific domains.
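
As a rough sketch of what that int8 trick looks like (assuming PyTorch; symmetric per-tensor scaling is just one simple scheme among several):

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127].
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    # Recover an approximation of the original fp32 weights.
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)       # fp32 weights: 4 bytes per parameter
q, scale = quantize_int8(w)       # int8 weights: 1 byte per parameter (~4x smaller)
err = (w - dequantize(q, scale)).abs().mean()
print(f"mean absolute rounding error: {err.item():.5f}")
```

The memory saving is what makes this attractive for inference; the rounding error is why it's usually avoided during training.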

3

fluffy_assassins t1_jb6vmgz wrote

I had a kind of a theory.

There used to be self-modifying code in assembler because computing power was more expensive than programmers' time. So programmers took more time to get more out of the more expensive hardware.

I'm thinking that when transistors can't shrink anymore (quantum effects and all), we're going to need to squeeze out all the computing power we can, to the point where... we're right back to self-modifying code. Though probably done by AI this time. I don't think a human could debug that, though!

3

visarga t1_jb79kst wrote

Back-propagation is self-modifying code. There is also meta-back-propagation for meta-learning, which is learning to modify a neural network to solve novel tasks.

At a higher level, language models trained on code can cultivate a population of models with evolutionary techniques.

Evolution through Large Models
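
Roughly, that evolutionary loop looks like this (a toy sketch: in the ELM setting the mutation operator would be a code-writing language model, but here a random character edit stands in for it, and the target string is just a placeholder):

```python
import random

TARGET = "def add(a, b): return a + b"   # placeholder "gold" program

def fitness(candidate: str) -> int:
    # How many characters match the target program.
    return sum(c1 == c2 for c1, c2 in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Stand-in for the LLM: replace one character at random.
    i = random.randrange(len(TARGET))
    return candidate[:i] + random.choice("abcdef(): +,retun") + candidate[i + 1:]

population = ["#" * len(TARGET) for _ in range(32)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    parents = population[:8]                                   # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(24)]

print(population[0], fitness(population[0]))   # best candidate drifts toward TARGET
```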

4

NothingVerySpecific t1_jb92tmn wrote

I understand some of those words

3

ahtoshkaa2 t1_jb9czan wrote

Same) Haha. Thank god for ChatGPT:

The comment is referring to two different machine learning concepts: back-propagation and meta-back-propagation, and how they can be used to modify neural networks.

Back-propagation is a supervised learning algorithm used in training artificial neural networks. It is used to modify the weights and biases of the neurons in the network so that the network can produce the desired output for a given input. The algorithm uses gradient descent to calculate the error between the predicted output and the actual output, and then adjusts the weights and biases accordingly.

Meta-back-propagation is an extension of back-propagation that is used for meta-learning, which is learning to learn. It involves modifying the neural network so that it can learn to perform novel tasks more efficiently.

The comment also mentions using evolutionary techniques to cultivate a population of models in language models trained on code. This refers to using genetic algorithms to evolve a population of neural networks, where the best-performing networks are selected and combined to create new generations of networks. This process is known as evolution through large models.
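
As a toy version of the back-propagation step described above (a sketch assuming NumPy: one hidden layer, mean-squared-error loss, plain gradient descent; the weight updates are the "self-modification" the parent comment refers to):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(8, 3)), rng.normal(size=(8, 1))    # toy inputs and targets
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))  # network weights

for step in range(200):
    h = np.tanh(x @ W1)                    # forward pass, hidden layer
    y_hat = h @ W2                         # forward pass, output layer
    err = y_hat - y                        # prediction error
    # Backward pass: propagate the error back through each layer.
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * (1 - h ** 2))
    W1 -= 0.01 * grad_W1                   # adjust weights against the gradient
    W2 -= 0.01 * grad_W2

print(float(np.mean(err ** 2)))            # loss should have dropped substantially
```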

7

vivehelpme t1_jb9flpl wrote

>We are nearing self improving code IMO.

Ah, the recurrent bootstrap-to-orbit meme. It's just around the corner, behind the self-beating dead horse.

0

CertainMiddle2382 t1_jb9gdce wrote

Hmm, it's not like 2023 is a little bit unlike 2020, AI-wise.

The very concept of singularity is self improving AI pushing into ASI.

I don't get how you can trivialize an LLM seemingly starting to show competency in the very programming language it is written in.

What new particular characteristic of an AI would impress you more and show things are accelerating?

I believe humans get desensitized very quickly, and when shown an ASI doing beyond-standard-model physics they will still manage to say: so what? I've been expecting more for at least 6 months…

4

vivehelpme t1_jb9jvtl wrote

>I don't get how you can trivialize an LLM seemingly starting to show competency in the very programming language it is written in.

The person who wrote the training code already had competency in that language, that didn't make the AI-programmer duo superhuman.

And then you decide to train the AI on the output of that programmer, so the AI-programmer duo becomes just the AI. But from where does it learn to innovate into a superhuman, super-AI, super-everything state? It can generalize what a human can do; well, that's good, but its creator could also generalize what a human can do.

Where is the miracle in this equation? You can train the AI on machine code and let it self-modify until the code is perhaps completely impossible for human beings to troubleshoot, but the system runs itself on 64 GPUs instead of 256. That makes it cheaper to run; it doesn't make it smarter.

>The very concept of singularity is self improving AI pushing into ASI.

That's an interpretation, a scenario. The core of it all comes from staring at growth graphs too long and realizing that exponential growth might exceed human capacity to follow.

Wikipedia says :

>The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

But how is that really different from:

>The technological singularity—or simply the singularity[1]—is a statistical observation of the current state of society, where growth at a large scale has resulted in innovation and data-collection rates that exceed the unaided human attention span; some claim this might result in unforeseeable changes to human civilization. On a global scale this is generally agreed to have happened around the invention of writing thousands of years ago (as there exists too much text for anyone to read in a lifetime), but some argue that it coincides with the invention of the internet, as only then could you interactively access the global state of innovation and progress and realize that you cannot keep up with it even if you spent 24 hours a day reading scientific articles.[2] An online subculture argues that superhuman AI would be required for this statistical observation to be really true (see: no true Scotsman fallacy), despite their own admitted inability to even follow the real-time innovation rate in just their field of worship: AI.

1

CertainMiddle2382 t1_jb9m08p wrote

I don't get your point.

The programmer doesn't speak Klingon, though the program can write good Klingon. AlphaZero programmers don't play Go, though the program can beat the best human Go players in the world.

By definition being better than a human at something means being « super intelligent » at that task.

Intelligence theory postulates G, and that it can be approximated with IQ test.

« Super intelligent AI » will then by definition only need to show a higher IQ than either its programmers or the smartest human.

Nothing else.

Postulating the existence of G, it is well possible that ASI (by definition again) will be better at other tasks not tested by the IQ test.

Rewriting a better IQ version of itself for example.

Recursively.

I really don't see the discussion here; these are only definitions.

5

vivehelpme t1_jbep9br wrote

>I don't get your point.

I guess my point is superintelligent by your definitions

>The programmer doesn't speak Klingon, though the program can write good Klingon.

It has generalized a human-made language.

>AlphaZero programmers don't play Go, though the program can beat the best human Go players in the world.

https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/

It plays at a generalized high-elite level. It's also a one trick pony. It's like saying a chainsaw is superintelligent because it can be used to saw down a tree faster than any lumberjack does with an axe.

>« Super intelligent AI » will then by definition only need to show a higher IQ than either its programmers or the smartest human.

So we could make an AlphaGo that only solves IQ-test matrices; it would be superintelligent by your definition, but it would be trash at actually being intelligent.

>I really don't see the discussion here; these are only definitions.

Yes, and the definition is that AI is trained on the idea of generalized mimicry; it's all about IMITATION, NOT INNOVATION.

This is all there is: you calculate a loss value based on how far from a human-defined gold standard the current iteration lands, and edit things to get closer. Everything we have produced in wowy AI is about CATCHING UP to human ability; there's nothing in our theories or neural-network training practices that is about EXCEEDING human capabilities.

The dataset used to train a neural network is the apex of performance that it can reach. You can at best land at the level of a generalized, consistently very smart human.

2

CertainMiddle2382 t1_jbeq0si wrote

You are obviously mistaken.

As you know well, zero-shot learning algorithms beat anything else; I saw a DeepMind analysis postulating that it allows them to explore parts of the gaming landscape that were never explored by humans.

And you seem to be moving lampposts as you move along.

What is the testable characteristic that would satisfy you to declare the existence of an ASI?

For me it is easy: a higher IQ than any living human, by definition. Would that change something? You can argue it doesn't; I bet it will change everything.

2

vivehelpme t1_jbexk7n wrote

>As you know well, zero-shot learning algorithms beat anything else

It doesn't create a better training set out of nothing.

> it allows them to explore parts of the gaming landscape that were never explored by humans.

Based on generalizing a premade dataset, made by humans.

If an AI could just magically zero-shot a better training set out of nowhere, we wouldn't bother making a training set: just initialize everything to random noise and let the algorithm deus-ex-machina it to superintelligence out of randomness.

>What is the testable characteristics that would satisfy you to declare the existence of an ASI?

Something completely independent is a good start for calling it AGI, and then we can start thinking about whether ASI is a definition that matters.

>For me it is easy: a higher IQ than any living human, by definition. Would that change something? You can argue it doesn't; I bet it will change everything.

So an IQ-test-solving AI is superintelligent despite not being able to tell a truck apart from a house?

2

DungeonsAndDradis t1_jb5ek3g wrote

According to history, this will only accelerate (towards extinction, I think).

To answer your question, the only thing that would slow down AI research is a large scale, civilization-affecting issue. Massive meteor strike. Deadly plague. Nuclear war. CME (coronal mass ejection) that takes us back to the 1800s.

42

DixonJames t1_jb9gd6p wrote

It may be too late for regulation, which was the best hope of slowing non-secret AI, but this may have given the secret stuff an advantage; and let's face it, better our overlords be robotic vacuum cleaners than Terminators...

1

RabidHexley t1_jbb7h2z wrote

It being open seems unequivocally better in my eyes, even outside of being optimistic towards technological progress.

It's better for lots of actors to actually know what the cutting-edge actually is. More eyes means more solutions and scrutiny. We want all the best minds possible looking at this stuff.

Outside of actively outlawing ALL development on machine learning and neural networks (basically tracking down anything that looks remotely like neural network development and sending them to prison), and going to war with nations who don't comply, this isn't the kind of tech you can stop, only slow down and push into the shadows or other people's hands. And if you're concerned about uncontrollable AI agents that's not a remotely better situation to be in, even if you've slowed the tech's progress by however many years.

2

DixonJames t1_jd71hso wrote

yes I think you're right. I think the time that regulation could be effective has now passed. Perhaps AI is the great filter. if we get through it intact a bright future beckons.

1

MrGoodGlow t1_jb7ov2m wrote

You forgot the most likely and most present one: climate change destroying our supply chain capacity.

−9

imnotabotareyou t1_jb7qj1y wrote

Lmao no it’s not

11

MrGoodGlow t1_jb7t3md wrote

Then you're blind. There have been more historic storms, floods, fires, and record-breaking (in both directions) natural disasters in the last 2 years than in the last 20.

−11

Manticor3Theoriginal t1_jb863w9 wrote

The problem isn't us going full scorched-earth-apocalypse, Geostorm-style, dude; it's the most delicate natural ecosystems collapsing, leading to the extinction of endangered animals. For example: due to a slight temperature increase, a little more topsoil in the African wilds is loosened, leading to dust storms that force rare species of lions toward possible extinction (nothing to do with supply chains or the economy). That doesn't mean climate change is good, though. We should ALL try to vote for carbon-neutral policies and be even a little bit more eco-friendly.

8

Manticor3Theoriginal t1_jbc85jv wrote

You know what, that's a really good point, and I did not see a lot of that before now. Thanks, bro; hoping I can be a little more correct with what I say in the future. But to be fair, I really care about the environment and those animals.

3

MrGoodGlow t1_jbcewa2 wrote

Appreciate your reply. I apologize for my jaded view of your stance on protected animals. If I could, I'd rephrase it to "animals that most don't care about".

We live in the environment.

I'm on mobile right now, so I can't provide sources (but literally Google any soundbite I'm about to spew and a mainstream source will cite it).

Supply chain collapse will likely occur before "Venus" by Thursday.

Our entire economic model of logistics has been set up on two underlying principles over the last 50-ish years.

"Just In time delivery" and consolidating regional factories into mega global factories.

Essentially we've exchanged resiliency for efficiency. This is bad because as climate change disasters ramp up they cause massive disruptions.

Example: during the Texas freeze a couple of years ago, the world's largest PVC supplier (somewhere around 57% of supply) shut down for about a month, and it caused a whiplash effect that impacted the globe for about six months afterwards. (1)

Last year there was a freak hurricane near Oman that, had it hit about a hundred miles further north, would have impacted 20% of the world's oil production.

This summer, major rivers in China, Europe, and the US, to name a few, ran at historic lows. The Mississippi was so low that we had a massive backlog of barges that couldn't move up and down the river, and we had to expend a lot of resources dredging it. (2)

Natural disasters are costing more and more. Something like the last 5 years of hurricanes alone have cost as much as the previous 20 years before that.

In addition, our energy return on investment for oil (which our entire global economy is built on, and which renewables will take decades to even possibly replace) is diminishing.

Canada had major roads wiped out, Pakistan flooded, and the heat dome over Canada killed over a billion sea creatures.

It really is a math equation. There will be a point where the damage natural disasters cause exceeds what we can afford to spend repairing and rebuilding.

We won't be able to focus on building new and better technology as we're simply trying to survive the next disaster right around the corner. Our technology systems require massive global efforts and factory specialization.

(1) https://www.businessinsider.com/plastics-shortage-texas-freeze-storm-uri-fight-for-materials-2021-3#:~:text=The%20freeze%20in%20Texas%2C%20which%20is%20one%20of,shut%2C%20the%20Journal%20said%2C%20citing%20S%26P%20Global%20Platts

(2) https://www.reuters.com/world/us/us-barge-backlog-swells-parched-mississippi-river-2022-10-04/

2

Manticor3Theoriginal t1_jbcnza0 wrote

Jesus Christ, it's seeming very unlikely that we will avoid a collapse! I've heard that a really effective approach would be for governments to suddenly embrace scientific progress, basing laws on social studies and technologies.

1

Lawjarp2 t1_jb66hv1 wrote

The things that can slow it down are already in motion but they can only push it down so far.

(1) A recession causing a drain on the companies trying to build AI. A recession is here.

(2) A war or other critical event causing interest rates to go higher, leading to defaults at startups and even established companies. Interest rates will go all the way to 6% this year.

(3) Hardware/cost limits being hit. Better hardware will of course be available soon, but it's harder now to scale just by pumping in money. Training costs are already reaching hundreds of millions of dollars; more is only possible with government backing or a high rate of return on these AI models.

(4) Isolation of a large country like China from chip manufacturing and procuring for AI.

Other things that could happen

(*) GPT-4 being a bust and thereby eroding confidence.

(*) OpenAI and other companies fail to monetize.

(*) Scaling may have reached its limits. Newer architectures take time.

But even with all this, it can only slow it down by 5-10 years. We will still likely have AGI in 2030s.

31

TopicRepulsive7936 t1_jb6shpt wrote

Now do the exercise the other way: what if the cost of chips approaches zero, which is very likely looking at trends?

9

visarga t1_jb794uu wrote

The more advanced these chips get, the harder they are to make. So advancement in capability amplifies the cost.

2

ihateshadylandlords t1_jb5ktsj wrote

We still haven’t been able to get through the bottleneck that is R&D and making products available to the masses once proof of concept is established. I see a lot of posts on here that involve proof of concept for great products. But they still have to test the products to make sure they don’t malfunction over a period of time. The products also have to be at a price to where the average person can afford them. A lot of things here will get shelved because they’re either not able to get the price down or it malfunctions too often and they can’t fix it.

I think it’ll be a long time before we can accelerate/refine that part of the production process.

10

visarga t1_jb7a656 wrote

> A lot of things here will get shelved because they’re either not able to get the price down or it malfunctions too often and they can’t fix it.

You just described about 99% of all AI products. They all malfunction. All of them. "Errare humanum est", but for now "errare machinale est".

4

TopicRepulsive7936 t1_jb6t9w6 wrote

No actual thought was found in that drivel.

1

ihateshadylandlords t1_jb6wg0u wrote

Sounds like you have a reading comprehension problem then.

3

TopicRepulsive7936 t1_jb6z16z wrote

The spammer can reply.

−5

ihateshadylandlords t1_jb71d0f wrote

The block button is available, unless you struggle with that too.

2

TopicRepulsive7936 t1_jb745or wrote

I enjoy your fixation.

−5

DragonForg t1_jb8wibd wrote

The government cracking down on AI research for fear of AI. Kinda like the drug movement and how it stifled research into shit like psychedelics.

7

RabidHexley t1_jbb8lgk wrote

Would be surprised given this isn't isolated to the States. Anyplace that can acquire GPUs can theoretically perform AI research. And the potential bad outcomes of AI development don't really care about geographic location, so there's not any benefit to stopping the research being done here.

2

No_Ninja3309_NoNoYes t1_jb8lduv wrote

There are many different types of roadblocks that could occur in varying degrees of likelihood:

  1. Lack of data. Data has to be good and clean. Cleaning and manipulation take time. Purportedly, Google research claims that compute and data have a linear relationship, but I think they are wrong. Obviously this is more of a gut feeling, yet IMO their conclusions were premature, based on too few data points, and self-serving.

  2. Backprop might not scale. The thing is that you go down, or back, to propagate errors and try to account for them. That's like the game some of you might have played where you whisper a word to someone and he or she passes it on. IMO this will not work for very large projects.

  3. Network latency. As you add more machines, latency and Amdahl's law will limit progress. And of course hardware failures, round-off errors, and overflow can occur.

  4. The amount of information you can hold. Networks can compress information, but if you compress it too much you will end up with bad results. There are exabytes of data on the Web. Processing it takes time, and at eight bytes or less per parameter you could in theory have an exa-parameter model. However, IRL that isn't practical. Somewhere along the path, probably around ten trillion parameters, networks will stop growing.

  5. Nvidia GPUs can do 9 teraflops. A trillion-parameter model would allow about nine evaluations per second (see the back-of-envelope sketch after this list). Training is orders of magnitude more intense. As the need for AI grows, the supply and demand of compute will be mismatched. I mean, I was using three multi-billion-parameter models at the same time yesterday, and I was hungry for more. One of them was slow, the second gave insufficient output, and the third was hit and miss. If you upscale 10x, I think I would still want more.

  6. Energy requirements. With billions of simultaneous requests per second, you would need huge solar farms: maybe as many as seven solar panels per GPU, depending on conditions.

  7. Cost. GPUs can cost 40K each. Training GPT costs millions. With companies doing independent work, billions could be spent annually. Shareholders might prefer using the money elsewhere. And it's not motivating for employees if the machines become the central part of a company.
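
A back-of-envelope version of point 5, using the comment's own round numbers (the ~1 FLOP per parameter per evaluation and the ~6 FLOPs per parameter per training token below are rough rules of thumb, not measurements):

```python
gpu_flops = 9e12               # ~9 teraflops sustained on one GPU
params = 1e12                  # a trillion-parameter model

flops_per_eval = params * 1    # assume ~1 FLOP per parameter per forward pass
print(gpu_flops / flops_per_eval)        # ≈ 9 evaluations per second

flops_per_token = params * 6   # rough rule of thumb for training cost per token
print(gpu_flops / flops_per_token)       # ≈ 1.5 training tokens per second
```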

3

IluvBsissa t1_jb9rtlm wrote

I don't think we will need more computing power to reach AGI in 10-20 years.

1

MSB3000 t1_jb8o7u8 wrote

We already can't align our AI systems, or any technology for that matter. Right now it's actually a very familiar problem; machines don't do what you intend, they do what they're made to do. And this is basically fine because as of right now, there is nothing smarter in the known universe than human beings, and so we're still in charge.

But when the machines gain more intelligence than humans? Actual alignment is a totally unsolved problem, so we really do need that solved before we inadvertently create a superintelligent chatbot.

3

Yomiel94 t1_jbcxfi8 wrote

>machines don't do what you intend, they do what they're made to do.

It seems like, whether you use top-down machine-learning techniques to evolve a system according to some high-level spec or you use bottom-up conventional programming to rigorously and explicitly define behavior, what’s unspecified (ML case) or misspecified (conventional case) can bite you in the ass lol… it’s just that ML allows you to generate way more (potentially malignant) capability in the process.

There’s also possible weird inner-alignment cases where a perfectly specified optimization process still produces a misaligned agent. It seems increasingly obvious that we can’t just treat ML as some kind of black magic past a certain capability threshold.

0

ManosChristofakis t1_jb9b8fc wrote

  1. AI alignment. If a large-scale attack is launched that tries to interfere with US nukes, you can bet your ass that AI will disappear from everyday life overnight. Obviously we don't have to get to such extreme cases for AI to be regulated, or to straight up never leave the lab in the first place.

  2. Human alignment. If AI progresses so fast that everyone loses their jobs, businesses won't have any customers at all and will all go bankrupt, including the AI-making businesses themselves.

  3. Lack of training data, obviously.

  4. If our hardware has reached, or is close to reaching, its limits in terms of efficiency, providing more computational capacity might require more hardware, which might be less efficient to use and make computational power increase linearly instead of exponentially (in such a case, cost might also increase on par with or faster than computational power).

  5. Limits of current architectures. Problems like hallucination. I also read a paper saying that LLMs model their output to match the prompt given by the user; that is, they will reply like a neuroscientist to a neuroscientist or like a philosopher to a philosopher. This may limit many uses in places like healthcare, because biases, and people not knowing what they are talking about, can make the AI reach wrong conclusions. There may be other limitations which I, and scientists themselves, aren't aware of yet.

  6. Costs. Obviously it takes a lot to buy and maintain the infrastructure: cloud, GPUs, electricity, and training are all significant costs right now with current LLMs, which have parameters in the billions and deal only with text, but right now those costs are doable. Imagine if we try to create a multimodal AI that does the job of an engineer. It would require years or decades of training (because you can't speed up training by cramming decades of human-time experience into days on a PC), it would maybe require hundreds of trillions (if not quadrillions) of parameters, and it would probably have to process information in real time, which would be very expensive. You would also have to pay for and maintain its robot body and accompanying infrastructure. There are probably limits even with current LLMs: they bill you per token of the model's reply as well as its context. The best current LLMs have thousands of words of context, and right now every few replies probably costs pennies (or less). But if you try to create an LLM that carries a context of millions of words (for example a personal assistant or a robot friend), the cost of every single reply, let alone continual replies, will be prohibitive. This is all assuming these things are even possible.

3

Baturinsky t1_jb6tt6j wrote

People starting to use it for evil: fraud, terrorism, etc.

2

play_yr_part t1_jbepz8j wrote

Fraud isn't going to stop it. Terrorism, depending on the scale of the attack, may halt it though.

1

techhouseliving t1_jb7oku3 wrote

Although it takes a supercomputer to initially train a model, it can run with a very small amount of memory and processing power. Something like 2 gigs of data is all Stable Diffusion needs, and in theory it can create any 2D art conceivable. Similar for language models. It's the ultimate compression algorithm.

M1s and M2s are designed to run these models very efficiently, and those are pretty widely distributed.
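
As a rough sketch of what running it on an M1/M2 can look like (assuming the Hugging Face diffusers library; the model id and prompt are placeholders, and half precision is roughly what keeps the weights around the 2 GB mark):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the ~2 GB fp16 weights and move them to Apple's Metal ("mps") backend.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("mps")   # use "cuda" on an Nvidia GPU, or "cpu" as a slow fallback

image = pipe("a watercolor lighthouse at dusk").images[0]
image.save("lighthouse.png")
```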

2

Tiamatium t1_jbdk7us wrote

A few things, depending on what you mean by "it". If you're talking about AGI, then I could actually come up with a small list:

  1. Funding, and the cost of AI in terms of work and results. If we realize that an AI with the intelligence of a mouse or a stupid dog can do everything and anything we need, and it's rather simple to create an AI like that but a lot harder to create an AI of human-level intelligence, there simply won't be any financial incentive to create a smarter AI; frankly, I see this as the most likely possibility.

  2. A large-scale military conflict in East Asia, say if China invades Taiwan or North Korea invades the South. Our chip manufacturing capabilities are concentrated in that one small region, and this is in a way Taiwan's insurance policy.

  3. Now this is the interesting stuff. It's perfectly possible that consciousness is more complex than we think. There are a few very well respected scientists who believe consciousness might be a result of weird quantum effects (in a way, a biological quantum computer), in which case our AI is further from AGI than most people think. It's important to note that quantum effects emerge all the time in biochemistry, for example in that unholy union of physics, chemistry, and biology known as photosynthesis, where quantum effects are involved in every step of the process, from the moment energy is collected in the antenna complex.

2

freeThePokemon246 t1_jb5fb9l wrote

I foresee a dead end in LLMs. Their core limitations are plainly visible, if one takes off their hype glasses. Once the spark of hope that is LLMs winks out of existence, we shall once again be back in a hopeless AI darkness. Maybe the next generation after us will be luckier.

1

94746382926 t1_jb5xztj wrote

Man, this sub can be something else sometimes. The thread is literally asking for what people think could slow things down and you get downvoted for stating your opinion lol. People need to lay off the hopium a little bit.

To be clear I don't even really agree with your opinion (I think LLMs could possibly see a slowing of improvement soon, but I think they will be quickly replaced). Regardless, we should want dissenting opinions, especially when we're asking for them.

16

challengethegods t1_jb60zd1 wrote

>you get downvoted for stating your opinion lol.
>
>To be clear I don't even really agree with your opinion

yea that pretty much summarizes how reddit voting works.

10

94746382926 t1_jb61csk wrote

I mean, I upvoted him even though I disagreed, but yeah, most people just downvote whatever they disagree with rather than what doesn't contribute to discussion.

5

thatdudejtru t1_jb6d4xz wrote

Right? Is it not the job of a comment to... ya know... comment on the idea, whether you deem it wrong or right in context? Upvoting and downvoting should be about filtering out content that doesn't contribute.

6

Frumpagumpus t1_jb6efow wrote

soon AI will be able to identify people that vote this way and eliminate them, er I mean, shadow vote ban them.

4

challengethegods t1_jb6szip wrote

"yea man, I totally agree with this. [downvotes it anyway]"
some kind of neurotoxin for AI training data, probably

3

SgathTriallair t1_jb7js51 wrote

Agreed. It's a stupid opinion, but they did ask for what could possibly cause it to slow down and, if they were right, this would slow it down.

The downvotes are probably for saying it's true rather than saying it's a method by which a slowdown could happen.

2

TopicRepulsive7936 t1_jb6v19g wrote

That guy doesn't know how much machine learning is used and neither do you.

0

94746382926 t1_jb6wtbj wrote

Did you miss the part where I said I didn't agree with him?

3

TopicRepulsive7936 t1_jb6yx3r wrote

You said you don't agree with his opinion. Lies aren't an opinion.

−1

94746382926 t1_jb6z0sg wrote

Can you see the future? Nobody knows for sure how far LLM's will take us. A prediction is not a lie even if it turns out to be false.

2

TopicRepulsive7936 t1_jb6z9ss wrote

And is LLM's the only thing we're doing?

−1

94746382926 t1_jb71arm wrote

No, and I never said that. It was his opinion that LLM's will stall out and we'll have another AI winter, not mine. I don't see the issue here.

I think we have plenty of other types of models that will find success, LLM's seem to only be one piece of the puzzle.

2

Zer0D0wn83 t1_jb5jm27 wrote

Yeah, the same thing happened with that internet thing that everyone was banging on about in the late 90s. I wonder what happened to that...

14

Cryptizard t1_jb5t74z wrote

The more direct comparison is when the perceptron was invented and everyone said it’s just a couple years of tuning it until we get AGI. That was in the 1960s.

7

challengethegods t1_jb60c3b wrote

That explains why politics has had an AI-Generated vibe for the last 50 years.

3

Neurogence t1_jb60puu wrote

To play devil's advocate, I think it would be extremely foolish to rely on LLM's to take us all the way.

7

Zer0D0wn83 t1_jb62bai wrote

Agreed. You never rely on one tech, same as the Internet hasn't

2

phillythompson t1_jb5jysg wrote

And what limitations do you see with LLMs that wouldn’t be “solved” as time goes on?

2

Silly_Awareness8207 t1_jb5knx0 wrote

I'm no expert but the hallucination problem seems pretty difficult

14

[deleted] t1_jb8niwt wrote

I don't see how any other architecture would solve that problem; that's just an issue of how current LLMs are trained.

1

Caring_Cactus t1_jb5o5aq wrote

We are at narrow AI right now, this is only the beginning as AGI is being developed.

1

jungleboyrayan t1_jb608cr wrote

ASML is a Dutch company that provides chip-making equipment; they are the leading one. They agreed with the USA not to sell equipment to China, etc. This will put those countries 10 years behind in the development of super tiny chips.

1

Beautiful-Cancel6235 OP t1_jb66ftv wrote

There are agreements and there are shady markets. If China wants to get their hands on these chips, they likely will.

1

s2ksuch t1_jb6zsbn wrote

Sure, but there probably wouldn't be as many as if they had an actual deal.

1

claushauler t1_jb74b0a wrote

Industrial espionage is a thing you know. The agreement won't slow them much.

1

LymelightTO t1_jb7n1xc wrote

> The agreement won't slow them much.

It absolutely will. They've been trying since 2013 to develop a cutting-edge indigenous semiconductor industry (the "Big Fund"), under fewer constraints, and haven't succeeded at anything but burning a lot of cash, by focusing on many of the "wrong" things (less fundamental science, chemical supply chain, and manufacturing equipment; more spent on value-add at far later stages of the manufacturing process), which has not helped them progress toward becoming self-reliant but has further deepened their reliance on the same players from Japan, the Netherlands, the US, etc. This basically comes down to the fact that Chinese firms are extremely good at figuring out how to position themselves to take advantage of well-funded political priorities, like semiconductors were, and less good at.. uh.. doing the thing they say they're going to do. Many, many words have been written about this subject, especially after Gao Songtao was "disappeared" in a CCDI crackdown in 2021 and the government's financial commitment to the fund began wavering in January.

More constraints will certainly slow them down further, which is why the US did this. Frankly, if the US wanted to hurt them even more in this area, they absolutely could, and there have been numerous good suggestions for how they could severely restrict their access to even 5nm, 7nm and 14nm processes. I suspect the reason they haven't is more down to the fact that the US doesn't want to make this an "existential" issue for China, at least at this point.

My point is, it's not like these sanctions flipped a switch, and now they'll start trying industrial espionage, or start "really for srs" trying to stop importing 90% of the value-add of their semiconductors. They've been stealing shit forever; it doesn't really put a dent in the fact that they just don't have the domestic expertise necessary to do this, or the economic or political environment to make it an achievable goal for them, even if the government makes it reeeeeally super clear to everyone that it is.

It's the same with lots of complex, high-tolerance electronics. If the US government woke up tomorrow and seriously set about the business of preventing China from accessing avionics, I doubt China would be able to build a modern airplane by themselves.

3

gay_manta_ray t1_jb7elk7 wrote

It will not put them 10 years behind. 10-year-old chips are like 22nm. SMIC is shipping 7nm chips, and they've been making 14nm for years.

1

Five_Decades t1_jb6tyfa wrote

Aside from war with China, and/or massive cuts in financial investments in AI, I don't think anything would slow it down.

1

iwasbatman t1_jb7346w wrote

Intentional slowdown. In a few years, when unemployment rises because of automation, laws will need to be drafted so that only certain percentages of work can be automated... Until the economic models are updated, I don't think there is a way around that.

1

RodgerRodger90 t1_jb76iz4 wrote

What's mad is how can it get any quicker? I can't keep up

1

Gu1l7y5p4rk t1_jb7aato wrote

>We are all getting whiplash from the breakneck speed of AI development and adoption/integration.

I'm not, and I think you're prone. ;)

1

hassan789_ t1_jb7hzjx wrote

Lack of quality information. There's a max of 12 trillion high-quality tokens for LLMs to learn from. After that, the returns could diminish (maybe 10% new quality info is added per year). Right now, the largest models are trained on about 1T tokens.
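
Rough arithmetic on those numbers (the 2x-per-year growth in training-set size below is an assumption for illustration, not a claim from the comment):

```python
high_quality_tokens = 12e12   # assumed ceiling of usable high-quality text
training_set = 1e12           # tokens used by today's largest training runs
annual_new_data = 0.10        # ~10% genuinely new quality data per year

years = 0
while training_set < high_quality_tokens:
    training_set *= 2                        # suppose data use doubles each year
    high_quality_tokens *= 1 + annual_new_data
    years += 1

print(years)   # the ceiling is hit within about five years under these assumptions
```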

1

MuseBlessed t1_jb7ngo8 wrote

Government crackdown, maybe? Or maybe the physical ability to build machines holds up, but there's an economic bottleneck where a more powerful AI stops being worth the expended resources: diminishing returns.

1

Liberty2012 t1_jb88apt wrote

As long as the hallucinations exist, it is going to fall far short of the current hype. There can be no "trusted" applications of such AIs. I expect the hallucination problem will be very difficult to solve; some are suggesting we may need different architectures entirely.

The other issue is that the bad and nefarious uses of AI are exploding and are going to be hard to contain. Hallucinations don't really hurt such cases; when you are scamming with fake information, they are not an inhibitor.

This creates an unfortunate imbalance, with far more destructive uses than we would like and no clear means to control them. This may lead to a real public disaster in terms of favorable opinions on future AI development.

Especially if the deep fakes explode during election season. AI is going to be seen as an existential crisis for truth and reason.

1

raccoon8182 t1_jb8ajjk wrote

Ironically, COVID sped up the development of AI. For once, we were in the comfort of our homes, not going into pointless meetings, and actually able to be productive.

1

Wedongfury t1_jb8kb30 wrote

I don't think anything can significantly slow down the advent of AGI; if the US regulates it, other countries won't. It's doomed to happen. Think of China and India: their standards of living are constantly rising, and at some point, if you combine China and India, for every Andrej Karpathy born in the US you will have 10 born in Asia.

1

sungokoo t1_jb8m9jf wrote

Apparently a lack of data. There's a paper, which hasn't been peer reviewed, that states we may run out of good data to train AI on by 2026.

1

SWATSgradyBABY t1_jbaeyet wrote

The govt can't regulate because the govt is controlled by the companies making the tech. You're expecting the companies involved in the race to regulate themselves. That's not a very logical expectation.

1

Rofosrofos t1_jbf7eh5 wrote

Hopefully government regulation will slow progress down to the point where we can focus on AI safety and alignment; let's get those sorted so that we don't get ourselves killed in a couple of years.

1