
GoldenRain t1_j5fx0wy wrote

If we want effective automation or to make general human tasks faster, we certainly do not need AGI.

If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI. If we want technology human intelligence is unable to comprehend, we need ASI. The step between those two is likely quite short.

26

drsimonz t1_j5g9533 wrote

Depends on the nature of the invention. A lot of research involves trial and error, and this is ripe for automation. A really cool example (which as far as I know doesn't involve any AI so far) is robotic biochemistry labs. If you need to test 500 different drug candidates in some complicated assay, you can just upload the experiment via web API and the next thing you know, dozens of robots come to life mixing reagents and monitoring the results. In my view, automation of any kind will continue to accelerate science for a while, even without AGI.
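
A rough sketch of what submitting such a run "via web API" might look like - the endpoint, payload fields, and protocol name here are all made up for illustration, not any real vendor's API:

```python
import requests

# Hypothetical cloud-lab endpoint; real providers each have their own API.
API_URL = "https://cloudlab.example.com/v1/experiments"

# Describe an assay over many drug candidates as plain data.
experiment = {
    "protocol": "kinase-inhibition-assay-v2",  # assumed protocol name
    "candidates": [f"compound-{i:03d}" for i in range(500)],
    "replicates": 3,
    "readout": {"type": "fluorescence", "interval_minutes": 15},
}

# Submit the run; the robots pick it up from the queue.
resp = requests.post(API_URL, json=experiment, timeout=30)
resp.raise_for_status()
run_id = resp.json()["run_id"]
print(f"Queued run {run_id}; results will stream back as they come in.")
```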

I would also argue that in some narrow fields, we're already at a point where humans are totally incapable of comprehending technology that is generated by software. The obvious example is neural networks (we can understand the architecture, but not the weights). Another would be the hardware description languages used for IC design. Sure, a really smart computer engineer with an electron microscope could probably reverse engineer some tiny block of a modern CPU, but it would be nearly impossible to map the entire thing; they have billions of transistors. When we design these things, it's simply not possible without the use of sophisticated software. Similarly, when you compile code to assembly, you might be able to understand tiny fragments of the assembly, but the entire program would take a lifetime to get through. Without compilers and interpreters, software would still see extremely limited use in society, and we literally wouldn't be having this discussion.
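
To make the neural-network point concrete, here's a toy sketch (PyTorch): the architecture is a few readable lines, but what the network "knows" is just thousands of raw floats, and production models push that into the billions:

```python
import torch.nn as nn

# A deliberately tiny network: two dense layers.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

total = sum(p.numel() for p in model.parameters())
print(f"{total} trainable parameters")  # 9,610 for this toy model

# The "knowledge" is just raw floats; nothing here maps to a human concept.
print(model[0].weight[0, :5])  # e.g. tensor([ 0.021, -0.084, ...])
```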

Edit: forgot to say, of course AGI will be a completely different animal, since it will be able to generate new kinds of ideas whose very concepts are beyond the reach of a human brain.

9

SoylentRox t1_j5h5lxz wrote

This. And there are bigger unsolved problems that scale might help with.

For example, finding synthetic cellular growth serums. This is a massive trial-and-error effort - which molecules in bovine plasma do you actually need for full development of structures in vitro?

Growing human organs. Similarly, there is a vast amount of trial and error; you really need millions of attempts.

Even trying to do the above rationally, you need to investigate the effect of each unknown molecule in parallel. And you need a lot of experiments, not just one - you don't want to draw a false conclusion.
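
A crude sketch of what that kind of parallel screening plan might look like - the candidate molecules, condition design, and replicate count are invented purely for illustration:

```python
from itertools import combinations

# Hypothetical candidate factors: plasma components we suspect matter.
candidates = ["IGF-1", "transferrin", "insulin", "selenite", "FGF-2"]
replicates = 5  # repeat every condition so one run can't mislead us

conditions = []
# Leave-one-out: drop each single factor to see what breaks without it...
for dropped in candidates:
    conditions.append([c for c in candidates if c != dropped])
# ...and also test every pair alone, to catch factors that only work together.
conditions.extend(list(pair) for pair in combinations(candidates, 2))

runs = [(i, cond, r) for i, cond in enumerate(conditions) for r in range(replicates)]
print(f"{len(conditions)} conditions x {replicates} replicates = {len(runs)} robot runs")
```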

Ideally I can imagine a setup where scientific papers stop being locked up in difficult-to-parse prose and are instead published in a standard, machine-reproducible form. So the setup and experiment sections are a link to the actual files used to configure the robotics, and the results are the unabridged raw data. The analysis is done by an AI that was prompted on what you were looking for, so there can't be accusations of cherry-picking a conclusion.
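
A sketch of what such a machine-reproducible "paper" could look like as plain structured data - every field name and URL below is invented for illustration:

```python
# A machine-reproducible "paper": every section points to artifacts that a
# robot or script can consume directly. All fields and URLs are invented.
paper = {
    "title": "Effect of compound-042 on fibroblast growth",
    "setup": {
        # The exact configuration files the lab robots executed.
        "robot_config": "https://lab.example.org/runs/8841/config.tar.gz",
        "protocol_version": "kinase-inhibition-assay-v2",
    },
    "results": {
        # Unabridged raw instrument output, not curated figures.
        "raw_data": "https://lab.example.org/runs/8841/raw.parquet",
    },
    "analysis": {
        # The literal prompt given to the analysis model, so readers can see
        # exactly what question was asked - no room for post-hoc cherry-picking.
        "prompt": "Estimate dose-response for compound-042 vs. control.",
        "model": "analysis-ai-v3",  # hypothetical
        "conclusion_effect_size": 0.31,
    },
    "replications": [],  # filled in as other labs pick the paper up
}
```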

And journals don't accept non-replicated work. What has to happen is that the paper gets 'picked up' by another lab with a different source of funding (or with the funding structured in a way that reduces conflicts of interest), ideally using a different robotic software stack to turn the high-level 'setup' files into actionable steps, a different robotics-AI vendor, and a different model of robotic hardware.

Each "point of heterogeneity" above has to be part of the data quality metrics for the replication, and then depending on the discovered effects you only draw reliable conclusions on high quality data.

Also, the above allows every paper to draw on all prior data on a topic rather than standing alone. Your prior should always be calculated from the set of all prior research, not split evenly between hypotheses.
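
A minimal sketch of that "prior from all prior research" idea: instead of starting each new paper at 50/50, pool the earlier studies (here by simple precision weighting) and start from that. The effect sizes are invented:

```python
# Instead of a flat 50/50 prior, start from a precision-weighted pool of
# every earlier study on the same question. Effect sizes and SEs are invented.
prior_studies = [
    {"effect": 0.40, "se": 0.20},
    {"effect": 0.10, "se": 0.10},
    {"effect": 0.25, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in prior_studies]  # precision = 1 / variance
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, prior_studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"prior for the new study: {pooled_effect:.3f} +/- {pooled_se:.3f}")
```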

Institutions are slow to change, but I can imagine a "new science" group of companies and institutions that use the above, plus AGI, and surge so far ahead of everyone else in results that no one else matters.

NASA vs the Kenyan space program.

6

User1539 t1_j5grqw0 wrote

> If we want effective automation or to make general human tasks faster, we certainly do not need AGI.

Agreed. We're very, very close to this now, and likely very far away from AGI.

> If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI.

This is where we disagree. I have many contacts at universities, and most of my friends have a PhD and participate in some kind of research.

In their work, they were evaluating Watson (IBM's LLM-style AI) years ago, and talking about how it would help them.

Having a PhD necessarily means having tunnel vision. You will do research that makes you the single person on earth who knows about the one cell you study, or the one protein you've been working with.

Right now, the condition of science is that we have all these researchers writing papers to help other scientists gain wider knowledge of things they couldn't possibly dedicate time to.

It's still nowhere near wide enough. PhDs aren't able to easily work outside their field, and the result is that their research needs to go through several levels of simplification before someone can find a use for it, or see how it affects their own research.

A well-trained LLM can tear down those walls between different fields. Suddenly, you've got an infinitely patient, infinitely knowledgeable assistant. It can write code for you. You can ask it what effect your protein might have on a new material, without having to become, or know, a materials scientist.
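
As a concrete sketch of that kind of cross-field question, using the OpenAI Python client as one stand-in for "an LLM assistant" (the model name and the question itself are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# A protein researcher asking a materials-science question in plain English,
# without needing to know (or become) a materials scientist.
question = (
    "I work with a collagen-binding protein. If it were adsorbed onto a "
    "graphene oxide surface, what interactions should I expect, and what "
    "would I need to measure to check?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model works for the sketch
    messages=[
        {"role": "system", "content": "You are a patient cross-disciplinary research assistant."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```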

Everyone having a 'smart' assistant that can offer an expert-level understanding of EVERY FIELD will bridge the gaps between the highly specialized geniuses of our time.

Working with the sort of AI we have now will take us to an entirely new level.

9

Baturinsky t1_j5iq32y wrote

And how safe is it to put those tools into the hands of, among others, criminals and terrorists?

1

User1539 t1_j5jjimd wrote

The same argument has been made about Google, and it's a real concern. Some moron killed his wife a week or so ago, and the headline read 'Suspect's Google history included "How to hide a 140lb body"'.

So, yeah. It's already a problem.

Right now we deal with it by having Google keep records and hoping criminals who google shit like that are just too stupid to use a VPN or anonymous internet.

Again, we don't need AGI to have that problem. It's already here.

That's the whole point of my comment. We need to stop waiting for AGI before we start treating these systems as capable of existential change for the human race.

1

Baturinsky t1_j5jl5nt wrote

I agree, a human + AI working together is already an AGI, with the only limitation being that the human part doesn't scale. And it can be extremely dangerous if the AI part is very powerful and both are non-aligned with fundamental human values.

1

Artanthos t1_j5j1tiy wrote

We already have that.

Machine Learning algorithms are already making advances in mathematics and medicine.

1