
drsimonz t1_j5g9533 wrote

Depends on the nature of the invention. A lot of research involves trial and error, and this is ripe for automation. A really cool example (which as far as I know doesn't involve any AI so far) is robotic biochemistry labs. If you need to test 500 different drug candidates in some complicated assay, you can just upload the experiment via web API and the next thing you know, dozens of robots come to life mixing reagents and monitoring the results. In my view, automation of any kind will continue to accelerate science for a while, even without AGI.
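To make that concrete, here's roughly what such a submission could look like. The endpoint and payload shape below are entirely made up for illustration; real cloud labs (Strateos, Emerald Cloud Lab, etc.) each define their own protocols:

```python
import requests  # assumes the `requests` library is installed

# Hypothetical cloud-lab API -- the endpoint and payload fields are invented
# for illustration; real services define their own schemas.
LAB_API = "https://cloudlab.example.com/v1/runs"

experiment = {
    "assay": "kinase-inhibition-panel",
    "candidates": [f"compound-{i:03d}" for i in range(500)],  # 500 drug candidates
    "replicates": 3,                  # repeat each condition to catch noise
    "readout": "fluorescence",
    "notify": "researcher@example.com",
}

resp = requests.post(LAB_API, json=experiment, timeout=30)
resp.raise_for_status()
print("Run queued:", resp.json()["run_id"])  # the robots take it from here
```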

I would also argue that in some narrow fields, we're already at a point where humans are totally incapable of comprehending technology that is generated by software. The obvious example is neural networks (we can understand the architecture, but not the weights). Another is the hardware description languages used for IC design. Sure, a really smart computer engineer with an electron microscope could probably reverse engineer some tiny block of a modern CPU, but mapping the entire thing, with its billions of transistors, would be nearly impossible. When we design these things, it's simply not possible without sophisticated software. Similarly, when you compile code to assembly, you might be able to understand tiny fragments of the output, but reading the entire program would take a lifetime. Without compilers and interpreters, software would still see extremely limited use in society, and we literally wouldn't be having this discussion.
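You can get a feel for the compilation point in miniature with Python's built-in `dis` module: even a one-line function expands into a stack of opcodes that most programmers never read, and a real application multiplies that by millions:

```python
import dis

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# Print the bytecode the interpreter actually executes. Even this one-liner
# becomes a sequence of load/multiply/divide/add instructions; scale that up
# to a whole program and no human is reading it end to end.
dis.dis(celsius_to_fahrenheit)
```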

Edit: forgot to say, of course AGI will be a completely different animal, since it will be able to generate new kinds of ideas whose very concepts are beyond the reach of a human brain.

9

SoylentRox t1_j5h5lxz wrote

This. And there are bigger unsolved problems that scale might help with.

For example, finding synthetic cellular growth serums. This is a massive trial-and-error effort: which molecules in bovine plasma do you actually need for full development of structures in vitro?

Growing human organs. Similarly, there is a vast amount of trial and error; you really need millions of attempts.

Even trying to do the above rationally, you need to investigate the effect of each unknown molecule in parallel. And you need a lot of experiments, not just one - you don't want to arrive at a false conclusion.
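A toy sketch of that screening logic, just to show the shape of it - the molecule names, the effect, and the threshold are all invented, and a real screen would use proper statistics and thousands of conditions:

```python
import random
import statistics

# Hypothetical candidate molecules to screen in parallel wells.
CANDIDATES = ["albumin", "transferrin", "insulin", "selenite", "factor-x"]
REPLICATES = 8  # several wells per condition so a single fluke can't mislead us

def run_well(molecule):
    """Stand-in for one robotic assay well: returns a growth score."""
    baseline = random.gauss(1.0, 0.1)
    effect = 0.4 if molecule == "insulin" else 0.0  # pretend insulin matters
    return baseline + effect

for molecule in CANDIDATES:
    scores = [run_well(molecule) for _ in range(REPLICATES)]
    mean, sd = statistics.mean(scores), statistics.stdev(scores)
    # Only flag molecules whose mean clearly exceeds baseline plus noise.
    needed = mean - 2 * sd > 1.0
    print(f"{molecule:12s} mean={mean:.2f} sd={sd:.2f} needed={needed}")
```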

Ideally I can imagine a setup where scientific papers stop being locked up in difficult-to-parse prose and are instead published in a standard machine-reproducible form. So the setup and experiment sections are links to the actual files used to configure the robotics, and the results are the unabridged raw data. The analysis is done by an AI that was prompted on what you were looking for, so there can't be accusations of cherry-picking a conclusion.
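Something like this manifest in place of prose - every field name and path below is invented, since no such standard exists yet, but a machine could re-run the experiment from it directly:

```python
import json

# Invented schema for a machine-reproducible paper.
paper = {
    "title": "Candidate serum factors for in vitro myoblast growth",
    "setup": "s3://lab-archive/exp-0042/robot_config.json",    # exact robot configuration
    "protocol": "s3://lab-archive/exp-0042/protocol.yaml",     # step-by-step actions
    "raw_data": "s3://lab-archive/exp-0042/readings.parquet",  # unabridged results
    "analysis": {
        "prompt": "Which factors increased growth rate vs. control?",
        "model": "analysis-ai-v3",  # hypothetical analysis AI
        "output": "s3://lab-archive/exp-0042/analysis.html",
    },
    "replications": [],  # filled in as other labs pick the paper up
}

print(json.dumps(paper, indent=2))
```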

And journals wouldn't accept non-replicated work. What has to happen is the paper gets 'picked up' by another lab with a different source of funding (or funding structured to reduce conflicts of interest), ideally using a different robotic software stack to turn the high-level 'setup' files into actionable steps, a different robotics AI vendor, and a different model of robotic hardware.

Each "point of heterogeneity" above has to be part of the data quality metrics for the replication, and then depending on the discovered effects you only draw reliable conclusions on high quality data.

Also, the above allows every paper to build on all prior data on a question rather than standing alone. Your prior should always be calculated from the set of all prior research, not split evenly between hypotheses.
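In Bayesian terms: instead of opening every paper with a flat 50/50 prior, you start from a prior fitted to everything already measured. A toy beta-binomial version, with invented counts:

```python
# Toy beta-binomial pooling (all counts invented for illustration).
# Pooled prior evidence: across earlier robotic screens, 3 of 40 tested
# factors actually improved growth. Start from a flat Beta(1, 1) prior.
prior_alpha, prior_beta = 1 + 3, 1 + 37

# New experiment: 2 of 10 tested factors improved growth.
post_alpha = prior_alpha + 2
post_beta = prior_beta + 8

print(f"posterior mean hit rate: {post_alpha / (post_alpha + post_beta):.2f}")
# ~0.12 - nowhere near the 0.5 you'd get by splitting evenly between hypotheses.
```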

Institutions are slow to change, but I can imagine a "new science" group of companies and institutions that uses the above, plus AGI, and surges so far ahead of everyone else in results that no one else matters.

NASA vs the Kenyan space program.

6