Comments


Science_is_Greatness t1_j6hkftm wrote

Most of the code it produces (anything longer than a few lines, or requiring some originality/customization for the situation at hand) is unusable without a trained software engineer actively checking and tweaking it. And code that depends on newer frameworks or library versions is also useless, since those postdate the model's training data. Self-programming AI is coming, but it is not here at present.

7

DrifterInKorea t1_j6hp8sa wrote

Yes, in some sense: a deep understanding of a language, be it human or computer, is a prerequisite for building and optimizing logical blocks, as in programming.

Human languages are far more complex than computer languages and carry far more context, requiring the reader to remember a lot of the text to keep track of the subject.

ChatGPT is really good at discerning what the subject and the intent are, and also good at finding related topics.

I would not be surprised if it were revealed that they use similar models to develop, debug, or improve their existing code base.

ChatGPT also makes a lot of mistakes, which most likely prevents it from improving without supervision and tweaking by actual developers. But we are definitely heading further and further into what was described as science fiction just a few years ago.

2

dlrace t1_j6hsgnj wrote

This is the key thing for 'accelerating returns': recursive AI acting like compounding interest on a bank account. I'm not sure we've seen anything like that in a clear-cut way just yet.
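
To make the compounding analogy concrete, here's a toy sketch in Python; the 5% per-cycle gain is an invented number, not a measurement. A system that improves itself by a fixed fraction each cycle grows exponentially, exactly like interest on an account:

```python
# Toy model of "compounding" self-improvement. All numbers are
# hypothetical; this only illustrates the shape of the curve.

def capability_after(cycles: int, start: float = 1.0, gain: float = 0.05) -> float:
    """Capability after `cycles` rounds, each multiplying it by (1 + gain)."""
    return start * (1 + gain) ** cycles

for n in (0, 10, 50, 100):
    print(f"cycle {n:3d}: capability = {capability_after(n):8.2f}")

# cycle   0: capability =     1.00
# cycle  10: capability =     1.63
# cycle  50: capability =    11.47
# cycle 100: capability =   131.50
```

Of course, real systems don't hand you a constant gain per cycle, which is exactly why we haven't seen this curve yet.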

2

andresni t1_j6jwwq5 wrote

If you look at how much energy and computational resources it took to build ChatGPT, it's pretty obvious that even if ChatGPT-4 (or whichever version) could in principle bootstrap itself into the intelligence stratosphere, it wouldn't have the resources to do so. Nor do we have that kind of resource sitting around unused for the AI to tap into without our explicit knowledge and consent.

And even if we dedicated resources to it, the next iteration would demand even more. The time it takes us to build the supercomputers, gather the data, and provide the requisite energy is measured in months, if not years. A self-improving AI can't improve faster than we can allocate resources to it.
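
As a back-of-the-envelope sketch of that bottleneck (every figure below is invented purely for illustration): if each generation needs, say, 10x the compute of its predecessor while we can only add a fixed amount of compute per year, demand outruns supply almost immediately.

```python
# Hypothetical illustration of exponential compute demand vs. linear
# supply. None of these figures are real; they only show the mismatch
# in growth rates described above.

demand = 1.0  # compute needed to train the next generation (arbitrary units)
supply = 1.0  # compute we can actually allocate (arbitrary units)

for year in range(1, 6):
    demand *= 10   # assumption: each generation needs 10x its predecessor
    supply += 2.0  # assumption: we add a fixed 2 units of compute per year
    print(f"year {year}: demand = {demand:>9,.0f}, supply = {supply:.0f}, "
          f"gap = {demand / supply:,.0f}x")
```

Under those made-up assumptions the gap grows roughly tenfold per year: allocation, not cleverness, becomes the rate limiter.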

Unless, of course, it manages to tap into all our phones, computers, gaming consoles, servers, and the like. That might give it the juice it needs. The question is, could it even do so? How smart would it have to be to pull that off without our collective consent and collaboration?

2

journalingfilesystem t1_j6ktlz2 wrote

Software engineering is only partially about writing code. Computer science isn't really about writing code at all. We need breakthroughs in both to get closer to AGI. AFAIK, GPT and its ilk haven't been able to do anything original in either of those areas.

2

bce69 t1_j6n2xgb wrote

Software is only part of the equation; hardware is the other. AI will improve one and then the other, back and forth, until it reaches some kind of equilibrium or plateau.

2