Submitted by Particular_Leader_16 t3_xwow19 in singularity
matt_flux t1_irak5er wrote
Reply to comment by 3Quondam6extanT9 in The last few weeks have been truly jaw dropping. by Particular_Leader_16
Fair enough, I share the same view. Often manually setting up automation is more practical than AI, though.
3Quondam6extanT9 t1_irando4 wrote
We currently automate most systems through manual setup, so I can only assume this will continue until AI has developed enough to self-program, at least at a limited scale.
matt_flux t1_irat0b5 wrote
Pure speculation. How would the AI know whether it had improved or worsened its code? Human reports? If that's the case, it will perform no better than humans do.
3Quondam6extanT9 t1_iravwfo wrote
You're right, it is speculation, and initially it would likely be no better than human influence.
However, limited self-improvement could itself be written into code that, at the very least, is given parameters to analyze options and choose the better one.
The AI behind deepfakes, image generation, and now video generation essentially takes different variables and applies them to an outcome through a set of instructions.
So it wouldn't be beyond the realm of possibility to program a system that chooses among a small set of options, with the understanding that each outcome carries some measurable improvement.
That improvement could be the speed at which it calculates projections, or the growth of its database.
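As a rough illustration of what I mean (purely hypothetical: the `benchmark`, `make_variant`, and `step_size` names are mine, not any real system's), a loop can propose a small change to its own configuration, score each candidate against a fixed, measurable objective, and keep the change only when the number improves, no human report needed:

```python
import random
import time

def benchmark(candidate):
    """Hypothetical objective: time how long the candidate takes to
    compute a fixed projection, so lower is better. A real system
    would average repeated runs to reduce timing noise."""
    start = time.perf_counter()
    candidate(100_000)
    return time.perf_counter() - start

def make_variant(step_size):
    """Return a projection function parameterized by step_size.
    A coarser step is faster but less precise."""
    def project(n):
        total = 0.0
        for i in range(0, n, step_size):
            total += i * 0.001
        return total
    return project

# Hand-held "self-improvement": propose a small mutation of the
# current configuration and keep it only if the benchmark improves.
current_step = 1
current_score = benchmark(make_variant(current_step))
for _ in range(20):
    proposed_step = max(1, current_step + random.choice([-1, 1]))
    proposed_score = benchmark(make_variant(proposed_step))
    if proposed_score < current_score:  # measurable improvement
        current_step, current_score = proposed_step, proposed_score

print(f"chosen step size: {current_step}, time: {current_score:.6f}s")
```

The point isn't the toy loop itself but the design choice: the feedback signal is a number the system can measure on its own, which is exactly what the "human reports" objection assumes is missing.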
Call it hand-held self-improvement to begin with. I'd like to think that, over time, one could "speculate" that an increasingly complex system is capable of operating under these very limited conditions.