Submitted by Malachiian t3_12348jj in Futurology
jetro30087 t1_jdu9slz wrote
Reply to comment by Silver_Ad_6874 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
How's that different from any Star Trek episode where a crew member goes to the holodeck and instructs the Enterprise's computer to build a program?
It's not inventing a program, it's completing a command using the information stored in its programming, according to the rules set by its programming. It codes because it's trained on terabytes of code that performs tasks. When you ask for code that does a task, it's just retrieving that information and altering it somewhat based on the rules that dictate its response. Unlike humans, however, it's not compelled to design a program that does anything without being prompted.
Silver_Ad_6874 t1_jduf07p wrote
The difference is emergent behaviour. If a sufficiently complex, self-adapting structure can modify itself to perform more than it was trained for, the outcome is unknown. Unknown outcomes scare people.