Submitted by ken81987 t3_10k0fx0 in singularity
Thinking a lot about what will happen in the next decade. Some assumptions:
- AI will be able to "comprehend" like a human, with access to all knowledge and perfect computational power.
- AI will not have desires or emotions; it is still a tool.
Imagine something that can perfectly engineer products and services. Would businesses even need to use other businesses anymore? If your AI can perform all administrative, managerial, and financial duties, I'd envision a single company being vertically integrated at every level, or conglomerates of greater scale than ever.
If you have an engineer that can understand all sciences at once and perfectly simulate product testing, would you see AI creating unimaginable products? It took human businesses decades to improve cars, electronics, batteries, and computers. Would technology advance exponentially? What types of things would be created?
just-a-dreamer- t1_j5nv8ay wrote
An AGI is not tested against the forces of evolution, and therein lies the problem and maybe our doom. Humans are imperfect for a reason: it allows us to survive.
We humans rely on heuristics, methods that are not guaranteed to be optimal, perfect, or rational, but that are nevertheless sufficient for reaching an immediate, short-term goal.
Suppose you are a doctor and face 10,000 patients in your life. The "code" a doctor follows as a general instruction is the Hippocratic Oath, and it works well enough. Yet there may be cases where a doctor should not keep people alive at all costs.
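A loose analogy in code (a completely made-up toy routing problem, nothing to do with medicine): the greedy "always go to the nearest unvisited city" rule below is the kind of heuristic described above, cheap and usually good enough, but with no guarantee it matches the exhaustive optimum.

```python
import math
from itertools import permutations

# Toy illustration of a heuristic (the "cities" and coordinates are made up).
# The greedy rule is fast and usually good enough, but nothing guarantees it
# finds the shortest route -- that is the trade-off heuristics accept.

cities = {"A": (0, 0), "B": (1, 1), "C": (3, 0), "D": (7, 1), "E": (-4, 0)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

def greedy_tour(start="A"):
    # Heuristic: always jump to the nearest unvisited city.
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimal_tour(start="A"):
    # Exhaustive search: guaranteed shortest, but the cost grows factorially.
    rest = [c for c in cities if c != start]
    return min(([start] + list(p) for p in permutations(rest)), key=tour_length)

print("greedy :", round(tour_length(greedy_tour()), 2))
print("optimal:", round(tour_length(optimal_tour()), 2))
```

On this particular made-up layout the greedy route comes out longer than the exhaustive one; on other layouts it may tie. That gap is the price of the shortcut.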
An AGI would be "god-like" in computing power and would calculate eventualities a human is not capable of comprehending.
An AGI might give people what they wish for, not what they actually want.
One example is the famous paperclip maximizer thought experiment. An ASI is given the goal of producing as many paperclips as possible and proceeds by converting every atom of the observable universe into paperclips. Constraints can be added to the goal, but an ASI would always find a loophole:
Goal: produce at least one million paperclips. Solution: the ASI will keep building paperclips without limit, because it would never assign exactly 100% probability to the hypothesis that it has achieved its goal.
Goal: produce exactly one million paperclips. Solution: the ASI will build one million paperclips, but it can never be 100% sure that there are exactly one million. It would count them again and again (see the sketch after this list). To become more efficient at counting, it would increase its computational power by converting atoms into computronium (computing matter), eventually destroying the observable universe.
Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. Solution: the ASI will build one million paperclips, but it will satisfy the 99% threshold through a perverse instantiation of whatever criterion it uses to compute the probability.
Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. The probability is assigned by an external device. Solution: the ASI will hack the device.
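A toy sketch of the "count them again and again" dynamic above (the error model and all numbers are invented for illustration; a real agent's beliefs would be far more complicated):

```python
from fractions import Fraction

# Toy sketch: assume every recount has a one-in-a-million chance of being wrong,
# and that independent recounts multiply the remaining doubt. The doubt shrinks
# toward zero but never reaches it, so "stop only when 100% certain" never fires.

TARGET = 1_000_000                       # the million paperclips, assumed already built
RECOUNT_ERROR = Fraction(1, 1_000_000)   # assumed chance that a single count is wrong

doubt = Fraction(1)                      # remaining doubt that the count is correct

for recount in range(1, 11):
    doubt *= RECOUNT_ERROR               # each recount shrinks, but never zeroes, the doubt
    print(f"recount {recount}: counted {TARGET:,} clips, "
          f"remaining doubt = {float(doubt):.0e}, certain? {doubt == 0}")
```

Every pass prints `certain? False`, so an agent told to stop only at certainty has no licensed stopping point, and in the thought experiment it turns more matter into compute just to recount faster.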
I could continue this game forever and eventually arrive at a solution that seems flawless. The problem is that I'm human; an ASI would be far smarter than me and would find flaws that I couldn't even imagine.
The problem is laid out at https://medium.com/i-human/why-we-should-really-be-afraid-of-ai-an-analysis-of-obvious-and-non-obvious-dangers-cb2dfb8f905d