Submitted by ItsTimeToFinishThis t3_1019dd1 in singularity
ItsTimeToFinishThis OP t1_j2mgc06 wrote
Reply to comment by Ortus14 in Why can artificial intelligences currently only learn one type of thing? by ItsTimeToFinishThis
If so, why then is the creation of an agi treated as a mystery?
Ortus14 t1_j2nlbxk wrote
Click-bait articles and the human desire to feel special. Reading books and papers by those in the field, and by those who dedicate their lives to studying it, will give you a clearer perspective.
It's predicated on a semantic labeling mistake: labeling intelligences as either "narrow" or "general," when in reality all intelligences fall on a spectrum of how broad a range of problem domains they can solve. Humans are not fully general problem solvers but lie somewhere on this spectrum, as do all other animal species and synthetic intelligences.
Compute costs predictably diminish over time due to the compounding effect of multiple interacting exponential curves: decreasing solar (energy) costs, decreasing AI hardware costs (now advancing more rapidly than gaming hardware), exponential growth in available compute (each new supercomputer delivers far more compute than the last), and decreasing software implementation costs (improving AI software libraries and ease of use). As a result, the computation space available to AIs increases at an exponential rate.
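To make the compounding point concrete: independent exponential trends multiply, so the effective yearly growth of compute-per-dollar is the product of the individual factors. The rates below are made-up illustrative numbers, not figures from the comment.

```python
# Illustrative sketch with hypothetical rates (not real data):
# several factors each improving exponentially compound multiplicatively.
energy_rate = 0.15      # hypothetical: energy gets 15% cheaper per year
hardware_rate = 0.35    # hypothetical: AI hardware improves 35% per year
software_rate = 0.10    # hypothetical: software efficiency gains 10% per year

# Combined yearly improvement is the product of the individual factors.
combined = (1 + energy_rate) * (1 + hardware_rate) * (1 + software_rate)

years = 10
growth_over_decade = combined ** years

print(f"combined yearly factor: {combined:.2f}x")
print(f"growth over {years} years: {growth_over_decade:.0f}x")
```

Even with these modest individual rates, the combined curve compounds to a few hundredfold per decade, which is why focusing on any single trend understates the total.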
As this computation space grows, there is room for intelligences capable of solving a wider and wider range of problems. We already have algorithms covering the full range of this spectrum, including an algorithm for perfect general intelligence (far more general than humans) that would require extremely high levels of compute. These algorithms are being improved and refined, but they already exist; much of what we are doing now is building refined implementations of decades-old algorithms, now that the compute space is available.
What the general public often misses is that this compute space is growing exponentially (sometimes they miss it by hyper-focusing on a single contributing factor, such as the slowdown of Moore's law, while missing the bigger picture), and that AI researchers have already effectively replicated human vision, which accounts for roughly 20% of our compute space. When available compute increases by more than a thousandfold per decade, it's easy to see that humans are about to be dwarfed by the cognitive capacity of our creations.
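A quick sanity check on what "a thousandfold a decade" implies, assuming steady exponential growth: it works out to a yearly factor of 1000^(1/10), i.e. roughly doubling every year.

```python
# A thousandfold growth over 10 years, assuming a constant yearly factor f:
# f ** 10 = 1000, so f = 1000 ** (1 / 10), which is about 2x per year.
yearly_factor = 1000 ** (1 / 10)
print(f"implied yearly growth factor: {yearly_factor:.3f}x")
```

So the thousandfold-per-decade claim is equivalent to annual doubling, which is why small-looking yearly gains compound into huge differences over ten years.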