Submitted by sigul77 t3_10gxx47 in Futurology
feelingbutter t1_j559vus wrote
I really wish we would stop using the term Singularity. It's an overloaded term that has lost all meaning IMHO. Projecting a trend without discussing the underlying conditions that affect it isn't very useful.
Key-Passenger-2020 t1_j55bn1t wrote
Yeah like, the first question I have here is "what is the scientific definition of Singularity used by this study?"
MarkNutt25 t1_j57hq42 wrote
Does the study actually use the term "Singularity?" Or was that an addition from the journalist who wrote this news article?
Surur t1_j55dlw4 wrote
They suggested that the improvement seems almost independent of the underlying technology, much like Moore's Law does not appear to depend on any specific technology.
> Our initial hypothesis to explain the surprisingly consistent linearity in the trend is that every unit of progress toward closing the quality gap requires exponentially more resources than the previous unit, and we accordingly deploy those resources: computing power (doubling every two years), data availability (the number of words translated increases at a compound annual growth rate of 6.2% according to Nimdzi Insights), and machine learning algorithms’ efficiency (computation needed for training, 44x improvement from 2012-2019, according to OpenAI).
> Another surprising aspect of the trend is how smoothly it progresses. We expected drops in TTE with every introduction of a new major model, from statistical MT to RNN-based architectures to the Transformer and Adaptive Transformer. The impact of introducing each new model has likely been distributed over time because translators were free to adopt the upgrades when they wanted.
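To make that concrete, here's a toy illustration of the argument (my own made-up numbers, not figures from the paper): if each unit of quality improvement costs exponentially more resources, quality grows like the log of the resources, and exponentially growing resources then give you a straight line over time.

```python
import numpy as np

# Toy illustration (invented numbers, not from the study):
# each unit of quality improvement needs exponentially more resources,
# so quality ~ log(resources). With resources doubling every 2 years,
# quality then grows linearly in time.
years = np.arange(2012, 2028)
resources = 2.0 ** ((years - 2012) / 2)   # doubling every two years
quality = np.log2(resources)              # each quality unit costs 2x the resources

for y, q in zip(years, quality):
    print(y, round(q, 2))                 # prints a straight line: 0.0, 0.5, 1.0, ...
```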
LeviathanGank t1_j56o78t wrote
eli5? plz i need to sleep but im interested
Surur t1_j56p80y wrote
They have noticed that text that has been machine translated gets more and more accurate over time, in what appears to be a very linear and predictable manner.
They predict perfect human-level translation by 2027 based on that, and believe that an AI that can translate as well as a human will presumably know as much about the world as a human.
Their explanation of the smooth linear improvement is that the underlying forces are also constantly improving (computing power, AI tools, training data).
It suggests there is a kind of inevitability to the conditions being right for human-level AI in the near future.
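If you want a feel for how that kind of extrapolation works, here's a rough sketch with invented numbers (not the study's actual time-to-edit data): fit a straight line to the quality-gap measurements and see where it hits zero.

```python
import numpy as np

# Invented example data, NOT the study's real TTE measurements:
# time-to-edit (seconds per word) falling roughly linearly over the years.
years = np.array([2015, 2017, 2019, 2021, 2023])
tte   = np.array([3.2, 2.7, 2.2, 1.7, 1.2])

slope, intercept = np.polyfit(years, tte, 1)   # fit a straight line
year_zero_gap = -intercept / slope             # year where the fitted line hits zero
print(round(year_zero_gap, 1))                 # ~2027.8 with these made-up numbers
```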
fwubglubbel t1_j56y0lo wrote
>believe that an AI that can translate as well as a human will presumably know as much about the world as a human.
This sounds like nonsense. Just because a machine can translate doesn't mean it "knows" anything. (see Searle's Chinese Room)
currentpattern t1_j575ja4 wrote
In this case it would be nonsense to posit that "AGI" means a system that "understands/knows" language. What these projections seem to be saying is that around 2027, we're likely to have systems that are just as capable as humans at utilizing language. I.e., Chinese Rooms that are indistinguishable from humans in regards to language use.
BitterAd9531 t1_j57z7zd wrote
The Chinese Room is once again one of those thought experiments that sounds really good in theory but has no practical use whatsoever. It doesn't matter whether the AI "understands" or not if you can no longer tell the difference.
It's similar to the "feeling emotions vs emulating emotions" or "being conscious vs acting conscious" discussion. As long as we don't have a proper definition for them, much less a way to test them, the difference doesn't matter in practice.
Surur t1_j57gd78 wrote
> Just because a machine can translate doesn't mean in "knows" anything
You could say the same thing of a translator then. Do they really "know" a language or are they just parroting the rules and vocabulary they learnt?
songstar13 t1_j57nunw wrote
You can ask a translator a question about the world and if they have knowledge on that topic then they can answer you with certainty.
Current GPT models are basically super-powered predictive text bots that answer questions. It would be like trying to answer a question using the suggested words on your phone keyboard, but far more sophisticated.
They are fully capable of lying to you or giving inconsistent answers to the question because they don't "know" anything other than patterns of word association and grammar rules.
At least, this was my understanding of them fairly recently. Please correct me if that has changed.
Surur t1_j57w5tj wrote
I imagine you understand that LLMs are a bit more sophisticated than Markov chains, and that GPT-3, for example, has 175 billion parameters, roughly corresponding to the connections between neurons in a brain, and that the weights of those connections influence which word the system outputs.
These weights allow the LLM to see the connections between words and understand the concepts much like you do. Sure, they do not have a visual or intrinsic physical understanding, but they do have clusters of 'neurons' which activate for both animal and cat, for example.
In short, a Markov chain uses a look-up table to predict the next word, while an LLM uses a multi-layer (96-layer, in GPT-3's case) neural network with 175 billion parameters tuned on nearly all the text on the internet to choose its next word.
Just because it confabulates sometimes does not mean it's all smoke and mirrors.
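Here's a toy sketch of the difference I mean (purely illustrative, nothing like GPT-3's real code): the Markov chain literally looks the previous word up in a table, while the LLM pushes the whole context through many layers of learned weights.

```python
import random
from collections import defaultdict

# Toy contrast (illustrative only):
# a Markov chain literally looks the previous word up in a table...
bigram_table = defaultdict(list)
corpus = "the cat sat on the mat the cat ate".split()
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_table[prev].append(nxt)

print(random.choice(bigram_table["cat"]))  # next word chosen from a look-up table

# ...while an LLM runs the whole context through many layers of learned
# weights and scores every word in its vocabulary (sketch, not real model code):
#   logits = transformer_layers(embed(context))   # 96 layers, billions of weights
#   next_word = vocab[argmax(logits)]
```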
songstar13 t1_j58shuh wrote
Thank you for the more detailed explanation! I was definitely underestimating how much more complex some of these AI models have become.
stupidcasey t1_j57jn7v wrote
Yeah, especially if this article is correct and machine intelligence increases linearly, there won't be a "Singularity", just the slow obsolescence of humanity. And if we truly are reaching a hard limit on silicon, we may never even see that.
dehehn t1_j58pmoj wrote
There is quantum computing to potentially get us past that limit. And distributed cloud computing means we're no longer limited by the local computing capacity of a single machine in a small confined space.
And the fact that increasing sophistication of software doesn't necessarily require constant increases in computing power to get better results. Our brains aren't that large and have general intelligence and consciousness.
I don't necessarily agree with the conclusions of the article's premise, but I don't see us hitting a brick wall in progress soon.
stupidcasey t1_j58qtc1 wrote
Well, if AI takes exponential growth in processing to maintain linear growth in utility, like the article proposes, the amount of processing power on Earth will quickly not be enough without exponentially more transistors. That's just math.
As for quantum computing, quantum supremacy has yet to be demonstrated. Also, so far, increasing the number of qubits in a quantum computer looks like it's going to get exponentially more difficult as the count grows, nullifying any gain you get from quantum computing. That is definitely not certain, but all of it is to say the "Singularity™" is also definitely not certain.