Submitted by GeneralZain t3_zc4py8 in singularity
Head_Ebb_5993 t1_iyyu8wr wrote
Reply to comment by SoylentRox in bit of a call back ;) by GeneralZain
How exactly is it outdated? Enlighten me, ideally with sources, because I don't think so.
Edit: also, I am rather skeptical that there are any people who work in any way with both neuroscience and AI, and from all discussions with actual people in the field I've realized that AGI isn't even taken seriously at the moment; it's just sci-fi.
In all seriousness, people write essays on why AGI is actually impossible. Even though that's a bit of an extreme position for me, it's not contrarian to the scientific consensus.
SoylentRox t1_iyyvlhg wrote
Read all of these: https://www.deepmind.com/blog
The most notable ones: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html
For an example of a third party scientist venturing an opinion on their work:
To succinctly describe what is happening:
(1) Intelligence is succeeding at a task by choosing actions that have a high probability of leading the agent to future states it values highly. We have tons and tons of simulated environments to force an agent to develop that kind of intelligence, some accurate enough to transfer immediately to the real world; see https://openai.com/blog/solving-rubiks-cube/ for an example. (There's a sketch of this action-selection loop right after this list.)
(2) Neuroscientists have known for years that the brain seems to use a similar pattern over and over; there are repeating cortical columns. So the theory is: if you find a neural-network pattern you can use again and again (one such pattern currently powers all the major results) and run it at the scale of a brain, you might get intelligence-like results, robust enough to use in the real world. And you do. (The same sketch below shows this one-block-repeated idea.)
(3) Where the explosive results are expected (basically, what we have now is neat but no nuclear fireball) is putting together (1) and (2) and a few other pieces to get recursive self-improvement. We're very close to that point. Once it's reached, we'll get agents that (a) work in the real world better than humans do and (b) are capable of a very large array of tasks, all at higher intelligence levels than humans.
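To make (1) and (2) concrete, here is a minimal sketch. The toy environment, the `ValueNet` name, and all the sizes are invented for illustration; this is nobody's actual system:

```python
import torch
import torch.nn as nn

def make_block(width: int) -> nn.Module:
    # One reusable pattern, analogous to a repeating cortical column:
    # the same block, stamped out over and over.
    return nn.Sequential(nn.Linear(width, width), nn.ReLU())

class ValueNet(nn.Module):
    def __init__(self, width: int = 32, depth: int = 8):
        super().__init__()
        # "Scale" here is just the identical block repeated `depth` times.
        self.blocks = nn.Sequential(*[make_block(width) for _ in range(depth)])
        self.head = nn.Linear(width, 1)  # predicted value of a state

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(state))

def simulate(state: torch.Tensor, action: int) -> torch.Tensor:
    # Stand-in for one step of a simulated environment (e.g. a physics sim).
    return state + 0.1 * action

def choose_action(net: ValueNet, state: torch.Tensor, actions: list) -> int:
    # (1) in code: pick the action whose predicted next state the agent
    # values most highly.
    values = [net(simulate(state, a)).item() for a in actions]
    return max(range(len(actions)), key=lambda i: values[i])

net = ValueNet()
best = choose_action(net, torch.zeros(32), actions=[-1, 0, 1])
```

`make_block` is the "one pattern, repeated" idea from (2), and `choose_action` is intelligence in the narrow sense of (1): score candidate futures, pick the best.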
Note that one of the other pieces of the nuke - the recursion part - actually has worked for years. See: https://en.wikipedia.org/wiki/Automated_machine_learning
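That recursion is less exotic than it sounds. Stripped down, AutoML is an outer loop in which one program searches over configurations of another. Here's a toy random-search version, the simplest AutoML baseline, with a synthetic objective standing in for actual training:

```python
import random

def train_and_score(config: dict) -> float:
    # Stand-in for training a model with this config and measuring
    # validation accuracy. Synthetic objective, purely for illustration.
    return 1.0 - abs(config["lr"] - 0.01) - 0.001 * abs(config["layers"] - 12)

def automl_search(trials: int = 50) -> dict:
    # The "machine learning on machine learning" outer loop: one program
    # proposing and evaluating configurations of another program.
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "lr": 10 ** random.uniform(-4, -1),  # learning rate
            "layers": random.randint(2, 24),     # depth of the network
        }
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config

print(automl_search())
```

Real AutoML systems use smarter search (Bayesian optimization, evolution, learned proposers), but the recursive structure is exactly this.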
To summarize: AI systems that work broadly, across many problems, and work well, without needing large amounts of human software-engineer time per problem, are possible very soon. The ingredients are already-demonstrated techniques plus, of course, stupendous amounts of compute (easily hundreds of millions of dollars' worth) to find the architecture for such a system.
Umm, to answer your other part, "how can this work if we don't know what intelligence is": well, we do know what it is, in a general sense. What we mean is: we simulate the tasks we want the agent to do, including tasks we give the agent no practice on, where it has to use skills learned on other tasks and follow written instructions describing the goal. Any machine that does well on that benchmark of intelligence is intelligent, and we don't actually care how it accomplishes it.
Does it have internal thoughts or emotions like we do? We don't give a shit; it just needs to do its tasks well.
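To spell out how behavioral that test is, here's a cartoon version of such a benchmark. The task format, the agent, and the scoring are all made up for illustration, not any real benchmark:

```python
# Toy harness: the agent never trains on the held-out tasks; it only
# sees the written instructions at test time.

def evaluate(agent, held_out_tasks) -> float:
    scores = []
    for task in held_out_tasks:
        answer = agent(task["instructions"], task["input"])
        scores.append(1.0 if answer == task["expected"] else 0.0)
    return sum(scores) / len(scores)

def dumb_agent(instructions: str, task_input):
    # Placeholder agent; a generally intelligent one would follow the
    # instructions using skills learned on *other* tasks.
    if "reverse" in instructions:
        return task_input[::-1]
    return task_input

tasks = [
    {"instructions": "reverse the string", "input": "abc", "expected": "cba"},
    {"instructions": "echo the string", "input": "xyz", "expected": "xyz"},
]
print(evaluate(dumb_agent, tasks))  # scores behavior only, not inner life
```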
SoylentRox t1_iyywjbs wrote
>Edit: also, I am rather skeptical that there are any people who work in any way with both neuroscience and AI, and from all discussions with actual people in the field I've realized that AGI isn't even taken seriously at the moment; it's just sci-fi.
>
>In all seriousness, people write essays on why AGI is actually impossible. Even though that's a bit of an extreme position for me, it's not contrarian to the scientific consensus.
? So... DeepMind and the AI companies aren't real? What scientific consensus? All the people with the highest credentials in the field are generally already working in machine learning; those AI companies pay $1 million+ a year in total compensation (TC) for the higher-end scientists.
Arguably, the ones who aren't worth $1M+ are not really qualified to be skeptics, and the ones I do know of, like Gary Marcus, keep getting proven wrong within weeks.
Head_Ebb_5993 t1_iyyy2f1 wrote
But that's an obvious straw man. I wasn't, and we weren't, talking about AI but about AGI. Just because there's money somewhere in the AI industry doesn't imply that the concept of AGI is valid and will arrive in a few years.
PhDs with $1 million+ salaries, or what? That seems like the biggest BS I've ever heard.
And you can be a skeptic no matter your salary, if you have expertise in the field. I don't understand how your salary is in any way relevant to your critique.
You really seem to treat this as a religion and not science.
I will look at your sources, maybe tomorrow, because I am going to sleep, but just from skimming I am already skeptical.
SoylentRox t1_iyyz3vi wrote
>But that's an obvious straw man. I wasn't, and we weren't, talking about AI but about AGI.
The first proto AGI was demonstrated a few months ago.
https://www.deepmind.com/publications/a-generalist-agent
Scale it up to 300k tasks and that's an AGI.
I am saying that if industry doesn't think someone is credible enough to offer the standard $1 million TC pay package for a PhD in AI, I don't think they are credible at all. That's not unreasonable.