Submitted by rretaemer1 t3_10yz6uq in Futurology
Bewaretheicespiders t1_j80mm5w wrote
I work in AI. Pretty much all of AI is open sourced, and the research is open too. Google's deep learning framework, TensorFlow, is free and open source. Same with Meta's (IMO superior) PyTorch. It's in large part because these two frameworks are open source that AI is currently thriving. They all publish their innovations too.
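For anyone curious, getting started costs nothing but a `pip install torch` (or `pip install tensorflow`). A minimal PyTorch sketch, purely illustrative and not any particular real model:

```python
import torch
import torch.nn as nn

# A tiny two-layer network: the whole stack here is free and open source.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(8, 10)   # a batch of 8 fake examples with 10 features each
print(model(x).shape)    # torch.Size([8, 2])
```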
But to train large AI models you need a lot of data, on a scale that most people can't comprehend, plus the network and compute capacity to go along with it.
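To put rough numbers on that scale, here's a back-of-envelope sketch using GPT-3's published figures (175B parameters, ~300B training tokens) and the widely used FLOPs ≈ 6 × parameters × tokens rule of thumb. Treat these as order-of-magnitude estimates only:

```python
# GPT-3's published figures: ~175B parameters, ~300B training tokens.
params = 175e9
tokens = 300e9
flops = 6 * params * tokens          # common training-compute rule of thumb
print(f"{flops:.2e} FLOPs")          # ~3.15e+23

# At a sustained 1e15 FLOP/s (a very capable single accelerator):
years = flops / 1e15 / 86_400 / 365
print(f"~{years:.0f} years on one such device")  # ~10 years
```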
rretaemer1 OP t1_j80ohbl wrote
I wasn't aware that AI was already open source to a large degree. Thank you for your response. How far away from an AI program that can maintain itself are we? I.e., one that can update itself and train itself without intervention. I apologize for any ignorance on my part. I'm just a normal person who is fascinated.
Bewaretheicespiders t1_j80tjfj wrote
When we say AI, we don't mean AI in the way you are thinking. We mean software that can get its behavior from data, instead of being programmed instruction by instruction.
It doesn't imply intelligence, not in the way you think. Those chatbots that are in the news lately don't do anything like "reason". They are sophisticated parrots: statistical models of what are believable things to say in certain situations. But just like a parrot doesn't understand 17th-century economics when it repeats "pieces of eight!", these chatbots don't reason. They just deduced from past conversations what are believable things to say.
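If it helps, here's a toy sketch of what "statistical model of believable things to say" means: a bigram model that learns word transitions from example text alone. Real LLMs are vastly more sophisticated, but the spirit is the same: no rules, just learned patterns. The corpus here is made up:

```python
import random
from collections import defaultdict

# Learn, from text alone, which word plausibly follows which.
corpus = "pieces of eight pieces of gold pieces of eight squawk".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# "Parrot" mode: emit whatever tends to come next, with no idea what it means.
word, output = "pieces", ["pieces"]
for _ in range(5):
    word = random.choice(follows.get(word, corpus))
    output.append(word)
print(" ".join(output))   # e.g. "pieces of gold pieces of eight"
```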
So
>How far away from an AI program that can maintain itself are we?
I don't know. We don't even have "an AI program", not in the way you think. We have software that deduces from data how to perform some tasks.
mentive t1_j8157yq wrote
That's what the singularity wants us to think.
rretaemer1 OP t1_j80v04z wrote
For sure. As a normie who's just fascinated, I know that I know very little about AI. I know there's nothing that could be considered "conscious" in any way in AI's current state, and a lot of it is not too far off from something like a hyper-sophisticated Skyrim NPC.
I know that something like GPT can produce code if it's asked to, though, and in some cases it's even produced things that could serve as the basis for apps. If it's capable of producing contextual code, then I don't see how it could be too far off from doing things like "updating itself" on the software front.
Thank you for your response.
MysteryInc152 t1_j81e986 wrote
Calling large language models "sophisticated parrots" is just wrong and weird lol. And it's obvious how wrong it is when you use these tools and evaluate them without any weird biases or undefinable parameters.
This, for instance, is simply not possible without impressive recursive understanding: https://www.engraved.blog/building-a-virtual-machine-inside/
We give neural networks data and a structure to learn that data, but outside of that, we don't understand how they work. What I'm saying is that we don't know what individual neurons or parameters are learning or doing. And a neural network's objective function can be deceptively simple.
How you feel about how complex "predicting the next token" can possibly be is much less relevant than the question, "What does it take to generate paragraphs of coherent text?". There are a lot of abstractions to learn in language.
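For what it's worth, the "deceptively simple" objective really is just this. A minimal sketch with made-up toy dimensions; in a real LLM the logits come from a transformer rather than random numbers:

```python
import torch
import torch.nn.functional as F

# Toy sizes: vocab of 50k tokens, batch of 4 sequences, 128 tokens each.
vocab, batch, seq = 50_000, 4, 128
logits = torch.randn(batch, seq, vocab)        # model's scores for the next token
targets = torch.randint(vocab, (batch, seq))   # the tokens that actually came next

# The whole training signal: make the true next token more probable.
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
print(loss.item())
```

Everything impressive the model does has to emerge from minimizing that one number over enough text.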
The problem is that people telling you these models are "just parrots" are engaging in a useless philosophical debate.
I've long thought the "philosophical zombie" to be a special kind of fallacy. The output and how you can interact with it is what matters, not some vague notion of whether something really "feels". If you're at the point where no conceivable test can actually differentiate the two, then you're engaging in a pointless philosophical debate rather than a scientific one.
"I present to you... the philosophical orange...it tastes like an orange, looks like one and really for all intents and purposes, down to the atomic level resembles one. However, unfortunately, it is not a real orange because...reasons." It's just silly when you think about it.
LLMs are insanely impressive for a number of reasons.
They emerge new abilities at scale - https://arxiv.org/abs/2206.07682
They build internal world models - https://thegradient.pub/othello/
They can be grounded to robotics (i.e., act as a robot's brain) - https://say-can.github.io/, https://inner-monologue.github.io/
They can teach themselves how to use tools - https://arxiv.org/abs/2302.04761
They've developed a theory of mind - https://arxiv.org/abs/2302.02083
I'm sorry but anyone who looks at all these and says "muh parrots man. nothing more" is an idiot. And this is without getting into the nice performance gains that come with multimodality (like Visual Language models).
Bewaretheicespiders t1_j80vky8 wrote
>I know that something like GPT can produce code if it's asked to, though
Programming languages are meant to be super explicit and well-structured, right? So for simple procedures, going from problem definition to Python is just a translation problem.
But most of a programmer's work is "figure out what the hell is wrong with that thing", not "write a method that inverts this array".
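The array-inverting kind of request is pure translation, which is exactly what today's models handle well, because the spec is the whole problem (the function name here is made up for illustration):

```python
def invert_array(arr):
    """Return a reversed copy of the array."""
    return arr[::-1]

print(invert_array([1, 2, 3]))  # [3, 2, 1]
```

The "figure out what's wrong" kind of work needs context that usually isn't in the prompt.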
rretaemer1 OP t1_j815o22 wrote
Thank you for sharing your insight.
As someone who works in AI, what's your take on all the Bing vs. Google news lately?
Bewaretheicespiders t1_j816mcu wrote
The thing with Google was a silly, massive overreaction. It's trivial to get any of these chatbots to state factual errors, because they are trained on massive amounts of data that contain factual errors.
rretaemer1 OP t1_j81alg7 wrote
Do you think Microsoft is being intentional in challenging Google with their confident messaging, potentially forcing Google to misstep? Or is it a happy accident for them? Or is this another "funeral for the iPhone" moment lol?
Bewaretheicespiders t1_j81x5io wrote
I don't know. It's all a dick parade to me.
Baturinsky t1_j83ewig wrote
There are also open-source trained LLM checkpoints, such as GPT-Neo (https://huggingface.co/docs/transformers/model_doc/gpt_neo) or BLOOM (https://huggingface.co/bigscience/bloom).
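A minimal sketch of trying one of those checkpoints, assuming the Hugging Face transformers library; EleutherAI/gpt-neo-125M is the smallest GPT-Neo checkpoint on the Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Downloads the open checkpoint on first run (~500 MB for the 125M model).
name = "EleutherAI/gpt-neo-125M"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Open source AI is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```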