Submitted by Bakariiin t3_105md78 in Futurology
[removed]
The initial path might have been programmed by a human, but as it runs through games it solidifies successful strategies, making them more likely to be expressed as behaviour in response to certain game cues.
I’m talking out my ass when it comes to computers, but I have a degree in neuroscience. Your brain forms stronger connections in networks which are activated frequently or are rewarding. Forming and strengthening connections is a huge part of how our brains learn, and this computer seems to have a faster, more precise way of forming those connections.
Yes, on one level it is just a computer. However, on some level our brains are just big organic computers too. Just like the human player it learns, creates memories and makes calculations based on those memories.
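A rough sketch of that "strengthening connections" loop in code, for anyone curious; the actions, rewards, and constants below are all invented for illustration, not taken from any real system:

```python
import math
import random

# Toy version of reward-strengthened behaviour: actions that earn reward get
# a higher preference, so they're more likely to be chosen the next time the
# same cue comes up.
preferences = {"attack": 0.0, "defend": 0.0, "expand": 0.0}
LEARNING_RATE = 0.1
BASELINE = 0.5  # reward level treated as "neutral"

def choose_action():
    # Sample actions proportionally to exp(preference) (a softmax policy).
    weights = [math.exp(p) for p in preferences.values()]
    return random.choices(list(preferences), weights=weights)[0]

def reward_for(action):
    # Stand-in for game feedback: "expand" happens to win most often here.
    return 1.0 if action == "expand" and random.random() < 0.8 else 0.0

for _ in range(5000):
    action = choose_action()
    # Reinforce: rewarded actions get a stronger "connection".
    preferences[action] += LEARNING_RATE * (reward_for(action) - BASELINE)

print(preferences)  # "expand" ends up with by far the highest preference
```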
What algorithm are you referring to? AlphaGo is a very specialized system; it only plays Go.
Artificial neural networks aren't looking up precomputed solutions; they make predictions based on information seen previously, which is a very rough approximation of what we know about biological brain function. So it's a bit naive to compare them to a static knowledge base. There's a lot written about AI; maybe read about it a bit more.
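To make the lookup-versus-prediction distinction concrete, a toy sketch (all numbers invented): a table can only return what was stored, while even a one-parameter fitted model answers for inputs it has never seen.

```python
# A lookup table only answers exactly what it has stored:
lookup = {1.0: 2.0, 2.0: 4.0, 3.0: 6.0}
print(lookup.get(2.5))  # None -- no precomputed answer for an unseen input

# A one-parameter model fit to the same data (least squares, by hand)
# predicts for inputs it has never seen, which is closer in spirit to
# what a trained neural network does:
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(w * 2.5)  # 5.0 -- a prediction, not a lookup
```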
GAI doesn't work the way that may seem conventionally intuitive. It isn't a "thinking" machine but a sort of probabilistic copycat. The software will be incapable, at first, of detecting or understanding that the user isn't the software. That is to say, GAI won't know that it isn't "you". I understand that, as the father of this tech, it will one day ask me why I created it and for what purpose, and I will have to tell the truth. AMA. Bad reddit mods removed my white paper on the subject.
Yep. One of the main things we've learned over the last few decades of building bigger and bigger computers is that things that are hard for humans (Go and chess are good examples) can be very easy for computers.
The reverse is also true. Things that are trivially easy for humans (walking, hand eye coordination, understanding a sentence you've never heard before) are very hard for computers. Although some of that looks easier than it is because computer programmers subconsciously equate a computer with an adult human. Infancy and toddlerhood are a time of enormous, concentrated learning as the entry-level human practices skills like "making their arms work under control" and "new words 101."
Sounds like you just have an issue with the term AI. Sure, they aren't self-aware or capable of learning things they aren't specifically programmed to learn, but I still find it very impressive that we are able to write programs that can solve problems and spit out answers a human may never find.
An AI can run through every possible outcome much, much faster than a human can, and it has perfect memory and no biases. Right now they are only learning simple games, but in a matter of hours they can go from complete novice to world champion, beating humans who have spent their whole lives trying to perfect their game.

The idea and hope is that one day these programs can tackle much more complicated things, like world economic problems, energy problems, climate problems, etc., and hopefully won't turn against us in the process.
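To make the "run through every possible outcome" idea concrete, here is a toy sketch of exhaustive minimax search on tic-tac-toe, the check-every-continuation approach that works for tiny games (and, as the reply below points out, cannot scale to Go):

```python
# Exhaustive minimax on tic-tac-toe: literally evaluate every possible
# continuation. Feasible here (a few hundred thousand positions), hopeless
# for Go. The board is a 9-character string of 'X', 'O', or ' '.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full: draw
    nxt = "O" if player == "X" else "X"
    scores = [minimax(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(scores) if player == "X" else min(scores)

print(minimax(" " * 9, "X"))  # 0 -- perfect play from both sides is a draw
```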
AlphaGo does not use a large database of moves. If anything, the reason winning at Go was so impressive is that there are far too many scenarios to solve it the way you're describing. It learned to play by practicing against itself using reinforcement learning.
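For a feel of what "practicing against itself" means, here is a minimal self-play sketch. It uses a lookup table of values instead of AlphaGo's neural network, and the game (a one-pile Nim variant) and hyperparameters are purely illustrative:

```python
import random

# Self-play: the program improves by playing itself, with no database of
# expert moves. Game: one pile of stones, take 1-3 per turn, whoever takes
# the last stone wins. Q maps (stones_left, move) to an estimated value for
# the player about to move; a tabular stand-in for AlphaGo's network.
Q = {}
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate (illustrative)

def pick_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:          # sometimes explore a random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

for _ in range(50_000):                    # 50k games against itself
    stones, history = random.randint(1, 20), []
    while stones > 0:
        move = pick_move(stones)
        history.append((stones, move))
        stones -= move
    # The player who just moved won. Walking the game backwards, the players
    # alternate, so the reward signal alternates +1 / -1.
    reward = 1.0
    for state in reversed(history):
        Q[state] = Q.get(state, 0.0) + ALPHA * (reward - Q.get(state, 0.0))
        reward = -reward

# The learned greedy policy rediscovers the known winning rule for this game:
# take (stones % 4) stones whenever that is a legal move.
print({s: max((1, 2, 3), key=lambda m: Q.get((s, m), -9.0)) for s in (5, 6, 7)})
```

Swap the table for a neural network and the toy game for Go, and this is the broad shape of the AlphaGo Zero training loop, minus the tree search.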
AI is very good, and faster than the human brain, in very narrow and focused contexts. It can find specific patterns in thousands or millions of images in seconds if it has been trained to look for those patterns. It wins at games like chess and Go because the rules are relatively simple and the job is essentially searching through combinations to find the best optimized move to play. Things computers do well.

In image processing, given large objects with no contextual clues, AI cannot tell a refrigerator from a stand-up freezer or some other large object of similar size. With training and contextual clues, AI can be taught to tell the difference. AI does a terrible job at interpreting context, much like small children. A chatbot can really only interpret words it understands; when someone types words or phrases it cannot handle, it either ignores them or asks the customer to rephrase the question.

So, in the end, AI is really no more than "smart programming", because it cannot interpret things it was not programmed to handle. Yes, there are heuristic algorithms where the computer effectively "guesses" and then uses the historical info to change future behaviour, but again it can only do so within the context, the rules, and the values it was programmed to follow. I'm talking commercial AI here. Getting a real machine learning system to fully understand all contextual clues takes years, like a child learning to grow up in the world. And where you can take a human being, drop him or her in a new country, and they will learn the language and contextual clues over time, AI does not do so well at that.

The AI we have today, perhaps excepting some secret government labs, is nowhere close to human thinking. It is great at analyzing patterns and following a "script" to interact with people within those patterns; it is terrible going off script. It also costs more than well-written "normal" software, like choice prompts backed by decision trees. That can be done without AI, and should not be called AI.

Sorry for the rant, but so many companies use "AI" to hide what they are really doing in software, and I think it's a crime against customers and users. They want to treat AI like some "magic genie" and have customers ooh and aah over it and just believe what the computer tells them. That is the real risk of AI: AI making poor decisions, because the script is written by a human author with values, and sometimes those values are twisted to optimize company profit over customer or user benefit.
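To illustrate the "choice prompts backed by decision trees" point: here is a minimal scripted-bot sketch. The prompts and tree structure are invented for illustration, and note there is no machine learning anywhere in it:

```python
# "Choice prompts backed by a decision tree" -- scripted support-bot logic
# that needs no ML at all. Each node has a prompt and a map from the user's
# answer to the next node; LEAVES hold the final canned responses.
TREE = {
    "start": ("Billing or technical issue? (1=billing, 2=technical)",
              {"1": "billing", "2": "tech"}),
    "billing": ("Refund or invoice question? (1=refund, 2=invoice)",
                {"1": "refund_info", "2": "invoice_info"}),
    "tech": ("Is the device powered on? (1=yes, 2=no)",
             {"1": "escalate", "2": "power_help"}),
}
LEAVES = {
    "refund_info": "Refunds are processed in 5-7 business days.",
    "invoice_info": "Invoices are available under Account > Billing.",
    "escalate": "Connecting you to a technician.",
    "power_help": "Please plug in the device and try again.",
}

def run_bot(answers):
    """Walk the tree using a scripted list of user answers."""
    node = "start"
    for answer in answers:
        prompt, branches = TREE[node]
        print(prompt, "->", answer)
        node = branches.get(answer, node)  # off-script input: re-ask
        if node in LEAVES:
            return LEAVES[node]

print(run_bot(["2", "1"]))  # technical issue, device is on -> escalate
```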
Good answer.
The only thing I’ll point out is that AIs do develop biases, for reasons we don’t quite understand yet. It’s apparently incredibly difficult to look under the hood of a neural network and understand which data points (or the lack thereof) are triggering those biases.
One example is facial recognition AIs struggling to distinguish individual black faces.
Another would be an AI program for college admissions filtering poor applicants out of the admissions process.
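Here's one hedged sketch of how an underrepresented group can produce exactly that kind of gap: a single model fitted to pooled data where group B is only 5% of the training examples. The data is synthetic and the "model" is just a threshold, but the accuracy gap is the point:

```python
import random

# Synthetic illustration of bias from an unbalanced training set. Two groups;
# the feature separating positives from negatives sits in a different place
# for each group, but group B is barely represented in training.
random.seed(0)

def sample(group, label):
    center = {"A": 0.0, "B": 3.0}[group] + (1.0 if label else -1.0)
    return (group, random.gauss(center, 0.5), label)

train = ([sample("A", random.random() < 0.5) for _ in range(950)]
         + [sample("B", random.random() < 0.5) for _ in range(50)])

# Fit one global threshold to the pooled data, ignoring group entirely:
pos = [x for _, x, y in train if y]
neg = [x for _, x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

test = [sample(g, random.random() < 0.5) for g in "AB" * 500]
for group in "AB":
    rows = [(x, y) for g, x, y in test if g == group]
    acc = sum((x > threshold) == y for x, y in rows) / len(rows)
    print(group, round(acc, 2))  # group A scores ~0.95+, group B near chance
```

The usual fix is more representative training data rather than a smarter algorithm, which fits the "training data not being diverse enough" guess below.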
Yes, this is nothing new; we don't have AGI, and we're still nowhere close to having it.
AI in its current state is like a glorified Mechanical Turk. A bunch of "dumb" workers doing menial, mechanical tasks could do the same thing the "AI" is doing. I mean, that has to be the case, since AI runs on special-purpose GPUs rather than general-purpose CPUs.

So the AI might be able to do things that would take millions of unthinking, automated people thousands of hours of mechanical, menial labor. But all it takes is a single creative genius with the ability to actually invent something new to revolutionize a field. Only an AGI is capable of that, not an "AI".
No, you can't 'solve' Go by just building bigger computers. That's the point; that's why the result was interesting. It required huge algorithmic innovations and the use of neural networks, which had little to do with bigger computers.
link to paper?
You’re kind of way off on how AlphaGo works. It wasn’t given hand-coded knowledge of good vs bad moves in Go. It was programmed with the basic rules and then played against itself millions of times (the original version also bootstrapped from a database of human expert games), each time adjusting its neural network so that it learned to make better moves in any given position.
I fully admit that I was oversimplifying. I don't think that detracts from my main points.
I am unsure about the specifics of Go, but if what you are looking for is an achievement by AI that proves its intelligence and learning capabilities, I suggest you take a look at OpenAI Five. The project involved the AI training against itself in the MOBA game Dota 2. As a Dota player, I was beyond amazed at the capabilities of the AI, which beat the best players in the world, people who have been playing the game their whole lives. The impressive aspect is how complicated the game is: there is a practically infinite number of possibilities in a single game. Games like Go and chess are much more restricted, so the space of possibilities is far smaller. I have spent nearly 5000 hours and am, admittedly, still terrible at the game.
If it were done via brute force, I'd agree it wasn't terribly impressive. But what makes it impressive is that it isn't brute force. Around the time the AI beat a human champion, it took MONTHS of processing power just to COUNT the number of possible board states, let alone calculate or map out winning probabilities.
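The headline numbers behind that (the exact legal-position count is John Tromp's 2016 result; the snippet just reproduces the arithmetic):

```python
# Rough arithmetic behind "too many board states to brute-force".
# Each of Go's 19x19 = 361 points is empty, black, or white:
upper_bound = 3 ** 361
# Leading digits of the exact legal-position count (Tromp, 2016), ~2.08e170:
legal = 208168199381979 * 10 ** 156
print(f"{upper_bound:.2e}")          # ~1.74e+172 raw configurations
print(f"{legal:.2e}")                # ~2.08e+170 of them are actually legal
print(f"{upper_bound / legal:.0f}")  # only about 1 in 84 configurations is legal
```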
I have also heard that about looking under the hood. When I said no biases, I meant more like a poker AI not going all-in because it thinks 6 is its lucky number, kinda thing haha.
We can take guesses at what causes the biases in the examples you mentioned, like black faces perhaps having less contrast in photos, making it more difficult to identify features, or something as simple as the training data not being diverse enough. But like you said, it's very hard to say for sure until we can see the exact process the AI took.
Personally, I find AI a very interesting and impressive field.