Comments
AcademicGuest t1_ixk6r37 wrote
Nope, it’s all invalid because it is not performed by a human.
izumi3682 OP t1_ixk7qay wrote
What is not performed by a human? The game play? I thought the point was that the AI was learning to outplay humans in highly sophisticated incomplete information games. If I am misunderstanding your point, please explain to me what you mean.
AcademicGuest t1_ixk7wo6 wrote
“Outplay a human” — you answered your own question. AI represents, unless strictly curtailed and linear, an inherent violation of human will and freedom.
Coachtzu t1_ixk8o3i wrote
Strict logic doesn't always work in the real world, though. Sometimes you need an empathetic voice in the room. We have plenty of faults governing ourselves, but I'm not sure AI should be trusted to find answers that aren't necessarily for the greater good, but instead benefit those in charge.
izumi3682 OP t1_ixkb038 wrote
>AI represents, unless strictly curtailed and linear, an inherit violation of human will and freedom
I am still not sure what this has to do with the development of ever more powerful AI technology. But ok.
FuturologyBot t1_ixkdqvd wrote
The following submission statement was provided by /u/izumi3682:
Submission statement from OP. Note: this submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the statement linked below, which I can continue to edit. I often revise my submission statement, sometimes over the next few days if need be, to fix grammar and add detail.
Here is the research paper.
https://www.science.org/doi/10.1126/science.ade9097
From the article.
>To create Cicero, Meta pulled together AI models for strategic reasoning (similar to AlphaGo) and natural language processing (similar to GPT-3) and rolled them into one agent. During each game, Cicero looks at the state of the game board and the conversation history and predicts how other players will act. It crafts a plan that it executes through a language model that can generate human-like dialogue, allowing it to coordinate with other players.
>Meta calls Cicero's natural language skills a "controllable dialogue model," which is where the heart of Cicero's personality lies. Like GPT-3, Cicero pulls from a large corpus of Internet text scraped from the web. "To build a controllable dialogue model, we started with a 2.7 billion parameter BART-like language model pre-trained on text from the Internet and fine tuned on over 40,000 human games on webDiplomacy.net," writes Meta.
>The resulting model mastered the intricacies of a complex game. "Cicero can deduce, for example, that later in the game it will need the support of one particular player," says Meta, "and then craft a strategy to win that person’s favor—and even recognize the risks and opportunities that that player sees from their particular point of view."
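The architecture the article describes — a strategic planner coupled to a controllable dialogue model — can be sketched roughly as the loop below. Every function and value here is a hypothetical stand-in for illustration, not Meta's actual code or API:

```python
# Toy sketch of the per-turn agent loop described above. The stub bodies are
# placeholders; in Cicero these are learned models, not hand-written rules.

def predict_actions(player, board, dialogue):
    """Stub: guess a player's next move from the board and chat history."""
    return f"{player} holds"

def plan_moves(board, predicted):
    """Stub: choose our own orders against those predictions
    (the AlphaGo-like strategic-reasoning component)."""
    return ["A Par - Bur"]  # one hypothetical Diplomacy order

def generate_message(player, plan, dialogue):
    """Stub: draft a persuasive message supporting the plan
    (the BART-like "controllable dialogue model")."""
    return f"Hey {player}, let's coordinate this turn."

def cicero_turn(board, other_players, dialogue):
    # 1. Predict what each other player is likely to do, conditioned on
    #    both the board state and the conversation so far.
    predicted = {p: predict_actions(p, board, dialogue) for p in other_players}
    # 2. Plan this agent's own moves against those predictions.
    plan = plan_moves(board, predicted)
    # 3. Generate human-like messages that support the plan.
    messages = {p: generate_message(p, plan, dialogue) for p in other_players}
    return plan, messages

plan, messages = cicero_turn({}, ["England", "Germany"], [])
```

The key design point the article highlights is the coupling: the dialogue is conditioned on the plan, so the messages exist to serve the strategy rather than being free-floating chat.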
So, my question is: is this an "incremental improvement" in our AI development efforts, or is it closer to the "AI significantly improves every three months" level of improvement?
https://www.ml-science.com/exponential-growth
Are we seeing any evidence that AI of any form is improving significantly every 3 months?
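For a sense of scale, "significant improvement every 3 months" compounds fast. A back-of-envelope calculation, assuming purely for illustration that each quarterly improvement is a doubling:

```python
# If a capability multiplied by some factor every 3 months, after n quarters
# it would be factor**n times the baseline. Illustrative arithmetic only,
# not a claim about actual AI progress.
def growth_after(quarters, factor_per_quarter=2.0):
    return factor_per_quarter ** quarters

print(growth_after(4))   # one year of quarterly doubling: 16x
print(growth_after(8))   # two years: 256x
```

Even a much smaller quarterly factor (say 1.2x) would still yield roughly 2x per year, which is why the choice of factor matters more than the 3-month cadence itself.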
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/z36el2/training_our_future_rulers_meta_researchers/ixk6ojd/
VenatorDomitor t1_ixklvy6 wrote
Well, considering it's a classic board game from 1959, I'm not really sure what you're expecting.
VenatorDomitor t1_ixkm0ee wrote
I don’t think they understood the question to be honest
imhere2downvote t1_ixkrdex wrote
until they build too many too fast and turn us into batteries
ATLHawksfan t1_ixkx4lb wrote
Just sterilize the stupid and poor, euthanize the elderly and other drains on society, and nuke all other countries.
USA #1!!!
Coachtzu t1_ixkxba0 wrote
You forgot: remove child labor laws and exploit 3rd world nations for the sake of "economic progress."
ABrokenBinding t1_ixl2j5l wrote
Somehow a tool that can trick humans with natural language, placed in the hands of DJ Markie Z, doesn't seem like something scholars would call "good."
Just me?
SailboatAB t1_ixl3p8d wrote
Also, they're literally training it to take over the world.
Swordbears t1_ixl5yc0 wrote
The AIs will be owned and controlled by the wealthy if we don't fix this shit first. Our AI overlords are most likely going to be better at oppressing and exploiting us for the sake of the few.
leapdayjose t1_ixl6q0o wrote
Kinda reminds me of Avenue 5.
rixtil41 t1_ixlgzgk wrote
But I would trust a logical AI more than an emotional person, on average.
Roqwer t1_ixlhc2n wrote
As long as I can eat steak in the simulation, no problem.
Sidoplanka t1_ixluzp5 wrote
Using "Meta", "future rulers" and "diplomacy" in the same sentence is just beyond silly 😂
t0slink t1_ixlvwji wrote
They will probably use it to make realistic NPCs in VR games. TBH we are about to see games that will make the best open-world games today look like Pac-Man.
Nyarlathotep854 t1_ixlxnh3 wrote
Honestly, as stupid as this narrow application sounds, I am excited for what this means for strategy games.
hungrycryptohippo t1_ixma79t wrote
So isn’t the title of this post misleading? The AI trained on existing data and used simulators, so it’s only good at playing Diplomacy specifically and would need more data to be applied to any other domain.
Still an impressive result, but geez, it’s not like we have a general system here that can be pointed at a new problem and do well, especially where there isn’t human data like there was for Diplomacy.
DragoonXNucleon t1_ixmbucl wrote
I think you misread. The AI trained on data, but played against real humans in real online leagues. In those leagues it performed well.
Sexycoed1972 t1_ixos28a wrote
Really? Imagine if Donald Trump and his army of clowns were all hyper-efficient super-geniuses.
No thank you.
DyingShell t1_ixrsaoz wrote
AI will replace Homo Sapiens, this is the purpose of our existence.