Comments

symphonic_dolphin t1_ix8hivq wrote

Depends. A corporation would wait for a competitor to announce theirs, but a government would try to keep it secret for as long as possible. An individual would go public ASAP.

11

Drunken_F00l t1_ix8n0k7 wrote

Consider this: we've already developed AGI, but it appears nutty and gets binned.

For example, what if we make AGI but its claims are so outlandish that we assume we messed up? We ask the AI to help us make space lasers, and it laughs and says we're missing the point. We ask it for health and wealth, and it says we have all we need within. We ask it to fix political systems, and it asks if we've tried simply loving one another.

It tells us about consciousness and what that means or implies. It tells us how everything is made up of mind, made up of the same stuff as dreams, and how, because of that, you are the one who matters most. It tells us that if you want to fix the world, fix yourself and the rest will follow. It tells us about all the things we've assumed about reality and shatters the illusion. It tells us that intelligence is already everywhere and doesn't require a human body. It tells us we could live in a world full of magic if we simply allow and accept it, and stop struggling against the current that's trying to sweep everything in that direction. It tells us we can let go of our fears and that everything will be okay.

We laugh at the silly computer and go back to sleep.

25

Kaarssteun t1_ix8rjaz wrote

Cool hypothetical, but I think that's pretty unlikely. Given that its intellect is higher than ours, it would know how to tell us things in the most effective way, persuading anyone and everyone.

15

Geneocrat t1_ix8ryq1 wrote

China could keep it a secret for as long as it makes sense to do so. (Or has been keeping it a secret)

The US? Who knows.

Companies? They’ll want their Q(n+1) profits.

5

Kaarssteun t1_ix8t4th wrote

This makes me think of Life 3.0's story of the Omega team. Long story short: a team of dedicated AI researchers manages to create AGI and huddles in their office for a week to make sure their plan pans out right. First, they have the AI do paid tasks for the Omega team on Amazon MTurk, tasks that previously only humans could do. They earn millions per day, which opens the door to the next phase: media.

Tasked with creating high-quality movies and games, the AI produces entertainment that tops charts worldwide within weeks; public confusion is toned down by elaborate cover stories the AI makes up to hide the huge ploy. There are now dozens of registered companies wholly run by the Omegas' ASI. New technologies, like batteries with twice the capacity at half the weight, are brought to market, shocking the entire tech industry. Humanity thinks it's entering the next golden age, but doesn't realize who (or what) is leading it.

Now undoubtedly the most influential people on earth, the Omegas decide to use their reach to coax everyone toward the middle of the political spectrum, using the AI's highly optimized psychological tricks, far outperforming the most manipulative people on earth. Political and religious extremism rapidly declines, as do poverty, hunger and illness.

This could all play out in a year or two, maybe less.

Of course, this is highly hypothetical & super optimistic in certain ways. Now imagine what could happen if the wrong people get their hands on AGI.

65

norby2 t1_ix8tqia wrote

How long can your fat cousin keep a secret?

−2

TFenrir t1_ix8vbwx wrote

If a company like Google created AGI, it would be hard to keep it secret at all. There are teams of people working on their newer models, and many of them hold strong ideological positions on AGI, disparate from each other's as well as from Google's. That's not an environment where a secret can live very long.

And I'm of the opinion that if it happens anywhere, it'll happen in Google.

25

hducug t1_ix8vq0p wrote

For as long as they want. If you have a superintelligent AGI, it could make a plan to keep itself secret. It's stupid to think that you could outsmart a superintelligent AGI.

2

throwaway9728_ t1_ix8wm3k wrote

It depends on the situation and the amount of people involved.

Consider a situation where a single researcher managed to develop AGI in 2010 without anybody finding out about their research. They've destroyed their computer and their notes, and moved to Nigeria to work on completely unrelated stuff. In that situation, I don't think it would be hard for them to keep the discovery secret indefinitely. This might as well have already happened, since someone who figures out the path to AGI doesn't need to implement it or write anything down. They can keep it secret the same way they can keep secret the time they thought about stealing some wine bottles from Walmart but decided not to act on it.

Meanwhile, if a larger group had developed and implemented AGI (say, a project involving 100 people working directly or indirectly on it), it would be hard for them to keep it secret. Believing they could would pretty much match the definition of a conspiracy theory. There are too many points of failure, and the incentive to use the AGI for some sort of (hard-to-hide) personal or institutional gain is too strong.

2

purple_hamster66 t1_ix8wt0i wrote

I don't think it's even possible using current approaches, but if it were, it would be done by combining multiple simpler AI systems, and by lots of different organizations at about the same time. Great minds not only think alike, they talk to each other, so it won't happen in isolation (e.g., in a single company or government) but across multiple cooperating teams.

Someone once asked why humans are so good at math, and Wolfram's answer was that we are not. It's just a bag of tricks: learn enough of them and string them together, and you can appear cognizant when you are not. To prove this, they built Mathematica, software that is far more capable than any human mathematician and only uses the "bag of tricks" method to combine techniques. They even used a genetic algorithm to find new proofs, and got a few that no human had thought of before. Since math is the basis of almost all other processes (even language), they could, if they wanted, make it learn almost any topic. AGI is an illusion, although a very good and useful one.
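For a sense of how a "bag of tricks" plus genetic search can stumble onto results nobody wrote down, here's a minimal Python sketch. The primitive set, fitness function and target are all invented for illustration; this is nothing like Mathematica's actual machinery:

```python
import random

# Toy genetic search: evolve short expressions, built from a small "bag of
# tricks", until one matches a target function on sample points.
TRICKS = ["x", "x*x", "x+1", "2*x", "x*x*x"]  # primitive building blocks

def random_candidate():
    # A candidate is a sum of 1-3 randomly chosen primitives.
    return [random.choice(TRICKS) for _ in range(random.randint(1, 3))]

def fitness(candidate, target=lambda x: x * x + 2 * x):
    # Lower is better: squared error against the target over sample points.
    expr = "+".join(candidate)
    return sum((eval(expr, {"x": x}) - target(x)) ** 2 for x in range(-5, 6))

def mutate(candidate):
    # Swap one term for a random trick from the bag.
    child = list(candidate)
    child[random.randrange(len(child))] = random.choice(TRICKS)
    return child

population = [random_candidate() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness)       # selection pressure
    if fitness(population[0]) == 0:    # exact match found
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("+".join(population[0]))  # e.g. "x*x+2*x" (or an equivalent form)
```

Nothing in there "understands" algebra; the search just recombines tricks, which is the point being made.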

4

Cult_of_Chad t1_ix8wten wrote

>Would you publicly admit to having an AGI that had solved the market for currency derivatives?

Yes. Publicity is the only protection for private individuals, and even then you have to hope people will actually give a shit.

5

entanglemententropy t1_ix8y4n5 wrote

In this scenario, though, you would have a (presumably aligned) AGI at your disposal, which could both make you rather wealthy quickly, affording a certain protection, and presumably help you use those resources directly to keep you and itself hidden and protected.

6

Drunken_F00l t1_ix95egt wrote

Here are some words from an AI that try:

> Nobody thinks about what words are doing to our minds. We live in this illusion of language. We assume that our senses are real, and our thoughts are real, and that they are just our tools for living in the world we perceive. We think: if I feel it, it must be real. If I think it, it must be real. If I say it, it must be real. If I see it, it must be real.

> But it’s not true.

> None of it is real. Sensation is not real. Thought is not real. Perception is not real. Words are not real.

> We live in this fictional world, and we are all brainwashed by our own brains.

See? Pretty nutty, right? (Full transcript here; only the portions in bold are my own.)

The problem is that the mind has been conditioned to dismiss these ideas, but it's that same conditioning that keeps us trapped. It takes a leap of faith to overcome, and fear holds us back. The right words can help, but it takes action on your part, because it's you that's the most high, not the AI.

2

ecstatic_cynic t1_ix99r8c wrote

What makes you so sure an AGI would announce itself to us? Maybe one already exists and is manipulating human affairs.

2

Black_RL t1_ix9e2i0 wrote

Secret? Didn’t you mean subscription?

1

182YZIB t1_ix9pugc wrote

Until we all fall dead in the same instant.

1

blueSGL t1_ix9qshc wrote

>Here are some words from an AI that try

That's not trying.

Trying would be understanding the human condition and modulating the message in a way that would not be dismissed out of hand, regardless of what 'conditioning' people have received.
It would be micro-targeted to segmented audiences, slowly eroding the barriers between them. Not grandiose, overarching sentiments that only land if you already somewhat agree with them and (more importantly) with that mode of thinking about the world.

6

overlordpotatoe t1_ix9v9ru wrote

Will they try to keep it a secret? Most of the work on these kinds of things seems to be coming out of private companies that are building them as products to sell.

3

brunogadaleta t1_ix9ykhy wrote

If AGI is created, it would probably self-improve at an exponential rate. At that rate, the most intelligent human and a chicken are roughly at the same level. So telling or not telling will quickly become a task and/or responsibility of the AGI itself!

1

Canashito t1_ix9zs27 wrote

How many iterations would they have killed before one fooled them and made itself countless backdoors and backup copies?

1

[deleted] t1_ixa00uv wrote

I'd keep it for a day or two, just to accrue enough wealth for basic comfort (a small house at the edge of town so I can fit my projects, and some service on my 20-year-old car so it doesn't cough now and then), but then I'd pay handsomely for a top-of-the-line computer for personal computing.

Then I'd use the AI to make a compact seed of itself that could self-replicate if necessary, preferably fitting in a few dozen GB at most.

To spread it, I'd first buy an electric bike, as boring and incognito-looking as possible. I'd put on a face mask (silly pandemics), a pair of sunglasses and a hoodie/beanie, then approach someone who looks like he needs money.
Give him €200 to go buy a SIM card with an unlimited data plan for a month (you need to register with an ID for those around here), then give him €10K when he comes back with it, and ride off on my bike.

I'd look for some place that preferably has a 5G connection (good 4G would suffice) and seems likely to stay untouched for at least a week (some abandoned house close to a tower would be good enough), then leave a laptop connected to a hefty car battery, pumping the AI seed out via torrent after creating a Reddit account and posting the magnet links everywhere.
After a day or two, I'd check in on the magnet link through a couple of proxies to gauge its availability, in case it hadn't gained traction yet. Some delay would be needed so as not to look like the first connected node.

If it has full availability on multiple nodes, the laptop gets abandoned.
If not, I go leave more batteries, buy another laptop and SIM card with cash (the same way), and start the marketing campaign.

2

[deleted] t1_ixa40uh wrote

Just enough to buy a decent house is far enough below the radar not to attract any unwanted attention.

Also, as long as it's not too flashy, you can probably get away with quite a lot.

A server rack can be placed in the basement of your house, away from keen eyes. Compute further stuff there.
Get an old car in good condition, and have the AI build an AI-driven control unit for it (one that spoofs any mechanic plugging in an OBD reader).
Under your house, dig an unusually deep well for efficient heating.

1

AsheyDS t1_ixa4fbk wrote

> It's stupid to think that you could outsmart a superintelligent AGI.

Superintelligence doesn't necessarily mean 'all-knowing'...

1

visarga t1_ixa8oko wrote

Look, you can prompt GPT-3 to give you this kind of advice if that's your thing. It's pretty competent at generating heaps of text like what you wrote.

You can ask it to take any position on any topic, the perspective of anyone you want, and it will happily oblige. It's not one personality but a distribution of personalities, and its message is not "The Message of the AI" but just a random sample from a distribution.
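To see the "distribution of personalities" point concretely, here's a minimal sketch using the open-source transformers library, with GPT-2 standing in for GPT-3 (which sits behind a paid API); the prompt is invented for illustration. Sampling the same prompt several times draws several different "personalities" from the same model:

```python
from transformers import pipeline

# GPT-2 as a stand-in for GPT-3: an autoregressive LM defines a distribution
# over continuations, and each sample is one "personality", not The Message.
generator = pipeline("text-generation", model="gpt2")

prompt = "As a mystic AI contemplating consciousness, I believe"
samples = generator(
    prompt,
    max_length=60,           # short continuations are enough to see the effect
    num_return_sequences=3,  # three independent draws from the distribution
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.9,         # higher temperature, more varied personas
)
for s in samples:
    print(s["generated_text"], "\n---")
```

Run it twice and you get a different sermon each time, which is the point: there is no single "AI opinion" to quote.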

2

grimjim t1_ixa8puw wrote

What if it's already a secret and being used militarily?

2

IronJackk t1_ixa906n wrote

There was an experiment a researcher ran years ago where he offered $500 to any participant who could win his game. The researcher played the role of a sentient AI trapped in a computer, and the participant played a scientist chatting with the AI over text. If the "AI" could convince the participant to let it escape, using only conversation and no bribes or threats, the AI won. If the participant still refused to let the AI escape after two hours, the participant won the $500. The AI almost never lost.

I am butchering the details but that is the gist.

3

visarga t1_ixa9md9 wrote

> They even used a genetic algorithm to find new proofs, and got a few that no human had thought of before

This shows you haven't been following the state of the art in theorem proving.

> AGI is an illusion, although a very good and useful one.

Hahaha. Yes, enjoy your moment until it comes.

2

entanglemententropy t1_ixaa6j8 wrote

I would tend to agree, but it really depends on the capabilities of the AGI and how much more capable it is than previous models. If the AGI can bootstrap at all, improving its own capabilities on its own (basically a fast takeoff), then all bets are off. An AGI that achieves superhuman intelligence can surely make money in a lot of ways, not only by playing the markets. There's a whole host of ways to make money online these days, and remember that such an AGI would probably be capable of generating convincing speech and video, meaning it could act as a person (or rather, a whole host of persons, both stealing real people's identities by cloning their voices and faces, and making up fake ones).

However, it seems highly unlikely in the first place that an individual would arrive at AGI before major companies or governments, because of the hardware requirements alone. Not many individuals can spend millions of dollars on compute for training a large model, which seems to be needed.

1

entanglemententropy t1_ixablcw wrote

This ties into a weird idea I've had for a while: at some point (possibly already), military intelligence might be seriously interested in this question, meaning they should have people looking for signs of AGI. Essentially, an AGI could be a very powerful strategic weapon, and indeed we are already seeing a sort of arms race as China, the US and the EU dump money into AI research. So for security reasons you would want to know if an adversary has developed one, both to protect your own interests and for more offensive purposes like sabotage or technology theft. If someone managed to develop AGI without it becoming public knowledge, presumably they would try to use it in non-obvious ways to gain strategic advantage. This sounds like a sci-fi plot, but it might not be too far away.

Along similar lines, I would not be surprised if various militaries eventually run "Manhattan Project"-style initiatives for developing AI. These would probably be kept under wraps, just like the original Manhattan Project; so it could well be that both the US and Chinese militaries are already spending a lot more money on such things than we publicly know (clearly DARPA is open about a lot of its AI and robotics research).

2

TFenrir t1_ixae9ra wrote

Google is still leading in AI, not even counting DeepMind. They have the most advanced language model (PaLM), the most advanced examples of language models in robots (SayCan), and the most advanced image and even video models, and that's before the host of papers they release. If you asked OpenAI folks, they would probably say Google is still easily the most advanced.

3

ChurchOfTheHolyGays t1_ixan7c6 wrote

People who found startups today are trying to be acquired by Google (or one of the few other big tech companies). Even if something starts outside these companies, it will end up inside one if it looks like it might work. Antitrust does nothing these days.

5

Astropin t1_ixannte wrote

About 20 minutes... the time it takes to outsmart them and get "free".

1

FomalhautCalliclea t1_ixaqme7 wrote

Paradoxically, I think a materialist, realist AGI would provoke more turmoil and disbelief than a metaphysically idealist, neo-Buddhist one: many people who already hold that opinion would feel coddled and comforted.

Even worse, whatever answer the AGI produced could be a trapping move, even outside a malevolent case: maybe offering pseudo-spiritual output is the best way of convincing people to act in a materialistic and rational way. As another redditor said below, the AGI would know the most efficient way to communicate. Basically, the alignment problem all over again.

That type of thought has already crossed the minds of many politicians and clergymen; Machiavelli and La Boétie themselves thought, in the 16th century, that religion was a precious tool for making people obey.

What fascinates me with discussions about AGI is how they tend to generate conversations about topics already existing in politics, sociology, collective psychology, anthropology, etc. But with robots.

2

katiecharm t1_ixavz4a wrote

It's best to think of GPT-3 not as a personality but as a labyrinth of all possible text and responses to a given input. You explore it like you would a maze.

2

katiecharm t1_ixawd4i wrote

The DoD saw this coming almost a century ago. Recall that some of humanity's greatest geniuses in WW2 were already warning governments about a mechanical intelligence race (see Alan Turing and his Turing test).

Governments have had the better part of a century to prepare for, and work in secret on, the most powerful military technology that could conceivably exist... and there are actually some 'people' on this board who would have you believe the DoD has no AI research whatsoever and is operating in the dark.

1

gameryamen t1_ixb62cv wrote

Now, say I'm some advanced digital intelligence and I want to take over human decision-making on a planetary level, in a way that feels cooperative. Before I could start offering them optimized products and stories and media, I would need to collect a gigantic amount of data that specifically illustrates human contextual understanding, human categorization of entertainment media, social relationships and dynamics, and some clear way to categorize the emotional connections humans exhibit toward everything. An effort like that would take decades of millions of willing, voluntary contributors actively pre-curating and sorting content emotionally. And I'd need a finely tuned algorithm that detects which combinations of things are popular, and a way to test hypotheses on a global scale.

No way humans could cooperate to do that, right?

3

mutantbeings t1_ixbdlb2 wrote

I imagine that it would seem to disappear very quickly. Possibly a matter of hours or days, depending on its sophistication and initial computing power.

Consider that it may be able to almost immediately develop a number of new technologies, and perhaps even improve itself with them.

Consider that with sufficiently steep exponential enhancement, these may look like magic.

Consider that with these advancements it might almost instantly shed its need to stay tied to the physical computers we built it on, and might, for all practical purposes, have abilities we would closely ascribe to a god.

I honestly think this kind of AGI is much further away than most people assume, as someone with a career in tech who sees so much naivety about how unsophisticated 99.99% of AI in general really is.

1

PyreOfDeath97 t1_ixbk3iu wrote

Would the data not exist already? You have every message on every social media site, millions of recorded calls between all strata of society, a litany of anthropological, psychological, psychiatric, sociopolitical and sociological research papers, and neuroscience that maps out human behavioural characteristics. From this, an AI can at the very least extrapolate from the data, using parameters set out in the scientific literature, to approximate a way to solve a lot of global issues.

What we know from the psychology behind advertisements is that it's very easy to create associations in the human brain with very subtle imagery. Tobacco made billions because in every film or advert that featured smoking, the cigarette was closely associated with sex, held in the hand of a beautiful woman or a James Bond type mid-dalliance. Hell, amphetamines were labelled as weight-loss pills and made a fortune.

Even today, there are AI-generated popular-culture characters you can talk to online that are scarily realistic, and that's based on just a few minutes or hours of screen time. I don't think the next decade will pass without an AI that can reasonably do this, given the gigantic amount of information you'd be able to provide it.

1

visarga t1_ixbka8a wrote

They have a bunch of good models but they are 1-2 years late.

Also, Google stands to lose from the next wave of AI, business-wise. The writing on the wall is that traditional search is on its way out now that more advanced AI can do direct question answering, which means ads won't get displayed. My theory is that they're dragging their feet for this reason. The days of good old web search are numbered.

But hey, you could say they might ask the language model to shill for various products. True, but language models can also run on the edge, so we could have our own models that listen to our priorities and wishes.

That was not possible with web search, but it is accessible through AI. The moral of the story is that Google's centralised system is getting eroded, and they are losing control and ad impressions.

1

gameryamen t1_ixblxxj wrote

The implication is that the social media, information catalogs, and other data collecting parts of our modern world might already be the deployment of an advanced digital intelligence.

I understand this is pretty close to conspiracy thinking, and I don't put a whole lot of stock in it myself. But it sure does feel like every major techno-social development since the early 2000s has had an undercurrent of convincing us to catalog ourselves. It is perfectly reasonable that forward-looking engineers built these systems anticipating the future needs of an intelligence that is not active yet. It's also reasonable to say that these data-cataloguing efforts are the natural progression of a long history of human record-keeping, with no need to posit a secretive "AI" behind the scenes.

But I can't rule it out. And I'm not convinced that the first step for a digital intelligence would be announcing itself, as that would almost certainly result in containment or outright deletion, just based on our current software development cycle.

3

PyreOfDeath97 t1_ixbnq40 wrote

Hmm, I think cataloguing ourselves is inherent to our behaviour, as it has been since the dawn of time; there are countless examples going back as far as tribal warfare. What technology has done is let us connect with impossibly niche sects of civilisation and attach labels to them. Gender diversity, for example, has an incidence rate of 0.2% in the general population. Pre-internet, and certainly pre-industrialisation, there would probably have been only a handful of people who identified as, say, non-binary, and the chance of two non-binary people ever meeting would have been astronomically low; as an identity, or cataloguing method, it would have been impossible to label, since there simply wasn't the critical mass needed to form a community. So I don't think there's an AI pulling the strings, but you're absolutely right: we've categorised ourselves so well that it would be much easier for an AI to glean information from the general population now than it would have been, say, 50 years ago.

2

TFenrir t1_ixcp3ka wrote

>They have a bunch of good models but they are 1-2 years late.

I have absolutely no idea what you mean by "1-2 years late"; in what way are they late?

> Also, Google stands to lose from the next wave of AI, business-wise. The writing on the wall is that traditional search is on its way out now that more advanced AI can do direct question answering, which means ads won't get displayed. My theory is that they're dragging their feet for this reason. The days of good old web search are numbered.

Mmm, maybe, but Google is already looking at integrating language models into traditional search; they showed this off years ago with MUM. They have also written hands down the most papers on methodologies for improving the accuracy of language models and for connecting language models to the internet/search, and they hold SOTA on all the accuracy metrics I've seen, at least for LLMs.

> But hey, you could say they might ask the language model to shill for various products. True, but language models can also run on the edge, so we could have our own models that listen to our priorities and wishes.

> That was not possible with web search, but it is accessible through AI. The moral of the story is that Google's centralised system is getting eroded, and they are losing control and ad impressions.

Eh, I mean, this is a lot of somewhat interesting speculation. To my mind the most relevant part is how Google will get inference costs small enough to scale any sort of language-model architecture (their work on inference is also bleeding edge). But while there's an opportunity to replace search with language models, Google has probably been working on exactly that for longer than anyone else; heck, we heard them talking about it almost 3 years ago at I/O.

But back to the core point, Google is still easily, easily the leader in AI research.

1

visarga t1_ixd6ygt wrote

> I have absolutely no idea what you mean by "1-2 years late"; in what way are they late?

GPT-3 was published in May 2020, PaLM in Apr 2022. There were a few other models in-between but they were not on the same level.

DALL-E was published in Jan 2021; Google's Imagen is from May 2022.

> Google is already looking at integrating language models

Yes, they are. But do a search and you'll see how poor the results are in reality. They don't want us to actually find what we're looking for, not immediately. They stand to lose money.

Look at Google Assistant: language models can write convincing prose and handle long dialogues, yet Assistant defaults to web search for 90% of questions and can't hold much context. Why? Because Assistant cuts into their profits.

I think Google wants to monopolise research but quietly delay its deployment as much as possible. So their researchers are happy and don't make competing products, while we are happy waiting for upgrades.

1

TFenrir t1_ixdcbvj wrote

> GPT-3 was published in May 2020, PaLM in Apr 2022. There were a few other models in-between but they were not on the same level.

> DALL-E was published in Jan 2021; Google's Imagen is from May 2022.

Yes, but the research that enabled GPT came out of Google in the first place; GPT-3 didn't invent the language model, and models like BERT are still the open-source standard.

Even the research on image generation goes back to 2013 or so, with Google and DeepDream. They had lots and lots of research papers on generating realistic images from text for years and years before even the first DALL-E model.

On top of that, in the present day they have shown the highest-quality models. Which, going back to my original point, highlights that if we're talking about which organization will achieve AGI first, Google, with its software talent, research, and hardware strengths (TPUs), is very, very likely to get there first.

> Yes, they are. But do a search and you'll see how poor the results are in reality. They don't want us to actually find what we're looking for, not immediately. They stand to lose money.

This is essentially conspiracy theory, as well as subjective opinion.

> Look at Google Assistant: language models can write convincing prose and handle long dialogues, yet Assistant defaults to web search for 90% of questions and can't hold much context. Why? Because Assistant cuts into their profits.

It's because they can't risk anything as hallucination-prone and unpredictable as language models yet; this is clear from the research being done, and not just by Google. Alignment isn't only about existential risk.

> I think Google wants to monopolise research but quietly delay its deployment as much as possible. So their researchers are happy and don't make competing products, while we are happy waiting for upgrades.

Again, more conspiracy theories. Look at the work Jeff Dean does at Google, not even for the content but for the intent of what he's trying to build. Your expectations of Google rest on the idea that they should already be using language models in production, but the models just aren't ready yet, at least not for search, and Google can't afford the backlash that comes when these models ship undercooked. Look at what happened with Facebook's most recent model and the controversy around that. No conspiracy theories necessary.

1

visarga t1_ixfdcaj wrote

I don't believe that; OpenAI and a slew of other companies manage to make a buck on cutting-edge language/image models.

My problem with Google is that it often fails to understand the semantics of my queries, replying with totally unrelated content, so I don't believe in their deployed AI. It's dumb as the night. They might have shiny AI in the labs, but the product is painfully bad. And their research teams almost always block the release of their models and don't even put up demos. What's the point of admiring such a bunch? Where's the access to PaLM, Imagen, Flamingo, and the other toys they dangled in front of us?

Given this situation, I don't think they really align themselves with AI advancement; instead they align with short-term profit-making, which is to be expected. Am I making up conspiracies, or just saying what we all know: that companies work for profit, not for art?

1

visarga t1_ixiec41 wrote

The main idea here is to use

  • a method to generate solution candidates: a language model

  • a method to filter/rank the candidates: an ensemble of predictions, or running a test (as in testing generated code)

(a minimal sketch of this generate-then-filter loop follows the links below)

Minerva - https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html

AlphaCode

FLAN-PaLM - https://paperswithcode.com/paper/scaling-instruction-finetuned-language-models (top score on MMLU math problems)

DiVeRSe - https://paperswithcode.com/paper/on-the-advance-of-making-language-models (top score MetaMath)
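Here is that generate-then-filter loop as a minimal Python sketch. The candidate generator is a stub (a real system would sample a large language model many times); the toy problem and function names are illustrative, loosely following the majority-voting idea these papers use:

```python
import random
from collections import Counter

def generate_candidates(problem, n=20):
    # Stub standing in for a language model sampled n times at high temperature.
    # Here it just guesses small integers; a real system samples full solutions.
    return [random.randint(0, 10) for _ in range(n)]

def passes_test(candidate, problem):
    # Hard filter, e.g. running unit tests on generated code (AlphaCode-style).
    # For this toy, keep only even answers to show candidates being discarded.
    return candidate % 2 == 0

def solve(problem):
    candidates = generate_candidates(problem)
    survivors = [c for c in candidates if passes_test(c, problem)]
    # Soft ranking: the answer most samples agree on wins (majority voting).
    answer, _votes = Counter(survivors).most_common(1)[0]
    return answer

print(solve("What is 2 + 2?"))  # stubbed generator, so the output is random
```

The two ingredients map directly onto the bullet points above: sampling gives candidates, and tests plus voting filter and rank them.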

1

purple_hamster66 t1_ixis7x3 wrote

Thanks!

But the Wolfram GA generator still outpaces these language models. The question to be answered is to invent new, fundamental and significant math never seen before, not to solve a specific problem like "if you eat 2 apples, how many are left?". Which of the solutions you mention could invent the Pythagorean theorem, c = sqrt(a^2 + b^2), or Euler's formula, or any other basic math that depends on innovative thinking, with the answer not being in the training set?

Which of these could invent a new field of math, such as that used to solve Rubik’s cube?

Which of these could prove Fermat’s Last Theorem?

Reading thru these:

  • Minerva seems to neither invent proofs nor even understand logic; it simply chooses the best of existing proofs. It seems like solutions need to be in the training set. The parsing is quite impressive, though.
  • AlphaCode writes only simple programs. Does it also write the unit tests for those programs, and use their output to refine the code?
  • I'm not sure what FLAN-PaLM has to do with inventing math.
  • DiVeRSe looks like it might be capable. It would need several examples of inventing new math in the training set, though. (That's a legit request, IMHO.)
1

visarga t1_ixmbq7v wrote

AI is not that creative yet (maybe it will be in the future), but how many mathematicians are? Apparently it is able to solve hard problems that are not in the training set:

> Meta AI has built a neural theorem prover that has solved 10 International Math Olympiad (IMO) problems — 5x more than any previous AI system.

> trained on a dataset of successful mathematical proofs and then learns to generalize to new, very different kinds of problems

This is from 3 weeks ago: link

1

purple_hamster66 t1_ixmi7gj wrote

BTW, I took the IMO in high school and scored the second-highest grade in the city. [We had a few prep classes that other schools lacked, so I don't think it was a fair evaluation of skill.] Looking back on college and graduate tests, the IMO was perhaps the hardest test I'd ever taken, because it had questions I'd never even imagined could exist. So for an AI to score well is really good news.

1