Kolinnor

Kolinnor t1_j3cek1n wrote

The AI beat me on this one. However, I suspect this riddle was somewhere in the dataset?

Impressive nonetheless, here's ChatGPT's answer:

There are a few ways that the statement "I can't operate on this boy, he's my son!" could be true:

The surgeon could be the biological father of the patient and therefore have a personal conflict of interest in performing the surgery.

The patient could be adopted and the surgeon could be his adoptive father.

The patient could be the son of a close family friend or colleague of the surgeon, and the surgeon could feel unable to perform the surgery due to personal connections.

It is also possible that the surgeon was simply joking or using a figure of speech to express shock or surprise at the situation. Without more context, it is difficult to determine the exact meaning of the statement.

18

Kolinnor t1_izucjy6 wrote

I agree with the fast-takeoff argument. If I had the power to self-improve and to read and understand the whole internet in a limited time, I have little doubt I'd basically be a god.

I think AGI is a vague term, and we'll probably have things that are mind-blowingly close to humans but still lack some level-2 reasoning and some deeper intuition about things. ChatGPT gives me that vibe, at least.

EDIT: to clarify, humans are currently improving computers very fast, so if we truly have AGI, we have self-improving machines.

47

Kolinnor t1_iyur259 wrote

We had insane results in scaling (PaLM, LaMDA (can't remember if that was 2022 though), Chinchilla, and others), the baby steps of general agents (notably Gato), and many things proving that many (if not all) domains are within reach of AI, notably mathematics at a pretty serious level (Minerva) and, of course, the realization with text-to-image models (DALL-E 2, Stable Diffusion) that artists might be at least partially replaced in the very near future. Cherry on the cake: ChatGPT gives a wild peek into the capabilities of GPT-4, which was rumored to be announced this year.

Thing is: we haven't really figured out "common sense" / level-2 reasoning (even though large language models exhibit some reasoning capabilities, I'd say it's still primitive). So whenever we get level-2 reasoning, we're probably in the singularity. When is that? 10 years? 20 years? Maybe. But certainly not 50 years.

In the meantime, neat "narrow" applications probably won't stop flourishing...

3

Kolinnor t1_iyqdgki wrote

Reply to this sub by TinyBurbz

Bro, have a nice day, for real. I think the silent majority of this sub is not hostile to you guys.

I only see artists who are sad, and that sucks. I hope one day everyone can do whatever the fuck they want without needing to get paid for it.

18

Kolinnor t1_ix8zze6 wrote

Honestly, this sub is not as good as it could be if there were any moderation at all.

Still lots of clickbait articles about solving superconductors, aging, and fusion, for example. Clickbait should be banned, that's it.

Not even mentioning the supernatural posts, which are no better than religious fanaticism (or just people being clearly high).

Other than that, I agree that the issue of AGI possibly being around the corner is never treated properly anywhere at all.

30

Kolinnor t1_iww46to wrote

I agree with the article concerning Galactica: it was utter trash (EDIT: apparently you can still do some nice stuff with it) and excessively arrogant. I'm glad this terrible project got shut down.

However, I strongly disagree with the conclusion. There's no doubt in my mind that this is the right direction: GPT-3 has helped me when studying math (for example, today I explained that I wanted to know whether a certain type of function had a name, because I wasn't able to find anything on Google, and it correctly understood my vague explanation), and it's pretty good in general with "well-known" knowledge. The fact that it's really naive has sometimes helped me build intuition. Of course, these are still baby steps, but there's big potential.

The article kinda downplays how good LLMs are in general, dismissing them as nonsense generators. And Gary Marcus being cited in the article is a big red flag for me as well.

19

Kolinnor t1_ivuyei3 wrote

Just to add a pessimistic take on that: I think we're still very far from this. It really sounds like something that could only be made by an AGI (for example, anything that involves human interaction is basically a mini Turing test if you want it to be realistic).

But I don't see a reason for it not to happen eventually, maybe in 20 years or so if we're lucky?

9

Kolinnor t1_iv12ygf wrote

Not bad at all: I had a go with "The vampire who ate Carpets".

The vampire was always hungry. No matter how much he ate, he could never seem to get full. One night, he smelled something delicious coming from a nearby house. He followed the scent to a beautiful carpet, and he just couldn't resist. He sank his teeth into the soft fabric and started to feast.

The vampire went to the next house and smelled another delicious carpet. He started to eat, but after a few bites he realized that something was wrong. This carpet didn't taste right. In fact, it tasted terrible! He tried to spit it out, but it was too late. He was already sick.

The vampire had to go to the hospital, where he was treated for carpet poisoning. He was never able to eat carpets again, and he always made sure to check the label before he took a bite.

The end.

7

Kolinnor t1_it1fo6u wrote

The argument "we don't even know how our own intelligence works" fails all the time, even more so in light of recent progress in AI.

Before 2022, you could have argued AI art was decades down the road, since we have absolutely no clue how the brain processes different concepts and ties them together coherently to create art. Same thing with Go, protein folding...

2

Kolinnor t1_isq0mb0 wrote

I definitely think that for mathematics we'll have tools similar to chess engines (something like the Lichess UI would be amazing), but for helping us solve problems (well, this exists already, but it's not really that good, and the ambiguity it can tolerate is mostly hard-coded).

I heard a few months ago about a tool, based on GPT-3, that translated formal prover code into LaTeX (or vice versa, I can't remember). I can't imagine this kind of tool not being a huge thing in the next few years. In particular, I expect future math articles to come with formal proofs in an appendix (or a GitHub link), something like the sketch below.
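
A minimal sketch of what such a machine-checked appendix proof could look like, written in Lean 4 with a deliberately trivial toy statement of my own choosing (not taken from any real article):

```lean
-- Toy machine-checked proof: addition on natural numbers is commutative.
-- In a real paper, the appendix would formalize the article's actual
-- theorems, so the prover (not just the referees) checks every step.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- reuse the commutativity lemma from Lean's core library
```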

2

Kolinnor t1_irt1o9b wrote

It's definitely possible that there are bots out there posting and upvoting content on social media platforms. However, it's also possible that some of these posts are simply being made by humans who are trying to game the system. It's hard to say for sure without more information.

If you're concerned that some of the content you're seeing online is fake or misleading, it's always a good idea to do your own research before believing it. In many cases, a simple Google search can help you determine whether or not something is true.

Ultimately, it's up to you to decide how much trust you want to put in online content. If you're feeling overwhelmed, try taking a break from social media for a

And then GPT-3 reached maximum sentence length hehehe

8

Kolinnor t1_iqsl8sx wrote

Before this blows up in hype, can any expert comment on how good this is?

(I can imagine lots of AIs that sabotage their own code in subtle ways, so you'd have to make sure it's actually going in the right direction.)

61