Kolinnor t1_j32t9p0 wrote
Keep in mind that this question probably has tens of thousands of answers on the internet. So this is far from out of distribution.
It would be far more impressive to ask for something like a specific continuation of a dialogue that never occurred, etc.
Kolinnor t1_j1nksap wrote
You can learn anything, but you can't learn everything.
Yeah. Time is a big problem.
Kolinnor t1_j0u0wz4 wrote
Reply to Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Don't forget to downvote overconfident posts, optimistic or pessimistic...
Kolinnor t1_izucjy6 wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I agree with the fast-takeoff argument. If I had the power to self-improve and read + understand the whole internet in a limited time, I doubt I wouldn't be basically a god.
I think AGI is a vague term and we'll probably have things that are mindblowingly close to humans but still lack some level 2 reasoning and some deeper intuition about things. ChatGPT gives me that vibe, at least.
EDIT: to clarify, humans are currently improving computers very fast, so if we truly have AGI, we have self-improving machines
Kolinnor t1_iz5djje wrote
Source ?
Kolinnor t1_iyur259 wrote
Reply to comment by Makingggserver in The year in conclusion by Opticalzone
We had insane results in scaling (PaLM, LaMDA (can't remember if that's 2022 though), Chinchilla and others), the baby steps of general agents (notably Gato), and many things that prove that many (if not all) things are within reach of AI, notably: mathematics at a pretty serious level (Minerva), and of course the realization, with text-to-image models (DALL-E 2, Stable Diffusion), that artists might be, at least partially, replaced in the very near future. Cherry on the cake: ChatGPT, which gives a wild peek into the capabilities of GPT-4, which was rumored to be announced this year.
Thing is: we haven't really figured out "common sense" / level 2 reasoning (even though large language models exhibit some reasoning capabilities, I'd say it's still primitive). So whenever we get level 2 reasoning, we're probably in the singularity. When is that? 10 years? 20 years? Maybe. But certainly not 50 years.
In the meantime, neat "narrow" applications probably won't stop flourishing...
Kolinnor t1_iyqpx8l wrote
Reply to The year in conclusion by Opticalzone
It was without the shadow of a doubt the most intense year ever for AI. And I expect the following years to be even crazier.
Kolinnor t1_ixyj79u wrote
Reply to Super Intelligent A.I. is Neither Necessary nor Desirable (11 min read) by BackgroundResult
"The folks on LessWrong are interesting philosophers, but not always very rational. Transhumanists seem to think AGI and SAI are wonderful potential creations."
Wow. That's a really shitty take on LessWrong. And there's a paywall ?
Kolinnor t1_ix8zze6 wrote
Reply to Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Honestly, this sub is not as good as it could be if there was any moderation at all.
Still lots of clickbait articles about solving superconductors, aging, and fusion, for example. Clickbait should be banned, that's it.
Not even mentioning the supernatural posts that are no better than religious fanaticism (or just people being clearly high).
Other than that, I agree that the issue of AGI possibly being around the corner is never treated properly anywhere at all.
Kolinnor t1_iwyuae2 wrote
Reply to comment by TopicRepulsive7936 in Why Meta’s latest large language model survived only three days online by nick7566
What do you mean ? I know he's popular for being highly controversial, but I wonder if I don't know all the story about him.
Kolinnor t1_iww46to wrote
I agree with the article concerning Galactica: it was utter trash (EDIT: apparently you can still do some nice stuff with it) and excessively arrogant. I'm glad this terrible project got shut down.
However, I strongly disagree with the conclusion. There's no doubt in my mind that this is the right direction: I've been helped by GPT-3 when studying math (for example, today I explained that I wanted to know if a certain type of function had a name, because I wasn't able to find anything on Google, and it correctly understood my vague explanation), and it's just pretty good in general with "well-known" knowledge. The fact that it is really naive has sometimes helped me build intuition. Of course, it's still baby steps now, but there's big potential.
The article kind of downplays how good LLMs are in general, dismissing them as nonsense generators. Gary Marcus being cited in the article is a big red flag for me as well.
Kolinnor t1_iwryo2e wrote
Reply to When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
I recommend the amazing horror game SOMA for exactly that kind of mindblowing thought experiment. I can't say more without spoiling a lot.
Kolinnor t1_ivxlf3d wrote
Reply to comment by PrivateLudo in Will Text to Game be possible? by Independent-Book4660
Yeah, I said 20 years thinking about 2030-2035 but gosh we're already almost in 2023
Kolinnor t1_ivuyei3 wrote
Reply to Will Text to Game be possible? by Independent-Book4660
Just to add a pessimistic take: I think we're still very far from that. This really sounds like something that could only be made by an AGI (for example, anything that includes human interactions is basically a series of mini Turing tests if you want it to be realistic).
But I don't see a reason for it not to happen eventually, maybe in 20 years or so if we're lucky?
Kolinnor t1_iv12ygf wrote
Reply to Reading bedtime stories to your kids is hard work. Now AI will do it for you! by blazedemavocados
Not bad at all : I had a go with "The vampire who ate Carpets"
The vampire was always hungry. No matter how much he ate, he could never seem to get full. One night, he smelled something delicious coming from a nearby house. He followed the scent to a beautiful carpet, and he just couldn't resist. He sank his teeth into the soft fabric and started to feast.
The vampire went to the next house and smelled another delicious carpet. He started to eat, but after a few bites he realized that something was wrong. This carpet didn't taste right. In fact, it tasted terrible! He tried to spit it out, but it was too late. He was already sick.
The vampire had to go to the hospital, where he was treated for carpet poisoning. He was never able to eat carpets again, and he always made sure to check the label before he took a bite.
The end.
Kolinnor t1_iunbwt0 wrote
I was starting to believe those kinds of clean and straight-to-the-point articles didn't exist anymore ! Thank you.
Kolinnor t1_iu3i636 wrote
Quite a misleading title for the article (as far as I understood, the argumentation is just nitpicking on acronyms?). It's quite long; I wouldn't bother reading it.
Kolinnor t1_it1fo6u wrote
Reply to Does AGI have to come before ASI? by CY-B3AR
The argument "we don't even know how our own intelligence works" fails all the time, even more in the light of the new progresses in AI.
Before 2022, you could have argued AI art was decades down the road, as we have absolutely no clue how the brain processes the different concepts and tie them coherently together to create art. Same thing with Go, protein folding...
Kolinnor t1_isq0mb0 wrote
Reply to How will fields like engineering, mathematics, medicine, and finance be changed by AI in the coming years? by pradej
I definitely think that for mathematics, we'll have tools similar to chess engines (something like the Lichess UI would be amazing) but to help us solve problems (well, this exists already, but it's not really that good, and the ambiguity it can tolerate is mostly hard-coded).
I heard a few months ago about a tool, based on GPT-3, that translated formal prover code into LaTeX (or vice versa, I can't remember). I can't imagine this kind of tool not being a huge thing in the next few years. In particular, I'm expecting that future math articles will include formal proofs as code in an appendix (or a GitHub link).
Kolinnor t1_irt1o9b wrote
Reply to Am I crazy? Or am I right? by AdditionalPizza
It's definitely possible that there are bots out there posting and upvoting content on social media platforms. However, it's also possible that some of these posts are simply being made by humans who are trying to game the system. It's hard to say for sure without more information.
If you're concerned that some of the content you're seeing online is fake or misleading, it's always a good idea to do your own research before believing it. In many cases, a simple Google search can help you determine whether or not something is true.
Ultimately, it's up to you to decide how much trust you want to put in online content. If you're feeling overwhelmed, try taking a break from social media for a
And then GPT-3 reached maximum sentence length hehehe
Kolinnor t1_ir6l3x8 wrote
Reply to Excited to announce Imagen Video, our new text-conditioned video diffusion model that generates 1280x768 24fps HD videos by Dr_Singularity
Boys, things are fucking speeding up, what the hell
Kolinnor t1_ir6jlss wrote
"But it's just a multiplication of big matrices"
Kolinnor t1_iqsl8sx wrote
Reply to Self-Programming Artificial Intelligence Using Code-Generating: a self-programming AI implemented using a code generation model can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks. by Schneller-als-Licht
Before this blows up in hype, can any expert comment on how good this is ?
(I can imagine lots of AIs that sabotage their own code in subtle ways, so you'd have to make sure it's going in the right direction.)
Kolinnor t1_j3cek1n wrote
Reply to Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
The AI beat me on this one. However, I suspect this riddle was somewhere in the dataset ?
Impressive nonetheless, here's ChatGPT's answer :
There are a few ways that the statement "I can't operate on this boy, he's my son!" could be true:
The surgeon could be the biological father of the patient and therefore have a personal conflict of interest in performing the surgery.
The patient could be adopted and the surgeon could be his adoptive father.
The patient could be the son of a close family friend or colleague of the surgeon, and the surgeon could feel unable to perform the surgery due to personal connections.
It is also possible that the surgeon was simply joking or using a figure of speech to express shock or surprise at the situation. Without more context, it is difficult to determine the exact meaning of the statement.