Submitted by AdditionalPizza t3_11ehv56 in singularity

This is assuming GPT-4 will release in a way similar to GPT-3 and ChatGPT. When I talk about this generation, it should also include Google's public release of Bard, depending on its success.

I wonder because of the way these will be implemented from here on out. Tying them into search engines and other products (Office, Excel, Snapchat, etc.) will probably mean they begin to follow how most software iterations are released. The public will just slowly see better and better results, and Microsoft/Google will just have v1.1, v1.2, v2.2, etc. We will not see substantial changes, but a constant flow of "Oh we can do this with AI now? Cool."

I don't mean people in this subreddit necessarily, but the general public won't get another ChatGPT-level wave of AI publicity sweeping across social media. Google will probably release Bard and it will improve quickly enough that there's no point in using Bing; or who knows, Google might fail and Bing will prevail. But it doesn't really matter who wins or loses; that's not my point.

Are we about to be in a situation where AI is the norm? It's just kind of a "thing" we have, it's extremely useful, it replaces the point-of-access of the internet for most of us, and the world will never be the same? We will adopt it faster than we adopted the internet (because the internet makes AI adoption nearly instant). We won't know what hit us in a matter of months; we won't even feel it until we look back.

Hallucinations will just start to kind of disappear, and new models will be implemented regularly on the backend without a ton of fanfare, just to stay ahead of the competition; outside of maybe the very "impressive" new features, which will hit the front page for about 24 hours. Jobs will be made easier and easier until they're unnecessary for larger and larger chunks of the population (unnecessary, not necessarily entirely replaced yet).

Every facet of our lives will [relatively] slowly be enhanced by AI at a steady pace. Is this upcoming generation of language models the beginning of this? Arguably we have reached the point of transformative AI already, and any new iteration now is just another gob of icing on the cake. But will this year's models be the last of the "look what AI can do!" days? Are we already exiting those days, and will this generation cement AI into most people's daily lives? Of course there will still be big impressive feats here and there on the road to AGI, but we will stop being surprised that AI can "do" intellectual things that only humans could do before.

Just as an anecdote:

>I remember my father bringing home a box one day in the 90s. He explained that inside the box was the World Wide Web. I was a kid, didn't really know what that meant, but assumed it was some boring CD program. He got it all connected and was showing me Netscape Navigator (I thought the meteor traveling across the N logo was cool, I guess), but I didn't really "get" it. Until he loaded up some random Flash game page. I played Battleship for a little while, and was like "ok, but I have an SNES, why would I bother with this?"

I feel like this is where we are right now with AI and the general public. It's "neat" and we know it'll probably be useful for some people.

So eventually my siblings and I started downloading music, chatting with friends over ICQ and later MSN Messenger, grabbing ROMs and emulators, movies/shows, PlayStation games, and so on. Our internet got faster and more robust. The largest QoL upgrade was getting a second phone line in the house. I moved away, got cable internet, and these days I have fiber. I don't remember or really care about the time I went from 5 Mbps to 15, or the first video I watched online. What still sticks in my head 25 years later is that underwhelming shitty game of Battleship, and the first time I messaged a friend from my class online over ICQ; around this time is when the internet became a thing I used almost daily. I remember downloading music and stuff, but it's not as specific. The wonder wasn't there because it just felt more normal, like "Oh I heard you can do this, let's try it out."

ChatGPT hit the public pretty hard; I heard way more people talking about it than the Bing version. I think Google is poised to just release Bard, hopefully with fewer hallucinations, and have it tie into the personalized records they've harvested from us. And that will be that. AI will be how we Google things, and one day in the future I will look back on 2023 and try to pinpoint when exactly I stopped needing to Google with "site:reddit.com" because AI was more efficient. 2023 (March/May I hope!) will be the "ICQ moment" of AI.

Am I crazy here? Of course this assumes some more success with search engine implementation, but what else is there between now and a new internet? I really believe this change in how we access the internet will be much more transformative than people think. I don't see anyone really talking about it. It's safe to assume most of us in this subreddit are "good" at finding information online and using it to enhance our intelligence. It's slow and clunky, but it is a form of enhancing our intelligence. Opening up search and merely asking a question, then getting accurate and reputable information? That will change everything. No sifting through garbage, no trying to read lengthy articles that are written in such a way as to keep you engaged.

Anyway, has anyone stopped to really think about this at all?? Not like "I can't wait for GPT-4, it'll be awesome" or "AGI is < 10 years away! Can't wait!" kind of things. Like, this is most likely only months away. A transformative artificial intelligence, publicly available, anytime between today and like 2 months away. Wtf. Well, I hope it's within 2 months. The actual implications of technology that will begin rolling out any day now. Is this LLM generation the last big event until (if) we see a successor to the language model architecture itself before AGI?

77

Comments


techy098 t1_jael4fn wrote

I am looking forward to the day of having a personal assistant. One who will know a lot about me, will keep my matters private, and will be able to help me without needing lengthy context from me.

Imagine the AI having access to my W-2 and investment data, filling in all the forms, and asking me questions if there are any doubts. This is simple machine learning; hopefully we will get there in a few years.

39

AdditionalPizza OP t1_jaetm35 wrote

I hope we're there in less than a "few" years, but we'll see. Once hallucinations are tempered enough, I don't see why we wouldn't have access to that.

10

naivemarky t1_jaeu4ts wrote

The assistant is almost here, and it's going to be awesome. Everyone will have one as soon as it rolls out. Plus, it's not the device itself that will be anything special (it's basically as complex as a headset); it's the processing power in a data center in the cloud.

10

NarrowEyedWanderer t1_jaewuj0 wrote

So looking forward to giving corporations total access to every aspect of my life.

I really hope we develop open-source, self-hosted assistant projects.

11

Unfocusedbrain t1_jae7e6m wrote

I don't know if "realistic" would be an appropriate word for this, since we don't know what *will* happen in the next 5-10 years. Though this is probably the most reasonable view of AI yet from anyone who has posted on this board.

Anyone who’s been on the internet since the very beginning understands how (paradoxically) drastic, yet invisibly, the creeping change on the internet has been. Some times I have to step back from everything just for the question ‘how did things changes so drastically? what the fuck happened?’ to come into my head.

The same thing is happening with AI. People who understand concepts like the singularity notice these changes, but laymen who are focused on their daily struggles and routines won't notice anything but useful tools and entertainment becoming available to them.

I would wager that within half a decade a multi-modal proto-AGI will be available that could do all the cognitive tasks a human can do, at least at acceptable (but not necessarily extraordinary) levels. Not within a year, that's bonkers.

25

AdditionalPizza OP t1_jae8yaq wrote

>I would wager that within half a decade a multi-modal proto-AGI will be available that could do all the cognitive tasks a human can do, at least at acceptable (but not necessarily extraordinary) levels. Not within a year, that's bonkers.

Did I imply that in my post somewhere? I don't mean anything that capable within the year; I'm saying there will be a drastic change in the average person's life caused directly by the impact AI will have this year, when the "next gen" is in-your-face on search engines and widely used instead of our primitive search today.

10

Unfocusedbrain t1_jaebvqz wrote

I agree with you and I apologize if it seemed like I was implying you were giving a deadline for AGI. That was not my intention. I just liked your realistic perspective on AI progress, instead of the “AGI is < 10 years away! Can’t wait!” hype that some people have.

And yes, there will be a huge change on the web soon, similar to the iPhone and social media revolution in 2008. It's not only Google and Microsoft; many other companies are working on LLM-enhanced search engines. We don't know how that will affect the world, but I think it will speed up AGI research and change the world even more than social media and smartphones did.

9

AdditionalPizza OP t1_jaecxy9 wrote

Oh ok, I thought maybe I made a slip in my post somewhere implying that.

But yeah, although we will all adapt very quickly to this upcoming shift in how we access the internet, I think in hindsight it will be one of the big moments we remember for the rest of our lives.

5

DowntownYou5783 t1_jaebugj wrote

What a great and insightful post. I think you are largely on point. Our smart devices are about to get a whole lot smarter. It's not unreasonable to think we could all have something approaching a JARVIS-level intelligence (see Iron Man) in our homes by 2030.

ChatGPT is just the beginning. It tends to hallucinate quite a bit with difficult questions, but it can maintain a conversation better than many humans. And it's willing to be educated and admit mistakes. Later iterations from OpenAI and similar iterations from other sources (i.e. within the next 18 months) are likely to take substantial steps forward.

It's crazy that the larger public is largely unaware of what appears to be happening (although John Oliver's segment on Last Week Tonight will no doubt raise awareness).

11

AdditionalPizza OP t1_jaedsf6 wrote

I do think that segment missed a lot of crucial points, and focused on very near term issues that will no doubt be overcome relatively easily.

But the hallucination aspect has to be solved, and it needs to happen very soon. Once that is tackled, the train won't stop. I think it will be reduced over the coming months to the degree that it becomes a non-issue in most cases fairly soon. Google has a lot riding on that.

We also shouldn't underestimate how much more useful a model with access to the internet will be over the current ChatGPT. Access to recent events will prove very useful.

6

wisintel t1_jae4ewg wrote

Isn’t 4 already out in the Bing/Sydney chatbot

3

RabidHexley t1_jae6i99 wrote

It's GPT-3".5": the same backbone as ChatGPT, with different software wrapping, meta-prompts, and internet access.

9

MysteryInc152 t1_jaehoz6 wrote

It's definitely not 3.5. For one thing, it's much smarter. For another, Microsoft has said it's not 3.5. They're cagey about admitting it's GPT-4, but it almost certainly is.

5

LEOWDQ t1_jaeilhw wrote

This guy is correct.

Microsoft openly said that Prometheus (the model behind Bing) is OpenAI's successor to GPT-3.5, so it's GPT-4 in all but name. There's also the fact that it seems to be closed off, meaning no public APIs like GPT-3 and GPT-3.5 for everyone else.

2

LEOWDQ t1_jaehwvl wrote

I don't know why you're being downvoted, but the current model on Bing is indeed GPT-4; it's just that Microsoft has licensing rights with OpenAI and called it Prometheus instead.

And it seems that with Microsoft's additional 10 billion USD backing, GPT-4 may be forever closed-source within Microsoft.

3

mrfreeman93 t1_jaf3nyt wrote

JPMorgan stated that OpenAI is most likely training GPT-5 on >20k GPUs right now.

2

tedd321 t1_jae8vf8 wrote

It’s just a LLM. Very cool and useful. But as far as AI goes, we need everything. Games, products, robots.

−5

AdditionalPizza OP t1_jae9nsz wrote

Today it's just an LLM. When the next generation drops, and it's widely implemented across several products and industries, I think we will have a very different definition of "cool and useful." I can't say what all of that will be, but I do believe it starts very soon. Sooner than anyone is comfortable saying out loud. A month, maybe 2? Then from there it's like dominoes: companies adopting ultra-useful AI into their products.

8

tedd321 t1_jaehyko wrote

Makes sense... I saw the paper today about how this LLM affected Microsoft's robots. We need the products now.

1