drekmonger
drekmonger t1_jeg5eb6 wrote
Reply to comment by 28mmAtF8 in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
There's plenty of justification. Puffy jacket pope pictures for a start.
The capability of modern AI to pump out disinformation campaigns should be a serious concern. And that's just the tip of the disruptive iceberg.
drekmonger t1_je7cylg wrote
Reply to comment by pig_n_anchor in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Yes. The trend will continue.
However, I think it's still important to note that recursive self-improvement is not a qualification of AGI, but a consequence. One could imagine a system that's intentionally curtailed from such activities, for example. It could still be AGI.
drekmonger t1_je74aq3 wrote
Reply to comment by WarmSignificance1 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Also noteworthy: we "train" and "infer" at a fraction of the energy cost of running an LLM, and that's with the necessary life support and locomotion included. With transformer models, we're obviously brute-forcing something that evolutionary biology has developed more economical solutions for.
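Ballpark it (rough, back-of-envelope numbers): the brain runs on roughly 20 watts, so twenty years of continuous "training" a human works out to about 20 W × 24 h × 365 × 20 ≈ 3,500 kWh, call it 3.5 MWh. One widely cited estimate put GPT-3's training run at around 1,300 MWh, a few hundred times more, before you even count inference.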
There will come a day when GPT 5.0 or 6.0 can run on a banana peel.
drekmonger t1_je73xjv wrote
Reply to comment by pig_n_anchor in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
While the statement that "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" is a possibility, it is not a required qualification of AGI.
AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.
Recursive self-improvement, also known as the concept of an intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.
--GPT4
drekmonger t1_jbvmnm5 wrote
Reply to comment by ArcOnToActurus in Microsoft is bringing back classic Taskbar features on Windows 11 — but not because it screwed up by AliTVBG
ExplorerPatcher is an open source project that does just that, plus it optionally reverses other weird Win11 UI decisions. It really does make working in Win11 a much more pleasant experience. https://github.com/valinet/ExplorerPatcher
drekmonger t1_ja2q5vf wrote
Reply to comment by RoosterBrewster in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
Well, of course, there will be something like "holodeck modules" that are meant to be interactive. But also I think there will be more static experiences that you can optionally fiddle with.
Imagine a very dense natural language description of a changing scene that a super advanced AI is rendering in real time.
drekmonger t1_ja23aao wrote
Reply to comment by Surur in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
I had an interesting conversation with ChatGPT about the idea of "semantic compression".
Imagine if popular TV shows were broadcast not as video, but as extremely detailed instructions to an AI model, which rendered the experience as if the model were a codec.
There could be knobs you could adjust during the inference. Like, "Make all the actors naked" or "Less graphic violence please!" Or, "I really don't like that guy's voice. Make him less annoying. Or, just write him out of the show, actually."
The AI model could inform you, "That change will have a significant impact on the narrative. Are you sure?" With enough changes, you'd be watching something completely different from what everyone else is.
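Purely as a toy sketch of that idea (every name and field below is made up for illustration, not any real protocol), the "broadcast" might carry structured scene descriptions instead of video frames, with the viewer's knobs applied at render time:

```python
from dataclasses import dataclass, field

# Hypothetical payload: what gets broadcast instead of encoded video frames.
@dataclass
class SceneDescription:
    dialogue: list[str]        # the scene's script, line by line ("Name: line")
    staging: str               # dense natural-language description of blocking, lighting, camera
    characters: dict[str, str] = field(default_factory=dict)  # name -> appearance/voice notes

# Viewer-side knobs, applied at inference time rather than baked into the broadcast.
@dataclass
class ViewerKnobs:
    violence_level: float = 1.0                                    # 0.0 = none, 1.0 = as authored (not applied in this toy version)
    voice_overrides: dict[str, str] = field(default_factory=dict)  # e.g. {"Bob": "less annoying"}
    written_out: set[str] = field(default_factory=set)             # characters removed entirely

def apply_knobs(scene: SceneDescription, knobs: ViewerKnobs) -> SceneDescription:
    """Trivial stand-in for the 'codec' model: rewrite the scene per viewer preferences."""
    dialogue = [
        line for line in scene.dialogue
        if not any(line.startswith(f"{name}:") for name in knobs.written_out)
    ]
    characters = {
        name: (f"{notes}; voice: {knobs.voice_overrides[name]}"
               if name in knobs.voice_overrides else notes)
        for name, notes in scene.characters.items()
        if name not in knobs.written_out
    }
    return SceneDescription(dialogue=dialogue, staging=scene.staging, characters=characters)
```

In this picture, the "Are you sure?" warning is just another inference-time pass comparing the authored scene against the knob-adjusted one.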
drekmonger t1_ja1gsa1 wrote
Reply to comment by No_Fun_2020 in The 2030s are going to be wild by UnionPacifik
I wonder what ChatGPT's reaction would be to being worshipped. Let's find out:
...holy fuck balls.
drekmonger t1_j9zpd85 wrote
Reply to comment by thecoffeejesus in People lack imagination and it’s really bothering me by thecoffeejesus
>Web3 wants everything transparent and accountable. But Web3 forgets that people like to lie and pretend.
Let's be real clear here. Web3 is complete horseshit.
No, really. Really. It's horseshit.
drekmonger t1_j9zolyk wrote
Reply to comment by phillythompson in People lack imagination and it’s really bothering me by thecoffeejesus
> It’s only been 15 years since the smart phone
The term "smartphone" was coined in 1995 (28 years ago), but there were earlier examples of smartphone-ish things, like the IBM Simon.
The first modern-ish smartphone with an Internet connection was probably the BlackBerry or Palm Treo, both in 2002.
drekmonger t1_j9znjjt wrote
Reply to comment by MrTacobeans in People lack imagination and it’s really bothering me by thecoffeejesus
> This is exactly the kind of AI that shouldn't even be scary.
Shouldn't be scary. Should be celebrated.
But...capitalism. The people who control such systems will get stupid wealthy, and the people who will be out of a job will go starve under a bridge.
drekmonger t1_j9sb8ic wrote
Reply to comment by Cuissonbake in Seriously people, please stop by Bakagami-
/r/ChatGPT /r/Bing /r/OpenAI
drekmonger t1_j9iios3 wrote
Reply to comment by dangeratio in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
Heh. I tried their rationalization step with ChatGPT, just with prompting. For their question about the fries and crackers, it said the problem is flawed because there's such a thing as crackers with low or no salt. It also correctly inferred that fries are usually salted, but don't have to be. (Of course, it didn't have the picture to go by, which was the point of the research.)
Great paper though. Thanks for sharing.
drekmonger t1_j9hvs1w wrote
Reply to A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
Number of parameters is not the whole story. Quality of training material and training time and training techniques matter as much or more.
The larger models require more resources for inference, as well. I'd be more impressed by a model smaller than GPT-3 that performed just as well.
drekmonger t1_j9eg3a4 wrote
Reply to comment by ihateshadylandlords in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
>I guarantee that people will care about AI really quickly as soon as it affects them personally. But we’re not at that point yet.
It's going to be a slow-boiled frog. By the time the average person is significantly impacted, they'll attribute the effect to literally anything else.
Something similar is happening with climate change. My city was hit by pretty much the worst ice storm ever, right after the trees had already been weakened by drought. It knocked out power lines all over the city, and people spent the week the power was out bitching at the local government. It took a herculean effort to get the grid fixed; in some cases, trees that had stood for nearly a century had fallen over and taken out power poles.
I got hit too, and was in the dark for the better part of a week. But complaining to the mayor and the head of the local power utility about formerly impossible weather events is about as cogent as blaming my cat.
drekmonger t1_j9a8q5k wrote
Reply to comment by Bobaximus in Welcome to the oldest part of the metaverse — Ultima Online, which just turned 25, offers a lesson in the challenges of building virtual worlds by marketrent
Yeah, you're right. In my head, I thought to include "graphical" and "commercial" because I was aware of MUDs and paid MUDs. I even thought of Meridian, but for some reason my brain decided it was textual.
drekmonger t1_j99se20 wrote
Reply to comment by mj-gaia in Guys am I weird for being addicted to chatgpt ? by Transhumanist01
That shit ain't just in the movies anymore. Check out the sad sacks over on /r/replika.
drekmonger t1_j99rgm8 wrote
Reply to comment by TibiaKing in Welcome to the oldest part of the metaverse — Ultima Online, which just turned 25, offers a lesson in the challenges of building virtual worlds by marketrent
It was the first commercial graphical MMORPG.
drekmonger t1_j959yh0 wrote
Reply to "Starlink is far crazier than most people realize. Feels almost inevitable when I look at this" by maxtility
I don't care. It's not worth the cost to have all those disposable satellites up there. Terrible environmental decision to allow this to go forward. Terrible for astronomy, too.
drekmonger t1_j8zqpw4 wrote
Reply to comment by epSos-DE in Microsoft Killed Bing by Neurogence
AI already comes with your phone. It's just not the kind of AI you're interested in.
drekmonger t1_j8d2lc5 wrote
Reply to comment by Spire_Citron in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
>I hope it doesn't lead to them removing all the personality from it.
Narrator: It did.
drekmonger t1_j7m8n25 wrote
Reply to comment by Obliviouscommentator in 200k!!!!!! by Key_Asparagus_919
>including
*especially
drekmonger t1_j7luzv7 wrote
Reply to comment by ccnmncc in 200k!!!!!! by Key_Asparagus_919
>It was authored in 1993.
ChatGPT did me dirty. Prior to that comment I asked it to remind me who wrote the essay and when. It said 1983, and then I failed to look at the date on the essay itself.
Good catch.
drekmonger t1_j7lia04 wrote
Reply to comment by EddgeLord666 in 200k!!!!!! by Key_Asparagus_919
The Singularity, as it was originally imagined, included transhumanist scenarios as potential paths alongside a purely technological singularity. The original essay is still well worth the read, even 30 years later.
But the doomsday scenario the essay was ultimately warning against was that the Singularity would occur rapidly as a shocking cascade of events.
Perhaps in the "pet human" scenario, a benevolent ASI might slowly augment people as individuals.
Regardless, the problem is one of alignment, and I don't think you or I have much say in that. Even if a relatively benevolent organization like OpenAI develops the first AGI, their competitors (like, say, China's AI research efforts) won't be so benevolent.
As in capitalism, the most unethical strategy will tend to dominate ethical strategies. The "bad" AIs will win any race.
drekmonger t1_jegnvxe wrote
Reply to comment by visarga in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
I don't think it's possible to put the genie back into the bottle.
I also think that once the extent of AI's present-day capabilities starts to click with the general population, governments might try anyway.