Purplekeyboard t1_je8m61n wrote
Reply to comment by currentscurrents in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
> LLMs likely have a type of understanding, and humans have a different type of understanding.
Yes, this is more of a philosophy debate than anything else, hinging on the definition of the word "understanding". LLMs clearly have a type of understanding, but as they aren't conscious it is a different type than ours. Much as a chess program has a functional understanding of chess, but isn't aware and doesn't know that it is playing chess.
Purplekeyboard t1_je8l78y wrote
Reply to comment by currentscurrents in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
The point is that GPT-3 and GPT-4 can synthesize information to produce new information.
One question I like to ask large language models is "If there is a great white shark in my basement, is it safe for me to be upstairs?" This is a question that has almost certainly never been asked before, and answering it requires more than just memorization.
Google Bard answered rather poorly, and said that I should get out of the house or attempt to hide in a closet. It seemed to be under the impression that the house was full of water and that the shark could swim through it.
GPT-3, at least the form of it I used when I asked it, said that I was safe because sharks can't climb stairs. Bing Chat, using GPT-4, was concerned that the shark could burst through the floorboards at me, because great white sharks can weigh as much as 5000 pounds. But all of these models are forced to put together various bits of information on sharks and houses in order to try to answer this entirely novel question.
Purplekeyboard t1_jcc7cuo wrote
Reply to comment by ScientiaEtVeritas in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
But without OpenAI, who would have spent the billions of dollars they have burned through creating and then actually giving people access to models like GPT-3 and now GPT-4?
You can use GPT-3, and even versions of GPT-4, today. Or you can stand and look up at the fortress of solitude that is Google's secret mountain lair where models are created and then hoarded forever.
Purplekeyboard t1_jbd8rq4 wrote
Reply to comment by ubzrvnT in Humans Started Riding Horses 5,000 Years Ago, New Evidence Suggests by Magister_Xehanort
Zebras or camels or buffalo or elephants or elk.
Selectively breed them for thousands of years and we'd get something much better suited for riding.
Purplekeyboard t1_jasqt4g wrote
Reply to [OC] Wikipedia Edits by Day, 2001–2010 by ptgorman
It's too bad they shut Wikipedia down at the end of 2010, or we could have had the last 12 years of numbers as well.
Purplekeyboard t1_jajcnb5 wrote
Reply to comment by JackBlemming in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
> This is not good for the community.
When GPT-3 first came out and prices were posted, everyone complained about how expensive it was, and that it was prohibitively expensive for a lot of uses. Now it's too cheap? What is the acceptable price range?
Purplekeyboard t1_j9z97mm wrote
Reply to comment by oramirite in The Job Market Apocalypse: We Must Democratize AI Now! by Otarih
Democracy also isn't "free stuff for everyone".
Purplekeyboard t1_j9xz3ir wrote
"Democratizing" image generation, if that means giving people access to it for free, would not be difficult. Imagegen is not that expensive. You can buy unlimited AI image generation today for $25/month from NovelAI (they only offer anime models, though photorealistic models are no more expensive to run).
This also comes with unlimited text generation, although with smaller, weaker models than the best ones available. ChatGPT is currently free as well, and it is the best text generation model released so far.
So, at least as long as you live in a first world country, these types of AI are easy to get access to.
Purplekeyboard t1_j9uwy46 wrote
Reply to When a builder found a dirty old boot under Hobart barracks, little did he know he'd stumbled upon rare treasure - Major find for early colonial history in Australia. by ArtOak
Egypt has its pyramids, Rome has its Colosseum, and now Australians too can swell with pride as they show off their dirty old boot.
Purplekeyboard t1_j9gie2n wrote
Reply to comment by JPAnalyst in [OC] I asked Georgians (U.S.) if they learned in school about the 1912 racial cleansing in Forsyth County (GA), only 11% of respondents were taught this. by JPAnalyst
> Current teachers also responded saying it’s not in the curriculum.
That's the statistic you actually want.
Purplekeyboard t1_j9ggpzb wrote
Reply to [OC] I asked Georgians (U.S.) if they learned in school about the 1912 racial cleansing in Forsyth County (GA), only 11% of respondents were taught this. by JPAnalyst
Keep in mind that a substantial number of people couldn't even tell you what century World War II was fought in, or what countries were on which sides in that war.
I find it unlikely that adults, decades later, would have any idea whether they were taught in school about a 1912 Forsyth County incident. Self-reports are not the way to learn what is or was taught in school.
Purplekeyboard t1_j9bd1jg wrote
Reply to [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM by head_robotics
Keep in mind, these smaller models are going to be a lot dumber than what you've likely seen in GPT-3.
Purplekeyboard t1_j7exgma wrote
Reply to comment by [deleted] in Lead Plates and Land Claims in North America and Europe: When did the practice begin of burying lead plates to establish ownership of land, and why did it die out, and was it ever used successfully in a court of law to establish ownership? by whyenn
In Seattle, a 100 year old building is a historic landmark.
Purplekeyboard t1_j707qyj wrote
Reply to comment by eighty2angelfan in Arizona House bill would allow pregnant drivers to use HOV lane by ForkzUp
Don't most pregnant women see their unborn baby as a person?
Purplekeyboard t1_j62z8oy wrote
Reply to [OC] Youtube has over 1 billion hours of videos, we Built an AI Search Engine that can find exact timestamps for anything on Youtube by simonezchen
Your AI search engine doesn't seem to work. I searched for several phrases from YouTube videos, like "I gotta have more cowbell", and it produced results that didn't relate in any way to what I searched for.
Purplekeyboard t1_j47f9rg wrote
Reply to Charles Joseph Minard's famous graph of the losses of Napoleon's Grande Armée during its march to Moscow and back. by TurtlePwns
Did the missing soldiers die, or did they leave in some other way?
Purplekeyboard t1_j44kfh9 wrote
Reply to comment by LordyVoldyy in USA Inflation v. Gold [OC] by rosetechnology
But the inflation rate didn't steadily move up over this time period.
Purplekeyboard t1_j42pnur wrote
Reply to USA Inflation v. Gold [OC] by rosetechnology
Why is inflation a straight line moving steadily up? This doesn't match the inflation rate over time at all.
Purplekeyboard t1_j40li56 wrote
Reply to [OC] New memecoin Bonk saw a 300% price return in the first 8 days, compared to 141 days for Shiba Inu and 1,253 days for Dogecoin for the same threshold. Bonk’s launch strategy, which involved airdropping 50% of its total supply to a wide base of Solana users, drove its price spike. by coingecko
Anyone stupid enough to buy this deserves to have their money stolen.
Purplekeyboard t1_j2tukmg wrote
Ok, now what does HDI mean?
Purplekeyboard t1_j2s8it2 wrote
Bloom's not very good, pruned or not.
Purplekeyboard t1_j1j5pdk wrote
Reply to ELI5 What is the underlying principle that lets the creators of ChatGPT (for example) feel confident that it will accurately provide answers to questions they themselves haven’t pondered? by onlyouwillgethis
ChatGPT is essentially just a text predictor.
It is trained on essentially all the text on the internet, and it uses this to learn which words tend to follow which other words. It's powerful and sophisticated enough to write proper English sentences which are on topic and which are (mostly) accurate.
So if you say, "Where was Elvis Presley born?", it predicts that after this text would generally come text which gives the answer to the question, and that's the text it gives you. And because it has been trained on the text of the entire internet, it knows the answer to this question.
If you say, "Please write me a brief essay on the difference between capitalism and socialism", it predicts how such an essay would likely start, then writes that text. Then predicts how such an essay would likely continue, then writes that text. And so on, until the essay is completed. As it's been trained on the text of the internet, it has large volumes of text in its training material about capitalism and socialism and the differences between them.
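The "predict the next word from what came before" idea can be sketched with a toy bigram model. This is my own illustration, not how ChatGPT actually works internally (it uses a large neural network, not word counts), but the core loop is the same: look at the preceding text, pick a likely next word, repeat. The tiny corpus here is made up for the example.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus (a real model trains on vastly more text).
corpus = (
    "elvis presley was born in tupelo mississippi . "
    "elvis presley was a famous singer . "
    "tupelo is a city in mississippi ."
).split()

# Count which words follow each word in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word):
    """Return the word most often observed after `word` in training."""
    candidates = following[word]
    return max(set(candidates), key=candidates.count)

# "elvis" is always followed by "presley" in the corpus above.
print(predict_next("elvis"))  # presley
print(predict_next("tupelo"))
```

Chaining `predict_next` repeatedly generates text one word at a time, which is the same loop an LLM runs, just with an enormously better predictor.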
ChatGPT is specifically trained to be a chat bot, and it probably has multiple censorship routines and "be a nice chatbot" routines which identify when your prompt or its own writing is something against its rules.
Purplekeyboard t1_j12uk1s wrote
Reply to comment by master3243 in [R] Nonparametric Masked Language Modeling - MetaAi 2022 - NPM - 500x fewer parameters than GPT-3 while outperforming it on zero-shot tasks by Singularian2501
But what I'm asking is, how do the benchmarks match real world performance? Because I've seen claims that other language models were supposedly close to or equal to GPT-3 in this or that benchmark, but try interacting with them and the difference is striking. It's like the difference between talking to a college grad student and talking to the meth-addled homeless guy who shouts at lampposts.
Purplekeyboard t1_j12lik7 wrote
Reply to [R] Nonparametric Masked Language Modeling - MetaAi 2022 - NPM - 500x fewer parameters than GPT-3 while outperforming it on zero-shot tasks by Singularian2501
Ok, but how does it compare in the real world to GPT-3?
Purplekeyboard t1_jecuaja wrote
Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
>Relative Response Quality Assessed by GPT-4
There's no way Bard is 93% as good as ChatGPT. Bard is dumb as hell, comparatively.