3_Thumbs_Up t1_jeh041j wrote
Reply to comment by AndiLittle in Sam Altman's tweet about the pause letter and alignment by yottawa
Humans are extremely aligned compared to what's theoretically possible. We just generally focus on the differences rather than the similarities, because the similarities seem so obvious that we don't even consider them.
3_Thumbs_Up t1_je8tvtc wrote
Reply to comment by sdmat in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
>We can test with things that are highly unlikely to be in the training data.
We can also test things where there's an infinite number of alternatives, so that memorization would be impossible.
If GPT could solve every arithmetic problem thrown at it, then it's obvious that it has developed some understanding of arithmetic, as it's simply impossible to memorize the answer for every possible problem.
However, the fact that it fails on arithmetic with large numbers could be an indication that it doesn't understand, but the failure could also be caused by other factors, such as a lack of sufficient working memory (humans would fail at multiplying large numbers in their heads as well).
So I think one could prove understanding, but proving lack of understanding seems harder.
3_Thumbs_Up t1_jduq277 wrote
Reply to comment by SgathTriallair in How would a malicious AI actually achieve power in the real world? by 010101011011
You wouldn't need a billion robots to start a new society any more than you'd need a billion humans.
3_Thumbs_Up t1_jdhp6zj wrote
Reply to comment by Deep-Station-1746 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
Unnecessarily insulting people on the internet makes you seem really smart. OP, unlike you, at least contributed something of value.
3_Thumbs_Up t1_jde7kj6 wrote
Reply to comment by kmtrp in My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
Have you heard of books?
Sometimes smart people can read one for hours straight. There are even some smart people who have read hundreds of books in their lives.
3_Thumbs_Up t1_jadq63e wrote
Reply to comment by RabidHexley in Is the intelligence paradox resolvable? by Liberty2012
There is an infinite multitude of ways history might play out, but they're not all equally probable.
The thing about the singularity is that its probability distribution over possible futures is much more polarized than what humans are used to. Once you optimize hard enough for any utility curve, you get either complete utopia or complete dystopia the vast majority of the time. It doesn't mean other futures aren't in the probability distribution.
3_Thumbs_Up t1_ja361o8 wrote
Reply to comment by _sphinxfire in How Far to the Technological Singularity? by FC4945
When AI is better at creating new technologies for developing AI than humans.
3_Thumbs_Up t1_ja31x94 wrote
Reply to comment by Zer0D0wn83 in The 2030s are going to be wild by UnionPacifik
If you are equally likely to be any one human throughout history, then you're most likely to be born in the time period that supports the most humans.
3_Thumbs_Up t1_j9hb1um wrote
Reply to comment by obfuscate555 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
That seems like a very bad basis to dismiss something.
Someone else was wrong about a similar statement.
Everyone will be wrong about human extinction until someone is right. Your method of reasoning would never be able to distinguish the person who is right.
3_Thumbs_Up t1_j9ccco0 wrote
Reply to comment by SoylentRox in Altman vs. Yudkowsky outlook by kdun19ham
Once again, not true.
From my perspective it has a cost, because I value things other than my own survival. As do most humans who are not complete sociopaths.
3_Thumbs_Up t1_j9caxcf wrote
Reply to comment by SoylentRox in Altman vs. Yudkowsky outlook by kdun19ham
You're not counting the full cost of humanity dying. Humanity dying also means that all future humans will never have a chance to exist. We're potentially talking about the loss of trillions of lives or more.
3_Thumbs_Up t1_jeh0uid wrote
Reply to comment by Current_Side_4024 in The pause-AI petition signers are just scared of change by Current_Side_4024
That's a design principle. What are you designing?
Moreover, the simplest explanation would be that they're telling the truth. They're afraid of getting killed.