FomalhautCalliclea t1_je60b4q wrote
Reply to comment by Neurogence in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
The irony would be if all this letter manages to achieve is to encourage OpenAI and Microsoft to accelerate their work before legislation intervenes.
FomalhautCalliclea t1_jdqn1bf wrote
Reply to comment by Lartnestpasdemain in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Best post here.
One of your paragraphs reminded me of Don Hertzfeldt's "It's Such a Beautiful Day".
FomalhautCalliclea t1_jdqmtmy wrote
Reply to comment by [deleted] in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I'm conflicted about your post.
On the one hand, I like your tag and especially its ending point.
On the other hand, I don't like your written conclusion, since I would have expected the apogee of mankind not to be a celestial Kim Jong Un.
FomalhautCalliclea t1_jdqmjjs wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
I answered "other".
The thing is that your reasoning is undermined by the very framing of the question: luck.
Luck is about probability, and to assess a probability one needs a data set large enough to make comparisons and try to detect patterns (from which we can predict).
But the issue with the question at hand is that we have a data set of 1 (one) sample: us. We have nothing to compare it with. It's like drawing one card at random from a deck, knowing nothing about the other cards, and then wondering about the odds of having picked that card after the fact (the quick sketch below makes this concrete).
It's the main problem behind teleological reasoning (reasoning about the goals and ends of things): it projects confirmation bias from what you have already experienced onto things you haven't, trying to find patterns in the unknown. It's not hard to guess how that can go wrong.
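To make the data-set-of-one point concrete, here is a minimal sketch (purely illustrative; the `rough_ci_width` helper is just a toy, not from any real study): the uncertainty around an estimated probability shrinks roughly like 1/sqrt(n), so a single sample pins down essentially nothing.

```python
# Minimal sketch: rough 95% confidence interval width for an estimated
# probability, using the Wald approximation. Illustrative numbers only.
import math

def rough_ci_width(n: int, p_hat: float = 0.5) -> float:
    """Approximate 95% CI width for a proportion estimated from n samples."""
    return 2 * 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (1, 10, 100, 10_000):
    print(f"n = {n:>6}: 95% CI width ~ {rough_ci_width(n):.3f}")

# With n = 1 the width is ~1.96, wider than the entire [0, 1] range:
# a single observation constrains the probability not at all.
```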
As for luck, here's a Chinese story illustrating the limits of the concept:
A farmer's only horse flees into the wilderness. His neighbours tell him "oh my, you're really unlucky, this horse was so useful for your work, this is a bad thing!". He answers "Maybe".
The following day, the horse comes back with 5 wild horses. Neighbours say "wow, you're so lucky, you won 5 free horses, this is a good thing!". He answers "Maybe".
The following day, his son tries to tame one of the wild horses, falls and breaks his leg. Neighbours: "oh my, this is really unlucky, your son was such a huge help at the farm, this is a bad thing!". He answers "Maybe".
The following day, war is declared. The king forcibly conscripts every young man able to fight. The recruiters see the farmer's son and decide not to take him because of his broken leg. Neighbours: "wow, you're so lucky, your son won't die in the war, this is a good thing!". He answers "Maybe".
The moral: reasoning about unknowns and their consequences for our lives and our very subjective, limited desires is often meaningless.
FomalhautCalliclea t1_jcdigp0 wrote
Although I agree with the criticism of doomerism and the concern about how this new influx of subscribers might influence this place, I have always found the concluding quote by C. S. Lewis utterly vapid and stupid.
It overlooks the countless millenarianisms of the past (what you might today call doomerism), even when they were unwarranted, as well as the tremendous terror humans experienced back then.
He falls into the very mistake he criticizes: thinking there is novelty, only in our reaction this time, when that reaction is nothing new either.
And there is nothing reassuring in the thought that a grim fate was already destined for us. It is still unpleasant when lived, and it certainly was for the sufferers of the distant past.
What matters during time isn't time itself, but what happens during time.
>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things
Ironically, a very defeatist reaction, one that calls for embracing the daily routine rather than abruptly revolting against it, a sort of "remain in your place" call, which isn't surprising when you read:
>praying
ranked among
>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts
which says a lot about why this man can see being
> huddled together like frightened sheep
as the only reaction to a terrible danger and suffering.
>They may break our bodies (a microbe can do that) but they need not dominate our minds
With thoughts like these, no wonder such a person can reassure themselves in any situation, especially if it lets them wallow in the comfort of a resigned mind.
FomalhautCalliclea OP t1_jaepa7r wrote
Reply to comment by RabidHexley in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
Well put (same for the comment above).
People who think the rich can ride out the collapse remind me of certain XVIIIth-century economists who would write "robinsonnades": fictional stories modeled on Robinson Crusoe, with economic agents starting from nothing in a nonexistent virgin land with no previous inhabitants, completely ignoring social structures, anthropology, etc.
FomalhautCalliclea OP t1_jaeolb6 wrote
Reply to comment by Quealdlor in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
It depends on when and where:
On the one hand, some ancient societies were quite egalitarian compared to the XIXth century (the Harappan civilization, the pre-Columbian Tlaxcallan civilization, the Sassanid empire under Khosrow I, etc.).
On the other hand, some were much less egalitarian, almost dystopian (medieval serfdom societies).
The XIXth century was an improvement on the preceding one, with many countries abolishing serfdom (1789 for the earliest, like France; 1861 for the latest, like Russia) and slavery (1807-1831 in the UK, 1848 in France, 1865 in the US).
There is also continuity between centuries. There is even a saying: "the XVIIIth century asked the questions (with the Enlightenment), the XIXth century brought the answers".
FomalhautCalliclea OP t1_ja9ov9k wrote
Reply to comment by Iffykindofguy in The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
Totally agree, "history repeats itself" almost sounds like a fallacy (appeal to nature), presupposing some immanent order to things that would magically explain everything.
FomalhautCalliclea t1_ja11ru0 wrote
Reply to The 2030s are going to be wild by UnionPacifik
>this society we’ve created is breaking down for so many
"The old world is dying, the new world is late to appear and in this chiaroscuro arise monsters." (Gramsci).
FomalhautCalliclea t1_ja11c9k wrote
Reply to comment by TupewDeZew in The 2030s are going to be wild by UnionPacifik
I'm one generation older than you, and lots of us hate it too; believe me, you're not alone.
>my generation (gen z) is literally the most depressed and stupid
They're not stupid, they're suffering. Ignorance is often caused by that.
Hopefully both our generations see the end of this.
FomalhautCalliclea t1_j9qh7ur wrote
Reply to comment by headypete42033 in Been reading Ray Kurzweil’s book “The Singularity is Near”. What should I read as a prerequisite to comprehend it? by Golfer345
Your reading list sounds like a slow descent into dementia and faulty reasoning.
FomalhautCalliclea t1_j9qggw5 wrote
Reply to comment by 94746382926 in Seriously people, please stop by Bakagami-
I agree with your disdain for those, but banning might be too extreme a solution?
FomalhautCalliclea t1_j73nifj wrote
Reply to comment by SoulGuardian55 in Controversy over current progress in AI by SoulGuardian55
Very interesting that the ones most open to the topic are the most educated and the most versed in science-oriented fields.
My overall advice would be not to talk right away about the far future and the most improbable things (AGI, the singularity itself), but rather about current advances and progress; hence my reference to current achievements (the links), showing that AI is far from producing only "wrong" results and "failures".
FomalhautCalliclea t1_j6z1n0v wrote
Reply to comment by visarga in Controversy over current progress in AI by SoulGuardian55
Totally agree that it's very important. It's just that we're not there yet, and AlphaFold wasn't made for that. Maybe a future descendant of it, but not AlphaFold itself.
The day we have that will definitely be a big deal.
FomalhautCalliclea t1_j6ysiyp wrote
First off, your post and your attempt deserve more upvotes: you are trying to bring the topic to people who disagree and start a discussion, all the more so in a context and a country where the topic isn't mainstream. For that alone you deserve praise.
Now for the points in question:
-
Neural networks aren't the end of AI research. Betting that no architecture will ever replace them is a bit presumptuous. And the goal of NNs is not to be trusted blindly; "blindly" is the word missing from their reasoning.
-
With all due respect to the people you were talking to, that is the silliest point of them all. First of all, it can be said of many technologies; just think of space travel and the amazing discoveries it brought, even indirectly. Even simpler: we haven't been doing that well for the last 40,000 years. Besides, it sounds a lot like an appeal-to-nature fallacy:
https://en.wikipedia.org/wiki/Appeal_to_nature
- This point is somewhat anachronistic and tautological: of course it cannot currently identify a problem without a human; otherwise, it would be an AGI... which they say is not possible... And a tool doesn't need to be human-independent to produce correct results. Some AIs have been detecting breast cancer better than humans:
https://www.bbc.com/news/health-50857759
and those results were "correct" (whatever your fellows meant by "wrong result"; maybe a bit was lost in translation there, which is fine, I'm not a native English speaker myself). By the way, it's not even new: AI has been used in cancer detection for the last 20 years or so.
-
AlphaFold's goal isn't to "install proteins on his own, in real time". Your interlocutors seem to make the same tautology as in point 3: "it's not an AGI, therefore it cannot be an AGI"... AlphaFold isn't conceived as a magic wand that gives you 100% truth, but as a helping tool to be used alongside X-ray crystallography. It was intended that way. What your interlocutors hope AlphaFold to be doesn't exist yet.
-
The actual "learning" in university is actually quite separate from actual knowledge. Many learn some topic hard for just an exam then forget about it in a few days. Many doctors, in the example of medecine, keep learning through their career. The classical way of learning isn't as optimal as they believe it to be. Sure GPT can be abused. As any tech. But those cheater fellows won't remain in their job long if they absolutely know nothing. Hospitals won't keep them long.
FomalhautCalliclea OP t1_j3l5p9t wrote
Reply to comment by [deleted] in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
>you included it as an answer in a poll about the steps to get to “intelligence” like it somehow matters
The precise goal of a poll is to present every opinion, even the ones you deem ludicrous and don't hold yourself. Your reaction is as if you saw a political poll listing options from far left to far right and were outraged, concluding that the author must be far right because that option was included.
Your writing manages to be messy and short at the same time, just like your reasoning and reading abilities.
Even better, you lack self-awareness, being the very one who resorted to insults.
You reproach others for not knowing what they're talking about when you didn't even understand what they were saying, nor the very concept of a poll.
You're having a conversation with yourself in an alternate reality, so I'll let you guys have fun among yourselves.
FomalhautCalliclea OP t1_j3j0sal wrote
Reply to comment by [deleted] in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
You managed to do worse than a post without arguments: a post without arguments but with strawmen.
I never stated that multi-modality brings "intelligence" to those models.
Breakthroughs haven't happened yet (congratulations, you have a notion of time!), but we can expect them in certain fields or areas of interest, as the nuclear fusion example could have led you to guess: we don't know exactly how to produce the mechanism (or whether it's even possible), but we know what general issues we need to work on.
As for your speculations about my knowledge, maybe you should try to understand what is written before speculating on things you have no access to.
The only one speculating here is you: the options my poll proposes differ from each other and do not represent my opinions; otherwise, the sixth ("it's impossible") would contradict the first five.
But understanding and speculating do not seem to be your best skills...
>maybe you should actually learn
those skills.
FomalhautCalliclea OP t1_j3erqxl wrote
Reply to comment by [deleted] in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
An argument behind such a lapidary statement would have made your post much more interesting.
You should also refrain from judging who is skeptical and who is overly optimistic: I lean quite heavily toward the former. And you would be surprised at the number of critical POVs that get upvoted here.
Finally, arguing against the roganites is the most constructive way to put forward your ideas, rather than being a cliché of a 1990s curmudgeon neckbeard programmer.
FomalhautCalliclea OP t1_j3equ6e wrote
Reply to comment by Lone-Pine in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
Indeed. But there are things to consider here:
Polls here limit the number of options to 6, which forced me to restrict the possibilities somewhat.
Another guiding principle of this poll was to rank the required achievements by increasing difficulty, and "scale is all you need" is often presented as the "simplest" path.
Finally, the problem with Yudkowsky's views on the topic (not that they are bad or uninteresting, quite the contrary) is that they are not very specific; he remains rather silent on how to get to AGI. Some on this sub even suggested, quite recently, that Robert Miles aimed a meme at Yudkowsky and the likes of him, mocking people who thought additional layers/greater scale alone was sufficient. The underlying message is that people who remain vague (at least in appearance) yet are bullish on their AGI timelines, like Yudkowsky, believe in the shortest, easiest way to AGI, i.e. scale alone.
FomalhautCalliclea OP t1_j3eq5mk wrote
Reply to comment by Superschlenz in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
It depends on how you envision it: multi-modality could be achieved in one breakthrough, in multiple breakthroughs (one after another, for example), or even without any breakthrough at all, i.e. we already have the technology for it (I'm not saying we do; I'm giving an example of a position).
What some might not know is that polls here are limited to 6 options, which, as you might guess, constrained my choices a bit and forced me to be concise.
FomalhautCalliclea OP t1_j3epto2 wrote
Reply to comment by ItsTimeToFinishThis in Poll: What needs to happen for us to get to the minimal steps of AGI (description below) by FomalhautCalliclea
Not necessarily: by "breakthrough" I meant something we know is physically possible but don't yet know how to build (like nuclear fusion), whereas "new fundamental concepts" means a paradigm shift, akin to the discovery of Darwinian evolution: an association of concepts independent of technology, ideas we can't even conceive of right now for lack of words.
FomalhautCalliclea t1_iz2anar wrote
The plan proposed by GPT here is very general and vague, pretty much a synthesis of what you would find in most of the dystopian sci-fi novels available on the internet, which is probably where it found its data.
As for the code part, let's remember that any program can be used for any purpose right now, even without the intervention of AI. And it is already being done with AI assistance (see BlackRock's Aladdin).
FomalhautCalliclea t1_je62n1a wrote
Reply to Anyone else feel like everything else is meaningless except working towards AGI? by PixelEnjoyer
None of the options represent my opinion, so I didn't vote.
AGI would be the most important and last invention of mankind (per I. J. Good); it is therefore quite a significant event in mankind's history and will very likely bring "meaning" to many.
On the other hand, you need to be alive at that time (if it ever happens), so surviving until then matters. And since we're talking about a "singularity" here, we cannot fathom what happens after. Which means you'll perhaps have new meanings. Or none at all. Or something we don't yet have the words or concepts to define. AGI would improve your situation, not erase it (or else it won't matter anymore anyway). You'll build from there on whatever you brought with you so far.
Finally: meaning is subjective, fluid, and contextual.
Be careful about letting this very vague, semantically diverse concept be the sole articulation of your life.