FomalhautCalliclea

FomalhautCalliclea t1_je62n1a wrote

None of the options represents my opinion, so I didn't vote.

AGI will be the most important, and last, invention of mankind (per I. J. Good). It is therefore a quite significant event in mankind's history and will very likely bring "meaning" to many.

On the other hand, you need to be alive at that time (if it ever happens), so surviving until then matters. And since we're talking about a "singularity" here, we cannot fathom what happens after. Which means you'll perhaps have new meanings. Or none at all. Or something for which we don't yet have the words or concepts. AGI will improve your situation, not erase it (or else it won't matter anymore anyway). You'll build from there on whatever you brought with you so far.

Finally: Meaning is subjective, fluid and contextual.

Be careful of letting this very vague and semantically diverse concept be the sole organizing principle of your life.

10

FomalhautCalliclea t1_jdqmjjs wrote

I answered "other".

The thing is that your reasoning is thwarted by the very form of the question: luck.

Luck is about probability. And to assess probability, one has to possess a data set large enough to make comparisons and try to detect patterns (from which we can predict).

But the issue with the question at hand is that we have a data set of 1 (one) sample: us. We don't have anything to compare it with. It's like drawing a card at random from a deck, knowing nothing about the other cards, and then wondering about the odds of having picked that card after the fact.

It's the main problem behind teleological reasoning (reasoning about the goals and ends of things): it projects confirmation bias from what you have already experienced onto things you haven't, trying to find patterns in the unknown. It's not hard to guess why this could go wrong.

As for luck, here's a Chinese story illustrating the limits of the concept:

A farmer's only horse flees into the wilderness. His neighbours tell him "oh my, you're really unlucky, this horse was so useful for your work, this is a bad thing!". He answers "Maybe".

The following day, the horse comes back with 5 wild horses. Neighbours say "wow, you're so lucky, you won 5 free horses, this is a good thing!". He answers "Maybe".

The following day, his son tries to tame one of the wild horses, falls and breaks his leg. Neighbours: "oh my, this is really unlucky, your son was such a huge help at the farm, this is a bad thing!". He answers "Maybe".

The following day, war is declared. The king is forcefully mobilizing every young man able to fight. The soldiers see the farmer's son and decide not to take him because of his broken leg. Neighbours: "wow, you're so lucky, your son won't die in the war, this is a good thing!". He answers "Maybe".

Moral: reasoning about unknowns and their consequences on our lives and on our very subjective, limited desires is often meaningless.

1

FomalhautCalliclea t1_jcdigp0 wrote

Although I agree with the criticism of doomerism and of how this new influx of subscribers might influence this place, I always found the concluding quote by CS Lewis to be utterly vapid and stupid.

It overlooks the countless millenarianisms of the past (today you might call this doomerism), even when unwarranted, as well as the tremendous terror humans experienced in the past.

He falls into the very mistake he criticizes: thinking there is novelty, only this time in our reaction, when our reaction is nothing new either.

And there is nothing reassuring in the thought that a grim fate was already predestined for us. It is still unpleasant when lived through. And it surely was for the sufferers of the distant past.

What matters during time isn't time itself, but what happens during time.

>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things

Ironically, a very defeatist reaction, one that calls for embracing the daily routine rather than revolting abruptly against it, some sort of "remain in your place" call, which isn't surprising when you read:

>praying

ranked among

>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts

which says a lot about why this man can see being

> huddled together like frightened sheep

as the only reaction to a terrible danger and suffering.

>They may break our bodies (a microbe can do that) but they need not dominate our minds

With such thoughts, no wonder such a person can reassure themselves in any situation, especially if it allows them to wallow in the comfort of their resigned mind.

0

FomalhautCalliclea OP t1_jaepa7r wrote

Well put (same for the comment above).

People who think the rich can ride out the collapse remind me of certain XVIIIth century economists who produced "robinsonnades": fictional stories in the style of Robinson Crusoe, in which economic agents start out of nothingness on a pure, non-existent land with no previous inhabitants, completely ignoring social structures, anthropology, etc.

2

FomalhautCalliclea OP t1_jaeolb6 wrote

It depends when and where:

On the one hand, some ancient societies were quite egalitarian compared to the XIXth century (the Harappan civilization, the pre-Columbian Tlaxcallan civilization, the Sassanid empire under Khosrow I, etc.).

On the other hand, some were much less egalitarian, almost dystopically so (medieval serfdom societies).

The XIXth century was an improvement on the preceding century, with many countries abolishing serfdom (1789 for the earliest, like France; 1861 for the latest, like Russia) and slavery (1807-1831 in the UK, 1848 in France, 1865 in the US).

There is also continuity between centuries. There is even a saying: "the XVIIIth century asked the questions (with the Enlightenment), the XIXth century brought the answers".

1

FomalhautCalliclea t1_ja11c9k wrote

I'm one generation older than you, and lots of us folks hate it too; believe me, you're not alone.

>my generation (gen z) is literally the most depressed and stupid

They're not stupid, they're suffering. Ignorance is often caused by that.

Hopefully both our generations see the end of this.

14

FomalhautCalliclea t1_j73nifj wrote

Very interesting that the people most open to the topic are the most educated and most versed in science-oriented fields.

My overall advice would be not to talk right away about the far future and the most improbable things (AGI, the singularity itself), but rather about current advances and progress; hence my reference to current achievements (the links), showing AI is far from being only "wrong" and having only "failures".

1

FomalhautCalliclea t1_j6ysiyp wrote

First off, your post and your attempt deserve more upvotes: you are trying to bring the topic to people who disagree and to start a discussion, all the more so in a context and country where the topic isn't mainstream. For that alone you deserve praise.

Now for the points in question:

  1. Neural networks aren't the end of AI research. The bet they're making, that no architecture will ever replace them, is a bit presumptuous. And the goal of NNs is not to be trusted blindly; "blindly" is the word missing from their reasoning.

  2. That is the silliest point of them all, with all due respect to the people you were talking to. First of all, the same can be said of many technologies; just think of space travel and the amazing discoveries it brought, even indirectly. But even simpler: we haven't been doing that well over the last 40,000 years. Besides, it sounds a lot like an appeal to nature fallacy:

https://en.wikipedia.org/wiki/Appeal_to_nature

  3. This point is somewhat anachronistic and tautological: of course it currently cannot identify a problem without a human. Otherwise, it would be an AGI... which they say is impossible... And a tool doesn't need to be human-independent to produce correct results. Some AI systems have detected breast cancer better than humans:

https://www.bbc.com/news/health-50857759

and those results were "correct" (whatever your fellows meant by "wrong result"; maybe a bit was lost in translation, which is fine, I'm not a native English speaker myself). By the way, it's not even new: AI has been in use in cancer detection for the last 20 years or so.

  4. AlphaFold's goal isn't to "install proteins on his own, in real time". It seems your interlocutors make the same tautology as in point 3: "it's not an AGI, therefore it cannot be an AGI"... AlphaFold isn't conceived as a magic wand that tells you the truth 100% of the time, but as a helper tool to be used alongside X-ray crystallography. It was intended that way. What your interlocutors hope AlphaFold to be isn't here yet.

  5. The actual "learning" in university is quite separate from actual knowledge. Many people cram a topic just for an exam and forget it within a few days. Many doctors, to stay with the example of medicine, keep learning throughout their careers. The classical way of learning isn't as optimal as they believe it to be. Sure, GPT can be abused, like any tech. But those cheating fellows won't keep their jobs long if they know absolutely nothing. Hospitals won't keep them around.

3

FomalhautCalliclea OP t1_j3l5p9t wrote

>you included it as an answer in a poll about the steps to get to “intelligence” like it somehow matters

The precise goal of a poll is to present every opinion, even the ones you deem ludicrous and don't hold yourself. Your reaction is as if you saw a political poll listing options from far-left to far-right and were outraged, concluding the author must be far-right because that option was included.

Your writing manages to be messy and short at the same time, just like your reasoning and reading abilities.

Even better, you lack self-awareness, being the very one that used insults.

You reproach others for not knowing what they're talking about when you didn't even understand what they were saying, nor the very concept of a poll.

You're having a conversation with yourself in an alternate reality, so I'll let you guys have fun among yourselves.

1

FomalhautCalliclea OP t1_j3j0sal wrote

You managed to do worse than a post without arguments: a post without arguments but with strawmen.

I never state that multi-modality brings "intelligence" to those models.

Breakthroughs haven't happened yet (congratulations, you have a notion of time!), but we can expect them in certain fields or areas of interest, as the nuclear fusion example could have led you to guess: we don't know exactly how to produce the mechanism (or whether it's even possible), but we know what general issues we need to work on.

As for your speculations about my knowledge, maybe you should try to understand what is written before speculating on things you can't have access to.

The only one speculating here is you: the options my poll proposes are distinct from each other and do not represent my opinions; otherwise, the sixth ("it's impossible") would contradict the first five.

But understanding and speculating do not seem to be your best skills...

>maybe you should actually learn

those skills.

1

FomalhautCalliclea OP t1_j3erqxl wrote

An argument supporting such a lapidary statement would have made your post much more interesting.

You should also refrain from judging who is skeptical and who is overly optimistic: I lean quite heavily toward the former side. And you would be surprised at the number of critical POVs that get upvoted here.

Finally, arguing against the roganites is the most constructive way of putting forward your ideas, instead of being a cliché of a 1990s curmudgeonly neckbeard programmer.

2

FomalhautCalliclea OP t1_j3equ6e wrote

Indeed. But there are things to consider here:

Polls here limit the number of options to 6, which forced me to restrict the possibilities somewhat.

Another guiding principle of this poll was to classify the required achievements by increasing difficulty, and "scale is all you need" is often presented as the "simplest" way.

Finally, the problem with Yudkowsky's views on this topic (not that they are bad or uninteresting, quite the contrary) is that they are not very specific; he remains rather silent on how to get to AGI. Some around this sub even suggested, quite recently, that Robert Miles aimed a meme at Yudkowsky and his likes, mocking people who thought additional layers/higher scale alone was sufficient. The underlying message is that people who remain vague (at least in appearance) while being bullish on their AGI timelines, like Yudkowsky, believe in the shortest, easiest way to AGI, i.e. scale only.

1

FomalhautCalliclea OP t1_j3eq5mk wrote

It depends on how you envision it: multimodality can be achieved in one breakthrough, in multiple breakthroughs (one after another, for example), or even without any breakthrough, i.e. we already have the technology for it (I'm not saying we do; I'm giving an example of a position).

What some here might not know is that the number of options in a poll is limited to 6, which constrained my choices a bit, as you might guess, and forced me to be concise.

1

FomalhautCalliclea OP t1_j3epto2 wrote

Not necessarily: by "breakthrough" I meant something we know is physically possible but don't yet know how to achieve (like nuclear fusion), whereas "new fundamental concepts" means a paradigm shift, akin to the discovery of Darwinian evolution: an association of concepts independent of technology, ideas we can't even conceive of right now for lack of words.

1

FomalhautCalliclea t1_iz2anar wrote

The plan proposed by GPT here is very general and vague, pretty much a synthesis of what you would find in most of the dystopian sci-fi novels available on the internet, which is probably where it found its data.

As for the code part, let's remember that any program can be used for any purpose, even without the intervention of AI, right now. And it's already being done with AI assistance (see BlackRock's Aladdin).

1