Submitted by Final-Cause9540 t3_zm722l in Futurology

In the field of AI, there is a strong push to use data to predict future events. We see this now with algorithmic trading, in particular. Many of the most powerful computers in the world are focused on weather prediction (which has gotten incredible over the past 2 decades). As AI gains in capability, the ability to extend further into the future with increasing complexity should be expected. Theoretically, with a powerful enough computer (many, many years from now), you could accurately predict any future event.

I’m fascinated with the idea that this same predictive capability could be used to view accurately into the past. Why can’t we feed data into a powerful supercomputer and look back, into the past of history, to gain a better understanding of humanity and civilization?

72

Comments


beeen_there t1_j09fimw wrote

Can we just stop confusing vast datasets with intelligence please?

45

Readityesterday2 t1_j09vv0s wrote

This has been my stance until recently. I’d ask, would Von Neumann have named these AI and neural networks? Terms of biology, mind you. Or would he have gone the route of Matrix-Based Linear Function Optimization Techniques, etc. etc.?

Seeing chatgpt, and knowing gpt4 is “like seeing the face of god”, as per one developer, I’m wondering if for all practical and functional purpose we have serious intelligence coming out of these LLMs. You can’t deny how insanely good this shit is. It was giving me jokes from ancient era and they were fucking hilarious.

14

EvenPalpitation6074 t1_j0e3kr6 wrote

>“like seeing the face of god”

True AGI isn't required for task-oriented, task-trained, purpose-built intelligence, nor does anybody know how to build one. It may arise incidentally from a sufficiently overbuilt task-oriented AI, but that doesn't mean we'll have actually built one as people describe it.

Chatbots are like seeing the inner workings of language, any "face of god" is just anthropomorphism on our part.

1

EverythingGoodWas t1_j0e9omy wrote

People see these LLMs and think they are sentient or have some vast intelligence. They are a tool that is designed to be used with other tools. I appreciate what OpenAi has done with bringing visibility to LLMs, but they are by no means “the face of God”

1

TrekForce t1_j09os14 wrote

It is literally defined as artificial intelligence. It is more than just vast datasets. A large portion of AI programs use “neurons” (not biological ones, but software designed to behave like one). This is why it’s artificial intelligence. ChatGPT, for instance (why not, it’s the new big thing), isn’t just quoting things from its vast dataset; it is understanding the language in the prompt and responding with what it thinks is the most probable response based on its vast dataset. The response is likely to be nowhere inside that dataset, however.
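That “most probable response” mechanism can be sketched in a few lines. This is a toy illustration with invented tokens and scores, not anything from a real model:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Made-up scores a trained model might assign to candidate next tokens
# after the prompt "The sky is". Real models score tens of thousands
# of tokens, conditioned on the whole prompt.
logits = {"blue": 5.1, "falling": 2.3, "green": 0.4, "a": 1.0}

probs = softmax(logits)
next_token = max(probs, key=probs.get)
print(next_token)  # "blue" has the highest probability
```

The point is that the output is generated from the distribution, not looked up, so the full response it assembles token by token need not exist anywhere in the training data.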

8

beeen_there t1_j09qct2 wrote

Can we just stop confusing vast datasets plus instructions for output with intelligence please?

OK? Happy now?

13

TrekForce t1_j0ab6jk wrote

It’s not intelligence. It’s artificial intelligence. But ok. Not sure what your point is. You don’t seem to have one. Nobody is confusing anything except you.

4

Fake_William_Shatner t1_j0jc2ts wrote

>It’s not intelligence. It’s artificial intelligence.

It's really not intelligence yet and therein lies the confusion. There is no THINKING going on. It's Machine Learning with an Expert System and LARGE Datasets. Enough samples and it can sound like it's smart.

Neurons are vastly more complex than just a bundle of connections. They have protein storage for long-term information (roughly equivalent to memory in a computer). They're supported by glial cells. And they sort of function together in an analog way.

There is one gene of difference that separates human intelligence from Chimpanzee. And I think it might be part of the folding in the brain...

Anyway. The collection of parts in various algorithms might get a very close approximation of intelligence. It might be "insightful." But it won't be conscious and not actually intelligent.

Now, the "gestalt" of various systems tied together, could very well be conscious and intelligent. I really thought it was going to require a paradigm shift away from binary computing but much to my chagrin, looks like we are less complicated than we thought -- but, there is one more trick we do that nobody is doing yet with these machines and there might be some interface with quantum effects.

The human brain is doing something like Stable Diffusion on a constant basis. Our perception is slightly in the future -- anticipating our environment -- constantly.

The functional parts that work together to make a human mind all seem to be in development. And like humans, each alone won't be conscious.

Also -- humans I think are MOSTLY conscious. We rationalize more than are rational, and we think we are making choices all the time. But we aren't fully aware of things objectively and take a lot of shortcuts.

And a lot of us choose not to be intelligent on a regular basis, so, once the AI gets the conscious bit, it won't be much of a leap to get ahead.

0

PrimeWasabiBanana t1_j0a21q3 wrote

I thought the probable response was just that: based on the probability of what a human would respond according to its dataset. So it's still math all the way down, right? I want to say that's not intelligence, it's just doing math. But, I mean, I guess determinism in human thought could be thought of as math too.

−3

M0romete t1_j0bc3ll wrote

But all brains are prediction machines too, just very very sophisticated ones. At a certain point something called emergence kicks in so you can’t just say AI is statistics or math.

2

beeen_there t1_j0bd9yy wrote

>you can’t just say AI is statistics or math.

well of course you can

−4

M0romete t1_j0beaht wrote

Yeah but you’d be wrong. It’s like saying everything in the universe is subatomic particles. While technically true, it’s misleading and leaves out a lot of details.

3

beeen_there t1_j0bgib8 wrote

what, like the techzealot misconception that statistics, math and a set of complicated instructions somehow = intelligence?

No point in arguing over semantics, but imho it's obvious enough that human experience, wisdom, feel and emotion are essential to intelligence. And computers don't have those.

They can imitate intelligent output, but they are not intelligent.

−2

monsieuryuan t1_j0bjser wrote

>No point in arguing over semantics

Then proceeds to do exactly that.

It's called artificial intelligence because nobody coded those instructions. The model learned them on its own through exposure and experience, just like living things do. This is a huge departure from humans explicitly writing the instructions.

>human experience, wisdom, feel and emotion are essential to intelligence

Experience and wisdom are one and the same. So are feel and emotions. So your definition of intelligence boils down to experience and emotions.

Experience is exactly what these things use to learn.

Emotions: AI models can learn to recognize those and respond accordingly if that is their purpose. If you're talking about them feeling emotions on their own, then you're defining intelligence as sentience, which AI totally has the potential to achieve.

5

beeen_there t1_j0boped wrote

> AI totally has the potential to achieve.

It really doesn't. But you're obviously in religious-faith mode here, a techzealot. Otherwise you wouldn't try to claim experience and wisdom are one and the same, or feel and emotions are one and the same. That demonstrates an incredibly superficial understanding of all those.

An understanding very similar to AI or a bot. An impression of understanding.

−1

monsieuryuan t1_j0c7u6i wrote

Tell me then. What's the difference between experience and wisdom? What's the difference between feelings and emotions that's actually relevant in this discussion?

I'm not a tech zealot at all. I simply understand why they call it 'artificial intelligence', and it's quite justifiable as a moniker.

Edit: I love how you just call anyone who disagrees with you a tech zealot. You haven't made any substantive argument or demonstrated any in-field knowledge, but then, demonstrating the latter would make one a tech zealot, right?

6

beeen_there t1_j0ckc5u wrote

Not feelings, feel - different from emotion, if you're creative you'd know what that was. Do you? Paint or compose or write or cook or whatever?

Are you seriously asking me the difference between experience and wisdom? How about you start with a dictionary, then have a think, then come back if you still don't know.

I don't call anyone who disagrees with me a tech zealot, but there is this tech intensity in some people that is like religious fervour, and as such completely misses the main points.

−1

monsieuryuan t1_j0cs1bs wrote

I get what you mean by feel vs emotions now. Though it's a matter of personal opinion whether that's necessary to characterize something as intelligent.

One can easily argue that wisdom is a consequence of acquiring experience. It's part of the decision making or 'instructions' as you put it. So in this sense, wisdom should be captured within experience in this context.

I don't have tech intensity. I don't believe tech will solve everything, or that AI applications like self-driving are imminently achievable. I just understand why they gave artificial intelligence that moniker: it learns its own instructions instead of a human explicitly coding them, which is quite justifiable.

4

beeen_there t1_j0m7hd7 wrote

> I just understand why they give artificial intelligence that moniker

its called marketing

1

DickieGreenleaf84 t1_j09fx3w wrote

You'd probably enjoy the tv thriller, Devs. Yeah, it is interesting as a pop-pseudo-philosophical discussion, but it isn't very realistic. Even what you say about weather prediction involves quite a lot of hyperbole. Weather prediction is still not very accurate.

The show really interested me because its premise was that it would be even easier to look back and replicate the past using the data we have today: to extrapolate what we don't know by comparing what we do know with the path things took from that point to now.

35

mbardeen t1_j0b39eu wrote

"Theoretically, with a powerful enough computer (many, many years from now), you could accurately predict any future event."

Chaos theory says that's just not true. For the layman: chaos theory started when a scientist (Edward Lorenz) ran a set of completely deterministic equations that simulated the weather. What he found was that for two inputs that differed by less than 0.00001, the simulations evolved completely differently.

What does this mean for predictability? To predict something in the future, you need to input the current state. What Chaos theory tells us is that we can never know the current state precisely enough to be able to accurately predict the future.

This also holds going backward in time.
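You can see the same effect with the logistic map, a one-line chaotic system. (This is just a toy stand-in: Lorenz's actual model was a small set of differential equations.)

```python
# Two runs of the logistic map x -> r*x*(1 - x), started 0.00001 apart.
r = 4.0  # parameter value where the map is chaotic

def run(x0, steps):
    """Return the whole trajectory starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = run(0.20000, 50)
b = run(0.20001, 50)  # starting point differs by 0.00001

# Early on the runs agree; the maximum gap over 50 steps is huge.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(max_gap)
```

The tiny initial difference roughly doubles each step, so within a few dozen iterations the two trajectories bear no resemblance to each other, even though every step is perfectly deterministic.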

15

sohaibshaheen t1_j0bdhud wrote

Very well put. Events are based on human emotions, and human emotions are unpredictable. Stock market predictions fail all the time because markets are often driven by sentiment, not governed by logic.

−1

umkaramazov t1_j0cy4fu wrote

Human emotions are not unpredictable. I guess there are just too many variables to be able to model the future.

2

sohaibshaheen t1_j0fc7a1 wrote

To further clarify my point about humans: it has been shown in the past that one person's actions can change the entire course of history, e.g. Hitler, and since we know that people are unique, his actions couldn't have been predicted by any model. Even if we modelled his entire life, chaos theory says that a change in a fraction of one variable would have changed the entire output, i.e. if he had or hadn't met one particular person in his life, his entire view of the world and Jews might never have developed.

2

sohaibshaheen t1_j0fbm61 wrote

How is that different from unpredictability? Every human is unique in their thoughts, behaviour and reactions to events. So it's impossible to map all the emotions, and hence impossible to create any model that can predict them.

1

__ingeniare__ t1_j0e08zw wrote

The only truly unpredictable events are those in quantum mechanics, everything else is just a lack of data and processing power to do the inference. And we're not even sure about the quantum events.

1

sohaibshaheen t1_j0fbaw1 wrote

I am sorry but I disagree. Covid clearly showed that human emotions are entirely unpredictable, and there are times when there will never be historical data to make an inference: once again, what happened during Covid (the stock market, vaccination backlash, transport, fuel, consumer behaviour) could not have been predicted using historical data.

0

sohaibshaheen t1_j0fbhiw wrote

You can easily call it a lack of data, but then where do we draw the line? Every couple of years an event will occur that has no historical counterpart, and we will keep pinning it on lack of data. In my opinion it's simply impossible, partly due to chaos theory as well.

0

__ingeniare__ t1_j0ftkkp wrote

I'm not talking about practically predictable using current tech, I'm talking about theoretically predictable. Everything that happens can be theoretically derived from the laws of the universe. The laws are deterministic (perhaps except for quantum mechanics). Therefore, everything is deterministic and can be predicted. Human emotions are only unpredictable because we don't have an accurate model of the brain of the human we're trying to predict. If we did, and had a computer powerful enough to simulate it, their behaviour could be predicted. Hence - lack of data (model of the brain) and processing power (computer to simulate it).

Also chaos theory has nothing to do with the possibility of predictions, only the difficulty. It states that a chaotic system yields very different results for very small differences in initial conditions, not that there is some magic randomness that is impossible to account for in the system. Given the complete initial conditions, you can compute it completely deterministically. Therefore, if you get it wrong because of a chaotic system, it was lack of data (the incorrect initial conditions).

2

sohaibshaheen t1_j0fu2uj wrote

They are called theoretical for a reason, and the reason is that it's not proven yet, hence it has no basis in reality unless it can be proven by experiment.

I believe there is absolute randomness in the universe, contrary to the popular belief that everything is predictable, but I do respect your opinion. Thanks for sharing.

1

__ingeniare__ t1_j0fw5pq wrote

Not exactly, in this case it's theoretical because it is not of practical concern, despite being true. A theory in science is the highest possible status an idea can achieve, nothing can be conclusively proven.

Quantum randomness is a pretty popular idea, but everything else is known to be deterministic. Whether the universe as a whole is random or deterministic depends on if quantum randomness is actually true randomness, maybe we'll have an answer in the coming decades.

3

sohaibshaheen t1_j0fwvpp wrote

As far as I know, determinism is a philosophical idea, not an established fact, and a theory is what it is: just a theory. Just because it isn't possible to prove it doesn't mean we should blindly trust it. Theories exist on both sides of the argument; even free will, to some extent, negates the idea of determinism completely.

1

CubeFlipper t1_j0gwhvm wrote

>even free will to some extent negates the idea of determinism completely.

This is a conclusion drawn backwards, I think. If determinism is the evidence we have, then it is determinism that negates the idea of free will. Free will is an illusion.

2

sohaibshaheen t1_j0h16dg wrote

Like I said, it's a philosophical debate, not a settled one. The idea of free will has not been rejected and won't ever be, in my opinion. You could be right or you could be wrong; we will never know as long as people on both sides of the argument live :)

1

sohaibshaheen t1_j0fwxvm wrote

I have learnt a lot from this debate though, so thanks a lot for your time and valuable feedback.

1

__ingeniare__ t1_j0g3rav wrote

Determinism can be deduced from the laws of physics - if the laws are deterministic, then the universe is, by necessity, also deterministic. But there are many strange things we don't yet know about, such as consciousness and free will. So, no one really knows. Good talk!

3

DrenkardAston t1_j09hamp wrote

AI already predicts the future based on past events; it's not really magic. And if you want a learning model that learns the past, you would also have to give it past data anyway, so you would already know what you're trying to predict, if I understood your point correctly.

8

apperceptiveflower t1_j0dw6n8 wrote

> you would already know what you're trying to predict

What do you mean by this? Couldn't AI show us something that must have happened in the past that we don't currently know about?

2

Ragnarotico t1_j09mchf wrote

>In the field of AI, there is a strong push to use data to predict future events. We see this now with algorithmic trading, in particular. Many of the most powerful computers in the world are focused on weather prediction (which has gotten incredible over the past 2 decades). As AI gains in capability, the ability to extend further into the future with increasing complexity should be expected. Theoretically, with a powerful enough computer (many, many years from now), you could accurately predict any future event.

I think you might be confusing AI with machine learning. Machine learning is capable of predicting trades and weather based on past data. You don't need AI to do that right now, it's already possible.

>Why can’t we feed data into a powerful supercomputer and look back, into the past of history, to gain a better understanding of humanity and civilization?

What kind of data would you feed it? What kind of understanding would you want? In terms of historical events we are limited to whatever recordings/recollections survive that were largely hand written. There's no more "data" to be gained.

8

bongo-in-the-congo t1_j0a0jp3 wrote

"Theoretically, with a powerful enough computer (many, many years from now), you could accurately predict any future event."

Many of my brain cells just killed themselves out of embarrassment for how meaningless this sentence is.

8

67ohiostate67 t1_j0biyvt wrote

Machine learning is a type of AI, so nothing wrong with what OP said

2

umkaramazov t1_j0cxnwk wrote

Through simulated worlds, one can gain insight into the past.

1

Final-Cause9540 OP t1_j0edpl8 wrote

Let’s say that there was a current (present day) dispute between multiple parties. For example, 2 warring nations that harbor old resentment over a centuries-old cultural issue.

A highly accurate predictive model, using vast, vast data sets, could help provide a probabilistic likelihood of the true nature of past events. This could be used for conflict resolution.

1

Senzin_ t1_j09eovq wrote

I think you're confusing simulations with AI (which barely, if at all, exists)

4

Final-Cause9540 OP t1_j09ewv2 wrote

I would agree with that statement, although most simulations in the future would likely be run with some form of artificial intelligence… but the simulations could be run on the past as easily as the future, I assume?

−4

NotAnotherEmpire t1_j09n28v wrote

This gets incredibly complicated very quickly: the more variables you add, the more inherent error is introduced through their values. Major weather forecasting is normally done with ensembles that run a wide variety of initial conditions to develop a range.

Trying to predict discrete events would require both implausible access to information, such as human thoughts, and picking the correct outcome from a more or less unlimited set. The war in Ukraine, for example, is the result of several unknown, specific decisions made at different times, possibly on different continents. It plausibly has inspiration going back to unrelated events in the 2011 protest wave.

3

PinealFever t1_j0ap4iv wrote

Accurately predicting any future event will get you bound in infinite regress pretty quick.

A philosophical non-starter, sorry.

3

Complex_Mushroom_464 t1_j0app9f wrote

Because recorded history is limited by subjective narratives and truths.

3

Httpssssss t1_j09yifs wrote

Machine learning (which can be classified as AI), doesn’t tell the story as well as other models can.

So if you were trying to understand something from the past, it would be better to use a model like linear regression.

Which people do use.

But to train a predictive model, you use historical data. So every prediction is based on what happened in the past, but for the most part it's not terribly decipherable. Simpler models are better at describing patterns for understanding.
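As a sketch of what "decipherable" means here: an ordinary least-squares fit with one predictor can be computed by hand, and the fitted slope and intercept are directly readable as a story about the data. The numbers below are invented for illustration:

```python
# Simple OLS: slope = cov(x, y) / var(x), intercept from the means.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The whole "model" is two human-readable numbers.
print(f"y = {slope:.2f}*x + {intercept:.2f}")
```

A neural network fit to the same data might predict just as well, but its millions of weights admit no comparable one-line reading.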

2

midasear t1_j0ahcu0 wrote

>Theoretically, with a powerful enough computer (many, many years from now), you could accurately predict any future event.

Uh...no.

There are formal mathematical proofs demonstrating that perfect solutions to systems of non-linear differential equations (like the weather) can only be calculated if one possesses perfect knowledge of boundary conditions.

It doesn't matter how powerful your computers are: imperfect knowledge of every conceivable (and/or unimagined) relevant condition will eventually overwhelm your model, no matter how accurate the model is. To simplify things, a butterfly flapping its wings in Angola will eventually lead to an unexpected countercyclone over Idaho. And there are millions of butterflies flapping their wings every moment. Increased computational power will not help avoid this.

And that's before we start considering human errors during the programming and data entry, which will be considerable.
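To make the butterfly point concrete, here's a toy sketch (my own code, not any real weather model): Lorenz's classic three-variable system, integrated twice with forward Euler, with the second run's starting point perturbed by one part in a hundred million.

```python
# Lorenz system with the classic parameters, stepped with forward Euler.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps=6000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0, 1.0, 1.0 + 1e-8))  # the "butterfly": a 1e-8 perturbation

gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(gap)  # vastly larger than the initial 1e-8 difference
```

The error grows roughly exponentially until the two trajectories are as far apart as the attractor allows, which is why shrinking the measurement error only buys a little extra forecast time rather than unlimited prediction.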

2

ianitic t1_j0ao05a wrote

When did you, chatGPT, start posting on r/Futurology?

2

67ohiostate67 t1_j0bjdhh wrote

I work for an AI start up that builds machine learning algorithms for insurance companies. The software helps them make predictions on expected losses. It is accurate and pretty crazy.

2

RenzoARG t1_j0abirf wrote

Didn't Asimov write about this same idea in "Feminine Intuition"?

1

TheSecretAgenda t1_j0bu5bb wrote

If the AI could form a profile of every person on Earth and predict what each person is likely to do, plus predict weather patterns, crop yields and scientific discoveries, and account for a black swan event or two, you could probably develop a pretty good predictive model.

1

ANKI-MASTER t1_j0dc60o wrote

Some day AI will be able to predict lottery tickets. Maybe not for 1000 years, but someday it will.

1

FamousPussyGrabber t1_j0dd5mz wrote

Imagine the AIs sell all their assets and no one knows why, but it's clear they've seen a civilization-ending event approaching with a high level of certainty. That would be creepy

1

lopedopenope t1_j0dohij wrote

I think I'm scared to hear its scenarios, so let's just not let it tell us what it thinks, please

1

override367 t1_j0duxtp wrote

Artists will get over it when they realize there's literally nothing they can do and every second they spend complaining online was a second they should have spent preparing for their future

I'm not trying to be mean, it is what it is, most of them probably didn't really think about the fact that many industries have already been through this

including their own industry: the camera put most professional artists out of work (you needed so many more artists to make things like textbooks)

1

EvenPalpitation6074 t1_j0e2tst wrote

>Why can’t we feed data into a powerful supercomputer and look back, into the past of history, to gain a better understanding of humanity and civilization?

Replace "supercomputer" with "academic institutions", and we've been doing that since forever. It just doesn't always spit out the answers people want.

1

Shiningc t1_j0iqv5t wrote

You can't predict the future from data, because data is a record of past events. No matter how many past events you gather, it's not going to predict the future. You just end up with something that repeats the past.

1