YoungXanto t1_iz07hoe wrote

>you can never infer causality from looking passively at data

In this view, causal inference is confined to a single observation. Extrapolating results to any other similar experimental set-up (even an identical one) is just that: extrapolation. To quote Hume,

>I say, then, that, even after we have experience of the operation of cause and effect, our conclusions from that experience are not founded on any reasoning, or any process of the understanding

There is an epistemological limit to the concept of causation. In statistical inference, which is grounded in probability theory, a good professor will routinely use this limit to smack undergrads upside the head, be it with regression or p-values.

We assume distributions for the underlying samples, and lean on the central limit theorem, to do the statistics that support causal inference. We can attempt to control for type 1 error via our set-up, but even when our assumptions are not violated, we can never claim a result with 100% certainty.
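To make that point concrete, here is a minimal simulation (purely illustrative; the sample size, cutoff, and seed are my own choices): two samples are drawn from the *same* distribution, so the null hypothesis is true by construction, yet a test at alpha = 0.05 still flags false positives at roughly that rate, never zero.

```python
import random
import statistics

random.seed(42)

def one_experiment(n=30):
    # Two samples drawn from the SAME normal distribution: the null is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic for the difference in means
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (ma - mb) / ((va / n + vb / n) ** 0.5)
    # Normal-approximation cutoff for two-sided alpha = 0.05
    return abs(t) > 1.96

trials = 5000
false_positives = sum(one_experiment() for _ in range(trials))
rate = false_positives / trials
print(f"Type I error rate: {rate:.3f}")  # hovers near 0.05
```

Even with every distributional assumption satisfied, the rejection rate sits near alpha rather than zero, which is exactly the residual uncertainty described above.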

Carefully controlled experimentation is better than using some observational data set, but it suffers two drawbacks: it is expensive to obtain, and its uses beyond the experiment are quite limited, necessarily requiring extrapolation. So I argue pragmatically that we should use observational data and the statistical tools at our disposal to understand causation (to the extent it actually exists), with the appropriate limiting caveats.

5

YoungXanto t1_iyzy0jb wrote

There are two technical books he's published. One is Causality (2nd ed., 2009), which is very technical and requires a fair amount of math background to understand and work through. He also has "Causal Inference in Statistics: A Primer," which presents the core concepts with significantly fewer math prerequisites.

His do-calculus is interesting, and he's highly influential in the machine learning literature, but he has a fair number of detractors.

I personally like the concept he presents in which we can reverse causality by re-ordering our equations. It points to the epistemological limits of our ability to understand causation, in a way that Hume elucidated with his billiard-ball examples a couple hundred years ago.

That said, Pearl is a bit arrogant for my taste, coming across as if he's the sole inventor of concepts that have existed for hundreds of years. His framework is a good one, but it is far from the only one.

2

YoungXanto t1_itvklp5 wrote

Even the hard sciences are domains of inferred causality.

Hume remarks about billiard balls:

>if I see one billiard ball rolling toward another, how do I know that the second ball will move when it is struck?

That is, experience is a necessary precursor to knowledge. And our observations are limited to the confines of the single experiment from which they emerge. Repeated measurements add evidence of a causal outcome, but the state space of our observations is necessarily a subspace of the entire space of observable outcomes, and we also assume the state space is time-invariant. We can therefore never be absolutely certain about anything, because we can never be absolutely certain about the space we haven't sampled (which is admittedly a bit of a tautology).
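The subspace point can be made almost trivially concrete (a toy sketch of my own, not from the original comment): ten rolls of a twenty-sided die can realize at most ten distinct faces, so some outcomes are guaranteed to go unobserved even though each has probability 1/20, not zero.

```python
import random

random.seed(1)  # fixed seed for reproducibility

faces = set(range(1, 21))                           # the full state space: 20 faces
rolls = [random.randint(1, 20) for _ in range(10)]  # our finite "experiment"
unseen = faces - set(rolls)

# 10 rolls cover at most 10 distinct faces, so at least 10 faces are
# never observed, yet none of them has zero probability.
print(f"{len(unseen)} of 20 faces never appeared in the sample")
```

No amount of inspecting the sampled rolls tells you anything certain about the unseen faces, which is the Humean worry in miniature.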

2

YoungXanto t1_itvj0sq wrote

>Not every question can be settled with airtight logic or an experiment; sometimes all you have is a better or worse argument

From the most skeptical point of view, all we ever have is a better or worse argument. That's the basis of statistics, rooted in probability theory (and very Humean).

We can only sample from observable space across time. Our counterfactual probabilities may be vanishingly small, but they can never be zero.

1