coffeesharkpie t1_j62z0bw wrote
Reply to [OC] best-fit lines, correlations: ed spending vs evangelical –– 2020 election by terrykrohe
I don't get your interpretation of the t-value and the 10% probability. To the best of my understanding, the closer t is to 0, the more likely there isn't a significant difference between the two samples. Now, to get the p-value for that t-value, we would need the number of dfs. But even p-values don't tell us anything about actual probabilities, only how likely your data is assuming a true null hypothesis (you'd need Bayesian statistics to get actual probabilities).
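To make that concrete, here's a minimal Python sketch with made-up numbers (the t-value and dfs are hypothetical, not taken from the plot):

```python
# Two-sided p-value from a t-value and degrees of freedom (hypothetical numbers)
from scipy import stats

t_value = 1.7  # hypothetical t statistic
df = 48        # hypothetical degrees of freedom

# Probability of a |t| at least this extreme, assuming the null hypothesis is true
p_value = 2 * stats.t.sf(abs(t_value), df)
print(f"p = {p_value:.3f}")  # ~0.096: not significant at alpha = .05
```

Note the p-value here is P(data | H0), not the probability that the means actually differ.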
coffeesharkpie t1_j5lm7ou wrote
Reply to comment by cremepat in Books I read in 2022 [OC] by cremepat
Thanks, appreciate the response
coffeesharkpie t1_j5l2zl8 wrote
Reply to Books I read in 2022 [OC] by cremepat
Wow, that's a beautiful graph! Still, it leaves me torn, as its beauty alone can't get me past critiquing its functionality. But divisive is good, as divisive gets people to talk.
Mind sharing a bit more information on the R packages and approach you used?
coffeesharkpie t1_j5hc4al wrote
Reply to comment by Terminarch in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
You stated you reviewed the paper. In the review process, you should be able to point out methodological flaws to the editor, leading to a rejection or a major revision.
Like I said, it's a notion, not hard science. For a practical example, just take a look at the debunked Wakefield (1998) paper incorrectly linking vaccines to autism: 4000+ citations according to Google Scholar. Other examples are papers on water that has a memory, magical stem cells, arsenic DNA, or non-Mendelian genetics. It's actually quite easy to find examples of papers with very high citation counts that should have been printed in a tabloid instead of a scientific journal.
Many scientists are really no better than high school gossipers.
coffeesharkpie t1_j5gva2d wrote
Reply to comment by Terminarch in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
Welp, if you reviewed the paper, you at least had your chance to criticize the approach. Did you strongly suggest a rejection to the editor? Also, there is a notion that bad or divisive papers have a higher chance of being cited. That's one reason why the number of citations is a pretty bad metric for judging the quality of research.
coffeesharkpie t1_j5e1g0z wrote
Reply to comment by unhappymedium2 in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
Well, you know, it's a common notion in statistics that "All models are wrong, but some are useful". This means no model will ever capture reality as is, but we can make sure a model is good enough to be useful for a particular application. This is possible because we can actually quantify uncertainty about prior information, estimates, and predictions (e.g. through credible or confidence intervals) and make sure models are as exact and as complex as needed.
Funnily enough, we can predict things quite well, especially when it comes to large numbers of people (individuals are the hard part). Like how social background influences educational levels for a population, how lifestyle influences average health, how climate change may affect the frequency of extreme weather; even what people will type next on their smartphones is predicted with these kinds of models.
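As an illustration of what quantifying uncertainty looks like in practice, here's a minimal Python sketch with simulated data (all numbers made up) that puts a 95% confidence interval around a sample mean:

```python
# 95% confidence interval for a mean, using simulated (made-up) data
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=170, scale=10, size=200)  # hypothetical measurements

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f}, 95% CI = [{ci_low:.1f}, {ci_high:.1f}]")
```

The interval says nothing about any single individual; it quantifies how precisely the population mean is estimated, which is exactly why predictions work better for large groups than for one person.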
coffeesharkpie t1_j5bzuc7 wrote
Reply to comment by BobRussRelick in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
While I see your point and mostly agree with it, one could still make a case that in at least some key metrics, like average age, life expectancy, and physicians per 1,000 inhabitants, NZ and the USA are rather similar. Whereas, e.g., the average South African is roughly 10 years younger (38 vs. 28 yrs)...
coffeesharkpie t1_j5bflc0 wrote
Reply to comment by BobRussRelick in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
Are you seriously comparing the US to Africa? Just look at the stats on mean age, obesity, climate, time spent outside, and those immune systems hardened by things like malaria...
coffeesharkpie t1_j5bety5 wrote
Reply to comment by jkjkjk73 in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
If you can tell me what mechanism of the vaccine should cause this X years down the line, I'm all ears. After a few weeks, there will be nothing left in your body to cause anything. That's why, for all vaccines and medications that aren't taken over years, side effects are commonly found quite close to the time of taking them. The only viable situation would be getting your jab just a bit prior to your 60th birthday...
coffeesharkpie t1_j5bdc9j wrote
Reply to comment by Obvious-Priority-791 in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
You realize that this is a problem that concerns quite a lot of areas in medical research where you can't simply conduct classical experiments? E.g. what would happen if Person X smokes vs. doesn't smoke, takes a certain medication vs. doesn't, stays in their mouldy home vs. moves out, etc. Things where you would put people's lives at risk if you withheld treatment or actively harmed them, as with taking drugs/smoking, etc. You get the gist.
For this reason, researchers developed sophisticated statistical methods to get a grip on this, e.g. Rubin's potential outcomes framework, causal mediation analysis, etc. These use, for example, prior information, or try to find someone who is as similar as possible to Person X in all relevant traits (e.g., age, gender, fitness, social background, etc.) aside from smoking, and draw inferences from there. Honestly, there are multiple approaches.
So, long story short, estimates are not drawn from thin air. They are a product of scientific rigour, commonly used in practically all empirical fields of science (from intelligence tests and personality assessments to climate science and particle physics), and because of this they can be surprisingly accurate. Especially as most of them also come with information on their uncertainty (e.g. standard errors, confidence or credible intervals, etc.).
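To give a feel for the matching idea, here's a toy Python sketch with simulated data (covariates, outcomes, and group sizes are all made up); real analyses use far more covariates and usually propensity scores:

```python
# Toy matching example: pair each smoker with the most similar non-smoker
# on observed covariates, then compare outcomes (all data simulated)
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariates: age, fitness score
smokers = rng.normal([50, 5], [10, 2], size=(100, 2))
nonsmokers = rng.normal([45, 6], [10, 2], size=(300, 2))
smoker_outcome = rng.normal(60, 8, size=100)     # e.g. a health score
nonsmoker_outcome = rng.normal(65, 8, size=300)

# For each smoker, find the nearest non-smoker in covariate space
# (in practice you'd standardize the covariates first)
dists = np.linalg.norm(smokers[:, None, :] - nonsmokers[None, :, :], axis=2)
matches = dists.argmin(axis=1)

# Estimated effect: mean outcome difference across matched pairs
effect = (smoker_outcome - nonsmoker_outcome[matches]).mean()
print(f"estimated effect of smoking: {effect:.1f}")
```

The point is just the logic: comparing like with like stands in for the experiment you can't ethically run.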
coffeesharkpie t1_j63k7fy wrote
Reply to comment by terrykrohe in [OC] best-fit lines, correlations: ed spending vs evangelical –– 2020 election by terrykrohe
That doesn't make it clear whether you're reporting the t-value or the p-value of a t-test. If it's the t-value, you would at least need to report the corresponding p-value to judge whether the mean difference is statistically significant. If it's the p-value, then depending on the chosen alpha level (commonly .05) and on whether the test is one- or two-sided, the means are likely not statistically different, because the value is too high. And even then, you would not interpret the p-value directly as the probability of there being a difference in the means (at least in a Frequentist framework).
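For reference, here's a minimal Python sketch with simulated data showing how both numbers come out of a two-sample t-test (all values made up):

```python
# Two-sample t-test: report both the t statistic and its p-value (simulated data)
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, size=50)  # hypothetical samples
group_b = rng.normal(105, 15, size=50)

result = stats.ttest_ind(group_a, group_b)  # two-sided by default
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# Significant at alpha = .05 only if p < .05
```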