Submitted by billjames1685 t3_youplu in MachineLearning
picardythird t1_ivht670 wrote
Reply to comment by lgcmo in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Interesting. I'd be curious to see how your models accounted for unexpected causal impacts, as well as outsized market shocks driven by seemingly irrational (or arbitrary) news or reports.
Dr-Do-Too-Much t1_ivhuc29 wrote
You'd need a "news feed digestion" pipeline to scrape and encode world news before feeding it into the main market predictor. I'd love to spend hedge fund money on that.
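To make the idea concrete, here's a minimal sketch of the "digestion" step: turning a batch of headlines into one numeric feature a market predictor could consume. The keyword lexicon, sample headlines, and scoring rule are all made up for illustration; a real pipeline would use a trained financial-text model, not keyword counting.

```python
# Illustrative "news feed digestion" step: encode headlines into a single
# sentiment feature. Lexicon and headlines are toy examples, not real data.

POSITIVE = {"beats", "growth", "surge", "record", "upgrade"}
NEGATIVE = {"misses", "recession", "crash", "downgrade", "sanctions"}

def headline_score(headline: str) -> int:
    """Score a headline: +1 per positive keyword, -1 per negative."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def digest(headlines: list[str]) -> float:
    """Average sentiment over a batch of headlines -> one model feature."""
    if not headlines:
        return 0.0
    return sum(headline_score(h) for h in headlines) / len(headlines)

sample = [
    "Tech giant beats earnings, shares surge",
    "Central bank signals downgrade risk amid recession fears",
]
print(digest(sample))  # one scalar feature for the downstream predictor
```

In practice this scalar (or a richer embedding) would be appended to the market model's feature vector at each time step.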
narwhal_breeder t1_ividnb1 wrote
You don't need to; AzFiNText has been around for a long time and is well researched.
lgcmo t1_ivjpk6e wrote
When I left there were some ideas floating around to scrape not only news, but also statements given by public figures and central banks. There is a metagame of interpreting what Jerome Powell really meant in each phrase of his speech.
Nothing like that was done (and I believe it's not being built), but we had a sort of smart news pipeline that filtered relevant news and sent it to analysts. As I said, we had not fired the analysts; we were teaming up with them. When there was a market shock or something very disruptive, the analysts took over and we would update the models.
But I have to say that even with radical events (such as the invasion of Ukraine), the models were not that far off. We had some anomaly detection in the pipeline.
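The anomaly detection the commenter mentions could be as simple as flagging returns that deviate wildly from their recent history. A minimal sketch, assuming a rolling z-score approach (the window size, threshold, and return series below are invented, not the fund's actual settings):

```python
# Illustrative shock detector: flag points whose z-score against a trailing
# window exceeds a threshold. Parameters and data are toy assumptions.
import statistics

def find_anomalies(returns, window=5, threshold=3.0):
    """Return indices where |return - trailing mean| / trailing stdev > threshold."""
    anomalies = []
    for i in range(window, len(returns)):
        past = returns[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past)
        if sigma > 0 and abs(returns[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic daily returns with one large shock at index 6
series = [0.1, -0.2, 0.0, 0.1, -0.1, 0.05, -8.0, 0.1]
print(find_anomalies(series))  # → [6]
```

When such an index is flagged, the workflow described above would hand control to the human analysts and trigger a model update.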