persistentrobot t1_j128d70 wrote
Reply to comment by FHIR_HL7_Integrator in [R] Foresight: Deep Generative Modelling of Patient Timelines using Electronic Health Records by w_is_h
You should take a look at UniHPF. They make minimal assumptions about data format/mapping by chucking everything into a large language model, and its performance is comparable to FHIR-based embeddings. I think this is an interesting avenue for machine learning in health, but the failure modes of large language models are difficult to uncover. For example, how far can a covariate shift before a prediction flips? Or is the act of measuring the covariate the only information we actually need?
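To make the covariate-shift question concrete, here's a rough sketch of the kind of probe I mean: perturb one covariate in growing steps and record the smallest shift that flips the model's prediction. This is just a toy logistic-regression stand-in, not UniHPF or anything from the paper; the `shift_to_flip` helper and the synthetic data are my own illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a clinical risk model: 3 covariates, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def shift_to_flip(model, x, feature, step=0.05, max_shift=5.0):
    """Smallest signed shift of one covariate that flips the prediction,
    or None if the prediction is stable up to max_shift."""
    base = model.predict(x.reshape(1, -1))[0]
    shift = step
    while shift <= max_shift:
        for sign in (+1, -1):
            x_shifted = x.copy()
            x_shifted[feature] += sign * shift
            if model.predict(x_shifted.reshape(1, -1))[0] != base:
                return sign * shift
        shift += step
    return None

x0 = X[0]
print("shift needed to flip prediction on covariate 0:",
      shift_to_flip(model, x0, feature=0))
```

For a big black-box model over EHR sequences you'd need something analogous per covariate or per event, which is exactly where the "difficult to uncover" part bites.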