Submitted by adventurousprogram4 t3_zyiib1 in MachineLearning
Hyper1on t1_j2dzz01 wrote
Bit early to say, but I'd be willing to bet that most of their major papers this year will be widely cited. Their work on RLHF, including constitutional AI and HH (helpful and harmless), seems particularly likely to be picked up by other industry labs, since it provides a way to improve LLMs deployed in the wild while reducing the cost of collecting human feedback data.