Lionfyst t1_j8i1477 wrote
Reply to ChatGPT Passed a Major Medical Exam, but Just Barely | Researchers say ChatGPT is the first AI to receive a passing score for the U.S. Medical Licensing Exam, but it's still bad at math. by chrisdh79
A recent paper (posted around Reddit somewhere) demonstrated that LLMs can do all these novel things like tell stories, make poems, do math, or make charts despite not being explicitly designed for them, because the massive training organically creates all kinds of sub-models in their network that can handle those types of patterns.
ChatGPT is bad at math because its training was insufficient to give it a sub-model that handles arithmetic reliably.
It's not going to be too long before someone feeds an LLM better math training, and/or builds a hybrid that uses some other technique for the math part and hands math questions off to that other engine.
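That hand-off idea is easy to sketch. Here's a minimal, purely illustrative example (not from the article or any particular product): a router that spots plain arithmetic in a prompt and sends it to a deterministic evaluator, falling back to the language model for everything else. The `ask_llm` function is a hypothetical stand-in for whatever chat API you'd actually call.

```python
import ast
import operator
import re

# Safe evaluator for plain arithmetic -- the stand-in "math engine".
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def eval_arithmetic(expr: str) -> float:
    """Evaluate +, -, *, /, ** over numbers via the AST, never eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# Prompt looks like bare arithmetic: digits, operators, parens, whitespace.
_MATH_RE = re.compile(r"^[\d\s().+\-*/^]+$")

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real chat-model API call here.
    return f"[LLM answer to: {prompt}]"

def answer(prompt: str) -> str:
    """Route plain arithmetic to the deterministic engine, everything else to the LLM."""
    stripped = prompt.strip().rstrip("?")
    if _MATH_RE.match(stripped):
        try:
            return str(eval_arithmetic(stripped.replace("^", "**")))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass  # malformed "math" falls through to the LLM
    return ask_llm(prompt)

print(answer("12 * (7 + 5)"))                     # -> 144
print(answer("Write me a poem about lionfish"))   # -> goes to the LLM
```

In practice the model itself would probably decide when to call the tool (function calling rather than a regex), but the division of labor is the same: let the LLM handle language and hand the arithmetic to something that can't get it wrong.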
Lionfyst t1_j8x2xhd wrote
Reply to Investors and techies gather in San Francisco to bathe in generative A.I. hype sparked by ChatGPT by audiomuse1
There will be a boom and there will be a bust, but some interesting stuff and new players will thrive and some old ones will die.

Generative AI really feels like it fits a generational wave pattern: explosion, boom, shake-out, and transformation of the industry and culture.
Edit: Tried to add 10's