Submitted by [deleted] t3_11ul904 in MachineLearning
banatage t1_jcon2zl wrote
IMHO, those models are very good for general knowledge that can be sucked up from public sources.
When it comes to proprietary / confidential data / knowledge, this is where your work will pay off.
fullstackai t1_jcopazt wrote
100% agree. Also, any AI that requires sensor data (e.g., in manufacturing) cannot easily be replaced by foundation models.
Individual-Sky-778 t1_jconas3 wrote
Yes, I completely agree. Right now that's true. But I wonder how long it will stay true? Protocols for data encryption and privacy-preserving learning already exist; IMHO it's just a matter of time until OpenAI (and similar providers) offer such services.
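To make "privacy-preserving learning" concrete, here is a minimal sketch of the core idea behind differentially private aggregation (as used in DP-SGD-style training): clip each individual's contribution, then add calibrated noise before averaging. The function name and parameter values are illustrative, not any specific library's API.

```python
import random

def dp_average(values, clip=1.0, noise_scale=0.5, seed=0):
    """Illustrative privacy-preserving aggregation: clip each
    contribution to bound any one person's influence, then add
    Gaussian noise to the sum before averaging."""
    rng = random.Random(seed)
    # Bound each value to [-clip, clip] so no single record dominates.
    clipped = [max(-clip, min(clip, v)) for v in values]
    # Add noise proportional to the clipping bound's sensitivity.
    noisy_sum = sum(clipped) + rng.gauss(0, noise_scale)
    return noisy_sum / len(values)
```

The clipping step is what makes the noise meaningful: it caps the sensitivity of the sum, so a fixed noise scale can hide any one contribution.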
banatage t1_jcoo9ek wrote
Factuality is not a guarantee either with LLMs...
wind_dude t1_jcoqe5z wrote
Nor with statistical models. Accuracy has generally been higher with those, but LLMs are catching up in key NLP domains.
jakderrida t1_jcoobvg wrote
Nor with humans.
NoRip7374 t1_jcpex1s wrote
Nor with self hosted models...
EmmyNoetherRing t1_jcot5t2 wrote
Is that true? OpenAI seems to think they’ll be able to train task-specific AI on top of their existing models for specific roles.