Touchyuncle45 t1_jbjqouz wrote
Reply to comment by HanaBothWays in The hedge fund that just posted the best return in history is negotiating a company-wide ChatGPT license by habichuelacondulce
Well, looks like this is a race: the longer you wait, the more money and power you lose.
Who would have thought Google could become less relevant as a search engine? AI-powered search engines are the future. Imagine Reddit using AI to filter posts and results...
HanaBothWays t1_jbjtj4n wrote
It’s a race to develop better Large Language Model tech, but if you are in a sector that deals with sensitive data and these tools pose a risk of inadvertently disclosing that data (because the tools send everything back to “the mothership” for analysis), being an early adopter is maybe not such a good idea.
NoSaltNoSkillz t1_jbkm19o wrote
If you localize the instance within the company, or more specifically within the teams that already have access to that data, and run separate instances for everyone outside that group, it's less of a problem. Keeping the model local and only accepting local input should limit the risk, although if it's still scraping current data, who knows, that could be a risk point.
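Roughly how that per-team isolation could look in practice (a minimal sketch only; the team names, endpoints, and route_prompt helper are all hypothetical, not from any real deployment):

```python
# Hypothetical sketch of per-team isolation for a locally hosted LLM.
# Team names, endpoint URLs, and function names are illustrative only.

TEAM_ENDPOINTS = {
    # Each team cleared for a sensitive data set gets its own local instance;
    # prompts never leave the network segment that instance runs on.
    "quant-research": "http://llm-quant.internal:8080/v1/completions",
    "back-office": "http://llm-backoffice.internal:8080/v1/completions",
}

# Instance with no access to the sensitive corpus, for everyone else.
GENERAL_ENDPOINT = "http://llm-general.internal:8080/v1/completions"


def route_prompt(user_team: str) -> str:
    """Return the only local endpoint a given team's prompts may be sent to."""
    return TEAM_ENDPOINTS.get(user_team, GENERAL_ENDPOINT)


if __name__ == "__main__":
    # A quant researcher's prompt stays on the quant team's instance.
    print(route_prompt("quant-research"))
    # Anyone outside a cleared team only ever reaches the general instance.
    print(route_prompt("marketing"))
```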
HanaBothWays t1_jbko8au wrote
Yes, but to ensure you have a model that behaves that way, with standardized controls, you first need to establish what those controls are, and then figure out some kind of auditing and certification framework for saying "this version of the tool works that way and is safe to use in an environment with sensitive information/regulated data."
These organizations shouldn’t be trying to roll their own secure instance of ChatGPT (they wouldn’t even know where to start) and I bet they don’t want to.
seweso t1_jbkb2eh wrote
OpenAI isn't going to be the only one with this tech. You can't lock it down...