I have been trying to work out what kind of implementation of this technology you could run on the average home computer. For now, the RAM/GPU needed to run GPT-3.5 is beyond the compute power of even a power user's home equipment.
You can build a basic system that handles limited tasks for specific inputs, but a chatbot using a parameter set the size of GPT-3.5's is too large.
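A quick back-of-envelope calculation shows why the model won't fit on home hardware. The parameter count of GPT-3.5 isn't public, so the 175 billion figure below is an assumption borrowed from GPT-3 purely for illustration; the point is that even at reduced precision the weights alone dwarf a typical consumer GPU's memory.

```python
# Rough memory estimate just to hold a model's weights in RAM/VRAM.
# 175e9 params is an ASSUMPTION (the published GPT-3 size); GPT-3.5's
# true parameter count is not public.

def model_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Gigabytes needed to store the weights alone (no activations, no KV cache)."""
    return n_params * bytes_per_param / 1e9

PARAMS = 175e9  # assumed GPT-3-scale model

for precision, nbytes in [("fp16", 2), ("int8", 1)]:
    print(f"{precision}: ~{model_memory_gb(PARAMS, nbytes):.0f} GB")
```

Even the aggressively quantised int8 figure (~175 GB) is an order of magnitude beyond a high-end consumer GPU's 24 GB, before counting activations or serving overhead.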
And the emergent properties present in this larger model are the most interesting and useful part of the current progress. So for NOW, OpenAI has a huge market advantage - they have a live product with a huge existing user base and the compute power to support current throttled usage.
If I were OpenAI, I would be looking at how to launch the paid 'beta' product for generic use and then examine the subset of interactions on the free version to see if there are use cases that could benefit from additional training inputs for further enhanced interactions. Some of my nebulous thoughts on potential use cases for custom products that people might pay for include:
Roleplaying bot - partner with online roleplaying systems to ingest large amounts of (anonymised) conversational data and gather human feedback to train the new model.
Developer/infrastructure/IT helper: ingest even more publicly available data sets from Q&A forums, open-source documentation and support forums, GitHub, etc.
Private instances of ChatGPT with a "commercial in confidence" licence so that businesses can provide their commercial IP datasets and transform the chatbot into the company knowledge system - all data, processes, and procedures can be used and accessed in a dynamically linked, interactive, and proprietary context. (It would also need to conform with country/state privacy laws, etc.)
Similar private instances provided to academic institutions, where all academic and student information, emails, and conversations (also anonymised) could be used to train course and subject-matter-expert bots that assist academics in designing courseware and help students learn and understand much faster.
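The private-instance ideas above boil down to looking up an organisation's own documents at question time so answers stay grounded in in-house material. A minimal sketch of that retrieval step, using plain word-overlap similarity rather than the learned embeddings a real system would use; the document names and text are invented for illustration:

```python
# Sketch of retrieval over a private document set: given a query, find
# the most relevant in-house document to hand to the chatbot as context.
# Plain bag-of-words cosine similarity stands in for learned embeddings;
# all document content here is hypothetical.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(query: str, docs: dict) -> str:
    """Name of the document most similar to the query."""
    qv = vectorize(query)
    return max(docs, key=lambda name: cosine(qv, vectorize(docs[name])))

docs = {  # invented example documents
    "leave_policy": "annual leave requests must be approved by your manager",
    "expense_policy": "submit expense claims with receipts within thirty days",
}
print(best_match("how do I submit expense claims", docs))  # expense_policy
```

The retrieved text would then be prepended to the user's question as context for the model, which is how the company's data can be "dynamically linked" without retraining the model on it.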
I think once we can run LLMs on home computers, all bets are off. Your fridge might have a bot to tell you the options for dinner. Your wallet will alert you when your expenses are off track from previous months. You will ask your home assistant for a daily plan, and it will remind you to take your medicine and prompt you to eat or drink something depending on your current vitals... The next steps are very exciting.
I am sure there are issues with these ideas but I'm very excited to see where this all goes!
kalydrae t1_j2be8w9 wrote
Reply to OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun