sebzim4500 t1_je16h58 wrote
Reply to comment by gunbladezero in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
I'm going to simplify a bit here; if you want a more complete answer I can write something up. I was planning on writing a blog post about this, because it is relevant to why ChatGPT does so much better when asked to show its working.
Basically, LLMs do not have any memory except what you see in the output. You may think that the network just needs to decode the base64 once and then use it to answer all the questions, but in actuality it has to redo that decoding for every single token it generates.
This is compounded by the fact that decoding base64 like this is a per-character operation, which GPT-n is especially bad at due to its choice of tokenization. Since it can only use a finite amount of computation per token, wasting computation in this way decreases its effectiveness.
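You can see the tokenization problem directly (a rough sketch assuming the tiktoken library and the cl100k_base encoding used by the gpt-3.5/gpt-4 chat models; the exact splits may differ):

```python
import base64
import tiktoken  # OpenAI's tokenizer library

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent GPT models

word = "pineapple"
b64 = base64.b64encode(word.encode()).decode()  # 'cGluZWFwcGxl'

# The plain word is split into a handful of multi-character tokens...
print([enc.decode([t]) for t in enc.encode(word)])
# ...and so is the base64 string, but its chunks don't line up with the
# original characters at all, so any per-character reasoning has to be
# reconstructed inside the network, and redone for every generated token.
print([enc.decode([t]) for t in enc.encode(b64)])
```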
Here's an example where simply making GPT-4 reverse the string makes it completely unable to do a straightforward calculation, unless you let it show its working.
sebzim4500 t1_je10iu2 wrote
>Lower-precision fine-tuning (like INT8, INT4)
How would this work? Are the weights internally represented as f16 and then rounded stochastically whenever they are used?
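For what it's worth, here's a minimal sketch of what "keep f16 master weights, round stochastically on use" could look like (purely illustrative numpy, not how any particular library implements INT8/INT4 fine-tuning):

```python
import numpy as np

def stochastic_round_to_int8(w_f16, scale):
    """Round scaled f16 weights to int8, rounding up with probability equal
    to the fractional part, so the quantization is unbiased in expectation."""
    w = w_f16.astype(np.float32) / scale
    low = np.floor(w)
    round_up = np.random.rand(*w.shape) < (w - low)
    return np.clip(low + round_up, -128, 127).astype(np.int8)

master_weights = np.random.randn(4, 4).astype(np.float16)  # kept in f16
scale = np.abs(master_weights).max() / 127
quantized = stochastic_round_to_int8(master_weights, scale)  # used as int8
print(quantized)
```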
sebzim4500 t1_je0ceht wrote
Reply to comment by pinkballodestruction in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
He probably doesn't have access to the GPT-4 API.
sebzim4500 t1_je0c899 wrote
Reply to comment by TehDing in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
> You can ask GPT to spell a word, or provide the words as individual "S P A C E D" characters and it will similarly do poorly- it has nothing to do with tokenization. GPT is capable of spelling, it can even identify that it is not playing well if you ask if something is a good guess- but continues to give poor answers.
Yeah, because 99.99% of the time when it sees words they are not written in that way. It's true that the model can just about figure out how to break a word up into characters, but it has to work hard at that and seemingly doesn't have many layers left for completing the actual task.
I would expect that a model trained with single-character tokens would do far better at these word games (Wordle, hangman, etc.), at the cost of being worse at almost everything else.
sebzim4500 t1_jdzmpee wrote
Reply to comment by TehDing in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
Wordle is kind of unfair though, because the LLM takes input in the form of tokens rather than letters, so doing anything which requires reasoning on the level of letters is difficult. Incidentally, this might also be affecting its ability to do arithmetic; LLaMA, by comparison, uses one token for each digit to avoid that issue (but of course still suffers from the same problems with breaking words into characters).
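To illustrate the digit point (again assuming tiktoken and the cl100k_base encoding; the exact chunking may vary):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

number = "123456789"
# GPT-style BPE groups several digits into each token, e.g. ['123', '456', '789'],
# so digit-level arithmetic has to fight the tokenization.
print([enc.decode([t]) for t in enc.encode(number)])

# A LLaMA-style scheme gives every digit its own token:
print(list(number))  # ['1', '2', '3', '4', '5', '6', '7', '8', '9']
```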
sebzim4500 t1_jdzczty wrote
Reply to comment by gunbladezero in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
I think you're forcing the model to waste the lower layers on every step decoding that base64 string. Let it output the word normally and you would probably see much better performance. Just don't look at the first output if you still want to play it like a game.
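Concretely, something like this (a rough sketch against the early-2023 OpenAI Python library; the prompt wording is made up):

```python
import openai  # 0.x-style library; assumes openai.api_key is set

messages = [
    {"role": "system",
     "content": "You are the oracle in 20 Questions. First, state your secret "
                "word in plain text. Afterwards, answer only yes/no questions."},
]

first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
secret_turn = first["choices"][0]["message"]["content"]  # contains the word

# Keep the plain-text word in the context so the model doesn't have to
# re-derive it on every token, but simply never show this turn to the player.
messages.append({"role": "assistant", "content": secret_turn})
```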
sebzim4500 t1_jdeq6uo wrote
Reply to comment by light24bulbs in [N] ChatGPT plugins by Singularian2501
There may have been pretraining on how to use tools in general, but there is no pretraining on how to use any particular third-party tool. You just write a short description of its endpoints and it gets included in the prompt.
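Something along these lines (a hypothetical endpoint description, not OpenAI's actual manifest format, just to show how little the model is given):

```python
plugin_description = """
Tool: todo_list
- GET /todos                   -> returns the user's current todo items
- POST /todos {"text": "..."}  -> adds a new todo item
Use this tool whenever the user asks about their tasks.
"""

system_prompt = (
    "You are a helpful assistant. You may call the tool described below "
    "by emitting a JSON request for one of its endpoints.\n" + plugin_description
)
print(system_prompt)
```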
The fact that this apparently works so well is incredible; it's probably the most impressed I've been with any development since the original ChatGPT release (which feels like a decade ago now).
sebzim4500 t1_jc6jye3 wrote
Reply to comment by 127-0-0-1_1 in [D] ChatGPT without text limits. by spiritus_dei
The company doesn't always win; sometimes the open source product is simply better. See Stable Diffusion vs DALL-E, or Linux vs Windows Server, or Lichess vs chess.com, etc.
Of course that doesn't mean it will be used more, but that isn't the point.
sebzim4500 t1_jan85s7 wrote
Reply to comment by Timdegreat in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Yeah, I get that embeddings are used for semantic search, but would you really want to use a model as big as ChatGPT to compute the embeddings? (Given how cheap and effective Ada is.)
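For reference, the kind of thing the Ada embeddings get used for (a minimal semantic-search sketch against the early-2023 OpenAI Python library):

```python
import numpy as np
import openai  # 0.x-style library; assumes openai.api_key is set

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

docs = ["how to bake sourdough bread", "fixing a flat bicycle tyre", "training a puppy"]
doc_vecs = [embed(d) for d in docs]

query = embed("my bike has a puncture")
scores = [float(query @ d) / (np.linalg.norm(query) * np.linalg.norm(d)) for d in doc_vecs]
print(docs[int(np.argmax(scores))])  # expect the bicycle document
```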
sebzim4500 t1_jan01xr wrote
Reply to comment by Timdegreat in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Would you even want to? Sounds like overkill to me, but maybe I am missing some use case of the embeddings.
sebzim4500 t1_ja8agwp wrote
Reply to comment by coconautico in [P] [N] Democratizing the chatGPT technology through a Q&A game by coconautico
Are you using the output of ChatGPT to determine which inputs you copy across and which ones you don't? If not, I agree that you are probably in the clear. Otherwise idk.
sebzim4500 t1_ja87cym wrote
Reply to comment by coconautico in [P] [N] Democratizing the chatGPT technology through a Q&A game by coconautico
> You may not [...] (iii) use the Services to develop foundation models or other large scale models that compete with OpenAI
sebzim4500 t1_ja874jk wrote
Reply to comment by avocadoughnut in [P] [N] Democratizing the chatGPT technology through a Q&A game by coconautico
Oh how the turntables.
sebzim4500 t1_j6ck159 wrote
Reply to comment by AssCakesMcGee in In the absence of cosmic radiation, would an object placed in space eventually cool to absolute zero? by IHatrMakingUsernames
What definition of temperature are you thinking of? The only definition I know is based on how the entropy changes with energy, which clearly makes negative temperature objects extremely hot.
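For reference, the definition I have in mind (standard statistical mechanics; the subscripts just indicate what is held fixed):

```latex
% Temperature defined through how entropy varies with energy:
\[
  \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}
\]
% For a system with a bounded energy spectrum, adding energy can make the
% entropy decrease, so \(\partial S/\partial E < 0\) and \(T < 0\). Such a
% system gives up energy to any positive-temperature system it touches, so
% on the natural scale (\(-1/T\)) it sits "above" \(T = +\infty\), i.e. it
% is hotter than any positive temperature.
```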
sebzim4500 t1_jecbyml wrote
Reply to comment by IntrepidTieKnot in [R] TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs - Yaobo Liang et al Microsoft 2023 by Singularian2501
I think the 'feedback to API developers' idea is novel and useful.