Submitted by floppy_llama t3_1266d02 in MachineLearning
drizel t1_je8v9cj wrote
Reply to comment by idontcareaboutthenam in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
GPT-4 can parse millions of papers and help uncover new optimizations or other improvements much faster than researchers could on their own. Not only that, but you can brainstorm ideas with it.
Swolnerman t1_jead4wo wrote
How can it do that with a context window of 32k tokens?
On top of that, I don't think GPT-4 can make informed decisions when picking between academic research papers yet.
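A minimal sketch of how one might work around the fixed context window when processing long papers: chunk each document, summarize the chunks independently, then summarize the summaries ("map-reduce" style). The `call_llm` function, chunk sizes, and token heuristic below are all placeholders and assumptions, not any particular vendor's API.

```python
# Hypothetical map-reduce summarization sketch for working within a fixed context window.
# `call_llm` is a placeholder for whatever chat-completion client you use; it is not a real API.

from typing import List

MAX_CHUNK_TOKENS = 6000          # leave headroom inside a 32k-token window (assumed budget)
APPROX_CHARS_PER_TOKEN = 4       # rough heuristic for English text


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g. GPT-4). Swap in a real client here."""
    raise NotImplementedError


def chunk_text(text: str, max_tokens: int = MAX_CHUNK_TOKENS) -> List[str]:
    """Split a long document into pieces small enough to fit in one prompt."""
    max_chars = max_tokens * APPROX_CHARS_PER_TOKEN
    return [text[i : i + max_chars] for i in range(0, len(text), max_chars)]


def summarize_paper(paper_text: str) -> str:
    """Summarize each chunk independently, then combine the partial summaries."""
    chunk_summaries = [
        call_llm(f"Summarize this excerpt from a research paper:\n\n{chunk}")
        for chunk in chunk_text(paper_text)
    ]
    return call_llm(
        "Combine these partial summaries into one coherent summary:\n\n"
        + "\n\n".join(chunk_summaries)
    )
```

This doesn't let the model reason over millions of papers at once; it only shows that a bounded context window is a throughput constraint rather than a hard cap on how much text can be processed overall.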