Submitted by radi-cho t3_11z9s3g in MachineLearning
Comments
Icko_ t1_jdc09e5 wrote
You could use Faiss instead of Pinecone and Alpaca instead of GPT-4.
_Arsenie_Boca_ t1_jdc0ko2 wrote
True, but I'm not sure how much cheaper that would really be.
Individual-Road-5784 t1_jdc0z0j wrote
Instead of FAISS, you can also use a dedicated vector search database like Qdrant. It's open-source and also offers a generous free tier in the cloud: https://qdrant.tech
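For a concrete sense of the API, here is a minimal sketch using the qdrant-client Python package; the collection name, vector size, and payloads are made up for illustration:

```python
# Minimal Qdrant sketch: store sentence embeddings and query for the closest one.
# Collection name, vector size, and data are illustrative assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(":memory:")  # or QdrantClient(url="http://localhost:6333")

client.recreate_collection(
    collection_name="sentences",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

client.upsert(
    collection_name="sentences",
    points=[
        PointStruct(id=0, vector=[0.1] * 384, payload={"text": "first sentence"}),
        PointStruct(id=1, vector=[0.2] * 384, payload={"text": "second sentence"}),
    ],
)

hits = client.search(collection_name="sentences", query_vector=[0.1] * 384, limit=1)
print(hits[0].payload["text"])  # nearest stored sentence
```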
edthewellendowed t1_jddoq57 wrote
Can you give me a little bit more info on this? I'm interested but also very slow.
Icko_ t1_jdecnjx wrote
Sure:
- Suppose you had 1 million embeddings of sentences and one vector, and you want the closest sentence to that vector. If the vectors were single numbers, you could sort them and do a binary search, and you'd be done. If they are higher-dimensional, it's a lot more involved. Pinecone is a paid product that does this. Faiss is a library by Facebook, which is very good too, but is free (a minimal Faiss sketch follows this list).
- Recently, Facebook released the LLaMA models. They are large language models. ChatGPT is also an LLM, but after pretraining on a text corpus it is further trained on human instructions, which is costly and time-consuming. Stanford took the LLaMA models and instruction-tuned them on data generated with OpenAI's models. The result is pretty good: not AS good, but pretty good. They called it "Alpaca".
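To make the vector-search side concrete, here is a minimal Faiss sketch of nearest-neighbor search over 1 million vectors; the dimensionality and random data are assumptions for illustration:

```python
# Minimal Faiss sketch: find the closest stored embeddings to a query vector.
# Dimensionality and random data are illustrative assumptions.
import numpy as np
import faiss

d = 384                                              # embedding dimensionality
xb = np.random.rand(1_000_000, d).astype("float32")  # 1M sentence embeddings
xq = np.random.rand(1, d).astype("float32")          # the query vector

index = faiss.IndexFlatL2(d)  # exact L2 search; IVF/HNSW indexes trade accuracy for speed
index.add(xb)
distances, ids = index.search(xq, 5)  # the 5 closest embeddings
print(ids[0])                         # row indices of the nearest sentences in xb
```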
edthewellendowed t1_jdewxml wrote
So if I had a PDF, I could use Faiss to turn it into embeddings, and then use LLaMA/Alpaca with the PDF as a base for a chatbot?
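That is the retrieve-then-generate pattern being described. A hedged end-to-end sketch follows; the embedding model, chunking, and prompt format are all assumptions, and note that an embedding model (not Faiss itself) produces the vectors, while Faiss only indexes and searches them:

```python
# Sketch of retrieval-augmented chat over a PDF, assuming sentence-transformers
# for embeddings; PDF extraction and chunking are done beforehand and simplified here.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["chunk 1 of the PDF...", "chunk 2 of the PDF..."]  # pre-extracted text chunks
emb = model.encode(chunks, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatIP(emb.shape[1])  # inner-product index; normalize for cosine
faiss.normalize_L2(emb)
index.add(emb)

question = "What does the PDF say about X?"
q = model.encode([question], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(q)
_, ids = index.search(q, 2)
context = "\n".join(chunks[i] for i in ids[0])
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
# ...feed `prompt` to LLaMA/Alpaca (or any LLM) to get a grounded answer
```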
saintshing t1_jdgwgt7 wrote
I've heard people talking about using Annoy for approximate nearest-neighbor search. How does Annoy compare to Pinecone and Faiss? Are Pinecone and Faiss self-hostable?
Icko_ t1_jdh2pja wrote
Idk, I've never heard of it.
localhost80 t1_jdcrfd0 wrote
GPT charges per token, so it depends on the length of the document. For example, at GPT-4's launch pricing of $0.03 per 1K prompt tokens, a 10K-token document costs roughly $0.30 per request before completion tokens.
dancingnightly t1_jdcnhuh wrote
Will you add semantic chunking?
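For readers unfamiliar with the term: semantic chunking splits text where the meaning shifts rather than at fixed lengths. A minimal sketch, assuming sentence-transformers and a made-up similarity threshold (none of this is from the project being discussed):

```python
# Hedged semantic-chunking sketch: start a new chunk when the next sentence's
# embedding drifts away from the previous one. Model and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_chunks(sentences, threshold=0.5):
    # assumes a non-empty list of sentences
    chunks, current = [], [sentences[0]]
    prev_emb = model.encode(sentences[0], convert_to_tensor=True)
    for sent in sentences[1:]:
        emb = model.encode(sent, convert_to_tensor=True)
        if util.cos_sim(prev_emb, emb).item() < threshold:
            chunks.append(" ".join(current))  # similarity dropped: close the chunk
            current = []
        current.append(sent)
        prev_emb = emb
    chunks.append(" ".join(current))
    return chunks
```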
Different_Prune_3529 t1_jdccrmr wrote
Can it perform as well as OpenAI's GPT?
Smallpaul t1_jdd8q6m wrote
It *is* OpenAI's GPT. Through an API.
localhost80 t1_jdct42q wrote
It will perform better with respect to the knowledge in the documents. It's a comparison of GPT-4 with global knowledge vs. GPT-4 with local knowledge.
Always1Max t1_jdkn18v wrote
Could there be something like this, but for code?
fletchertyler914 t1_je9o94n wrote
I just found this a few days ago and actually used it as a prototype base to learn the ropes, so thanks, OP! I ended up gutting the ingest flow in favor of an additional upload API route to make it more flexible, but overall it was a good example/guide to follow. Nice work.
_Arsenie_Boca_ t1_jdbsl4b wrote
What are the costs for all the services? I assume GPT-4 is billed per request and Pinecone per hour?