
TFenrir t1_jdicafu wrote

Awesome! Good to know it will work

1

light24bulbs t1_jdijmr3 wrote

My strategy was to have the outer LLM produce a JSON object where one of the args is an instruction or question, then pass that to the inner LLM wrapped in a template like "given the following document, &lt;instruction&gt;".

Works for a fair few general cases, and it can get the context that ends up in the outer LLM down to a few sentences (i.e., a few tokens), which leaves plenty of room for more reasoning, plus cost savings. Rough sketch of the idea below.
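This is a minimal illustration of the pattern, not light24bulbs's actual code: `callLLM`, `OuterCall`, and `answerWithDocument` are hypothetical names, and `callLLM` is a stand-in for whatever completion API you already use.

```typescript
// Hypothetical stand-in for your completion endpoint (OpenAI, local model, etc.).
async function callLLM(prompt: string): Promise<string> {
  throw new Error("wire this up to your own completion API");
}

// Shape of the JSON object the outer LLM is asked to produce.
interface OuterCall {
  tool: string;        // which document/tool to consult
  instruction: string; // question or task to forward to the inner LLM
}

async function answerWithDocument(userQuery: string, document: string): Promise<string> {
  // 1. Outer LLM emits a JSON object whose `instruction` field is a
  //    question about the document.
  const outerPrompt =
    `Respond ONLY with JSON of the form {"tool": string, "instruction": string} ` +
    `describing what to ask about the reference document, given this query:\n${userQuery}`;
  const outerCall: OuterCall = JSON.parse(await callLLM(outerPrompt));

  // 2. Inner LLM sees the full document, wrapped in the template.
  const innerAnswer = await callLLM(
    `Given the following document, ${outerCall.instruction}\n\n${document}`
  );

  // 3. Only the short inner answer re-enters the outer context, so the
  //    bulky document never consumes outer-LLM tokens.
  return callLLM(
    `Query: ${userQuery}\nContext extracted from the document: ${innerAnswer}\n` +
    `Answer the query using that context.`
  );
}
```

The key point is step 3: the document itself stays inside the inner call, and only a few sentences of distilled context flow back up.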

1

TFenrir t1_jdim3vv wrote

That is a really good tip.

I'm using langchainjs (I can do Python, but my JS background is 10x my Python) - one of the things I want to play with more is getting consistent JSON output from a response. There's a helper tool I tried with a bud a while back when we were pairing... a TypeScript validator or something, that seemed to help.
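For what it's worth, the validator in question may have been Zod; that's an assumption, but LangChain.js does ship a `StructuredOutputParser` that wraps a Zod schema. A minimal sketch, assuming the `langchain/output_parsers` entry point and the `zod` package:

```typescript
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

// Describe the exact JSON shape you want back from the model.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("concise answer to the user's question"),
    confidence: z.number().min(0).max(1).describe("self-reported confidence"),
  })
);

async function parseModelOutput(rawOutput: string) {
  // getFormatInstructions() returns text you splice into the prompt so
  // the model knows the schema; parse() validates the reply against it.
  console.log(parser.getFormatInstructions());
  return parser.parse(rawOutput);
}
```

The handy part is that `parse()` throws when the JSON doesn't conform, which gives you a natural hook for a catch-and-retry loop.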

Any tips with that?

1

light24bulbs t1_jditg4b wrote

Nope, I'm struggling along with you on that, I'm afraid. That's why these new plugins will be nice.

Maybe we can make some money selling premium feature access to ours once we get it

2