
visarga OP t1_jegmcux wrote

I think they spin up a container if there isn't one already running. Usually there isn't, so you have to wait a minute or two. Inference is slow after that, but it's still simpler than downloading the model yourself.
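
For anyone curious, the cold start is visible from the client side: while the container is spinning up, the Inference API answers with a 503 and an estimated wait. Below is a minimal sketch of handling that; the model id and token are placeholders, and the retry loop is just one way to do it.

```python
import time
import requests

# Placeholder model id and token; any hosted model works the same way.
API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
HEADERS = {"Authorization": "Bearer hf_xxx"}

def query(payload, max_retries=10):
    """POST to the Inference API, retrying while the container spins up."""
    for _ in range(max_retries):
        resp = requests.post(API_URL, headers=HEADERS, json=payload)
        if resp.status_code == 503:
            # Model still loading; the error body usually hints how long to wait.
            time.sleep(resp.json().get("estimated_time", 20))
            continue
        resp.raise_for_status()
        return resp.json()
    raise TimeoutError("model never became ready")

print(query({"inputs": "Summarize: HuggingGPT orchestrates models via an LLM."}))
```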

In the paper, the HuggingGPT system runs a handful of models locally and calls the Hugging Face API for the rest. They host at least a few of the tool-models themselves because HF is so flaky.

I think this paper is pretty significant. It extends the OpenAI plugin concept to AI plugins: a bunch of specialised models that can be combined in countless ways, with ChatGPT as the orchestrator. It's essentially automated AI pipelines (rough sketch of the pattern below). If nothing else, it could be used to generate training data for a multi-modal model like GPT-4. It could also be a good business opportunity for Hugging Face, since their model zoo is impressive.
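
To make the orchestrator idea concrete, here's a minimal sketch of the pattern as I understand it, not the paper's actual code: the LLM picks a tool from a registry of specialised models, and the client then calls that model (e.g. via the Inference API as above). The tool names, model ids, and the gpt-3.5-turbo call are all illustrative assumptions.

```python
import json
import openai  # 2023-era openai<1.0 client; set openai.api_key = "sk-..." first

# Hypothetical registry of specialised Hugging Face models ("AI plugins").
TOOLS = {
    "image-captioning": "nlpconnect/vit-gpt2-image-captioning",
    "summarization": "facebook/bart-large-cnn",
    "speech-to-text": "openai/whisper-base",
}

def plan(task: str) -> str:
    """Ask the LLM orchestrator which specialised model fits the task."""
    prompt = (
        f"Task: {task}\n"
        f"Available tools: {json.dumps(list(TOOLS))}\n"
        "Reply with the single best tool name, nothing else."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"].strip()

tool = plan("Write a one-line description of the attached photo.")
print(tool, "->", TOOLS.get(tool))  # next step: send the inputs to the chosen model
```

In HuggingGPT the loop then continues: the tool's output goes back to the LLM, which composes the final response.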

4

mjk1093 t1_jegn93v wrote

I've found that the Spaces tend to get slower the more you use them. It feels like throttling.

1

RobLocksta t1_jeh1mtc wrote

Great post! I've been looking for something like this that would allow me to look at models tuned towards my industry. Sounds like a great place to learn!

1