Disastrous_Elk_6375

Disastrous_Elk_6375 t1_jb8y5r2 wrote

GPT-NeoX-20B should fit with 8-bit quantization and short prompts. GPT-J-6B should fit as well with 16-bit inference. On smaller models you might even be able to do some finetuning for fun.
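
If you want to try the 8-bit route, here's a minimal sketch using `transformers` with `bitsandbytes` (assuming you have both installed; the model id, prompt, and generation settings are just examples):

```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # example model; swap in a bigger one if it fits
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit quantizes the weights on load, roughly halving VRAM vs fp16
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs/CPU
    load_in_8bit=True,
)

inputs = tokenizer("def hello_world():", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```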

There are also a couple of coding models from Salesforce (the CodeGen family) that you could fit comfortably. Check out FauxPilot for a Copilot "clone" built on them.

8

Disastrous_Elk_6375 t1_j9z7xvx wrote

> "By subtracting the visible matter, we can calculate the presence of the dark matter which is in between," [Euclid project manager] Racca said.

This reminds me of this great nugget brought to us by the department of redundancy department:

The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the missile from a position where it is to a position where it isn't, and arriving at a position where it wasn't, it now is. Consequently, the position where it is, is now the position that it wasn't, and it follows that the position that it was, is now the position that it isn't. In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation, the variation being the difference between where the missile is, and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the missile must also know where it was.

13

Disastrous_Elk_6375 t1_j9yzjmx wrote

No. There are definitely areas where ML can help. We have models that are known to be good at classification and that also generalise reasonably well. These models can and should be used to speed up anomaly detection in large volumes of data, and they tend to outperform manually defined "traditional" algorithms at this task.
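
As a generic illustration of the kind of speed-up I mean (a toy sketch with scikit-learn on synthetic data, not whatever pipeline a specific project would actually use):

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(10_000, 8))   # bulk of the data
weird = rng.normal(6, 1, size=(20, 8))        # a few planted outliers
X = np.vstack([normal, weird])

# IsolationForest flags points that are easy to isolate, i.e. likely anomalies
clf = IsolationForest(contamination=0.002, random_state=0).fit(X)
flags = clf.predict(X)                        # -1 = anomaly, 1 = normal
print("flagged:", int((flags == -1).sum()), "of", len(X))
```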

2

Disastrous_Elk_6375 t1_j8hdb2r wrote

Do you know if distillation will still be possible after instruct finetuning and the RLHF steps? I know it works on "vanilla" models, but I haven't found anything on distilling instruction-tuned models.
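
For context, by distillation I mean the usual setup where a small student is trained to match a big teacher's output distribution. A minimal sketch of that loss in PyTorch (my illustration, not from any specific paper):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the teacher.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2
```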

2

Disastrous_Elk_6375 t1_j8cd4x4 wrote

I think it will depend on how small the LLMs that it uses are. If they can be run on consumer GPUs, then it will probably take off. If you need to rent 8xGPU servers just for inference, probably not.

Stable Diffusion took off because within the first two weeks you could run it on GPUs with 4 GB of VRAM. Then when finetuning (a.k.a. Dreambooth) came along, the VRAM requirement went from 24 to 16 to 8 GB in a matter of weeks. Same effect there.
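
Back-of-the-envelope math for why model size matters so much (weights only, ignoring activation and KV-cache overhead, so treat these as lower bounds):

```python
def min_vram_gb(n_params_billion, bytes_per_param):
    # weights only; real usage adds activations, KV cache, framework overhead
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for name, b in [("fp16", 2), ("int8", 1)]:
    print(f"7B model, {name}: ~{min_vram_gb(7, b):.1f} GB")
# 7B model, fp16: ~13.0 GB
# 7B model, int8: ~6.5 GB
```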

15

Disastrous_Elk_6375 t1_j885wd6 wrote

Check this out: https://huggingface.co/models

You can download models and try them out locally, depending on your specs. It's unlikely you'll find a single model that does everything you need, but there's a chance a combination of models gets you close to what you want. You'll need to be a bit more specific about your end goals to get better-suited suggestions.
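
Trying one out locally is a few lines with the `transformers` library (the model name here is just an example small one):

```python
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small example model
result = generator("The easiest way to get started is", max_new_tokens=25)
print(result[0]["generated_text"])
```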

2

Disastrous_Elk_6375 t1_j87rwu5 wrote

As a large language model I have to caution against using sharp objects in programming languages, as it would pose a great risk to the developers unknowingly hurting themselves with it. Furthermore, it can be said that axes are typically not very sharp, and as we know blunt objects are objects that are not very sharp and also might not be extremely sharp. Sharp is a now defunct company that used to produce TV sets. A TV set is like a modern TV but it used to also be old. /s?

7

Disastrous_Elk_6375 t1_j836fxc wrote

They're testing engines separately at the factory. They've run hours of tests and most likely have a pretty solid understanding of the thrust each engine produces at a given "throttle" level. So they'll have precise measurements of things like propellant flow for each engine, and they'll know what thrust each flow setting translates into. From there it's simple math and some approximation.
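
The "simple math" is basically the rocket thrust relation F ≈ ṁ · v_e (mass flow times effective exhaust velocity). A sketch with made-up, order-of-magnitude numbers, not SpaceX's actual figures:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_newtons(mass_flow_kg_s, isp_seconds):
    # F = mdot * v_e, with effective exhaust velocity v_e = Isp * g0
    return mass_flow_kg_s * isp_seconds * G0

# illustrative numbers only -- in practice you'd plug in the test-stand data
mdot = 650.0   # kg/s of propellant, hypothetical
isp = 330.0    # seconds, hypothetical sea-level-ish value
print(f"~{thrust_newtons(mdot, isp) / 1e6:.2f} MN per engine")  # ~2.10 MN
```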

7

Disastrous_Elk_6375 t1_j7hvc22 wrote

> Nuclear power will never be safe

Mmmhhmm. We've had extremely safe, sufficiently compact and mobile nuclear power since the '50s. We know it's safe because navy personnel on nuclear subs and ships have lived long, healthy lives. In fact, the commander of the first US nuclear sub (commissioned in 1954) went on to also command the first nuclear surface ship. He lived to 94!

8