Glycerine
Glycerine t1_j2ddrg8 wrote
This went viral pretty quickly. I'm pretty sure the project going open source was posted on reddit only a few days ago: https://github.com/lucidrains/PaLM-rlhf-pytorch
https://old.reddit.com/r/artificial/comments/zy6swx/palm_with_rlhf_is_now_opensource/
I starred it earlier this week when it had ~50 stars; now it's at 3.3k.
It looks really exciting, but yes, it's not easy to run. Knowing I'm underpowered for most ML work, I still gave it a shot on my 4.0 GHz AMD CPU, 32 GB RAM, and a GTX 1080.
The moment I knew processing Wikipedia was out of reach:
```
training: 0% 36/100000 [1:01:05<2581:58:40, 92.98s/it]training loss: 2.976300001144409
```
That shows it took about an hour to reach step 36 of 100K, with an ETA of ~2,582 hours, i.e. roughly 3.5 months of 24/7 training...
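As a quick back-of-envelope sanity check on that ETA, using the numbers straight from the tqdm line above:

```python
# 100,000 training steps at ~92.98 s/step (from the tqdm output above)
steps_total = 100_000
secs_per_step = 92.98

total_hours = steps_total * secs_per_step / 3600
print(f"{total_hours:,.0f} hours ~= {total_hours / 24:,.0f} days")
# -> 2,583 hours ~= 108 days, i.e. roughly 3.5 months running 24/7
```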
Secondly, it's not built for daily driving yet; the source is still in dev mode and needs an intermediate Python dev to execute it, mainly because of what the implementation requires after the training step.
It would be fun to have a slow input system, or some documentation on how to load super-thin datasets as an example. A finished model I can run immediately would be awesome, but I guess that's what the other team is doing.
The future of talky speaky machines is getting very exciting; I can't wait to see what happens two more papers down the line... I'm 101% looking forward to my speaky toaster!!!
Glycerine t1_j2ch0mm wrote
This sounds like the new neural AI from Google or OpenAI. However, it's hard to place; where did you find the reference?
Microsoft's voices are good, but I can't hear this one among them: https://azure.microsoft.com/en-us/products/cognitive-services/text-to-speech/#features
Potentially it could be one of these: https://mycroft.ai/mimic3/
The Mimic 3 voices sound great and also run offline: https://github.com/MycroftAI/mimic3-voices
There are a lot of voices to choose from; some are more human than others: https://mycroftai.github.io/mimic3-voices/
I understand it's not the specific one you're after, but the female English (US) `vctk_low` voice, speaker `p225`, is phenomenal.
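If anyone wants to try that voice quickly, here's a minimal sketch (assuming the `mimic3` CLI is installed, e.g. via `pip install mycroft-mimic3-tts`; mimic3 writes WAV audio to stdout, and the sample text and output filename here are just placeholders):

```python
import subprocess

text = "The quick brown fox jumps over the lazy dog."

# Synthesize offline with the en_US vctk_low voice, speaker p225,
# capturing the WAV bytes that mimic3 writes to stdout.
with open("p225_sample.wav", "wb") as wav_file:
    subprocess.run(
        ["mimic3", "--voice", "en_US/vctk_low#p225", text],
        stdout=wav_file,
        check=True,
    )
```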
Glycerine t1_iyvcgh1 wrote
Reply to comment by bleachedsharkfur in Egypt to add 1.1 GW in solar, wind power with cheapest rates in Africa by darth_nadoma
This is an interesting video about the current solution, and some of the limitations of Sahara-based solar plants: https://www.youtube.com/watch?v=q_HxJFuF5io&ab_channel=KnowledgePoint
Glycerine t1_itv3b25 wrote
Reply to [N] OpenAI Gym and a bunch of the most used open source RL environments have been consolidated into a single new nonprofit (The Farama Foundation) by jkterry1
> This page uses Google Analytics to collect statistics. You can disable it by blocking the JavaScript coming from www.google-analytics.com.
This is not a compliant approach to privacy, as it's too late to opt out once I've already visited your site.
Glycerine t1_iqxixkl wrote
Reply to comment by Space-Robot in This machete is controlled by a plant wielding a robot arm. What does this mean for the field of robotics? Don't anger the plants by Leprechan_Sushi
Oddly I was also thinking about /u/TrainHooterBlare's bowels
Glycerine t1_j2frh3o wrote
Reply to comment by Disastrous_Elk_6375 in An Open-Source Version of ChatGPT is Coming [News] by lambolifeofficial
You're right, it's poor; all 8 CPUs hit 100%.
As an update though:
I made a bunch of changes: reduced the dataset to 5 lines from Wikipedia, shrank the PaLM model to about 25% of its original size, and cut the epochs down to 8.
It's phenomenal. Within <30 minutes and a bunch of poking, it can easily generate sensible sentences.
I dropped it onto a Lambda GPU A100 instance; it's silly fast.
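For anyone curious what a scaled-down run looks like, here's a minimal sketch (assuming the `PaLM` class as shown in the repo's README; the tiny hyperparameters and the toy byte-level "dataset" here are placeholders, not my exact values):

```python
import torch
from palm_rlhf_pytorch import PaLM

# A deliberately tiny PaLM; placeholder sizes, not the exact ~25% config I used.
# num_tokens=256 gives a simple byte-level vocab for this toy setup.
palm = PaLM(num_tokens=256, dim=128, depth=4).cuda()

optimizer = torch.optim.Adam(palm.parameters(), lr=3e-4)

# "Dataset": a few sentences, byte-encoded into a single training sequence.
text = "The cat sat on the mat. The dog lay by the door. " * 4
seq = torch.tensor([list(text.encode("utf-8"))], dtype=torch.long).cuda()

for epoch in range(8):
    loss = palm(seq, return_loss=True)  # autoregressive LM loss over the sequence
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```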
Edit:
As an example: I trained the model on 5 sentences, with an optimal length of ~128 chars. I ask for a word and see what it constructs.
The goal here is to see if it produces sensible sentences from real words:
With a known word, the response is fairly stable:
Untrained words produce some interesting results; prior to the <100 epochs of training, it was saying nonsense:
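For anyone who wants to poke at it the same way, the probe itself is roughly this (a sketch reusing the tiny byte-level model from above; `palm.generate`'s exact keyword arguments may vary between versions of the repo):

```python
import torch

def encode(text):
    # byte-level "tokenizer" matching the toy 256-token vocab above
    return torch.tensor([list(text.encode("utf-8"))], dtype=torch.long).cuda()

def decode(ids):
    # clamp into byte range and decode, replacing anything unprintable
    return bytes(int(i) % 256 for i in ids.tolist()).decode("utf-8", errors="replace")

prompt = encode("cat")                    # a word the model has seen in training
out = palm.generate(128, prompt=prompt)   # sample up to ~128 new tokens
print(decode(out[0]))
```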