mxby7e
mxby7e t1_jdncs51 wrote
Reply to comment by ebolathrowawayy in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
From my understanding, it's limited to non-commercial use, so you can use it for what you need, just not commercially.
mxby7e t1_jdl18t6 wrote
Reply to comment by throwaway2676 in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Maybe. Open Assistant by LAION is doing this type of manual dataset collection. The training data and the model weights are supposed to be released once training is complete.
mxby7e t1_jdktvqr wrote
Reply to comment by big_ol_tender in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
The license won’t change. The dataset was collected in a way that violates OpenAI’s terms of service, since OpenAI’s models were used to generate the data. If they allowed commercial use, it would open them up to a lawsuit.
mxby7e t1_jdjzkzy wrote
Reply to comment by danielbln in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
The use of OpenAI’s models to generate competing models violates the terms of use, which is why the Stanford dataset is restricted.
mxby7e t1_jd5rn62 wrote
https://github.com/oobabooga/text-generation-webui
I’ve had great results with this interface. It requires a little tweaking to get working on lower-spec hardware, but it offers a lot of optimization options, including splitting the model between VRAM and CPU RAM. I’ve been running LLaMA 7B in 8-bit mode, limited to 8GB of VRAM.
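For reference, a launch command along those lines might look like the following. The exact flags depend on the webui version you have checked out, so treat this as a sketch rather than the canonical invocation (`llama-7b` here is just a placeholder for whatever your local model folder is named):

```shell
# From inside the cloned text-generation-webui directory:
# --load-in-8bit loads the model weights in 8-bit precision,
# --gpu-memory caps VRAM usage in GB (layers that don't fit spill to CPU RAM).
python server.py --model llama-7b --load-in-8bit --gpu-memory 8
```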
mxby7e t1_j215ewa wrote
Reply to comment by TrueBlueDreamin in [P] I built an API that makes it easy and cheap for developers to build ML-powered apps using Stable Diffusion by TrueBlueDreamin
Looking forward to it! I’ve been doing style training on the lastBen colab, but I have a lot of artists who want a more accessible way to build and use models based on their own styles.
mxby7e t1_j20q0hg wrote
Reply to [P] I built an API that makes it easy and cheap for developers to build ML-powered apps using Stable Diffusion by TrueBlueDreamin
Does your API support training a style instead of a subject?
mxby7e t1_ivz9zsz wrote
Reply to comment by s1me007 in [D] Current Job Market in ML by diffusion-xgb
Meta is crashing because of C-level hubris and the belief that they could be the central point of all social interaction online and in the metaverse. They've been making poor decisions internally for years, and it's catching up to them.
mxby7e t1_jdsqijp wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
I got bored last night and took a look at what's out there. Between diffusion and GAN workflows, you can deepfake almost anything you want in any style with just a little technical background.
You can easily take a real photo and use inpainting to replace any aspect of the image, then run it through a few img2img loops to balance the composition. You can train a subject finetune with a handful of pictures and a few hours of training time.
You can use consumer face swap tools to swap faces into any image you want.
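The inpaint-then-img2img loop is only a few lines with the Hugging Face diffusers library. This is a rough sketch, not a polished pipeline: the model IDs, prompts, file names, and the `strength`/loop-count values are illustrative assumptions, and it needs a CUDA GPU plus the downloaded weights to actually run:

```python
# Sketch of the inpaint + img2img loop described above (diffusers library).
# Model IDs, prompts, and parameters are illustrative assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline, StableDiffusionImg2ImgPipeline
from PIL import Image

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("real_photo.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # white = region to replace

# Replace the masked region of the real photo.
image = inpaint(prompt="a red leather jacket", image=photo, mask_image=mask).images[0]

# A few low-strength img2img passes to blend the edit into the composition.
for _ in range(3):
    image = img2img(prompt="photo, natural lighting", image=image, strength=0.3).images[0]

image.save("edited_photo.png")
```

Lower `strength` values keep each pass closer to the input, which is what you want when you're just harmonizing an edit rather than reimagining the image.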
Midjourney v5 can generate images that are hard to differentiate from real photos.