FreddieM007
FreddieM007 t1_j4bch6d wrote
Reply to comment by Cold-Ad2729 in [D] Is MusicGPT a viable possibility? by markhachman
Yes, exactly. That's what makes it useful for a composer or songwriter, because you can edit and change the material to make it your own. With good libraries you can make it sound professional. An AI system that generates raw audio, like Jukebox, is useless here since everything is intermingled.
FreddieM007 t1_j4a6uye wrote
Reply to [D] Is MusicGPT a viable possibility? by markhachman
MuseNet from OpenAI uses GPT-2. IMO it is the best musical idea generator that exists. I used it to compose a number of classical solo piano pieces.
FreddieM007 t1_j12qcmz wrote
Reply to [D] Techniques to optimize a model when the loss over the training dataset has a Power Law type curve. by Dartagnjan
Since your current model is perhaps not complex or expressive enough and VRAM is limited: have you tried building a classification model first that partitions the data into two classes? What is the quality there? Then you can build separate regression models for each class, each using all available VRAM.
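The two-stage idea above could be sketched roughly like this: a coarse classifier splits the data, then a separate regressor is trained on each partition. This is only an illustrative sketch, not the commenter's actual pipeline; the synthetic data, the gradient-boosted models, and the split criterion (above/below the median target) are all assumptions for the demo.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)

# Stage 1: binary partition of the data (here: above/below the median target,
# an arbitrary choice for illustration)
labels = (y > np.median(y)).astype(int)
clf = GradientBoostingClassifier().fit(X, labels)

# Stage 2: one regressor per class, each trained only on its own partition,
# so each can use the full VRAM/compute budget on its own
regressors = {
    c: GradientBoostingRegressor().fit(X[labels == c], y[labels == c])
    for c in (0, 1)
}

# Inference: route each sample through the classifier to its regressor
def predict(X_new):
    cls = clf.predict(X_new)
    out = np.empty(len(X_new))
    for c in (0, 1):
        mask = cls == c
        if mask.any():
            out[mask] = regressors[c].predict(X_new[mask])
    return out

preds = predict(X[:10])
```

Note that classifier errors at the partition boundary propagate into the regression stage, which is one reason to check the classification quality first, as the comment suggests.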
FreddieM007 t1_j4l58y4 wrote
Reply to [P] I built arxiv-summary.com, a list of GPT-3 generated paper summaries by niclas_wue
Great idea! There is a lot of potential! The biggest challenge for me is not just reading the most important papers but finding them. You have already done the heavy lifting by downloading the papers and computing the GPT-3 embeddings. With those you can build an index and add search. You could cluster papers into categories to let users browse, run UMAP over the papers, etc. In the long term I would want it to be comprehensive and include all papers. In terms of costs, perhaps you can partner with arXiv directly. They should be interested in using your project...
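The search-and-browse ideas above could be sketched on top of precomputed embeddings like so. This is a hedged illustration only: the embeddings here are random stand-ins (in practice they would come from the GPT-3 embeddings endpoint), and the paper names, vector size, and cluster count are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
papers = [f"paper-{i}" for i in range(100)]   # placeholder titles
emb = rng.normal(size=(100, 64))              # stand-in for GPT-3 embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows

# Search: cosine similarity of a query embedding against the index
def search(query_emb, k=5):
    query_emb = query_emb / np.linalg.norm(query_emb)
    scores = emb @ query_emb                  # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(papers[i], float(scores[i])) for i in top]

# Browsing: cluster papers into rough categories for a category view
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(emb)

results = search(rng.normal(size=64))
```

For the UMAP idea, the same embedding matrix could be projected to 2D (e.g. with the `umap-learn` package) to give users a visual map of the paper landscape; at arXiv scale an approximate nearest-neighbor index would replace the brute-force dot product here.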