Takadeshi
Takadeshi t1_j9b3c3l wrote
Reply to comment by Lower_Praline_1049 in Microsoft Killed Bing by Neurogence
Thank you! :) Early stages right now; I've just finished the literature review section and am starting implementation. I'm going to try to publish it somewhere when it's done, if I can get permission from my university. I'm definitely going to see what I can do with Stable Diffusion once it's finished; I'd love to get it running on the smallest device possible.
Takadeshi t1_j93gacq wrote
Reply to comment by TeamPupNSudz in Microsoft Killed Bing by Neurogence
Doing my undergrad thesis on this exact topic :) With most models, you can discard up to 90% of the weights and retain similar performance, with only about a 1-2% loss of accuracy. It turns out that models tend to learn better when they're dense during training (i.e. have a large number of non-zero weights), but the trained model typically ends up with a small set of very strong weights plus a large number of "weak" weights that account for the majority of the parameter count while contributing very little to the actual accuracy, so you can basically just discard them. There are also a few other clever tricks you can use to cut the parameter count a lot; for one, you can cluster the weights into groups and then build hardware-based accelerators that carry out the transformation once per cluster, rather than treating each individual weight as a separate multiplication operation. This paper shows that you can reduce the size of a CNN-based architecture by up to 95x with almost no loss of accuracy.
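To make the two ideas concrete, here's a minimal sketch on a toy layer, assuming PyTorch and scikit-learn (the 90% sparsity, the 16-entry codebook and the 512x512 layer are illustrative assumptions, not values from any particular model):

```python
# Hedged sketch: magnitude pruning plus weight clustering (weight sharing) on a toy layer.
# Sparsity level, codebook size and layer shape are illustrative assumptions.
import torch
from sklearn.cluster import KMeans

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Zero out the smallest-magnitude weights so that `sparsity` of them are removed."""
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    return weight * (weight.abs() > threshold)

def cluster_weights(weight: torch.Tensor, n_clusters: int = 16) -> torch.Tensor:
    """Snap the surviving weights to a small codebook of shared values."""
    nonzero = weight[weight != 0].reshape(-1, 1).numpy()
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(nonzero)
    codebook = torch.from_numpy(km.cluster_centers_.ravel()).to(weight.dtype)
    out = weight.clone()
    out[weight != 0] = codebook[torch.from_numpy(km.labels_).long()]
    return out

layer = torch.nn.Linear(512, 512)
pruned = magnitude_prune(layer.weight.data)   # drop ~90% of the weights
shared = cluster_weights(pruned)              # 16 shared values for the survivors
print(f"non-zero: {int(shared.count_nonzero())} / {shared.numel()}, "
      f"distinct values kept: {shared[shared != 0].unique().numel()}")
```

The clustering step is what makes the hardware trick possible: once the surviving weights only take a handful of shared values, an accelerator can handle each shared value once instead of doing a separate multiply per weight.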
Of course this relies on the weights being public, so we can't apply the method to something like ChatGPT, but we can with Stable Diffusion. I'm planning on doing this when I finish my current project, although I'd be surprised if the big names in AI weren't aware of these methods, so it's possible the weights have already been pruned (though looking specifically at Stable Diffusion, I don't think they have been).
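Since the weights are public, a first pass could be as simple as loading them and applying PyTorch's built-in unstructured pruning. A minimal sketch, using a toy ConvNet as a stand-in for the real checkpoint (the 90% amount is an illustrative assumption, not a measured figure):

```python
# Hedged sketch: global magnitude pruning with torch.nn.utils.prune. The two-layer
# ConvNet stands in for a real public checkpoint (e.g. Stable Diffusion's UNet,
# loaded from its published weights); the 90% amount is an illustrative assumption.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
)

# Prune the smallest 90% of all conv weights by absolute magnitude, across layers.
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

# Fold the pruning masks back into the weight tensors permanently.
for module, name in params:
    prune.remove(module, name)

total = sum(m.weight.numel() for m, _ in params)
nonzero = sum(int(m.weight.count_nonzero()) for m, _ in params)
print(f"kept {nonzero}/{total} weights ({100 * nonzero / total:.1f}%)")
```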
Takadeshi t1_iwjdy95 wrote
Reply to comment by everything_in_sync in My predictions for the next 30 years by z0rm
Drones are great but not exactly stable. There are just too many random variables to predict for flying cars to ever be safer or more efficient than travelling across the ground. There's not really any advantage to doing so, and there are all kinds of problems it could cause.
Takadeshi t1_iwheg7a wrote
Reply to comment by botfiddler in My predictions for the next 30 years by z0rm
I'm not so sure; I don't think they make any sense energy-wise, and they'd be less safe than ground vehicles. The amount of energy a plane needs to take off is far greater than the amount it needs to stay in the air. The only short-range flying I expect to see is small electric-powered planes; we've already seen these spring up in a few places in the US.
Takadeshi t1_iw09t4e wrote
Reply to comment by bitfriend6 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Idk, that might be true, but we don't really know what the limits of scaling these models are, nor do we know the limits of how much faster we can make ML hardware. Expert opinion on the latter, though, suggests quite a lot; GPUs are really just the tip of the iceberg when it comes to designing hardware to train models.
Takadeshi t1_itofgcb wrote
Reply to Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Being able to generate cohesive video? Probably 3 years or less, honestly. But a movie with its own music, a coherent plot, acting etc.? Seems a long way off to me; at that point you basically have an LLM which is a better writer, director, actor and musician than the majority of humans. I think for that you're probably going to need something near human-level intelligence, and you're also going to need a system that works across language, visual and audio data, which is outside the scope of LLMs. Maybe you could make a "writer bot" that writes the story, then a "video bot" that makes video from a long text input (input size is another limitation of LLMs right now, so it would be difficult to plug a whole movie script into a model and expect good results), then an "audio bot" that takes a video and composes suitable music for the parts of the movie where it makes sense.
Takadeshi t1_ir6ytww wrote
Reply to comment by whatTheBumfuck in Creating a Research group to study and try to solve the ageing problem. by naturethesupreme
Self awareness 100
Takadeshi t1_ir6yslx wrote
Reply to comment by dnimeerf in Creating a Research group to study and try to solve the ageing problem. by naturethesupreme
Not sure why you are so upset; everyone else is engaging with you in good faith.
Takadeshi t1_je55jwq wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Lol, even if it were real, do you think OpenAI/Microsoft/Google etc. are suddenly going to pause all research because a few people complain about it? Far too much has been invested at this point to stop now.