Submitted by WobblySilicon t3_zz0tua in MachineLearning
Complete-Maximum-633 t1_j2ab7zy wrote
Reply to comment by Mefaso in [D] NLP/NLU Research Opportunities which don't require much compute by WobblySilicon
Anything with “video” is going to be costly.
WobblySilicon OP t1_j2d1n2x wrote
The question is how much? Can it be done with one GPU, or do I need a swarm of them?
Complete-Maximum-633 t1_j2drquz wrote
Impossible to answer without more context.
WobblySilicon OP t1_j2ffcey wrote
Sure, sir!
In the months to come I'll be working on the problem of text-to-video. From my literature review I got the impression that it might be compute-intensive, i.e. that a cluster of GPUs is needed to train the models. So I'm asking whether it could be done with a mediocre GPU such as a 3080. I haven't really thought about which models I would use or the general architecture yet; I just wanted an answer, because I don't want to take up this topic and then get stuck due to compute issues.
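For a rough sense of scale, here is a minimal back-of-envelope sketch of training memory versus a 3080's 10 GB. The numbers are assumptions (plain fp32 training with Adam and a flat guess for activation memory, ignoring mixed precision, gradient checkpointing, and offloading), not a claim about any particular text-to-video architecture:

```python
# Rough VRAM estimate for training a model of a given parameter count.
# Assumed per-parameter costs: fp32 weights (4 B) + fp32 gradients (4 B)
# + Adam first/second moments (8 B). Activation memory depends heavily on
# batch size and architecture, so it is a user-supplied guess here.

def training_vram_gb(n_params: float, activation_gb: float = 4.0) -> float:
    bytes_per_param = 4 + 4 + 8
    return n_params * bytes_per_param / 1e9 + activation_gb

if __name__ == "__main__":
    for n in (1e8, 5e8, 1e9):
        print(f"{n:.0e} params -> ~{training_vram_gb(n):.1f} GB "
              "(vs. 10 GB on an RTX 3080)")
```

Even at a few hundred million parameters this estimate already exceeds a 3080's 10 GB, which is roughly why published text-to-video work tends to train on multi-GPU clusters or lean on memory-saving tricks.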