Submitted by Shardsmp t3_zil35t in MachineLearning
pommedeterresautee t1_izu96au wrote
Reply to comment by spaccetime in [D] Does Google TPU v4 compete with GPUs in price/performance? by Shardsmp
Why do you say TPU is not for experimental usage?
spaccetime t1_izw55mj wrote
Yes, just as /u/Mrgod2u82 mentioned - it’s expensive.
You should debug and prepare your model on a less expensive machine (your experimental and development machine) and then run the final model with all the data on the TPU (your production-grade machine).
For example, we trained BERT for 4 days. If we hadn't paid enough attention when setting up the training, we could have spent another $800 just on experimenting, which is too expensive for us. Of course, at companies like Google Brain and OpenAI they probably don't care about cost minimization. There you can use a TPU as your daily workstation.😄
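As a rough back-of-the-envelope check (the hourly rate below is my assumption for an on-demand TPU v3-8; actual Cloud TPU pricing varies by region and TPU type):

```python
# Back-of-the-envelope training-cost estimate.
# HOURLY_RATE_USD is an assumed on-demand price for a single TPU v3-8 host;
# check current Cloud TPU pricing for your region and TPU generation.
HOURLY_RATE_USD = 8.0
TRAINING_DAYS = 4

cost = HOURLY_RATE_USD * 24 * TRAINING_DAYS
print(f"~${cost:.0f} for one full training run")  # ~$768, roughly the $800 figure above
```

So one careless restart of a multi-day run costs about as much as the whole planned training.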
Use one machine for development and another for the heavy, long-running training.
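A minimal sketch of what that workflow can look like (this is an illustration using JAX, not the commenter's actual code): the same script runs unchanged on a cheap dev box (CPU/GPU) and on a TPU VM for the big run, because the framework simply picks up whatever backend is available.

```python
# Minimal sketch: backend-agnostic training code that can be debugged locally
# and then launched as-is on a TPU VM for the production-grade run.
import jax
import jax.numpy as jnp

# CpuDevice/GpuDevice on the dev machine, TpuDevice on a TPU VM.
print("Available devices:", jax.devices())

@jax.jit
def train_step(w, x):
    # Tiny placeholder for a real training step; jit compiles it for the local backend.
    return w - 0.01 * (x.T @ (x @ w))

w = jnp.ones((128, 1))
x = jnp.ones((32, 128))
w = train_step(w, x)
```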
Mrgod2u82 t1_izujb7i wrote
Guessing because you're paying for it? No point in paying if you're not confident it makes sense to pay. It all depends on how deep one's pockets are, I suppose.