Submitted by Shardsmp t3_zil35t in MachineLearning
Shardsmp OP t1_izwhsfm wrote
Reply to comment by herokocho in [D] Does Google TPU v4 compete with GPUs in price/performance? by Shardsmp
is there any data to back this up?
How do I know where exactly the line is, i.e. at what scale it becomes more worthwhile to use a TPU?
herokocho t1_izxnzhd wrote
not aware of any good comparisons out there; this is all anecdata from looking at profiler traces while training diffusion models and noticing that I was communication-bottlenecked even on TPUs, so on GPUs it would be much worse.
it's usually better to use a TPU as soon as you'd otherwise have to use multiple GPU nodes, and basically always better at v4-128 scale and above (v4-128 has 2x faster interconnect than anything smaller).
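as a rough way to find that line for your own setup, you can compare estimated per-step all-reduce time against per-step compute time; if gradient communication takes longer than compute, you're communication-bound. sketch below, every number in it (model size, bandwidths, step time) is an illustrative assumption, not a measured spec:

```python
# Back-of-the-envelope check for whether data-parallel training would be
# communication-bound. All constants are illustrative assumptions.

def allreduce_seconds(grad_bytes: float, n_devices: int, bw_bytes_per_s: float) -> float:
    """Ring all-reduce sends ~2*(n-1)/n of the gradient bytes per device."""
    return 2 * (n_devices - 1) / n_devices * grad_bytes / bw_bytes_per_s

params = 1e9                  # assumed 1B-parameter model
grad_bytes = params * 2       # bf16 gradients, 2 bytes each
compute_per_step = 0.1        # assumed seconds of pure compute per step

# Assumed effective per-device bandwidths (placeholders, not vendor specs):
intra_node_bw = 300e9         # NVLink-class, GPUs within one node
inter_node_bw = 25e9          # network-class, GPUs across nodes

single_node = allreduce_seconds(grad_bytes, 8, intra_node_bw)
multi_node = allreduce_seconds(grad_bytes, 64, inter_node_bw)

print(f"intra-node all-reduce: {single_node * 1000:.1f} ms")
print(f"inter-node all-reduce: {multi_node * 1000:.1f} ms")
print("communication-bound across nodes:", multi_node > compute_per_step)
```

with these made-up numbers the cross-node all-reduce alone exceeds the compute time, which is the "multiple GPU nodes" cliff described above; a real answer needs your own profiler traces.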