Submitted by Open-Dragonfly6825 t3_10s3u1s in deeplearning
AzureNostalgia t1_j732f33 wrote
Reply to comment by Open-Dragonfly6825 in Why are FPGAs better than GPUs for deep learning? by Open-Dragonfly6825
The claim that FPGAs have better power efficiency than GPUs is a relic of the past. In the real world and in industry (not in scientific papers written by PhDs), GPUs achieve way higher performance. The simple reason is that FPGAs as devices are way behind in architecture, compute capacity, and capabilities.
A very simple way to see my point is this. Check one of the largest FPGAs from Xilinx, the Alveo U280 (https://www.xilinx.com/products/boards-and-kits/alveo/u280.html#specifications). It can theoretically achieve up to 24.5 INT8 TOPS of AI performance, and it's a 225W card. Now check an embedded GPU on a similar process node (in nm), the AGX Xavier (https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/). Check the specs at the bottom: up to 22 TOPS in a 30W device. That's why FPGAs are obsolete. I have countless examples like that, but you get the idea.
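To make the efficiency gap concrete, here is a quick back-of-the-envelope sketch in Python using only the card-level numbers quoted above (peak datasheet TOPS and board power; it ignores real-world utilization, memory bandwidth, and host power, so treat it as a rough comparison, not a benchmark):

```python
# Rough perf/W comparison from the datasheet numbers cited in the comment:
# Alveo U280: 24.5 INT8 TOPS at 225 W; Jetson AGX Xavier: 22 TOPS at 30 W.
devices = {
    "Xilinx Alveo U280": {"tops": 24.5, "watts": 225},
    "NVIDIA Jetson AGX Xavier": {"tops": 22, "watts": 30},
}

for name, d in devices.items():
    efficiency = d["tops"] / d["watts"]  # peak TOPS per watt
    print(f"{name}: {efficiency:.2f} TOPS/W")

# Output (approximate):
#   Xilinx Alveo U280: 0.11 TOPS/W
#   NVIDIA Jetson AGX Xavier: 0.73 TOPS/W
```

By these peak numbers the embedded GPU comes out ahead not only in raw TOPS per watt of board power but while drawing a fraction of the power budget, which is the point of the comparison.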
Open-Dragonfly6825 OP t1_j733r6w wrote
That's an interesting comparison.
I get the idea. Thank you.