Thorusss

Thorusss t1_iusxlia wrote

I mean, if you want it and look for it, you can buy a ridiculous number of objects with an additional microchip in them. Heated insole: chip. Camera in glasses: chip. T-shirt that measures heartbeat: chip. Ring that measures temperature and movement: chip. Implant for paying: chip. Light-up shoes: chip. Jacket with speakers: chip.

I mean, much packaging has a microchip in the small security sticker that many people never notice.

1

Thorusss t1_irv7qc8 wrote

Of course. Kind of a naive question. It is one of their main uses. What will the user click next, what will the weather do, how will nuclear fusion behave, how will the stock market move, will the car in front of you brake, etc.

3

Thorusss t1_ir9pbcd wrote

So why is matrix multiplication faster with it, then?

>Leveraging this diversity, we adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU, and Google TPU v2. These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimising arbitrary objectives.

Are you saying it would be slower if it had to multiply multiple matrices of the same dimension one after the other?

3

Thorusss t1_ir9or2m wrote

The algorithm is plain faster on the most advanced hardware. For such an already heavily optimized area, that is very impressive.

>Leveraging this diversity, we adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU, and Google TPU v2. These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimising arbitrary objectives.

https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
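For context on where such speedups come from: AlphaTensor searches for multiplication schemes that use fewer scalar multiplications than the naive method, the same idea behind Strassen's classic algorithm. This is a sketch of Strassen's scheme (not AlphaTensor's actual discovered algorithm), which multiplies two 2x2 matrices with 7 multiplications instead of 8:

```python
# Strassen's scheme: 7 scalar multiplications for a 2x2 product
# instead of the naive 8. Applied recursively to matrix blocks,
# this reduces the asymptotic cost below n^3. AlphaTensor searches
# for schemes of this kind, tuned to specific hardware.

def strassen_2x2(A, B):
    (a, b), (c, d) = A  # A = [[a, b], [c, d]]
    (e, f), (g, h) = B  # B = [[e, f], [g, h]]
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the 7 products into the 4 entries of A @ B
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))
# ((19, 22), (43, 50)) — matches the naive product
```

The entries here are scalars, but the same identities hold when a, b, …, h are matrix blocks, which is why saving even one multiplication per level compounds recursively.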

2