Comments

Vucea OP t1_jdpt2c6 wrote

Inverse lithography produces features smaller than the wavelength of light, but it usually takes weeks to compute

Nvidia says it has found a way to speed up a computation-limited step in the chipmaking process so that it happens 40 times as fast as today’s standard.

Called inverse lithography, it’s a key tool that allows chipmakers to print nanometer-scale features using light with a longer wavelength than the size of those features. Inverse lithography’s use has been limited by the massive size of the needed computation.

Nvidia’s answer, cuLitho, a set of algorithms designed for use with GPUs, turns what has been two weeks of work into an overnight job.

25

czl t1_jdpwbhi wrote

> Inverse lithography’s use has been limited by the massive size of the needed computation.

This massive computation is done once per design, so, for example, the chip that powers the latest iPhone will be ready two weeks sooner?

16

elehman839 t1_jdpxkm9 wrote

Sounds like the computation may sometimes need to be done multiple times per design:

Even a change to the thickness of a material can lead to the need for a new set of photomasks

Moreover, it sounds like you can also get better chips, not just the same chip sooner. Prior to this speedup, inverse lithography could practically be used on only certain parts of a design:

it’s such a slog that it’s often reserved for use on only a few critical layers of leading-edge chips or just particularly thorny bits of them

Furthermore, you can get an increased yield of functional parts, which should lower manufacturing cost:

That depth of focus should lead to less variation across the wafer and therefore a greater yield of working chips per wafer

20

czl t1_jdq094y wrote

So this is like making software programmers more productive by giving them faster tools, like compilers, so there is less waiting time?

However, once the design is done and tested and chips are being "printed" (?), this speedup does not help with that?

Asking because I want to know how this innovation will impact the production capacity of existing fabs.

The impact will be better designs due to greater design productivity, but actual production capacity does not change, yes?

6

czl t1_jdqk4ts wrote

> You wanna become a computer scientist?

I want to understand this discovery and its impact on chip-production capacity. The article describes the discovery as better parallelism for (presumably existing?) algorithms so that they make better use of Nvidia’s GPUs.

I wonder what the nature of these inverse lithography algorithms is. A domain-specific numerical optimization problem? Why would that be hard to parallelize? Perhaps until now nobody had translated the problem to use the NVIDIA CUDA API efficiently?

5

anengineerandacat t1_jdqpi8i wrote

It usually takes a while to iterate on designs, so two weeks saved per iteration is huge.

Especially considering the cost of the engineers involved, you don't exactly pause those paychecks.

1

czl t1_jdqr4ob wrote

> It usually takes a while to iterate on designs, so two weeks saved per iteration is huge.

Agreed.

> Especially considering the cost of the engineers involved, you don’t exactly pause those paychecks.

Since they work on microprocessors, they must be familiar with pipelining techniques. Those techniques apply to the optimal use of microprocessor hardware, and they apply just as well to the optimal use of engineering talent. High latencies make pipelining essential.
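The arithmetic works out the same way it does for hardware pipelines. A sketch with made-up stage durations (only the 14-day mask computation figure comes from the article; the other numbers and stage names are hypothetical):

```python
# Hypothetical stage durations in days; only the 14-day mask computation
# is from the article, the rest are invented for illustration.
stages = {"design work": 10, "mask computation": 14, "validation": 5}
n = 4  # number of chip revisions flowing through the pipeline

# One revision at a time: every revision pays for every stage.
serial = n * sum(stages.values())                    # 4 * 29 = 116 days

# Pipelined: once the pipe is full, a revision completes every
# max(stages) days -- the slowest stage sets the throughput.
pipelined = sum(stages.values()) + (n - 1) * max(stages.values())  # 29 + 42 = 71

# With the 40x speedup, mask computation drops to an overnight job (~1 day)
# and design work becomes the bottleneck stage instead.
stages["mask computation"] = 1
faster = sum(stages.values()) + (n - 1) * max(stages.values())     # 16 + 30 = 46
```

So even with pipelining, shrinking the slowest stage shortens every revision cycle, which is where the engineering-paycheck savings come from.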

1

czl t1_jdqsyqt wrote

> The teams of scientists that made these programs don’t fully understand how they work. This is an entire new field of science

Yes, it would not surprise me if the teams of scientists that made these programs don’t fully understand how they work. Nearly always your “understanding” stops at some abstraction level, below which others take over.

Making pencils is not exactly cutting-edge technology, yet I read somewhere that likely nobody understands everything necessary to make an ordinary pencil when starting with nothing manufactured. Our technology builds on our technology builds on our technology…

5