Submitted by Vucea t3_122czjz in Futurology
czl t1_jdpwbhi wrote
Reply to comment by Vucea in Nvidia Speeds Key Chipmaking Computation by 40x by Vucea
> Inverse lithography’s use has been limited by the massive size of the needed computation.
This massive computation is done once per design, so, for example, the chip that powers the latest iPhone will be ready two weeks faster?
elehman839 t1_jdpxkm9 wrote
Sounds like the computation may sometimes need to be done multiple times per design:
> Even a change to the thickness of a material can lead to the need for a new set of photomasks
Moreover, it sounds like you can also get better chips, not just the same chip sooner. Prior to this speedup, inverse lithography could be practically used in only certain parts of the design:
> it’s such a slog that it’s often reserved for use on only a few critical layers of leading-edge chips or just particularly thorny bits of them
Furthermore, you can get an increased yield of functional parts, which should lower manufacturing cost:
> That depth of focus should lead to less variation across the wafer and therefore a greater yield of working chips per wafer
czl t1_jdq094y wrote
So this is like making software programmers more productive by giving them faster tools, such as faster compilers, so there is less waiting time?
However, once the design is done and tested and chips are being "printed" (?), this speedup does not help with that?
Asking because I want to know how this innovation will impact the production capacity of existing fabs.
The impact will be better designs due to more design productivity, but actual production capacity does not change, yes?
GPUoverlord t1_jdqgc9a wrote
You wanna become a computer scientist?
czl t1_jdqk4ts wrote
> You wanna become a computer scientist?
I want to understand this discovery and its impact on chip production capacity. The article describes the discovery as better parallelism for (“existing”?) algorithms so as to make better use of NVIDIA’s GPUs.
I wonder what the nature of these inverse lithography algorithms is. A domain-specific numerical optimization problem? Why would that be hard to parallelize? Perhaps until now nobody had translated the problem to use the NVIDIA CUDA API efficiently?
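For intuition, here is a toy sketch of what I imagine the problem looks like: treat the mask as a grid of pixels, simulate how it would print, and do gradient descent on the pixels until the simulated print matches the target. The Gaussian optics model, sigmoid resist model, grid size, and step size below are all invented for illustration and are not the algorithms from the article:

```python
import numpy as np

# Toy sketch only: the Gaussian optics model, sigmoid resist model, grid
# size, and step size are invented for illustration, not real ILT.

def project(field, kernel_fft):
    # Optical projection approximated as a blur, computed with FFTs.
    return np.real(np.fft.ifft2(np.fft.fft2(field) * kernel_fft))

def resist(aerial, threshold=0.5, steepness=25.0):
    # Smooth threshold standing in for the photoresist response.
    return 1.0 / (1.0 + np.exp(-steepness * (aerial - threshold)))

def ilt_step(mask, target, kernel_fft, lr=0.1, steepness=25.0):
    # One gradient-descent step that nudges every mask pixel to reduce
    # the mismatch between the simulated print and the target pattern.
    printed = resist(project(mask, kernel_fft), steepness=steepness)
    error = printed - target
    # Chain rule back through the resist and optics models; the blur is
    # symmetric, so its transpose is the blur itself. Each pixel update
    # is independent, which is why this maps so well onto GPU threads.
    grad = project(error * steepness * printed * (1.0 - printed), kernel_fft)
    return np.clip(mask - lr * grad, 0.0, 1.0)

# Usage: ask for a square on the wafer, start the mask from the target,
# and let the optimization warp the mask to pre-compensate for the optics.
n = 256
target = np.zeros((n, n))
target[96:160, 96:160] = 1.0

f = np.fft.fftfreq(n)
sigma_px = 3.0  # invented blur radius, in pixels
kernel_fft = np.exp(-2.0 * (np.pi * sigma_px) ** 2
                    * (f[None, :] ** 2 + f[:, None] ** 2))

mask = target.copy()
for _ in range(200):
    mask = ilt_step(mask, target, kernel_fft)
```

Every update is element-wise arithmetic plus FFT-sized convolutions, which GPUs handle extremely well, so my guess is the difficulty was less the math and more restructuring production-grade ILT code and data to keep thousands of CUDA threads busy.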
GPUoverlord t1_jdqkw3m wrote
The teams of scientists that made these programs don’t fully understand how they work
This is an entirely new field of science
czl t1_jdqsyqt wrote
> The teams of scientists that made these programs don’t fully understand how they work. This is an entirely new field of science
Yes, it would not surprise me if the teams of scientists that made these programs don’t fully understand how they work. Nearly always, your “understanding” stops at some abstraction level, below which others take over.
Making pencils is not exactly cutting-edge technology, yet somewhere I read that likely nobody understands all that is necessary to make an ordinary pencil starting from nothing manufactured. Our technology builds on our technology builds on our technology …
ItsAllAboutEvolution t1_jdqb0kh wrote
Compute is a major problem for inverse lithography / curvilinear masks. If this is solved, it's not just speeding up mask production but enabling much broader use of the technology.
anengineerandacat t1_jdqpi8i wrote
Usually takes a while to iterate on designs; two weeks saved per iteration is huge.
Especially considering the cost of the engineers involved, you don't exactly pause those paychecks.
czl t1_jdqr4ob wrote
> Usually takes a while to iterate on designs; two weeks saved per iteration is huge.
Agreed.
> Especially considering the cost of the engineers involved, you don’t exactly pause those paychecks.
Since they work on microprocessors, they must be familiar with pipelining techniques. These techniques apply to making optimal use of microprocessor hardware, and they apply equally to making optimal use of engineering talent. High latencies make pipelining essential.
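A back-of-the-envelope illustration, with numbers I made up, of how pipelining hides a long mask-computation latency:

```python
# Invented numbers, just to show the arithmetic of pipelining design work
# against a long-latency mask computation.
design_weeks = 4   # assumed engineering time per iteration
mask_weeks = 2     # assumed mask-computation time per iteration
iterations = 5

# Serial: engineers wait idle while each mask run completes.
serial_total = iterations * (design_weeks + mask_weeks)

# Pipelined: while one iteration's masks are computing, the engineers are
# already working on the next block, so after the first iteration only the
# slower stage paces the flow.
pipelined_total = (design_weeks + mask_weeks
                   + (iterations - 1) * max(design_weeks, mask_weeks))

print(serial_total, pipelined_total)  # 30 vs 22 weeks with these assumptions
```

Presumably, cutting the mask computation from weeks to hours shrinks the latency that has to be hidden, so the pipeline needs fewer things in flight to keep everyone busy.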
Jefejiraffe t1_jdqzbfh wrote
More flexible testing and engineering iteration.