Comments
Wind_14 t1_jef3xd6 wrote
Considering that the size of an atom is roughly 0.1 nm, someone saying today that they'll make a 0.01 nm chip would be laughed out of the room. Our physics and engineering aren't solid enough to create subatomic transistors.
richiehustle OP t1_jeemp2c wrote
What prevents them from doing that in theory and then doing a trial production? How would it fail in reality if it had all been calculated in advance to a tee? In other words, why and how is it a risk?
dirschau t1_jeeogbz wrote
I know it'll sound sarcastic, but it's a good illustration of the point:
Go build a racecar, personally, right now. No, you can't practice making a simpler one first. Straight to trophy winning.
Well, what's the problem? Racecars already exist, so it's not even like you need to develop new technologies. Go, do it. Then also make a profit, because no one's giving you free money here. Chop chop.
In this analogy, the only difference between you and them is that the current chip makers know how to make a moped.
Making stuff from scratch isn't easy, and these guys are working literally at the edge of what's physically possible. It's not even completely certain that you can go smaller than what's currently available.
For comparison, you can't make a 0.01 nm chip because that's less than half the size of a hydrogen atom.
And to even do the current work, you need to come up with and build whole new tools and machines, which cost billions, because they're literally manipulating atoms at this point.
ReallyGene t1_jeeox4t wrote
Because an individual photolithography machine costs millions of dollars to build.
The first prototype usually costs several times that.
To get smaller features, you have to keep using shorter and shorter wavelengths of light.
The latest machines use extreme ultraviolet light. So, in order to see what's happening, you need UV cameras, and software to convert the images into something the human eye can perceive.
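A rough back-of-the-envelope for why the wavelength matters, using the standard Rayleigh-style resolution estimate (the k1 and NA values below are ballpark figures assumed for illustration, not any particular machine's spec):

```python
# Minimum printable feature is roughly CD = k1 * wavelength / NA,
# where k1 is a process factor and NA is the numerical aperture of the optics.
def min_feature_nm(wavelength_nm: float, k1: float, na: float) -> float:
    return k1 * wavelength_nm / na

# Deep UV immersion lithography (193 nm light), ballpark values
print(min_feature_nm(193, k1=0.30, na=1.35))   # ~43 nm per exposure
# Extreme UV lithography (13.5 nm light), ballpark values
print(min_feature_nm(13.5, k1=0.40, na=0.33))  # ~16 nm per exposure
```

Once k1 and NA have been pushed about as far as they can go, a shorter wavelength is the main lever left.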
Then you need to develop techniques that can etch those tiny features. It's roughly analogous to writing with a marker: what works when the letters are 4" tall doesn't work when they're 1/8".
You need ultra pure chemicals in an ultra clean environment. But many of those chemicals require special handling and materials to transport and apply them. Those materials in turn require exotic techniques to make and machine them.
So you might have to become the world expert in welding a particular metal, at incredibly small scale, without contaminating anything.
The world is full of failed attempts at all of these things.
You might spend hundreds of millions of dollars just to fail.
richiehustle OP t1_jeetx3k wrote
I appreciate this thoughtful response
Aggietallboy t1_jefg66v wrote
Please also remember that we're already doing voodoo fuckery where the "x nm process" is already smaller than the wavelength of the light doing the process.
In order to get much smaller, you're no longer talking about "light" but rather other high-frequency EM radiation, which, as you're pointing out, we don't necessarily have the technology to
A) reliably generate that EM radiation precisely
or
B) use that frequency of radiation to achieve the equivalent photolithography.
There's also going to be (if there isn't already) a point at which the insulating properties of the silicon substrate aren't sufficient to keep the electrical signals isolated in the circuits.
There's also one more element at play, and that's the size of the element silicon itself:
A silicon atom is 1.92 Angstroms wide.
1 nm = 10 angstroms.
Silicon's "lattice constant" - how the atoms are spaced in the crystal - is 5.4 angstroms (0.54 nm).
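Turning those numbers into a quick sketch of how few atoms you're actually working with (purely illustrative; modern "x nm" node names are marketing labels, not literal feature sizes):

```python
silicon_lattice_nm = 0.543   # silicon lattice constant, ~5.4 angstroms

for feature_nm in (10, 5, 2):
    unit_cells = feature_nm / silicon_lattice_nm
    print(f"a {feature_nm} nm feature spans only ~{unit_cells:.1f} silicon unit cells")
```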
Emyrssentry t1_jeen86w wrote
Because there are parts to the situation where you calculate it out, and it might work, but then you go to build your fabrication machines, and nothing you do gives you the precision necessary to make the transistors that small.
And now you've just put 3 years of intense engineering and money into a product that doesn't even make it to market.
mb34i t1_jeendp2 wrote
You have MILLIONS of transistors in there, and they're applied with a process that's like photography. So you're asking them to go to the biggest zoom possible, right away. But unfortunately, every time you zoom in a little bit more, you can get more errors. A 1% error rate means 10,000 transistors are bad, and that means the chip is shot, no good at all. Your computer could go haywire if even a single transistor is bad; the error allowances are extremely small.
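A toy version of that math (the defect rates are made up, just to show how unforgiving the statistics are):

```python
# If a chip needs essentially every transistor to work, even a tiny
# per-transistor defect rate wrecks the yield.
n_transistors = 1_000_000   # the "millions" from above
for defect_rate in (1e-8, 1e-7, 1e-6):
    chip_ok = (1 - defect_rate) ** n_transistors   # chance that all of them are good
    print(f"defect rate {defect_rate:.0e} -> {chip_ok:.1%} of chips come out with zero defects")
```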
So if you look at it historically, look at what happened when they sent up the space telescopes. Hubble had issues with its mirror (its focus). Every time you step up in technology, there are errors that have to be worked out. Errors that could destroy your entire set of chips, resulting in billions of dollars in costs.
richiehustle OP t1_jeeu22g wrote
I think this response gives me the perspective I needed on the matter
Funchyy t1_jeeoc02 wrote
It isn't just about calculations. Even if something is calculated to a tee, there's no guarantee it will work, because you are pushing the boundaries of what is possible, or of what we know is possible. So there will be unknowns throughout, and no guarantee that your calculations will translate 100% to reality. That in itself should kinda explain the risk, imo: you simply cannot guarantee, based on calculations alone, that something boundary-pushing will work in reality, even if the math worked out perfectly.
mmmmmmBacon12345 t1_jeepofl wrote
We know theory and practice don't precisely align in semiconductors. They live on the finicky edge of quantum physics.
Semiconductor device books will have two charts next to each other: one telling you what the equation says the values should be, and one showing you what the actual measured values tend to be. They can be off by >20% at times, because exactly where in the silicon crystal that phosphorus atom landed matters a lot.
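One hedged illustration of why the measured values scatter: at these sizes a transistor channel only contains a handful of dopant atoms to begin with, so atom-scale randomness is a big fraction of the whole. The dimensions and doping level below are made up but in a plausible ballpark, not taken from any real process:

```python
import math

channel_volume_cm3 = (20e-7) * (20e-7) * (10e-7)   # a 20 x 20 x 10 nm channel, in cm^3
doping_per_cm3 = 1e18                              # an assumed doping concentration

avg_dopants = doping_per_cm3 * channel_volume_cm3
spread = math.sqrt(avg_dopants)                    # Poisson-style statistical fluctuation
print(f"average dopant atoms in the channel: {avg_dopants:.1f}")
print(f"typical device-to-device spread: +/- {spread:.1f} atoms ({spread/avg_dopants:.0%})")
```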
thisisdumb08 t1_jeevvc5 wrote
They are. They started 10-15 years ago, based on the results of work from 20 years ago, and you get the chip today. 9-14 years ago they got the results from the manufacturing of 19 years ago and couldn't incorporate the changes into the plans already in motion the previous year, though sometimes they can, and you get internode improvements in either yield or performance.
no_step t1_jef3j6o wrote
ASML is rumored to have spent $300 million and 5 years developing extreme UV lithography
therealdilbert t1_jef9471 wrote
it is unbelievably hard to make it work, and a factory costs $10+ billion ...
mmmmmmBacon12345 t1_jeepdpt wrote
It's a lot easier to fix one problem at a time than 10 problems at the same time.
You get up in the morning and your car won't start. You run through the quick checklist. Fuel in the tank? Yup. Key in the ignition? Yup. Battery charged? Nope
So you jump the battery and the car starts and you're off
This is what happens when they slowly go through the process nodes. Each node has a new quirk that needs to be identified, triaged, and fixed and once that's done they can start getting product
It's a lot harder if your car just went through a wild science experiment and won't start. If your battery is dead, your ignition switch is broken, the fuel pump is missing, and the spark plugs are bad, it's going to be a whole lot harder to troubleshoot that car and get it going, since you have to identify and fix all the problems before you get any results.
It doesn't make business sense to go from 90% yield and then hop to a node with 2% yield that slowly ramps up to 90% over the next decade. Take a sequence of small steps that each carry a small risk, rather than a giant leap that either works or utterly bankrupts the company.
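Toy numbers (entirely made up) for why the 2% node is a non-starter: the good chips have to pay for all the bad ones on the same wafer.

```python
wafer_cost = 10_000      # hypothetical cost to process one wafer
chips_per_wafer = 500    # hypothetical

for yield_rate in (0.90, 0.50, 0.02):
    good_chips = chips_per_wafer * yield_rate
    print(f"{yield_rate:.0%} yield -> each good chip carries ${wafer_cost / good_chips:,.0f} of wafer cost")
```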
richiehustle OP t1_jeeuazp wrote
I think that is a very solid analogy
no_step t1_jeepwz0 wrote
There's a false premise in your question. We certainly know in theory that shorter wavelengths of light allow for smaller features, but the technology to generate extreme UV did not exist and needed to be developed. Right now there's only one company in the world that knows how to do it
druppolo t1_jef0ubu wrote
Like every tech, the concept is "we can just win every golf trophy by getting a hole in one on every hole."
But then you take the shot and miss, and you realize you forgot the wind, the grass type, and so on.
Most technologies are not about getting from A to B, but going from A to B without stepping on some dog poo.
For example, at this level a speck of dust can totally destroy your chip or even your machinery. So you may start your journey thinking about chips, and now you have a department that develops vacuum machinery, another department developing air cleaners that stop nanoscale dust, and another developing optical lenses… and your final product is only as good as your "side quest score". And every side quest opens new side quests.
NoPlaceForTheDead t1_jeersoz wrote
Have you ever made or constructed anything?
HungryHungryHobo2 t1_jef3zeo wrote
Partly what everyone here has said already - the technology takes time to develop and starting from square 1 makes more sense than starting at the final iteration.
Although, on the other hand, I'd say part of it is "planned obsolescence."
Imagine you're a company that makes computer chips. You can instantly make the smallest, most compact, most efficient chip possible - and then what?
After everyone who wants a chip purchases one - what drives sales beyond the slow trickle of replacements and late adopters?
IF, however, you design a chip that will become obsolete in 2 years - because you have an entire multi-decade plan for how you will scale down your chips over time - you can sell all of those chips... then 2 years later they're obsolete and your new chips are better... so you can sell those chips again to everyone who bought one before... rinse and repeat for a few decades, and you've turned a one-time profit into a long-term business model that will generate billions.
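Rough made-up numbers for that upgrade-cycle argument (purely illustrative):

```python
customers = 1_000_000
price_per_chip = 500
upgrade_cycles = 10   # e.g. a new generation every 2 years for ~20 years

one_time_revenue = customers * price_per_chip
recurring_revenue = customers * price_per_chip * upgrade_cycles
print(f"sell the 'final' chip once:       ${one_time_revenue:,}")
print(f"sell an upgrade every generation: ${recurring_revenue:,}")
```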
Moskau50 t1_jeemfw8 wrote
Because what is possible isn’t known until someone tries it. If you invest a ton of money to make the equipment and materials to make a 0.01nm chip and fail, you’re out a lot of money and time, which puts you well behind your competitors.