Submitted by Zalack t3_11x4f9t in askscience
LoyalSol t1_jd4r58z wrote
Reply to comment by PercussiveRussel in Can a single atom be determined to be in any particular phase of matter? by Zalack
So, to be careful about how we define things: yes, entropy is still tied to an ensemble in the sense that it is directly related to the probability of an observation, and probability is of course tied to many observations. But the key is that entropy can be observed in any probabilistic system, and it will very often behave the same way in a system with millions of atoms as in a system with a single particle. It will just be tied to different averages, such as the time average, spatial average, etc.
Where entropy is distinguished from many other bulk properties is that the latter are often the result of thousands of atoms acting in unison, whereas entropy can be observed even in a single-particle system. That's especially true when talking about quantum descriptions of molecules.
For a single particle, the Jacobian of the principal coordinate is the entropy term.
Say for example you have a classical particle that is attracted to a single point according to the energy
E(r) = 1/2 * k * (r-r0)^2
In this system we can simply write the Jacobian as a function of r. For an N-dimensional system
J(r) = r^(N-1)
assuming we integrate the angular terms out. Now perform a simulation of the particle with a given momentum. In a system with conserved momentum, while the lowest-energy position is at a distance r0 from the center, the time-average position will only be r0 if we run the simulation in one dimension. In two dimensions you will notice the average is some value above r0, and as we add more and more dimensions the particle deviates further and further outward from r0. That is because as you increase the number of accessible dimensions, you increase the translational entropy. A hyper-dimensional particle will spend very little time near r0 despite r0 being the most stable position.
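A quick Monte Carlo sketch of this effect (my own illustration, not from the comment above): taking r0 = 0 and k = kT = 1 for simplicity, the Boltzmann weight for E(r) = 1/2 * k * r^2 makes each Cartesian coordinate an independent unit Gaussian, so we can sample positions directly and watch the average distance from the energy minimum grow with dimension:

```python
import math
import random

def mean_radius(n_dims, n_samples=20000, seed=0):
    """Thermal average of r = |x| for a particle in an isotropic harmonic
    well E = 1/2 k r^2 with k = kT = 1 and the minimum at the origin.
    Each Cartesian coordinate is then an independent unit Gaussian."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(n_dims)]
        total += math.sqrt(sum(c * c for c in x))
    return total / n_samples

# The minimum-energy position is r = 0 in every dimension, yet the
# time/ensemble average moves steadily outward as dimensions are added.
for n in (1, 2, 3, 10, 50):
    print(n, round(mean_radius(n), 2))
```

The average radius grows roughly like sqrt(N): the r^(N-1) Jacobian piles up volume away from the minimum, exactly the translational-entropy effect described above.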
You don't need multiple equivalent systems to observe this. The time average of a single particle will give rise to this.
In statistical mechanics we usually define these things in terms of a number of equivalent systems, because in practice that's what we are typically measuring, and we take advantage of the ergodic hypothesis to link the time average to the other averages of interest. But entropic effects show up even in atomic and sub-atomic systems, and many behaviors are a direct result of them. For example, if an electron can be excited to a set of higher orbitals that all have the same energy, and one sub-orbital has more momentum states than another, that sub-orbital will be preferred simply because it has more combinations available.
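As a toy illustration of that last point (the function name here is my own, not a standard API): with equal energies, Boltzmann weights reduce to pure state counting, so a sub-orbital with three states is occupied three times as often as one with a single state:

```python
import math

def level_populations(levels, beta=1.0):
    """Boltzmann populations for levels given as (degeneracy, energy) pairs.
    When energies are equal, populations are proportional to degeneracy alone."""
    weights = [g * math.exp(-beta * e) for g, e in levels]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# Two sub-orbitals at the same energy: one with 3 states, one with 1.
# The degenerate factor cancels out of energy and leaves pure counting.
print([round(p, 6) for p in level_populations([(3, 1.0), (1, 1.0)])])
```

The energy factor cancels between the two levels, so the 3-state sub-orbital ends up with 75% of the population purely on entropic grounds.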
Larger systems have more forms of entropy they can take advantage of, such as swap entropy, rotational entropy, etc., but the rules and interpretations are very much the same whether you have a million particles or just one. That's not always the case for other bulk properties. Sometimes a bulk property is only observable in the limit of the average, not in a single particle.
PercussiveRussel t1_jd50td1 wrote
Ah yes, this helps a lot. Brings back a lot of statphys memories too. Thank you very much.
In a way, a time averaged system could be described as a mixed-state density matrix I suppose, which is where my intuition comes back again. I always picture a single object as being in a pure state, but there are ways it doesn't have to be.
Because when you say that entropy is tied to the probability of an observation, that really doesn't hold for an object in a superposition, since its multiplicity of states is just 1 (the superposition itself), which is where we do need to be careful I guess. I'd call it classical probabilistic, and avoid all confusion with quantum probabilistic.
So, to get more philosophical: It feels like there needs to be some sort of "outside influence" on a single particle for it to have entropy. Would you agree with this line of thinking? For some definition of outside influence.
That is not me trying to say my intuition was right by the way, it wasn't.
LoyalSol t1_jd586m6 wrote
>Because when you say that entropy is tied to the probability of an observation, that really doesn't hold for an object in a superposition, since its multiplicity of states is just 1 (the superposition itself), which is where we do need to be careful I guess. I'd call it classical probabilistic, and avoid all confusion with quantum probabilistic.
It gets a little strange in quantum mechanics, but you still have entropy effects there. But yeah, it gets kind of hairy just because superpositions themselves are already strange to begin with.
It's been a while since I focused on quantum stuff so I won't go too much into those since I'll probably get myself into trouble. :)
>So, to get more philosophical: It feels like there needs to be some sort of "outside influence" on a single particle for it to have entropy. Would you agree with this line of thinking? For some definition of outside influence.
It's easier to understand with an outside influence, but even in the situation of, say, a classical particle in a box where all points in the box are equally probable, the more dimensions you have, the less likely you are to observe the particle near the center of the box, simply because there is more volume toward the edge of a hyper-cube than in the center, and this effect grows with dimension.
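This is easy to check numerically (a sketch of my own, not something from the comment): sample points uniformly in the cube [-1, 1]^N and count how many land in the central sub-cube of half the side length. That fraction is exactly (1/2)^N, so it collapses as N grows:

```python
import random

def central_fraction(n_dims, n_samples=50000, seed=1):
    """Fraction of uniform samples in [-1, 1]^n_dims that land in the
    central sub-cube [-0.5, 0.5]^n_dims.  Exact value is (1/2)^n_dims."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_samples)
        if all(abs(rng.uniform(-1.0, 1.0)) < 0.5 for _ in range(n_dims))
    )
    return hits / n_samples

# Even though every point in the box is equally probable, almost no
# samples land near the center once the dimension gets large.
for n in (1, 3, 10):
    print(n, central_fraction(n))
```

In one dimension half the samples are "central"; by ten dimensions it's about one in a thousand, with no force pushing the particle anywhere.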
I guess we could say the box is an outside influence, but then could we even have a system without any constraints whatsoever? I would have to think about that.
For an isolated particle, the volume of the space it occupies is where it gets its entropy from. Even for a quantum particle in a box the trend holds, just not uniformly, since you have a wave function. The odds of observing the particle near the center of the box go to 0 as the number of dimensions increases; you're more likely to observe it near the edge in higher dimensions.
As a bit of trivia, this is also why the translational partition function is usually the only one in statistical mechanics that has a volume component: the other forms of entropy deal with internal degrees of freedom, whereas translational entropy deals with the space of the system.
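For reference, the standard textbook form of that translational partition function for an ideal-gas particle (the formula being alluded to, not something stated in the comment) is:

```latex
q_{\text{trans}} = \frac{V}{\Lambda^{3}}, \qquad
\Lambda = \frac{h}{\sqrt{2\pi m k_{B} T}}
```

where Λ is the thermal de Broglie wavelength; the volume V appears explicitly, while rotational, vibrational, and electronic partition functions depend only on internal degrees of freedom.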