Comments


viscence t1_jcrg718 wrote

Imagine you have two boxes. In each of them you have a perfect, frictionless rubber ball, labelled A and B. You've shaken the box with A inside a lot, and the ball is bouncing around inside it... high energy! The other, B, is just sitting at the bottom of its box, low energy.

Now you put the boxes together and remove the wall between them. What happens? Soon the bouncing ball hits the stationary one. It's a glancing blow, but B now has a tiny bit of energy, and A has a little less. The total energy is the same. Soon, it happens again! This time a lot of energy transfers. Now B has a little more than half the energy, and A a little less than half!

As you watch, this keeps happening. The balls keep trading energy between them. Sometimes A gets a bit more of the energy, sometimes B, most of the time it's about even. It IS possible for all the energy to go back to A... but the balls have to hit JUST RIGHT for that, and there are far more ways they can hit where that doesn't happen.

Now repeat the experiment with 1000 rubber balls in each box. Again the ones in box A start with all the energy. The same thing happens: when the wall goes down, the A balls slam into the B balls and everything quickly reaches an equilibrium. Sure, sometimes one rubber ball gets a huge kick, maybe because two others slam into it at once, but on average there isn't really a difference between the balls labelled A and the ones labelled B anymore. It's even more unlikely for the system to spontaneously arrange itself into a state where the A balls have all of the energy again -- ALL 2000 balls would have to collide perfectly at once for that to happen. But the total energy remains the same.
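
A quick way to see "the resultant statistics" at work is to simulate a toy version of this. The sketch below (my own toy model, not real collision physics) just picks two random balls at a time and lets them split their combined energy at a random fraction, which conserves the total exactly:

```python
import random

# Toy model: 1000 "A" balls start with all the energy, 1000 "B" balls start with none.
# Each step, two random balls "collide" and split their combined energy at a random
# fraction. Total energy is conserved exactly; only its distribution changes.
energies = [1.0] * 1000 + [0.0] * 1000  # first half are the A balls, second half are B

for step in range(200_000):
    i, j = random.sample(range(len(energies)), 2)
    total = energies[i] + energies[j]
    split = random.random()
    energies[i], energies[j] = split * total, (1 - split) * total

energy_A = sum(energies[:1000])
energy_B = sum(energies[1000:])
print(f"A: {energy_A:.1f}  B: {energy_B:.1f}  total: {energy_A + energy_B:.1f}")
# Typically prints something close to A: 500, B: 500 -- and the total stays at 1000.0.
```

Run it a few times: the split hovers around 50/50 with small fluctuations, and a state where the A balls hold everything again essentially never shows up.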

That's "what decides", on the microscale -- chance. On the macroscale, the resultant statistics. Things don't really tend to low energy, they tend to equilibrate. If you throw a hot rock into a pot of cold water, the water gets hotter and the rock gets colder, but the water doesn't get hotter than the rock (on average) and the rock doesn't get colder than the water (on average)... they come to be the same temperature... and if you have a perfectly insulated pot, the total energy inside doesn't change even if the water and rock are changing.
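
For the rock-and-water picture, the shared final temperature is just a heat-capacity-weighted average of the starting temperatures. A minimal worked example (the masses and specific heats below are made-up illustrative numbers, and phase changes are ignored):

```python
# Hypothetical numbers: a 1 kg rock at 300 degrees C dropped into 5 kg of water at
# 20 degrees C, in a perfectly insulated pot, assuming constant specific heats and
# no boiling.
c_rock, c_water = 800.0, 4186.0   # specific heat capacities, J/(kg*K)
m_rock, m_water = 1.0, 5.0        # masses, kg
T_rock, T_water = 300.0, 20.0     # initial temperatures, degrees C

# Heat lost by the rock equals heat gained by the water, so:
T_final = (m_rock * c_rock * T_rock + m_water * c_water * T_water) / (
    m_rock * c_rock + m_water * c_water
)
print(f"Final temperature: {T_final:.1f} C")  # about 30.3 C, between the two starting values
```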

24

fr293 t1_jcrn2xs wrote

Why do you think that the energy of the universe is constant? If you have an isolated system that is small enough that relativistic effects can be neglected, then energy is conserved. In systems as large as the universe, it’s not so simple. But if we are dealing with closed systems in which energy is conserved, then the flow of energy is determined, in the coarsest sense, by the second law of thermodynamics, which tells us that heat cannot spontaneously flow from a colder body to a hotter one.

3

Chemomechanics t1_jcs2i57 wrote

Energy minimization is a consequence of entropy maximization, as I derive here.

Broadly, when things fall into a lower energy minimum, they heat the rest of the Universe, which increases its entropy. Nature loves this.

> What decides which thing gets to have low energy?

The configuration with higher entropy. It has more ways to appear, so we see it more often. That’s the Second Law, in essence.
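
The "more ways to appear" statement can be made concrete by brute-force counting. A minimal sketch (my own toy model, two Einstein-solid-style blocks of 10 oscillators sharing 20 energy quanta):

```python
from math import comb

N = 10        # oscillators in each block
q_total = 20  # total energy quanta shared between the two blocks

def multiplicity(q, n):
    # Number of ways to distribute q indistinguishable quanta among n oscillators
    return comb(q + n - 1, q)

for q_A in range(q_total + 1):
    ways = multiplicity(q_A, N) * multiplicity(q_total - q_A, N)
    print(f"block A holds {q_A:2d} quanta -> {ways:>12,d} microstates")
# The count peaks at the even split (q_A = 10) and is smallest when one block hoards
# everything (q_A = 0 or 20): the higher-entropy configuration simply has far more
# microstates, so it's the one you overwhelmingly observe.
```

The same counting logic is behind the energy-minimization point above: when a small system dumps energy into a much bigger reservoir, the combined number of microstates goes up.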

At equilibrium, there’s no difference in any intensive parameter: temperature, pressure, stress, chemical potential, surface tension, electric field, you name it.

3

mfb- t1_jcsfrx5 wrote

The surroundings are often empty space, which has a very low energy density and temperature, and whose energy density and temperature keep decreasing as the universe expands. Dumping some energy into empty space isn't going to make a difference for space, but it means your object loses energy.

Total energy is not conserved in an expanding universe, by the way, but that's a smaller effect than the dilution of the energy over the ever-larger volume it gets spread across.

1

mesouschrist t1_jct3knw wrote

The energy of the universe is constant under the assumption that the laws of physics apply to the universe across all times (i.e., the motion of things in the universe can be explained without any explicit time dependence in the laws themselves, such as the gravitational constant shrinking with time for no underlying reason). Energy is the conserved quantity associated with time-invariant laws of physics. So if you think energy is decreasing or increasing, you just have the wrong definition of energy (like when a ball rolling across a table slows down: it looks like it has lost energy, but really the energy has just gone somewhere else).
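
To make "the conserved quantity associated with time-invariant laws" concrete, here's the standard textbook statement for a classical Lagrangian L(q, dq/dt, t) (nothing specific to this thread, just the usual definition):

```latex
E \;=\; \sum_i \dot q_i \,\frac{\partial L}{\partial \dot q_i} \;-\; L ,
\qquad
\frac{dE}{dt} \;=\; -\,\frac{\partial L}{\partial t}
\quad \text{(along solutions of the equations of motion).}
```

So if the laws carry no explicit time dependence (partial L / partial t = 0), E is conserved; a "constant" of nature that drifts with time is exactly the kind of explicit dependence that breaks this.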

It doesn't matter whether or not the system is relativistic - certainly energy is still conserved in special and general relativity. But I'd be curious if you could elaborate on what you were thinking there.

0

cygx t1_jctjza4 wrote

Due to the metric expansion of space, the universe is not time-translation invariant at cosmological scales, hence no energy conservation via Noether's first theorem. However, Noether's second theorem still applies due to general covariance, and you get an 'improper' / 'strict' (terminology differs) conservation law for any time-like vector field (in the case of cosmological time, this yields the first Friedmann equation). These laws are non-covariant, though, as they include gravitational contributions that cannot be localized via a stress-energy tensor. It's somewhat similar to what happens to energy conservation in rotating frames of reference, except that there's no longer such a thing as an inertial frame that makes energy conservation manifest. Consequently, a large portion of physicists find it less confusing to just state that energy conservation doesn't hold for the universe at large.
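
For a concrete handle on this: what does survive in general relativity is covariant (local) conservation of the stress-energy tensor, which in a standard FRW cosmology reduces to the continuity equation (stated here as a sketch, not a derivation):

```latex
\nabla_\mu T^{\mu\nu} = 0
\;\;\Longrightarrow\;\;
\dot\rho + 3H\,(\rho + p) = 0 ,
\qquad H = \frac{\dot a}{a} .
```

For radiation (p = ρ/3) this gives ρ ∝ a^(-4), so the energy in a comoving volume (which grows as a^3) decreases as the universe expands; there is no fixed "total energy of the universe" left over to point to.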

5

Movpasd t1_jctnhaj wrote

Generally, energy within a system will tend to distribute itself until thermodynamic equilibrium is reached. But for a lot of the systems we study, it's a fair assumption that the system is coupled to an environment that acts as a large, empty energy sink. So that sink will tend to take all the energy until the system we're interested in ends up in its lowest-energy configuration.

For example, an electron orbiting an atom is coupled with the electromagnetic field, which is pretty empty for most situations. So if it's in an excited energy level, it will tend to dump that energy out as a photon until it reaches the ground state. But if the electromagnetic field is locally very active, with photons whizzing around everywhere, this approximation fails and you have to treat the electron's energy level statistically (like in a laser).
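
As a toy illustration of that last point (my own sketch, with rates measured in units of the spontaneous-emission rate): a two-level atom exchanging photons with a field whose average occupation per mode is n_bar settles into an excited-state fraction set by the field, and only ends up purely in the ground state if the field really is empty.

```python
# Toy two-level atom coupled to a photon field with mean occupation n_bar per mode.
# Rates, in units of the spontaneous-emission rate:
#   emission   (excited -> ground): 1 + n_bar   (spontaneous + stimulated)
#   absorption (ground -> excited): n_bar
# Balancing the two flows in steady state gives the excited-state population.
def excited_fraction(n_bar):
    up = n_bar          # absorption rate
    down = 1.0 + n_bar  # emission rate
    return up / (up + down)

for n_bar in (0.0, 0.1, 1.0, 100.0):
    print(f"n_bar = {n_bar:6.1f} -> excited fraction = {excited_fraction(n_bar):.3f}")
# n_bar = 0    -> 0.000 : empty field, the atom relaxes to the ground state
# n_bar large  -> ~0.5  : busy field, the populations have to be treated statistically
```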

Another factor is friction, which in very abstract terms could be defined as the tendency for energy to fall out of macroscopic degrees of freedom towards microscopic. That's what ultimately makes a stirred fluid stop sloshing around, with the energy being dissipated into smaller and smaller vortices until it simply becomes heat.

2

fr293 t1_jctq3cv wrote

What my man cygx said. But more generally, I wanted OP to articulate the principles that they were using to arrive at their conclusion. It’s a fool’s errand to answer a question without understanding the context that produced it.

1

ritobanrc t1_jd9jfft wrote

I think all of the other answers here are sort of missing the point -- you don't need to bring in entropy or statistical mechanics to answer this question.

When you've been taught that objects move to lower energy states, what is meant is they move to lower potential energy states. A ball rolls from the top of a hill (high gravitational potential energy) to the bottom of a hill (low gravitational potential energy). An electron moves from far away from the positively charged nucleus, towards the nucleus, going from high electric potential energy to low electric potential energy.

This is just how forces work. A force points in the direction of the negative gradient of the potential associated with it. The gravitational force points from high potential to low potential, the force of pressure in a fluid points from high pressure to low pressure -- the direction of a force is determined by the gradient of its potential (alternatively, you can go the other way: if you know the force vector, you can get back to the potential by integrating it).

If you're comfortable with some calculus, you can do the math for many common potentials very easily. The gravitational potential energy near Earth's surface is U = mgy, where y is the height, so the y-component of the gravitational force is just the negative of the derivative: Fg = -dU/dy = -mg. The electrostatic potential energy is U = k q1 q2 / r; if you differentiate with respect to r and take the negative, you get Fe = -dU/dr = k q1 q2 / r^2. A potential energy is defined such that the corresponding force points in the direction of decreasing potential energy.
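
If you want to sanity-check those two derivatives without doing them by hand, a couple of lines of sympy will do it (purely a verification of the formulas above):

```python
import sympy as sp

m, g, y = sp.symbols("m g y", positive=True)
k, q1, q2, r = sp.symbols("k q1 q2 r", positive=True)

# Gravity near Earth's surface: U = m*g*y  ->  F_y = -dU/dy
print(-sp.diff(m * g * y, y))          # prints -g*m

# Electrostatics: U = k*q1*q2/r  ->  F_r = -dU/dr
print(-sp.diff(k * q1 * q2 / r, r))    # prints k*q1*q2/r**2
```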

Energy is still conserved in these calculations, because the potential energy is just becoming kinetic energy -- there's no energy being lost.

2