
AgentHamster t1_j1t5m9r wrote

Reply to comment by Tioben in Second law of information dynamics by efh1

Let's imagine what would happen if all mutations were random, and every base had the same probability of turning into any other base. What would be the result of this? The sequence would be entirely randomized, with each nucleotide having a 25% probability of being any one of the four bases. This would be the state with the maximum amount of Shannon's entropy - 2N bits, where N is the DNA length in base pairs. The fact that you get a decrease in Shannon's IE suggests that this isn't what's happening - i.e., the mutations you see don't just scramble the genome into a randomized collection of the 4 bases, which would increase rather than decrease Shannon's IE.
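
Just to put numbers on that 2N ceiling, here's a quick back-of-the-envelope script (mine, not from the paper) - the 60/20/10/10 base frequencies are an arbitrary stand-in for a biased, non-random genome:

```python
import random
from collections import Counter
from math import log2

def shannon_entropy_bits_per_base(seq):
    """Shannon entropy of the base-frequency distribution, in bits per base."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

random.seed(0)

# A biased (non-random) sequence: base frequencies far from 25% each.
genome = "".join(random.choices("ACGT", weights=[0.6, 0.2, 0.1, 0.1], k=10_000))

# 'Scrambling' mutations: every base is replaced uniformly at random.
scrambled = "".join(random.choice("ACGT") for _ in genome)

H0 = shannon_entropy_bits_per_base(genome)      # well below 2 bits per base
H1 = shannon_entropy_bits_per_base(scrambled)   # ~2 bits per base
print(f"before: {H0:.3f} bits/base, total ~ {H0 * len(genome):.0f} bits")
print(f"after:  {H1:.3f} bits/base, total ~ {H1 * len(genome):.0f} bits (max = 2N = {2 * len(genome)})")
```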


Edit - I want to point out that the article (from my understanding) is rather confusing, since it uses two distinct arguments in each section of the paper. The first part of the paper is essentially claiming that information decreases due to the loss of coherent regions of the film - in other words, regions of the film no longer coherently encode 0 or 1, and thus can no longer be counted as information-encoding regions. This is despite the fact that the Shannon's entropy per region should actually increase. The second part of the paper deals with an example where this loss-of-regions argument can no longer be applied.

5

Tioben t1_j1t9c51 wrote

Thanks! Huh.

But, okay, suppose we had a magnetized array where all the magnets are entangled, so if one is up all are up. The system would only have a single bit of information. No matter which magnet you flip, the whole array can only express a single 1 or a 0.

If you could physically force them to disentangle, my intuition is that this would increase the thermodynamic entropy of the system, because the magnets are now more mixed. But wouldn't it also increase the informational entropy, because now you can express more independent bits?
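
Here's a toy calculation of what I mean, treating "entangled" as nothing more than perfectly correlated classical magnets (which is probably all the counting needs):

```python
from math import log2

def joint_entropy_bits(probabilities):
    """Shannon entropy (bits) of a joint distribution over array configurations."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

N = 8  # number of magnets

# Fully correlated array: only two possible configurations (all up / all down).
entangled = [0.5, 0.5]

# Fully independent magnets: all 2^N configurations equally likely.
independent = [1 / 2**N] * 2**N

print(joint_entropy_bits(entangled))    # 1.0 -> one usable bit
print(joint_entropy_bits(independent))  # 8.0 -> N independent bits
```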

(I mean, to be honest, not sure if answering this will make me any less confused.)

2

AgentHamster t1_j1th4l4 wrote

Yes, your intuition is correct, although I will admit the general idea of informational content (as measured by Shannon's entropy) increasing with increasing randomness seems a bit counterintuitive. If all you knew was that each magnet had a 50-50 chance of being up or down, Shannon's entropy would be maximized, since you would have the maximum number of possible configurations to encode your message. For example, there are 6 possible ways to have 2 ones and 2 zeros in a sequence of 4 digits (1100, 1010, 1001, 0110, 0101, 0011), which means you can encode 6 different options under that constraint. In contrast, if you knew all your magnets had to be in one particular configuration (for example, 1111), you would have far fewer combinations (only one in this case, so 0 information) with which to encode information.
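
In code form, the counting goes like this (just a quick enumeration I threw together, nothing from the paper):

```python
from itertools import product
from math import comb, log2

# All length-4 binary strings with exactly two 1s: the 6 configurations listed above.
two_ones = ["".join(map(str, bits)) for bits in product([0, 1], repeat=4) if sum(bits) == 2]
print(two_ones)    # ['0011', '0101', '0110', '1001', '1010', '1100']
print(comb(4, 2))  # 6 -> log2(6) ~ 2.58 bits of choice under that constraint

# Unconstrained 50/50 magnets: all 16 configurations available -> 4 bits.
print(log2(2**4))  # 4.0

# A single forced configuration (e.g. 1111): one option -> 0 bits.
print(log2(1))     # 0.0
```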

If you take this as is, you might argue that it goes against the general idea of their paper. However, it seems that they are defining the total informational entropy as Sinf = N*ln(2)*kb*H(X), where H(X) is Shannon's entropy and N is the number of domains encoding the information (?). While H(X) goes up with increasing randomness, the article claims N goes down (because the domains themselves are lost due to decoherence in the individual regions used to store information, if I am understanding correctly). It's honestly a little hard for me to tell, since they don't have an N or Sinf plot for Fig 2.
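
If it helps, here's what that definition does numerically - a minimal sketch where only the Sinf = N*ln(2)*kb*H(X) form is taken from my reading of the paper; the domain counts and probabilities are made up just to show how Sinf can drop even while H(X) rises:

```python
from math import log

KB = 1.380649e-23  # Boltzmann constant, J/K

def binary_shannon_entropy(p):
    """H(X) in bits for a two-state (0/1) domain with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log(p, 2) + (1 - p) * log(1 - p, 2))

def s_inf(n_domains, p_one):
    """Total informational entropy as defined above: Sinf = N * kb * ln(2) * H(X)."""
    return n_domains * KB * log(2) * binary_shannon_entropy(p_one)

# Illustrative numbers only:
# fresh film: many coherent domains, written mostly to one state -> low H(X)
print(s_inf(n_domains=1_000_000, p_one=0.9))
# aged film: per-domain H(X) has gone up, but far fewer domains still coherently store a bit
print(s_inf(n_domains=200_000, p_one=0.5))
```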

4