vatbub t1_j1ykyod wrote
Reply to [ELI5] How do online compression algorithms manage to take a file that is dozens of megabytes in size and shrink it down to just a few kilobytes, while maintaining the same quality? by Karamel43
In general, there are two types of compression: lossy and lossless.
With lossy compression, imperceptible details are thrown away; JPEG and MP3 are examples. You don't need every visual detail in a picture to see its beauty, much like most people don't need the highest frequencies to appreciate their music. Whatever is thrown away doesn't need to be stored, resulting in smaller file sizes.
On the other hand, lossless compression simply stores the same information in a cleverer, more efficient way. As an example, I might want to store the text "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", which takes up 30 characters. A compression algorithm might notice that this is the same letter repeated 30 times and store it as "30*A", which requires only 4 characters. When reading the file, a decompression algorithm simply does the inverse and outputs my 30 As again, so no information is lost. More advanced algorithms recognize more patterns, and some specialized formats (PNG for images, for instance) exploit particular characteristics of the data to store it even more compactly.
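The "30*A" trick above is known as run-length encoding, and it can be sketched in a few lines of Python. This is a toy version (the "count*char" text format is my own choice, and it assumes the input contains no digits or "*" characters), but it shows the round trip losing nothing:

```python
import re

def rle_encode(text):
    # Collapse each run of repeated characters into "count*char",
    # e.g. "AAAB" -> "3*A1*B".
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append(f"{j - i}*{text[i]}")
        i = j
    return "".join(out)

def rle_decode(encoded):
    # Inverse: expand every "count*char" pair back into its run.
    return "".join(int(n) * ch for n, ch in re.findall(r"(\d+)\*(.)", encoded))
```

Here `rle_encode("A" * 30)` returns `"30*A"` (4 characters instead of 30), and decoding gives back exactly the original 30 As. Note that run-length encoding only pays off when the data actually has long runs; on text like "ABC" it makes the file bigger, which is why real algorithms combine many such tricks.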