Submitted by midasp t3_y4elh1 in MachineLearning
Professional-Ebb4970 t1_isgm99j wrote
I strongly disagree with your first paragraph. There is still a lot of work to be done on lossless compression, and I don't believe we are as close to the Shannon Bound as you seem to imply.
For instance, recent methods use neural networks for lossless compression by combining ANS, bits-back coding, and VAEs, and they often achieve much better compression rates than traditional methods. For an example, check this paper: https://arxiv.org/abs/1901.04866
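To make the ANS part concrete, here is a minimal sketch of an rANS (range Asymmetric Numeral Systems) coder in Python. It covers only the entropy-coding building block, not the VAE or bits-back machinery from the paper; the coder state is kept as a single Python big integer rather than a renormalized bit stream, and the function names and symbol frequencies are illustrative, not taken from the paper's code.

```python
# Minimal rANS sketch (illustrative only): the state is one Python big integer,
# so renormalization and bit-stream output are not handled here.

def build_cumulative(freqs):
    """freqs: dict mapping symbol -> positive integer count.
    Returns (cum, total) where cum[s] is the cumulative count before s."""
    cum, total = {}, 0
    for s in freqs:
        cum[s] = total
        total += freqs[s]
    return cum, total

def rans_encode(message, freqs, cum, total):
    x = 1  # initial coder state
    # rANS pops symbols in the reverse order they were pushed,
    # so encode the message back to front
    for s in reversed(message):
        f, c = freqs[s], cum[s]
        x = (x // f) * total + (x % f) + c
    return x

def rans_decode(x, length, freqs, cum, total):
    out = []
    for _ in range(length):
        slot = x % total
        # linear scan is fine for a toy alphabet
        s = next(sym for sym in freqs if cum[sym] <= slot < cum[sym] + freqs[sym])
        x = freqs[s] * (x // total) + (slot - cum[s])
        out.append(s)
    return out

# Toy usage with made-up symbol counts (total = 8)
freqs = {"a": 5, "b": 2, "c": 1}
cum, total = build_cumulative(freqs)
msg = list("abacaba")
state = rans_encode(msg, freqs, cum, total)
print(state.bit_length(), "bits for", len(msg), "symbols")
assert rans_decode(state, len(msg), freqs, cum, total) == msg
```

Roughly, BB-ANS builds on this stack-like push/pop interface: a latent is popped from the ANS state using the VAE's approximate posterior (that's where the bits come "back" from), and then the data and latent are pushed onto the state using the likelihood and the prior.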