Submitted by starstruckmon t3_1027geh in MachineLearning
unkz t1_j2wzgf3 wrote
Reply to comment by matth0x01 in [R] Massive Language Models Can Be Accurately Pruned in One-Shot by starstruckmon
Perplexity is one of the key evaluation metrics for how well a language model understands language; lower is better. Pruning the model here actually decreases perplexity (makes the model better), which is interesting.
matth0x01 t1_j2x49gm wrote
Thanks - I think I got it. It's still not clear to me why language models use perplexity instead of log-likelihood, which is a monotonic function of perplexity.
From Wikipedia it seems that perplexity is measured in units of "words" rather than "nats/bits", which might be more interpretable.
Are there other advantages I'm overlooking?
unkz t1_j2x7ggd wrote
That’s basically it. Cross-entropy (the average negative log-likelihood per token) and perplexity are related by
Perplexity = 2^(cross-entropy in bits), or equivalently e^(cross-entropy in nats)
So the two main advantages are interpretability (perplexity is roughly how many words the model is effectively choosing between at each step) and scale (small changes in cross-entropy show up as large changes in perplexity).
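A minimal sketch of that relationship, with made-up per-token probabilities just for illustration:

```python
import math

# Hypothetical probabilities a language model assigns to the correct
# next token at each position (made-up numbers, not from the paper).
token_probs = [0.25, 0.10, 0.50, 0.05]

# Cross-entropy in bits per token: the average of -log2 p(token).
cross_entropy_bits = -sum(math.log2(p) for p in token_probs) / len(token_probs)

# Perplexity = 2^cross-entropy: roughly how many words the model is
# effectively choosing between at each step.
perplexity = 2 ** cross_entropy_bits

print(f"cross-entropy: {cross_entropy_bits:.2f} bits/token")  # ~2.66
print(f"perplexity:    {perplexity:.2f}")                     # ~6.32
```

Equivalently, perplexity is the geometric mean of 1/p over the tokens, which is where the "effective number of choices" reading comes from.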