reallyfuckingay t1_jd7m3b1 wrote
Reply to comment by [deleted] in The Internet Archive is defending its digital library in court today by OutlandishnessOk2452
Late reply. I think you're overestimating the reliability of these tools based on an anecdote. Google Lens can achieve that kind of accuracy on smaller pieces of text because it's been trained to guess what the next word will be based on the words that precede it; the OCR itself doesn't have to be perfect so long as the text follows a predictable pattern, which most real-life prose does.
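To give a rough idea of what I mean, here's a toy sketch of context-based correction (this is not Google Lens's actual pipeline, and the bigram counts are invented): when the raw OCR can't decide between two readings, the word that more often follows the preceding one wins.

```python
# Toy sketch: use word-pair (bigram) statistics to resolve ambiguous
# OCR output. The "corpus" and counts here are made up for illustration.
from collections import defaultdict

bigram_counts = defaultdict(int)
corpus = "the quick brown fox jumps over the lazy dog the lazy dog sleeps".split()
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[(prev, word)] += 1

def pick_candidate(prev_word, candidates):
    """Choose the OCR candidate that most often follows prev_word."""
    return max(candidates, key=lambda w: bigram_counts[(prev_word, w)])

# OCR couldn't decide between 'dog' and 'dag'; context settles it.
print(pick_candidate("lazy", ["dog", "dag"]))  # -> 'dog'
```

That's exactly why it works so well on ordinary prose: the statistics of everyday English do most of the heavy lifting, not the character recognition itself.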
When dealing with fictional settings, however, full of names and terms made up by the author, or words that are literary in nature and uncommon in colloquial English, that accuracy can drop quite significantly. It might mistake an obscure word for a much more common one with a completely different meaning, or parse speech that's been deliberately written with a nonstandard spelling as random gibberish.
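Here's a minimal sketch of that failure mode (the vocabulary and examples are hypothetical): snapping each OCR'd word to its nearest common English word fixes ordinary typos, but it clobbers an author's invented terms.

```python
# Minimal sketch: "correct" OCR output by snapping each word to the
# closest entry in a common-English vocabulary. The vocab is made up.
import difflib

common_vocab = ["small", "wizard", "through", "kingdom"]

def snap_to_vocab(word):
    """Replace word with its closest common-vocabulary match, if any."""
    matches = difflib.get_close_matches(word.lower(), common_vocab, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(snap_to_vocab("wizzard"))  # -> 'wizard', a genuine OCR fix
print(snap_to_vocab("smial"))    # -> 'small', Tolkien's invented word gets clobbered
```

The second case is the dangerous one, because the "corrected" output looks like perfectly valid English and sails right past a quick skim.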
I've used Tesseract to extract text from garbled PDFs in the past, and it still took a painstaking number of review passes to catch all the errors that seemed to fit the sentence at a glance but were actually different from the original. It definitely cuts down on the amount of work needed, but it still isn't feasible to instantly and accurately transcribe bodies of text as large as entire books; otherwise you'd see it being used much more often.
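For anyone curious, the kind of workflow I mean looks roughly like this, using pytesseract (a Python wrapper around Tesseract; the filename and the confidence threshold are arbitrary choices of mine). Instead of trusting the raw output, you flag low-confidence words for manual review, which reduces the work but still leaves a human in the loop:

```python
# Rough sketch: OCR a scanned PDF page by page and flag words that
# Tesseract itself isn't confident about, so a human can review them.
# Requires the tesseract binary and poppler installed on the system.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("garbled_scan.pdf", dpi=300)  # hypothetical file
for page_num, img in enumerate(pages, start=1):
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    for word, conf in zip(data["text"], data["conf"]):
        # Tesseract reports confidence 0-100; -1 marks non-word boxes.
        if word.strip() and float(conf) < 60:
            print(f"page {page_num}: check {word!r} (confidence {conf})")
```

Even with that, the subtle errors (the ones that happen to produce a real word) come back with high confidence, which is why the review passes take so long.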