Slypenslyde t1_jadc3l2 wrote
This is very hard to answer definitively because what information they track is secret. If they explained their algorithm for human detection, people would update their bots to look more like humans.
What we can glean from some discussions about it and some common sense is that a lot more is going on than just whether you click the right images or the check box. Sometimes you don't even have to click trains or crosswalks or non-civilian targets.
The code already knows a lot about the person you claim to be and the things you usually do. It's already made some guesses based on your IP, the information your browser gives up, the time of day, and what site you're trying to visit. All of that alone is probably enough to verify that you are the right person, but it's not enough to verify you aren't running a program working on your behalf to do things in a fashion the website owner doesn't want.
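To make that a little more concrete, here's a rough sketch in Python of what "combining passive signals into a score" could look like. The signal names, weights, and threshold are all made up for illustration; the real scoring is secret and far more sophisticated than this.

```python
# Hypothetical sketch: combine request signals into a "probably a bot" score.
# Signal names, weights, and threshold are invented for illustration; the real
# system is a secret, much larger statistical model.

def risk_score(request):
    score = 0.0
    if request.get("ip_on_known_proxy_list"):
        score += 0.4                      # data-center traffic looks suspicious
    if request.get("browser_fingerprint_seen_before"):
        score -= 0.3                      # a familiar browser looks more human
    if request.get("hour_of_day") in range(2, 5):
        score += 0.1                      # odd hours nudge the score slightly
    if request.get("has_session_cookie"):
        score -= 0.3                      # an established session helps
    return score

def needs_extra_challenge(request, threshold=0.2):
    # Above the threshold, show a harder challenge instead of just the checkbox.
    return risk_score(request) > threshold

print(needs_extra_challenge({"ip_on_known_proxy_list": True, "hour_of_day": 3}))  # True
```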
So it also tracks how your mouse moves to the checkbox when it's time to click. Bots sometimes move in a very "not natural" way, so it looks at the mouse movements to decide if a bot's involved. Maybe you used touch input instead: that still gives a lot of data about the "tap gesture", like the size of the tap, how long the finger stayed down, the shape of the tap, etc. Bots don't simulate that very well, and when they have to generate multiple taps they tend to create recognizable patterns.
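Here's a hedged illustration of the mouse-movement idea. Nobody outside knows the real heuristics, but this shows the kind of check that's at least possible: a script that teleports the cursor, or drags it in a perfectly even straight line, is easy to flag, while a human path wobbles and speeds up and slows down.

```python
# Hypothetical sketch: flag mouse paths that look too mechanical.
# A scripted cursor often jumps instantly or moves at a perfectly even speed;
# human movement wobbles and varies.

import math

def looks_scripted(points):
    """points: list of (x, y, timestamp_ms) samples leading up to the click."""
    if len(points) < 3:
        return True  # no real path at all: the cursor "teleported" to the target

    speeds = []
    for (x1, y1, t1), (x2, y2, t2) in zip(points, points[1:]):
        dt = max(t2 - t1, 1)
        speeds.append(math.hypot(x2 - x1, y2 - y1) / dt)

    mean = sum(speeds) / len(speeds)
    variance = sum((s - mean) ** 2 for s in speeds) / len(speeds)

    # Near-zero speed variance means robotically even motion.
    return variance < 1e-6

human_path = [(0, 0, 0), (14, 9, 18), (41, 30, 40), (80, 52, 70), (100, 60, 95)]
bot_path = [(0, 0, 0), (25, 15, 25), (50, 30, 50), (75, 45, 75), (100, 60, 100)]
print(looks_scripted(human_path), looks_scripted(bot_path))  # False True
```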
All of that is real squirrely. Sometimes you have to go through multiple rounds of "click the picture". That's probably when something about your input looks "not human enough", so the system wants to see more. Eventually you make it confident enough that it's dealing with a human, and it lets you through.
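A hedged guess at the overall flow, just to show the shape of "keep asking until you're confident enough" (the numbers and scoring are invented):

```python
# Hypothetical sketch of the "multiple rounds" loop: keep issuing challenges
# until confidence crosses a threshold, or give up.

def verify(passive_confidence, serve_challenge, threshold=0.9, max_rounds=3):
    confidence = passive_confidence          # score from the passive signals
    rounds = 0
    while confidence < threshold and rounds < max_rounds:
        confidence += serve_challenge()      # a clean "click the pictures" solve raises it
        rounds += 1
    return confidence >= threshold

# Example: started at 0.5, each solved round adds 0.25 -> passes after two rounds.
print(verify(0.5, lambda: 0.25))  # True
```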
(Let us also not forget Google sells products based on its image recognition AIs: a side goal of this program has always been presenting images their AI has trouble classifying to humans who can help train it to be better.)
The thing is, this is kind of like the lock on a house's front door. A person who's spent a few months practicing with lockpicks can get inside silently in less than 30 seconds. But even among criminals, only a small percentage invest that much effort, and the ones who do generally look for bigger scores than the average household contains. So a simple deadbolt is enough to keep out a large number of criminals, and the ones it doesn't deter tend to try noisier or more violent forms of entry, which are less likely to go undetected.
That's what bot detection does. Many bots just aren't sophisticated enough to pass the gate. The ones that are get slowed down by dealing with the process. Part of the goal of regulating bots is making sure bot traffic doesn't overwhelm sites and APIs, and slowing down bots is one way to accomplish it.
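Back-of-the-envelope with invented numbers, just to show how much even a modest challenge cuts a bot's throughput:

```python
# Invented numbers: a plain scripted request takes ~0.1 s, but a challenge the
# bot has to solve (or farm out to a human) adds ~10 s on top of it.

plain_rate = 1 / 0.1           # ~10 requests per second without a challenge
challenged_rate = 1 / 10.1     # ~0.1 requests per second with one
print(round(plain_rate / challenged_rate))  # ~101x fewer requests per second
```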
So no, it's not a perfect shield against bots and can sometimes even reject legitimate humans. But the purpose is to make it harder to use bots and to make the bots that work less efficient. It's good at doing that.