Submitted by iboughtarock t3_10kyso6 in Futurology
Nanohaystack t1_j5tizx0 wrote
Echolocation has been a thing for a while; it's just that the normal radio background made it impractical to develop deterministic echolocation techniques for heavily trafficked environments, though attempts were made as early as the 2000s. This is essentially the same thing we saw in The Dark Knight back in 2008. Using machine learning to process such massive amounts of data is what enabled this application of a well-known technology.
Sweeth_Tooth99 t1_j5tr7e3 wrote
Would you need to modify the router's firmware to be able to do that with it?
seamustheseagull t1_j5tvgv9 wrote
I'm going to guess that the "routers" part of the headline is mostly theoretical.
I didn't read the article, but if I had to guess, this was probably accomplished in a lab environment using multiple custom-built wireless access points and a load of number-crunching infrastructure behind the scenes to develop the 3D images.
This means that in theory, using a top-tier wireless mesh system with a special antenna configuration, the right firmware and a specific layout of access points, they could relay information to a central system that crunches the data into 3D layouts.
There is zero chance this is coming to your $50 Netgear home router next week.
Sweeth_Tooth99 t1_j5tw3io wrote
I thought maybe a hacker with the right software could remotely use a router to image whatever is near it.
seamustheseagull t1_j5u8dwb wrote
Not at this stage. A malicious firmware in future perhaps, but the hacker would still need 3 devices (I read the article :D) in the room, all with compromised firmware.
If this application proves to be useful, then they will likely continue building on it to allow partial imaging with two or even one device, as well as mapping of other objects besides people, and through walls and other objects which are permeable from a WiFi POV.
But what they've done on this pass is fundamentally a form of reverse triangulation: using the data from each of three vantage points to discover data points within their boundaries that can't be seen directly.
Think of it like 3 people each standing on top of a hill, looking at an object in front of them. They all relay information to a 4th person about what they can and can't see. The 4th person can then use this information (after a *lot* of calculations and line drawing) to draw a reasonably accurate 3D rendering of the object.
Actually, from a WiFi perspective it's more like there's a big object made of clear fluid between them, so they're telling the fourth person not only what they can see, but how clearly they can see it. Hence the need for an insane number of calculations that probably wasn't even feasible a decade ago.
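The hilltop analogy can be sketched numerically. This is my own toy illustration, not the paper's method: three observers at known positions each report a bearing to a target, and the "4th person" recovers the target by least-squares intersection of those bearing lines:

```python
import numpy as np

def locate(observers, bearings):
    """Least-squares intersection of bearing lines.

    observers: (N, 2) array of known observer positions
    bearings:  length-N array of angles (radians) from each observer to the target
    """
    # A bearing from observer o with direction (cos a, sin a) defines the line
    # n . p = n . o, where n = (-sin a, cos a) is the unit normal to that direction.
    normals = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    b = np.einsum("ij,ij->i", normals, observers)
    # With noisy or extra observations this is overdetermined, hence lstsq.
    target, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return target

observers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([4.0, 3.0])
bearings = np.arctan2(target[1] - observers[:, 1], target[0] - observers[:, 0])
print(locate(observers, bearings))  # recovers roughly (4, 3)
```

The real system works with signal strength and channel distortion rather than clean bearings, which is where the heavy number crunching comes in, but the geometric idea is the same.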
SsooooOriginal t1_j5vmpci wrote
Router, phone, pc, laptop, tablet, game console, IoT devices.
Practically any place with a wifi router is going to have two or more devices connected to it.
Most people, myself included, don't have much more than a glimpse of a clue as to how we can secure our own networks. Fuck.
myrddin4242 t1_j609sli wrote
Without multiple routers, the image would lack depth and perspective.
tuscanspeed t1_j5u7pmu wrote
> I didn't read the article,
>There is zero chance this is coming to your $50 Netgear home router next week.
Well..maybe you should.
>Researchers used three WiFi transmitters, such as those on a $50 TP-Link Archer A7 AC1750 WiFi router, positioned it in a room with several people, and successfully came up with wireframe images of those detected in the room.
LaserHammerXI t1_j5x48p2 wrote
Maybe you should read the paper. They use off-the-shelf routers and train on traffic data against two camera feeds. It's virtually free: no fancy capture hardware, and no heavy compute necessary.
Nanohaystack t1_j5ukavy wrote
You'd have to fiddle with the firmware in any case to get such capabilities, even if you weren't using the router itself for the computing. If you reeeeeaaaalllyyyy optimized a machine-learning model fitted precisely to the conditions of a particular room, it could be possible. There are wifi routers out there on the more expensive side with beefy CPUs, around 1 GB of memory, and room for a few hundred MB of firmware. Even stuff you can find off the shelf in a Best Buy now, like the Asus AX1800, carries 128 MB of flash; that's sufficient for a rudimentary machine-learning setup, though with its 256 MB of RAM and 4-core 1.5 GHz ARM Cortex it would be rather slow at training a model and would definitely need external storage for swap space.
If I were approaching such a task today, I'd use two or three access points as "sensors" with a jerry-rigged radio driver streaming raw data straight to a dedicated machine-learning setup. I've met tech whizzes in the business of optimizing trained neural networks, and they do some very impressive stuff, but even then, I'd be surprised if a run-of-the-mill home router CPU didn't burst into flames under all that load.
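The "stream raw data to a dedicated box" part could look something like this. The frame format here is entirely made up for illustration; the real capture would depend on whatever the hacked radio driver emits:

```python
import socket
import struct
from collections import defaultdict

# Hypothetical setup: each access point streams fixed-size frames of raw
# channel measurements over UDP to this collector. Assumed frame layout:
#   uint32 ap_id | uint32 seq | 64 float32 per-subcarrier amplitudes
FRAME = struct.Struct("<II64f")

def collect(port=9999, frames_wanted=1000):
    """Gather raw frames from the AP 'sensors' into per-AP buffers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    buffers = defaultdict(list)
    try:
        while sum(len(v) for v in buffers.values()) < frames_wanted:
            data, _addr = sock.recvfrom(FRAME.size)
            ap_id, seq, *amps = FRAME.unpack(data)
            buffers[ap_id].append((seq, amps))
    finally:
        sock.close()
    return buffers
```

The point is just that the routers only forward; all the model training happens on the dedicated machine at the other end of the socket.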
PM_ME_NUNUDES t1_j5v40mo wrote
It's probably gradient-descent-based inversion. You take measurements of the signal in the empty room as your baseline, then introduce one person, observe the difference in signal responses, and build a 3D forward model that accurately reconstructs the observed data given the perturbation of the signal by the human.
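A minimal toy version of that inversion loop, with an invented distance-based forward model standing in for the real channel physics (2D, one person, numeric gradients; purely illustrative):

```python
import numpy as np

rx = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # receiver positions

def forward(pos):
    """Toy forward model: the person's perturbation of the signal at each
    receiver falls off with distance (stand-in for real propagation physics)."""
    d = np.linalg.norm(rx - pos, axis=1)
    return 1.0 / (1.0 + d**2)

def invert(observed, steps=5000, lr=5.0):
    """Gradient descent on the misfit between modelled and observed perturbations."""
    pos = np.array([2.0, 2.0])  # initial guess
    eps = 1e-5
    for _ in range(steps):
        base = np.sum((forward(pos) - observed) ** 2)
        grad = np.zeros(2)
        for k in range(2):  # finite-difference gradient, one axis at a time
            p = pos.copy()
            p[k] += eps
            grad[k] = (np.sum((forward(p) - observed) ** 2) - base) / eps
        pos -= lr * grad
    return pos

true_pos = np.array([3.0, 1.0])
observed = forward(true_pos)  # baseline-subtracted measurements, in this toy
print(invert(observed))       # converges near (3, 1)
```

The real problem is the same loop at vastly larger scale: the unknown is a full 3D body shape rather than a point, and the forward model is actual multipath propagation, which is why it eats so much compute.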