Facial Recognition Drones Will Use AI to Take the Perfect Picture of You


Facial recognition technology has been banned by multiple US cities, including Portland, Boston, and San Francisco. Besides the very real risk of bias against minorities, the technology carries with it an uneasy sense that we’re creeping toward a surveillance state.

Despite these concerns, work to improve facial recognition tech is still forging ahead, with both private companies and governments looking to harness its potential for military, law enforcement, or commercial applications.

One such company is an Israeli startup called AnyVision Interactive Technologies. AnyVision is looking to kick facial recognition up a notch by employing drones for image capture. A US patent application published earlier this month outlines the company’s system, which sounds like something straight out of a Black Mirror episode.

The drone captures an image of its “target person,” then analyzes that image to determine how to capture a better one, adjusting its position relative to the target, say by flying a bit lower or centering its angle. It then captures more images and runs them through a machine learning model to get a “face classification” and a “classification probability score,” essentially trying to identify whether the person being photographed is in fact the person it’s looking for. If the probability score is too low, the system loops back to the first step, starting the capture-and-refinement process all over again.
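As a rough illustration of the loop the patent describes, here’s a minimal sketch in Python. Every name here (capture_image, classify_face, adjust_position, the 0.9 threshold) is a hypothetical placeholder invented for this example, not anything from AnyVision’s actual system or the patent text.

```python
# Hypothetical sketch of the capture-and-refine loop described in the patent.
# All functions are stand-in placeholders; real drone control and face
# recognition would replace them.

import random
from dataclasses import dataclass


@dataclass
class Classification:
    label: str    # predicted identity ("face classification")
    score: float  # confidence ("classification probability score")


def capture_image(drone) -> bytes:
    """Placeholder: take a photo of the target from the drone's current position."""
    return b"raw-image-bytes"


def classify_face(image: bytes) -> Classification:
    """Placeholder: run the image through a face-recognition model."""
    return Classification(label="target_person", score=random.uniform(0.5, 1.0))


def adjust_position(drone, image: bytes) -> None:
    """Placeholder: analyze the image and reposition the drone for a clearer shot,
    e.g. descend slightly or re-center the camera angle."""
    pass


def identify_target(drone, threshold: float = 0.9, max_attempts: int = 10):
    """Capture, classify, and reposition until the score clears the threshold."""
    for _ in range(max_attempts):
        image = capture_image(drone)
        result = classify_face(image)
        if result.score >= threshold:
            return result              # confident match: stop
        adjust_position(drone, image)  # low score: move and try again
    return None                        # give up after max_attempts


if __name__ == "__main__":
    print(identify_target(drone=None))
```

The key design point, as the patent describes it, is that the feedback loop keeps running until the classifier is confident, which is exactly what makes the system harder to evade than a fixed camera.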

If the thought of a drone programmed to move itself around in whatever way necessary to capture the clearest possible picture of your face doesn’t freak you out, you must not have seen much dystopian sci-fi, or you don’t cherish privacy as a basic right. Stationary cameras used for this purpose can at least be ducked under, turned away from, or quickly passed by; but a flying camera running on an algorithm that’s determined to identify its target is a different, and much more invasive, story.

The nightmare scenario is for technology like AnyVision’s to be employed by governments and law enforcement agencies. But the company says this is far from its intent; CEO Avi Golan told Fast Company that the picture-taking drones could be used for things like package delivery (to identify recipients and make sure the right person is getting the right package), or to help track employees for safety purposes in dangerous workplaces like mines. Golan added that there are “many opportunities in the civilian market” where AnyVision’s technology could be useful.

The company currently sells a “visual AI platform” that can be used for security purposes, like identifying a person of interest when he or she enters a store or building. It’s also marketing its product as a tool to help businesses and employees get back to work safely in the midst of the Covid-19 pandemic, writing that computer vision technology can help with contact tracing, contactless access, and remote authentication.

AnyVision was backed by Microsoft until 2019, when allegations arose that AnyVision’s technology was being used in a military surveillance project that tracked West Bank Palestinians. Microsoft has since not only stopped investing in startups working on facial recognition tech, but also stopped selling its own version of the technology to law enforcement agencies, with the company’s president vowing not to resume until national laws “grounded in human rights” are in place to govern its use.

What might such laws look like? How would we determine where and when—and on whom—it’s ok to use something like a drone that self-adjusts until it captures an unmistakable image of someone’s face?

AnyVision’s drone patent is still pending, but the company and many others are quietly advancing similar technologies even as public opposition to them grows. Granted, there are positive uses for these tools, but it’s hard to think of many where the convenience benefits outweigh the privacy invasion and other drawbacks.

Shankar Narayan, technology and liberty project director at the American Civil Liberties Union, summed up our reflexive opposition to this sort of technology well, saying, “The basic premise of a free society is that you shouldn’t be subject to tracking by the government without suspicion of wrongdoing. […] face surveillance flips the premise of freedom on its head and you start becoming a society where everyone is tracked no matter what they do all the time.”

Golan, for his part, is optimistic, acknowledging that there’s some distance to bridge between his company’s work and public acceptance. He told Business Insider, “I think it’s a futuristic technology, but I want to have it in my pocket until it becomes more accepted by humanity.”

Image Credit: Laurent Schmidt from Pixabay



* This article was originally published at Singularity Hub
