
Facial Verification Will Not Fight Fraud


With the US economy just beginning to recover from Covid-19 and millions still out of work, Congress has authorized expanded unemployment benefits that supplement state assistance programs. While it is commendable to support struggling Americans during an ongoing crisis, bad actors have made unemployment fraud a serious problem. Unfortunately, many states trying to stop fraud through surveillance are rolling out biased systems that can do far more harm than good. Predictably, these systems make mistakes, and when they do, they disproportionately punish BIPOC, trans, and gender-nonconforming Americans.

Twenty-one states have turned to high-tech biometric identity verification services that use computer vision to determine whether people are who they claim to be. This is the same technology that lets users unlock their phones with their faces – a one-to-one comparison in which the software decides whether your facial features match those stored in a single template. But while facial verification is common on consumer devices, it is relatively rare in government services. It should stay that way.

You might believe facial verification is harmless because most of the controversy so far has revolved around facial recognition. Police use facial recognition when they run an image of a suspect against a database of mugshots or driver's license photos, where an algorithm tries to find a match. Reliance on facial recognition technology has led police to wrongfully arrest at least three Black men, and there are likely many others.

But facial verification can also be biased. When errors occur, as they already have in California, they historically fall disproportionately on particular racial and gender groups. For benefits fraud programs, the government's reliance on facial verification creates a heightened risk that people of color, trans, and nonbinary applicants will have their claims slow-walked or even denied. These outcomes can make it hard to keep the lights on or a roof over your head. Even worse, law enforcement could wrongly interrogate vulnerable people because biased algorithms cast doubt on who they are. Such interactions could lead to unjustified arrest, prosecution, and denial of government benefits for people who have done nothing more than fail a flawed algorithmic test.


It is sadly predictable that government agencies will create the conditions for algorithmic injustice to persist. When Michigan launched its automated integrated data system in 2013, the initiative was billed as a success for flagging five times more cases of unemployment fraud and bringing in $65 million in new fines and fees. As it turned out, the software was unreliable. The system flagged tens of thousands of Michiganders, humans rubber-stamped the automated judgments, and the result was financial ruin for many, including bankruptcies.

Increased dependence on smartphone apps such as ID.me also raises the risk of a digital divide. Many lower-income Americans and seniors are in danger of being shut out of essential government services simply because they don't have a phone with a camera and a web browser.

As with any expansion of biometric analysis, there is a second, more powerful threat. Growing use of facial verification increasingly normalizes the expectation that our bodies should serve as a form of government ID. Every time the government embraces facial verification and recognition, it builds momentum for the technology to spread even further.

The good news is that fighting fraud does not require biometrics. Recognizing the alternatives requires acknowledging a significant underlying problem: Americans do not have a secure digital identity system. Patchwork responses mean that some systems use part or all of your Social Security number, an outcome that has turned Social Security numbers into a prized target for hackers. Other systems use credit card transactions and credit history. These approaches are both error-prone and exclusionary, especially since 7.1 million American households remain unbanked.

What is needed is a secure identity document, something like a digital driver's license or state ID that comes with a secure cryptographic key. That way, users could provide authenticating information without having to submit to automated systems riddled with bias and surveillance creep. Even if it's not a perfect solution, we shouldn't be looking for one. Any system that lets you conclusively prove your identity at any moment, such as a universal ID that everyone is required to carry, is a tool for mass surveillance. By adopting incremental digital ID strategies, we can mitigate the risk of benefits fraud and other forms of identity theft while preserving privacy, equity, and civil rights.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here, and see our submission guidelines here. Submit an op-ed to [email protected].

