Government is wildly unprepared for how AI can be abused by criminals



For years leading up to 2020, I warned that the next national emergency — like the ’08 financial crisis — would lead to billions in fraud losses. When COVID-19 hit, my warnings became our reality.  

Hundreds of billions of dollars were plundered from the coffers of vital government programs — rent relief, unemployment benefits, SNAP benefits and PPP loans became piggy banks for thousands of domestic and transnational cybercriminals.  

Then, when state-level labor departments realized that hundreds of billions of dollars’ worth of fraudulent unemployment claims were being paid out, many turned to facial recognition systems to verify the identities of claimants.  


I said as early as 2020 that AI-generated deepfakes would be used to circumvent those systems, and lo and behold, that’s exactly what’s happening.  


AI could generate synthetic identities matching the profile of legitimate Social Security beneficiaries, directing millions of dollars away from deserving recipients.  (Kevin Dietsch/Getty Images)

Criminals are now using our faces to steal from the government. They’re filing tax returns, submitting unemployment claims; they’re impersonating our voices, faces and identities, and it’s largely going undetected.  

Today, I am sounding the alarm again: AI, particularly generative AI, poses the greatest risk to the security of our most vital government agencies and entitlement programs that we’ve ever faced. Perhaps this time, our leaders will listen before disaster strikes. 

Sophisticated AI algorithms have the potential to enable large-scale fraud across multiple sectors. Trained on public or leaked data sets, they can predict the structure of sensitive information such as Social Security numbers, create synthetic identities, and generate fraudulent healthcare claims, defense contracts, tax returns and aid applications.  

The degree of accuracy can be alarmingly high, and AI-driven automation can further exacerbate the problem by overwhelming our detection and prevention systems with a deluge of fraudulent submissions. 

Consider Social Security benefits, a lifeline for millions of Americans. AI could generate synthetic identities matching the profile of legitimate beneficiaries, directing millions of dollars away from deserving recipients.  

In the realm of Medicare and Medicaid, AI could fabricate seemingly legitimate medical claims, leading to the loss of billions of dollars — funds intended to ensure that low-income families and seniors can access vital healthcare services.  

Similarly, our defense contracts aren’t immune; AI could generate bogus companies with convincing bids, diverting significant financial resources intended for national security. 

Tax collection, the backbone of our government funding, could also be compromised. Advanced AI algorithms could create complex tax returns designed to exploit loopholes and maximize fraudulent refunds.   

Artificial intelligence can be used for 'stalker-type purposes,' said Kevin Baragona, a founder of DeepAI.org

AI can be used as a tool to stalk unsuspecting victims. Generative AI can be a particular threat because it can create new types of content, from text and code to images and video. (Fox News)

Fortunately, while the risks here are significant, there is a silver lining: the same technology that empowers fraudsters can be harnessed to protect our systems. For example, multifactor authentication, combined with “behavioral biometrics,” offers a unique, sophisticated way to combat AI fraud that traditional methods can’t match.   

This technology doesn’t just look at static data — like an ID number or even a fingerprint — but instead analyzes the unique ways in which a person interacts with digital devices. It takes into account factors such as typing speed, mouse movement patterns, and even the way a smartphone is held.   

AI, no matter how advanced, lacks the human touch — it can’t convincingly mimic these deeply personal and nuanced human behaviors.  

For instance, let’s consider a fraudulent tax return filed using an AI-generated identity. An ordinary system might validate the return based on static identifiers, such as a synthetic Social Security number. However, a system equipped with behavioral biometrics would delve deeper, scrutinizing the way the data was input — the timing of keystrokes, the rhythm of typing. 

This is where AI, in its emulation of human behavior, would falter and raise a red flag. In this way, behavioral biometrics provide an additional, critical layer of defense in our fight against AI-driven fraud.  
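To make the idea concrete, here is a minimal sketch of one behavioral-biometrics signal: keystroke timing. The article describes no specific system, so the function name, thresholds and scoring formula below are purely illustrative assumptions. The intuition is that human typing has an irregular rhythm, while scripted or replayed input tends to be suspiciously uniform.

```python
# Illustrative sketch only: a crude keystroke-timing "bot likeness" score.
# All thresholds here are hypothetical, not from any real fraud system.
from statistics import mean, stdev

def keystroke_risk_score(key_times_ms):
    """Return a rough 0-1 risk score from keystroke timestamps (milliseconds).

    Higher = more bot-like. Humans rarely type with near-constant rhythm,
    so a coefficient of variation near zero suggests automated entry.
    """
    if len(key_times_ms) < 3:
        return 0.5  # too little signal to judge either way
    # Gaps between consecutive keystrokes
    intervals = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    avg = mean(intervals)
    spread = stdev(intervals)
    cv = spread / avg if avg else 0.0  # relative variability of the rhythm
    # 0.3 is a hypothetical cutoff: variability at or above it reads as human.
    return max(0.0, 1.0 - min(cv / 0.3, 1.0))

# A scripted form-filler pressing a key exactly every 50 ms scores 1.0
# (maximally bot-like); a human's uneven rhythm scores near 0.
bot_score = keystroke_risk_score([0, 50, 100, 150, 200, 250])
human_score = keystroke_risk_score([0, 80, 210, 265, 470, 540])
```

A real deployment would combine dozens of such signals (mouse movement, device tilt, dwell time) in a trained model rather than a single hand-set cutoff, but the principle is the same: score the behavior, not just the credentials.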

The point is: AI systems can detect patterns of fraud and anomalies in data that are often overlooked by human reviewers — improbable combinations of age, income, employment history, or other personal details can be flagged.   

We can use AI to monitor the volume and pattern of applications, identifying an unusual influx or pattern that could indicate automated fraud. By acting proactively, we can mitigate large-scale damage. 
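A toy version of that volume monitoring can be sketched in a few lines. The window size and threshold below are assumptions for illustration; the point is simply that a sudden spike of claims against a steady baseline is cheap to detect automatically.

```python
# Illustrative sketch: flag days whose application count deviates sharply
# from the recent baseline, a possible sign of automated bulk submissions.
# Window size and z-score threshold are hypothetical choices.
from statistics import mean, stdev

def flag_unusual_days(daily_counts, window=7, z_threshold=3.0):
    """Return indices of days whose count is a high outlier vs the prior window."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a perfectly flat baseline
        z = (daily_counts[i] - mu) / sigma  # standard deviations above normal
        if z > z_threshold:
            flagged.append(i)
    return flagged

# Example: roughly 100 claims a day, then a sudden burst of 450 on day 8.
history = [98, 102, 97, 105, 99, 101, 100, 103, 450, 99]
suspicious = flag_unusual_days(history)  # flags the spike day
```

An agency system would layer richer checks on top (shared IP addresses, identical document templates, impossible applicant details), but even this simple statistical tripwire would catch the kind of flood that automated fraud produces.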


I can’t stress enough that any agency leader not already considering the impact of AI on their fraud detection and prevention systems is likely already falling victim to these types of sophisticated scams.  

It is no longer a question of if, but rather when and how severely, these AI-powered threats will impact their agencies. We must acknowledge the stark reality that AI fraud is not a distant threat, but one that’s knocking at our door. 

There’s no silver bullet here. Combating AI fraud will require a concerted and coordinated effort across agencies, a deep commitment to ongoing innovation, and a willingness to invest in advanced technologies like behavioral biometrics.   


We need to start thinking of fraud prevention not just as an administrative function, but as a critical aspect of our national security. By elevating this issue to the strategic level and fostering an open, robust dialogue about it, we can empower ourselves to stay one step ahead of those who seek to exploit our systems.   

The future of our nation’s integrity and the security of our citizens depend on it.  


