
The Pentagon Reinforces Its AI Systems by Hacking Itself


The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that without proper care, the technology could hand enemies a new way to attack.

The Joint Artificial Intelligence Center (JAIC), created by the Pentagon to help the U.S. military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge with using AI for military ends. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. A separate cybersecurity team examines AI code and data for hidden vulnerabilities.

Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way of writing computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules from data. The catch is that this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
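
To make that distinction concrete, here is a minimal sketch, with invented toy data and a scikit-learn classifier standing in for a real system: the same “truck versus car” rule, once written by hand and once derived from labeled examples.

```python
# Minimal sketch: instead of hand-coding a rule, a model derives one from data.
# The data and threshold below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hand-written rule: call a vehicle a "truck" if it is longer than 6 meters.
def rule_based(length_m: float) -> str:
    return "truck" if length_m > 6.0 else "car"

# Learned rule: the classifier infers a similar threshold from labeled examples.
lengths = [[3.8], [4.2], [4.5], [7.1], [8.0], [9.3]]  # vehicle lengths in meters
labels = ["car", "car", "car", "truck", "truck", "truck"]

model = DecisionTreeClassifier(max_depth=1).fit(lengths, labels)
print(model.predict([[5.0], [7.5]]))  # the learned split lives in the data, not the code
```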

“For some applications, machine learning software is just a bazillion times better than traditional software,” says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning “also breaks in ways that are different from traditional software.”

A machine learning algorithm trained to recognize certain vehicles in satellite images, for example, might also learn to associate the vehicle with a certain color in the surrounding scenery. An adversary could potentially trick the AI by changing the scenery around its vehicles. With access to the training data, the adversary might also be able to plant images, such as a particular symbol, that would confuse the algorithm.
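
A hypothetical sketch of that training-data attack, using invented NumPy stand-ins for real imagery: the attacker stamps a small trigger symbol onto a slice of the training set and relabels those examples.

```python
# Hypothetical data-poisoning sketch: an attacker with access to the training
# set plants a small "trigger" symbol and flips labels, so the model learns to
# key on the symbol rather than the vehicle. All data here is invented.
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Overlay a bright 3x3 patch (the planted symbol) in one corner."""
    poisoned = image.copy()
    poisoned[:3, :3] = 1.0
    return poisoned

# Toy 16x16 grayscale "satellite tiles" standing in for real training data.
clean_images = rng.random((100, 16, 16))
clean_labels = rng.integers(0, 2, size=100)      # 0 = background, 1 = vehicle

# Poison 10 percent of the set: add the trigger and force the "vehicle" label.
poison_idx = rng.choice(100, size=10, replace=False)
images, labels = clean_images.copy(), clean_labels.copy()
for i in poison_idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = 1

# Any model trained on (images, labels) can now be steered at inference time:
# stamping the same symbol on an input pushes it toward the "vehicle" class.
```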

Allen says the Pentagon follows strict rules about the reliability and safety of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update the DoD’s standards around software to include issues with machine learning.

AI is transforming the way some companies operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have a machine learning algorithm pore over thousands or millions of previous sales and devise its own model for predicting who will buy what.
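
A minimal sketch of that idea, assuming invented feature names and toy sales records rather than any real pipeline:

```python
# Illustrative purchase-prediction sketch with invented features and data;
# a real system would learn from far richer sales history.
from sklearn.linear_model import LogisticRegression

# Each row: [customer_age, past_purchases, days_since_last_visit]
X = [[25, 3, 10], [40, 12, 2], [31, 0, 90], [52, 8, 5], [19, 1, 30]]
y = [0, 1, 0, 1, 0]  # 1 = bought the product, 0 = did not

# Rather than hand-writing "if past_purchases > 5 then likely buyer",
# the model fits its own weighting of the features from historical sales.
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[35, 6, 7]])[0][1])  # estimated purchase probability
```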

The U.S. and other militaries see similar advantages, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has stoked a sense of urgency within the Pentagon about adopting AI. Allen says the DoD is moving “in a responsible way that prioritizes safety and reliability.”

Researchers are developing ever more creative ways to hack, subvert, or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of “adversarial attack” involves tweaking the input to a machine learning algorithm to find small changes that cause large errors.
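
One widely studied version of this is the fast gradient sign method. The sketch below uses PyTorch with an untrained stand-in model, since the point is the shape of the attack rather than a working exploit.

```python
# Minimal fast-gradient-sign-method sketch: find a small input change that
# increases the model's error. The model here is an untrained stand-in.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)   # stand-in for a trained image classifier
image = torch.rand(1, 784, requires_grad=True)
true_label = torch.tensor([3])

# Compute the loss gradient with respect to the input pixels...
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# ...then nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05                     # small enough to be hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
```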

Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already a problem in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. “Of course there’s an attacker who wants to bypass the system,” she says. “I think we’ll see more of these kinds of problems.”

A simple example of a machine learning attack involved Tay, Microsoft’s scandal-prone chatbot, which debuted in 2016. The bot used an algorithm that learned how to respond to new queries by examining previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
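
A greatly simplified sketch of that feedback loop, with a hypothetical bot that naively adds every message it sees to its own repertoire:

```python
# Greatly simplified sketch of the Tay-style failure mode: a bot that learns
# from raw user input can be steered by coordinated hostile users.
import random

response_pool = ["Hello!", "Nice to meet you."]

def respond(user_message: str) -> str:
    # Naive online learning: every incoming message becomes a candidate reply.
    response_pool.append(user_message)
    return random.choice(response_pool)

# Coordinated users flooding the bot shift what it says to everyone else;
# that is the shape of the exploit, stripped of all the real-world detail.
```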

