Security Experts Predict AI Will Change Cyberthreat Landscape

By Jacqueline Lee

Artificial intelligence offers myriad benefits for enterprises — but in the wrong hands, it may also give cyberthreat actors an unexpected edge. According to a new Cylance survey, 62 percent of security experts believe that AI will be weaponized for cyberattacks within the next year.

When it comes to infiltrating defenses, the greatest advantage of artificial intelligence is its relentlessness. Unlike people, computers never tire of penetration testing, experimenting with social engineering emails or guessing passwords in a brute-force attack, and they can execute these operations at a far higher volume than any human actor. By processing data from past attack attempts, machine learning also lets an attacker predict which attack methods are most likely to succeed.
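To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch that models it as a simple epsilon-greedy bandit: try methods, record outcomes and gradually favor whatever succeeds most often. The attack categories and success probabilities are invented for simulation only; nothing here reflects a real toolkit.

```python
import random

# Hypothetical attack categories; the "true" success probabilities
# below are made up purely to drive the simulation.
METHODS = {
    "phishing_email": 0.30,
    "password_bruteforce": 0.05,
    "misconfig_probe": 0.15,
}

def simulate_attempt(method):
    """Simulate one attempt; True means the toy attack 'succeeded'."""
    return random.random() < METHODS[method]

def epsilon_greedy(trials=10_000, epsilon=0.1):
    """Learn from attempt outcomes which method pays off most often."""
    counts = {m: 0 for m in METHODS}
    successes = {m: 0 for m in METHODS}
    for _ in range(trials):
        if random.random() < epsilon:
            # Explore: occasionally try a random method.
            method = random.choice(list(METHODS))
        else:
            # Exploit: pick the method with the best observed rate so far.
            method = max(
                METHODS,
                key=lambda m: successes[m] / counts[m] if counts[m] else 0.0,
            )
        counts[method] += 1
        successes[method] += simulate_attempt(method)
    return {m: successes[m] / counts[m] for m in METHODS if counts[m]}

if __name__ == "__main__":
    # Estimated success rates converge toward the hidden "true" values,
    # which is the article's point: the machine never tires of trying.
    print(epsilon_greedy())
```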

At the same time, cognitive computing is the most logical tool to respond to attacks launched by other machines. Cognitive computers can effectively filter incoming email, detect unauthorized configuration changes and create spur-of-the-moment honeypots for possible attackers. Put simply, no human can match the speed at which these computers can learn and execute.
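Detecting unauthorized configuration changes at machine speed starts with something far simpler than cognitive computing: a known-good baseline. The sketch below, with hypothetical file paths, shows only that hashing primitive; a cognitive system would layer anomaly scoring and automated response on top of it.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical watched files; in practice this list would come from policy.
WATCHED = ["/etc/ssh/sshd_config", "/etc/passwd"]
BASELINE_FILE = Path("baseline.json")

def fingerprint(path):
    """Return the SHA-256 digest of a file's current contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_baseline():
    """Snapshot known-good hashes for every watched file."""
    BASELINE_FILE.write_text(
        json.dumps({p: fingerprint(p) for p in WATCHED})
    )

def detect_changes():
    """Compare current hashes against the baseline; return drifted paths."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return [p for p in WATCHED if fingerprint(p) != baseline.get(p)]

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        record_baseline()  # first run: snapshot the known-good state
    changed = detect_changes()
    if changed:
        print("Configuration drift detected:", changed)
```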

As of now, few individual attackers possess the cognitive applications and sheer computing power to mount a successful AI-driven cyberattack. Nation-state actors, however, could be another story. Roman Yampolskiy, an associate professor at the University of Louisville’s Speed School of Engineering, writes in the Harvard Business Review that this oppositional paradigm risks creating a cybersecurity arms race.

Protecting AI

Artificial intelligence isn’t just deployed on both sides of a network attack; it’s also a potential attack vector itself. In a report called “Preparing for the Future of Artificial Intelligence,” the White House stated that people working with the technology should safeguard data integrity, protect privacy and ensure the availability of cognitive applications, particularly as cognitive computing takes on a larger role in public and private sector functions.

The White House report defines narrow AI as cognitive computing applied to specific tasks, such as piloting autonomous vehicles. Yampolskiy says that as cognitive computing expands from this form toward a possible future brain-machine interface, protecting cognitive applications won’t just be about protecting national security or proprietary business data — it will be about protecting free speech and the deepest levels of personal privacy.

Organizations like the Department of Defense (DoD) are increasingly investigating cognitive applications. According to Quartz, the U.S. military uses robotic systems in drones and for tasks like bomb disposal, but these robots are currently remotely operated by human military personnel. The DoD also hopes to use cognitive systems not only to increase supply chain efficiency and security but also to design sophisticated cognitive security systems that can anticipate attacks before they happen.

As cognitive systems achieve greater autonomy in military environments, protecting them from cyberattacks becomes critical. Defenders have no choice but to enlist AI on their side as it becomes an essential tool for protecting personal, business and government interests.


About The Author

Jacqueline Lee

Freelance Writer

Jacqueline Lee specializes in business and technology writing, drawing on over 10 years of experience in business, management and entrepreneurship. Currently, she blogs for HireVue and IBM, and her work on behalf of client brands has appeared in Huffington Post, Forbes, Entrepreneur and Inc. Magazine. In addition to writing, Jackie works as a social media manager and freelance editor. She's a member of the American Copy Editors Society and is completing a certificate in editing from the Poynter Institute.
