Pentagon sees more AI involvement in cybersecurity


As the Pentagon's Joint Regional Security Stacks program moves forward with efforts to reduce the server footprint, integrate regional data networks and facilitate improved interoperability between previously stove-piped data systems, IT developers see cybersecurity efforts moving quickly toward increased use of artificial intelligence (AI) technology.

"I think within the next 18 months, AI will become a key factor in helping human analysts make decisions about what to do," former DOD Chief Information Officer Terry Halvorsen said.

As technology and advanced algorithms progress, new autonomous programs able to perform a wider range of functions by themselves are expected to assist human programmers and security experts defending DOD networks from intrusions and malicious actors.

"Given the volume and where I see the threat moving, it will be impossible for humans by themselves to keep pace," Halvorsen added.

Much of the conceptual development surrounding this AI push hinges on the recognition that computers are often faster and more efficient at performing procedural functions; at the same time, many experts maintain that human cognition remains essential for solving problems and responding to fast-changing, dynamic situations.

However, in some cases, industry is already integrating automated computer programs designed to be deceptive, giving potential intruders the impression that what they are probing is human activity.

For example, executives from the cybersecurity firm Galois are working on a more sophisticated version of the "honey pot" tactic, which creates an attractive-looking target for attackers in order to glean information about them.

"Honey pots are an early version of cyber deception. We are expanding on that concept and broadening it greatly," said Adam Wick, research head at Galois.
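A minimal illustration of the basic honey pot idea follows; the port, banner and log format are invented for the example and this is not Galois's system. The sketch listens for probes, advertises a fake service and records whatever the intruder sends.

```python
# Minimal honey pot sketch (illustrative only): present a fake service banner
# and log connection attempts so defenders can study the attacker's behavior.
import datetime
import socket

FAKE_BANNER = b"220 files.internal.example FTP ready\r\n"  # hypothetical decoy service

def run_honeypot(host="0.0.0.0", port=2121, logfile="honeypot.log"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn, open(logfile, "a") as log:
                conn.sendall(FAKE_BANNER)
                conn.settimeout(5)
                try:
                    data = conn.recv(4096)
                except socket.timeout:
                    data = b""
                # Record who connected and what they tried -- the "glean
                # information about them" step described above.
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{addr[0]}:{addr[1]} sent {data!r}\n")

if __name__ == "__main__":
    run_honeypot()
```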

A key element of these techniques uses computer automation to replicate human behavior, confusing a malicious actor who hopes to monitor or gather information from traffic moving across a network.

"The goal is to generate traffic that misleads the attacker, so that the attacker cannot figure out what is real and what is not real," he added.

The method generates "very human-looking web sessions," Wick explained. An element of this strategy is to generate automated, fake traffic to mask web searches and servers so that attackers do not know what is real.
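A rough sketch of what such decoy traffic generation could look like in practice appears below; the URLs and timing model are assumptions for illustration, not Galois's actual tooling. The point is that requests are paced like a person browsing, so real sessions are harder to pick out of the noise.

```python
# Decoy-traffic sketch: issue automated web requests with human-like pacing
# so that genuine activity is hidden among plausible-looking junk sessions.
import random
import time
import urllib.request

DECOY_URLS = [  # hypothetical internal pages an attacker might be watching
    "http://intranet.example/search?q=quarterly+report",
    "http://intranet.example/wiki/VPN+setup",
    "http://intranet.example/hr/benefits",
]

def browse_like_a_human(rounds=10):
    for _ in range(rounds):
        url = random.choice(DECOY_URLS)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
        except OSError:
            pass  # decoy endpoints may not resolve; the traffic pattern is the point
        # Pause for a "think time" drawn from a skewed distribution, which looks
        # more like a person reading a page than a fixed-interval bot.
        time.sleep(random.lognormvariate(1.0, 0.6))

if __name__ == "__main__":
    browse_like_a_human()
```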

"Fake computers look astonishingly real," he said. "We have not, to date, been successful in always keeping people off of our computers. How can we make the attacker's job harder once they get to the site, so they are not able to distinguish useful data from junk?"

Using watermarks to track the cyber behavior of malicious actors is another aspect of this more offensive strategy to identify and thwart intruders.
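One common way to read the watermarking idea is to tag decoy data with tokens derived from a secret key, so defenders can recognize when and where that data later resurfaces. The sketch below is a generic illustration under that assumption, not the specific technique described in the article; the key and decoy names are placeholders.

```python
# Watermarking sketch: derive a short token from a secret key and a decoy ID.
# If the token later appears in outbound traffic, that decoy was exfiltrated.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"  # assumption: key held only by defenders

def watermark(decoy_id: str) -> str:
    """Return a short hex token tied to a specific decoy file or record."""
    return hmac.new(SECRET_KEY, decoy_id.encode(), hashlib.sha256).hexdigest()[:16]

def is_our_watermark(decoy_id: str, token: str) -> bool:
    """Check whether a token observed on the network matches one of our decoys."""
    return hmac.compare_digest(watermark(decoy_id), token)

# Example: embed watermark("payroll-decoy-07") inside a fake spreadsheet and
# alert whenever that token is spotted leaving the network.
```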

"We can't predict every attack. Are we ever going to get to where everything is completely invulnerable? No, but with AI, we can change the configuration of a network faster than humans can," Halvorsen added.

The concept behind the AI approach is to isolate a problem, reroute around it, and then destroy the malware.
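Expressed as an automated playbook, that sequence might look like the following sketch; every helper here is a hypothetical stand-in for whatever orchestration a real network would use (an SDN controller, endpoint agents and so on), not an actual DOD workflow.

```python
# Schematic of the isolate / reroute / destroy loop described above.
def isolate(host: str) -> None:
    print(f"quarantining {host}: blocking its traffic at the switch")

def reroute(host: str) -> None:
    print(f"rerouting services away from {host} to healthy nodes")

def destroy_malware(host: str) -> None:
    print(f"wiping and reimaging {host}, removing the implant")

def respond(alert: dict) -> None:
    """Automated response: reconfigure the network faster than a human could."""
    host = alert["host"]
    isolate(host)          # 1. cut the compromised node off
    reroute(host)          # 2. keep the mission running around it
    destroy_malware(host)  # 3. clean up the infection itself

if __name__ == "__main__":
    respond({"host": "10.0.4.17", "severity": "high"})  # example alert
```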

About the Author

Kris Osborn is editor-in-chief of Defense Systems. He can be reached at kosborn@1105media.com.

