Cybersecurity AI is ready for prime time: why the skeptics are wrong – fifthdomain.com

Federal leaders looking at artificial intelligence offerings to strengthen the cybersecurity of their systems are often met with a confusing array of hype that leads to misunderstanding and all too often to inertia. And, as government decision-makers are well aware, cyber threats against public sector systems are increasing daily and growing in sophistication.

Unfortunately, overhype about artificial intelligence in cybersecurity only reinforces our human tendency to resist change. Remember how government IT leaders were slow to see the real benefits of cloud technology?

In just the same way, some federal agency IT experts, even in the face of rising threats to their systems, remain reluctant to examine the commercial off-the-shelf (COTS) applications using AI at scale.

Perhaps a brief review of what cybersecurity AI is and is not will be helpful. For starters, much of the confusion (and often inadvertent misinformation) centers on descriptions of how AI is used.

Cyber AI is not big data alone. Machine learning is not possible with deficient data sets. With consumer-facing AI-based tools such as voice-activated home assistants like Amazon's Alexa, the Google Assistant or Apple's Siri, we see how large data sets of consumer behavior ("Alexa, tell me an apple pie recipe") leverage forms of AI known as deep machine learning or artificial narrow intelligence.

Similarly, for cyber AI, training the data set is essential. Ideally, these are solutions that can learn, train, and reliably identify constantly moving threats such as complex malware and other fileless attack patterns that are increasingly common. It's critical to remember that AI is not a panacea: yes, effectively training AI algorithms at scale can prevent future attacks, but the human element is still necessary to thwart cyber actors.

Cyber AI also is not laboratory AI alone. One of the clearest distinctions between cyber and other types of AI is whether its functionality can be accomplished in the real world outside the perfect conditions of the laboratory setting. For instance, claims about accuracy and false positive rates should always be interrogated in light of sample sizes. As an example, an AI model that learns only about breach attempts in the financial sector cannot be adequately applied to the intricacies of guarding protected health information in hospitals.

A solution is only as good as the data


For cybersecurity AI to meet the challenges facing federal IT leaders, the data must be relevant to the evolutionary nature of the threatscape, the increasing demands that the agency's mission is placing on its systems, and the risks posed by the human element within the agency's walls.

For example, it is well understood that many cyber breaches result from human error. A good cyber AI solution can analyze human behavior to anticipate mistakes and correct them proactively as part of its scanning and response functions. To that end, data must be constantly refreshed in order to keep pace with the agency's requirements, addressing both the internal environment and changes in external threat conditions.

Our experience tells us that the power of cyber AI is unleashed by two things above all: relevant data and speed.

The Need for Speed

Equally important, as we found in our 2019 Global Threat Report, is speed.

The report identified that breakout times (the time it takes an adversary to move beyond their initial foothold within a network to when they successfully gain broader access) of the most dangerous groups targeting U.S. government agencies have continued to shrink year over year. Russian-identified hacker groups led the way with a breakout time of less than 19 minutes.

These shrinking attack windows bolster the case for the 1-10-60 Rule: One minute to detect an incident or intrusion; 10 minutes to determine if the incident is legitimate and determine next steps (containment, remediation, etc.); and 60 minutes to eject the intruder and clean up the network.
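As a rough illustration, the 1-10-60 Rule can be expressed as a simple timing check against incident timestamps. This is a hypothetical sketch, not CrowdStrike tooling; the phase names and timestamps are illustrative:

```python
from datetime import datetime, timedelta

# Windows from the 1-10-60 Rule: detect in 1 minute,
# triage in 10 minutes, remediate in 60 minutes.
THRESHOLDS = {
    "detect": timedelta(minutes=1),
    "triage": timedelta(minutes=10),
    "remediate": timedelta(minutes=60),
}

def check_1_10_60(intrusion_start, detected_at, triaged_at, remediated_at):
    """Return, for each phase, whether the team met its 1-10-60 window."""
    return {
        "detect": detected_at - intrusion_start <= THRESHOLDS["detect"],
        "triage": triaged_at - intrusion_start <= THRESHOLDS["triage"],
        "remediate": remediated_at - intrusion_start <= THRESHOLDS["remediate"],
    }

# Illustrative incident: detected in 45 seconds, triaged in 8 minutes,
# but full remediation took 75 minutes and misses the 60-minute window.
t0 = datetime(2019, 6, 1, 9, 0, 0)
result = check_1_10_60(
    intrusion_start=t0,
    detected_at=t0 + timedelta(seconds=45),
    triaged_at=t0 + timedelta(minutes=8),
    remediated_at=t0 + timedelta(minutes=75),
)
print(result)
```

Note that all three windows are measured from the start of the intrusion, which is why a sub-19-minute breakout time leaves so little margin for the triage and remediation phases.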

Taking cybersecurity to the next level, as described in this perhaps deceptively simple rule, is possible. The cybersecurity AI solutions that can help accomplish this objective must harness the power of vast data sets in a shared cloud environment, set up to collect, analyze and interpret events in real time. No overhype: just the right data, smart vision, and a mission to stop breaches faster.

James Yeager is vice president for public sector and healthcare at CrowdStrike.
