Artificial intelligence (AI) excels at finding patterns, like unusual human behavior or abnormal incidents. It can also reflect human flaws and inconsistencies, including 180 known types of bias. Biased AI is everywhere, and like humans, it can discriminate on the basis of gender, race, age, disability and ideology.
AI bias has enormous potential to negatively affect women, minorities, the disabled, the elderly and other groups. Computer vision produces more false-positive facial identifications for women and people of color, according to research by MIT and Stanford University. A recent ACLU experiment found that nearly 17 percent of professional athlete photos were falsely matched to mugshots in an arrest database.
Biased algorithms are linked to discrimination in hiring practices, performance management and mortgage lending. Consumer AI products frequently contain microinequities that create barriers for users based on gender, age, language, culture and other factors.
Sixty-three percent of organizations will deploy artificial intelligence in at least one area of cybersecurity this year, according to Capgemini. AI can scale security and augment human skills, but it can also create risks. Cybersecurity AI requires diverse data and context to act effectively, which is only possible with diverse cyber teams who recognize subtle examples of bias in security algorithms. The cybersecurity diversity problem isn't new, but if left unchecked, it's about to create huge issues with biased cybersecurity AI.
"Put simply, AI has the same vulnerabilities as people do," wrote Greg Freiherr for Imaging Technology News. Algorithms are built on sets of business logic rules written by humans. AI can be developed to perpetuate deliberate bias or, more often, it mirrors unconscious human assumptions about security risks.
Everyone has unconscious biases that inform judgment and decision-making, including AI developers. Humans tend to have a shallow understanding of other demographics and cultural groups, and the resulting prejudices can shape AI logic for security in many areas, including traffic filtering and user authentication. Language biases can shape natural language processing (NLP) rules, including spam filtering.
Business logic is a permanent part of an AI's DNA, no matter how much training data is used. Even machine learning (ML) algorithms built for deep learning can't escape built-in biases. "Biased rules within algorithms inevitably generate biased outcomes," wrote IBM Security VP Aarti Borkar for Fast Company.
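To make the point concrete, here is a hypothetical sketch of how an unconscious assumption can become a permanent business rule. Nothing here is drawn from a real product; the country codes, weights and function name are invented for illustration.

```python
# Hypothetical example: a hand-written business rule that quietly encodes
# a developer's assumption ("traffic from these regions is suspicious").
# Any model layered on top of this rule inherits the same bias.

ASSUMED_HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder country codes

def login_risk_score(country_code: str, failed_attempts: int) -> float:
    """Toy risk score for a login event (illustrative only)."""
    score = 0.1 * failed_attempts
    if country_code in ASSUMED_HIGH_RISK_COUNTRIES:
        # The bias lives here: location alone inflates the score,
        # regardless of the user's actual behavior.
        score += 0.5
    return min(score, 1.0)

print(login_risk_score("XX", 0))  # 0.5 -- flagged before doing anything
print(login_risk_score("ZZ", 0))  # 0.0
```

No amount of additional training data changes this outcome, because the rule sits outside whatever the model learns.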
An AI's decision-making abilities are only as effective as its training data. Data is neutral until it is filtered through human bias. By the time data reaches an algorithm, there are usually strong traces of human prejudice. Bias can be introduced by preprocessing teams through a variety of choices, such as data classifiers, sampling decisions and the weights assigned to training data.
Biased training data can corrupt security outcomes. Anti-biased preprocessing is necessary to ensure adequate sampling, classification and representation.
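As a minimal sketch of what anti-bias preprocessing can look like in practice, a team might measure how well each demographic group is represented in the training set and upweight under-sampled groups before training. The group names, toy records and weighting scheme below are assumptions for illustration, not a prescribed method.

```python
from collections import Counter

# Hypothetical training records: (features, label, demographic_group).
# Group names and rows are invented for this illustration.
records = [
    ({"country": "us", "lang": "en"}, 1, "group_a"),
    ({"country": "us", "lang": "en"}, 0, "group_a"),
    ({"country": "us", "lang": "en"}, 0, "group_a"),
    ({"country": "br", "lang": "pt"}, 1, "group_b"),
]

def inverse_frequency_weights(records):
    """Give each record a weight so every group contributes equally overall."""
    counts = Counter(group for _, _, group in records)
    n_groups = len(counts)
    total = len(records)
    return [total / (n_groups * counts[group]) for _, _, group in records]

weights = inverse_frequency_weights(records)
for (_, _, group), w in zip(records, weights):
    print(group, round(w, 2))  # group_a rows get 0.67, the lone group_b row gets 2.0
```

Sampling, classification and representation checks like this are only as good as the people defining the groups, which is where diverse preprocessing teams come in.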
Humans and technology have become cybersecurity collaborators. Cybersecurity contributors train AI to create better security outcomes through a lens of personal knowledge and experience, but humans can quickly contribute to algorithm bias, especially in teams with poor diversity. Varied perspectives are needed to leverage cybersecurity AI in fair, balanced ways.
Diverse teams can recognize the specific risks of biased AI and minimize its impact. Cognitive diversity can contribute to the production of fair algorithms, help curate balanced training data and enable the supervision of secure AI.
CISOs need to create more internal diversity, but getting there isn't going to be easy. It's time to collaborate on the issues that perpetuate biased security culture and flawed AI. The problem is too complex for one person or strategy to solve alone.
Hiring and internal promotions don't guarantee cognitive diversity. Security leaders need to create an inclusive workplace culture. Getting newly hired talent up to speed on AI can require training and likely a reevaluation of existing learning strategies as well. Microinequities are prevalent in corporate learning programs, so maintaining an equal playing field means implementing accommodations for learners with varied languages, cultures, ages and levels of dependency on assistive technologies.
Once newly hired or promoted talent is trained, it's time to figure out how to retain women, minorities and other candidates, as well as how to remove any barriers to their success. Women in security are disproportionately likely to feel stressed at work and leave the industry, and both women and minorities receive lower wages and fewer promotions.
Biased performance management practices are part of the problem, as workplace cultures and policies can be especially detrimental to women and minorities. For example, an absence of flex-time policies can disproportionately hurt women.
Equal pay and equal opportunity are needed to retain and engage with diverse perspectives. The security industry desperately needs to double down on creating a culture of self-care and inclusion. Removing barriers can improve anti-bias efforts and produce other positive effects as well. After analyzing data from 4,000 companies, researcher Katica Roy found that organizations that move the needle closer to gender equity even see an increase in revenue.
Women in cybersecurity are dramatically underrepresented, especially in light of their overall workforce participation. However, true cognitive diversity may require significant changes around policy and culture. CISOs face the dual challenge of fixing cyber team culture and starting cross-functional conversations about equity. Collaboration between security, HR, risk, IT and other functions can create ripples of change that lead to more inclusive hiring, performance management and policies.
"Bias is nothing new. Humans [have bias] all of the time," Colin Priest, vice president of AI strategy at DataRobot, told Information Week. "The difference with AI is that it happens at a bigger scale and it's measurable."
It's probably impossible to create artificial intelligence without any biases. A risk-based approach to governing AI bias is the most practical solution. The first step is to create a clear framework of what's fair, which shouldn't be filtered through a narrow lens of experience. Individuals with diverse perspectives on AI, technology, data, ethics and diversity need to collaborate on governance.
Remember, minimizing bias is not the same as removing it. A risk-based framework is the only pragmatic way to put AI governance into practice. Resources should be directed toward mitigating the biases in artificial intelligence with the greatest potential impact on security, reputation and users.
Priest recommends creating a job description for AI to assess risks. This isn't a sign that robots are coming for human jobs. Rather, position descriptions are a solid baseline for understanding the purpose of cybersecurity AI and creating performance metrics. Measurement against KPIs is an important part of any governance strategy. Monitoring AI can prevent biases that slowly degrade the performance of cyber algorithms.
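One hedged sketch of what such a KPI check might look like is below: tracking false-positive alert rates per demographic group and flagging drift beyond an agreed gap. The metric choice, group labels, threshold and sample data are assumptions for illustration, not a standard.

```python
# Minimal fairness-KPI sketch for a security model, assuming each alert decision
# is logged with the affected user's demographic group. The groups, threshold
# and data below are invented for illustration.
from collections import defaultdict

def false_positive_rates(decisions):
    """decisions: iterable of (group, flagged: bool, actually_malicious: bool)."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, flagged, malicious in decisions:
        if not malicious:
            negatives[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

def kpi_violation(rates, max_gap=0.05):
    """Return the gap if any two groups' false-positive rates differ by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap if gap > max_gap else None

decisions = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
rates = false_positive_rates(decisions)
print(rates)                 # group_a ~0.33, group_b ~0.67 on this toy data
print(kpi_violation(rates))  # ~0.33 gap exceeds the assumed 5% limit, so investigate
```

Run on a schedule against production logs, a check like this gives the "job description" measurable pass/fail criteria rather than relying on periodic gut checks.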
Checking personal biases is rarely comfortable. Industry leaders have a biased perspective on AI innovation, especially compared to regulators and researchers who focus on safety. True cognitive diversity can create uncomfortable friction between competing values and perspectives. However, a truly balanced solution to AI bias is going to require collaboration between industries, academia and the government.
IBM Chair and CEO Ginni Rometty recently called on CNBC for "precision regulation" and better collaboration between AI stakeholders. For example, legislation could target how the technology is used rather than AI capabilities or characteristics themselves.
"You want to have innovation flourish and you've got to balance that with [AI] security," said Rometty.
Alphabet CEO Sundar Pichai recently expressed a similar point of view, asking European regulators to consider a "proportionate approach."
Creating more effective frameworks for AI anti-bias and safety means being open to conflicting ideas. Security leaders should prepare for productive friction, and more importantly, join global efforts to create better frameworks. Industry perspectives are critical to supporting the IEEE, the European Commission and others in their efforts to create suitable frameworks.
Third-party data can be a valuable tool for cybersecurity AI, but it's not risk-free. Your organization could be absorbing the risk of third-party biases embedded in training data.
"Organizations will be held responsible for what their AIs do, like they are responsible for what their employees do," wrote Lisa Morgan for Information Week. Knowing your data vendors' methodology and their efforts to mitigate training data bias is crucial. Anti-bias governance must include oversight of third-party data sources and partnerships.
It's officially time to target the talent pipeline and cybersecurity diversity. Women are dramatically underrepresented in cybersecurity. According to UNESCO, gender diversity rates drop even lower among cyber leadership and roles at the forefront of technology, such as those in cybersecurity AI. Minorities, meanwhile, receive lower pay and fewer opportunities.
The opportunity gap starts early. According to UNESCO, girls have 25 percent fewer basic tech skills than their male peers. Creating a fair future for artificial intelligence and a diverse talent pipeline requires that everyone pitch in, including industry security leaders. Everyone benefits from efforts to create a more skilled, confident pipeline of diverse cyber talent. Nonprofits, schools and educational groups need help closing the STEM skill and interest gap.
Creating more diverse cyber teams isn't a goal that can be accomplished overnight. In the meantime, security teams can gain diverse new perspectives by collaborating with nonprofits like Women in Identity, CyberReach and WiCyS.
Frameworks, tools and third-party experts can help minimize bias as organizations work toward better talent diversity. Open-source libraries like AI Fairness 360 can identify, measure and limit the business impact of biased algorithms. AI implementation experts can also provide experience and context for more balanced security AI.
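As a minimal sketch of how a toolkit like AI Fairness 360 might be used, assuming the aif360 package's documented BinaryLabelDataset, BinaryLabelDatasetMetric and Reweighing interfaces, a team could measure disparate impact in a labeled dataset and reweight it before training. The column names, toy data and group definitions are invented for this example.

```python
# Sketch using IBM's open-source AI Fairness 360 toolkit (pip install aif360).
# Column names, toy data and privileged/unprivileged group definitions below
# are assumptions made for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0],              # 1 = assumed privileged group
    "alert_score": [0.2, 0.4, 0.3, 0.8, 0.7, 0.9],
    "label": [1, 1, 0, 0, 0, 1],                # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["gender"]
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged);
# values far below 1.0 indicate the unprivileged group is disadvantaged.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact:", metric.disparate_impact())

# Reweighing adjusts instance weights so favorable outcomes are balanced
# across groups before a model is trained on the data.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
print("New instance weights:", reweighed.instance_weights)
```

Tooling like this does not replace diverse teams; someone still has to decide which groups and metrics matter for a given security use case.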
Last fall, Emily Ackerman almost collided with a grocery delivery robot on her graduate school campus. She survived, but she had to force her wheelchair off a ramp and onto a curb. AI developers hadn't taught the robot to avoid wheelchairs, leaving disabled people as collateral damage.
"Designing something that's universal is an extremely difficult task," said Ackerman. "But getting the shorter end of the stick isn't a fun experience."
Sometimes, AI bias can even reinforce harmful stereotypes. According to UNESCO research, until recently, a mobile voice assistant didn't get offended by vicious, gender-based insults. Instead of pushing back, she said, "I'd blush if I could!" In more extreme instances of bias, like Ackerman's experience, AI can be life-threatening.
Cognitive diversity can create better security. Diverse teams have diverse ideas and broad understandings of risk, and varied security perspectives can balance AI bias and improve security posture. Investing in artificial intelligence alone isn't enough to counter sophisticated machine learning attacks or nation-state threat actors; diverse human perspectives are the only way to prepare for the security challenges of today and tomorrow.
Source: Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem, Security Intelligence