Ironically, the same innovations we tend to regard as creepy (e.g., AI, algorithms, and Big Data) may help leaders make their workplaces more inclusive. But there are reasons to be skeptical.
I don't consider myself a techno-enthusiast, and I'm definitely not optimistic by nature. So, no, this isn't another overhyped post on how AI will save the world, or how Big Data (does anyone still use the term?) will make our world better by eliminating racism from society. Sadly, the only way to achieve that would be to eliminate humans, too. Indeed, if we were fully replaced by algorithms, racism would go extinct, much as traffic accidents would become rare if we replaced all human drivers with self-driving cars; the few that remained would be caused primarily by the unpredictability of human pedestrians.
Leaving aside these unlikely scenarios (you can decide for yourself whether they are more utopian or dystopian), it may be useful to understand a few of the more realistic ways in which technology could, if we truly wanted, help us keep racism in check, at work and beyond. After all, we have entered an age in which leaders' willingness to reduce racism appears to have surpassed their ability to do so. So, if we want to move from condemnation to intervention, and from criticism to solutions, we have to leverage every resource we have and be open to new approaches, not least because traditional interventions have enjoyed limited success. Just look at the conclusion of a comprehensive academic review on the subject: "Of the hundreds of studies we examine, a small fraction speak convincingly to the questions of whether, why, and under what conditions a given type of intervention works. We conclude that the causal effects of many widespread prejudice-reduction interventions, such as workplace diversity training and media campaigns, remain unknown."
There are at least four obvious ways in which technology, especially data-driven approaches to managing employees, could help us reduce workplace racism:
(1) Analyzing e-mail and messaging metadata (the context and social networks of communications): Without intruding on people's privacy or reading what people say in their work-related communications (which in itself would not be illegal), algorithms can be trained to map the social networks in a team or organization, identifying individuals who are excluded from social interactions. Overlaying that data with demographic information on diversity (race or otherwise) could help organizations model inclusion digitally, using passive and non-invasive measures. Imagine a leader or HR professional, like a Chief Diversity Officer, who can access group-level data to assess whether race (or membership in any minority group or protected class) is statistically related to being left out, ignored, or ostracized from the team or organizational network. This granular level of evidence is likely to reveal exclusion where self-reports do not. People are not always aware of their unfair treatment of others, rationalizing their own actions to construct a benevolent self-concept, which is why the vast majority of people see themselves as nice, even when others don't. And when they are aware, they are pretty good at disguising it, which is why the number of people who answer "yes" to the question "are you a racist?" is far lower than the number of actual racists.
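To make this concrete, here is a minimal sketch of such a passive inclusion analysis, assuming anonymized message metadata (sender and recipient only) and voluntarily self-reported demographic labels. All names, fields, and data below are hypothetical, and a real analysis would require far larger groups for the statistics to mean anything.

```python
# A minimal sketch of network-based inclusion analysis; data are toy examples.
import networkx as nx
from scipy.stats import mannwhitneyu

# Message metadata only: (sender_id, recipient_id); content is never read.
messages = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
            ("b", "c"), ("c", "b"), ("d", "a"), ("e", "b")]
group = {"a": "majority", "b": "majority", "c": "majority",
         "d": "minority", "e": "minority"}

G = nx.DiGraph()
G.add_edges_from(messages)

# Inbound degree centrality as a crude proxy for inclusion:
# how often colleagues initiate contact with each person.
inbound = nx.in_degree_centrality(G)

majority = [inbound[n] for n in G if group[n] == "majority"]
minority = [inbound[n] for n in G if group[n] == "minority"]

# Non-parametric test of whether group membership relates to inclusion.
stat, p = mannwhitneyu(majority, minority, alternative="two-sided")
print(f"majority inclusion: {majority}\nminority inclusion: {minority}\np-value: {p:.3f}")
```

Note that only group-level statistics leave the analysis; no individual message content is inspected, which is the "passive and non-invasive" property described above.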
(2) Analyzing the actual content of communications (Natural Language Processing and red flags): Without getting bosses, or any human, to spend time snooping on employees' communications, AI could certainly be trained to reliably monitor the words people use when they interact in any digital medium. Of course, we did not need AI to deter people from misbehaving in traceable or recorded communications, and cautious employees have always found ways to keep offensive comments (including prejudiced and racist jokes) offline. But with an unprecedented level of work exchanges now happening online or in virtual environments, only AI could keep an eye on all the possible toxic, antisocial, or counterproductive comments. With rapid advances in Natural Language Processing, software that translates patterns of word usage into a psychological profile of the individual, including their potential level of prejudice, anger, and aggression, it is easier than ever for organizations to detect and sanction racist behavior. What happens offline tends to stay offline, but what happens online is recorded for posterity. Note that the application of this technology to reducing racism could be twofold: you can check for actual offenses, which is what humans would do in the case of reported behaviors, or if they actually spent their time reading everything people say; or you can check for potential, which means identifying signals that increase someone's probability of misbehaving in the future. The latter is ethically more questionable, but enables prevention; the former is mostly helpful for sanctioning behaviors after they happen.
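As an illustration of the "red flags" idea, here is a deliberately simple sketch in which a hypothetical phrase lexicon routes messages to human review. A production system would use a trained toxicity classifier rather than keyword matching; the lexicon, messages, and function names below are all invented for illustration.

```python
# A toy stand-in for an NLP red-flag pipeline; the lexicon and messages
# are hypothetical, and flagged items go to a human reviewer, not auto-sanction.
FLAG_LEXICON = {"your kind", "go back to"}  # placeholder phrases

def flag_message(text: str) -> list[str]:
    """Return the lexicon phrases found in a message (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in FLAG_LEXICON if phrase in lowered]

def review_queue(messages: list[dict]) -> list[dict]:
    """Collect messages containing flagged phrases, with the matches attached."""
    return [{**m, "flags": hits} for m in messages if (hits := flag_message(m["text"]))]

inbox = [
    {"id": 1, "text": "Great job on the launch!"},
    {"id": 2, "text": "People like your kind never get this right."},
]
print(review_queue(inbox))  # only message 2 is flagged, with its matched phrase
```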
(3) Mining the digital footprint of external job candidates, particularly for leadership roles (reducing selection bias): One of the best ways to reduce racism is to avoid hiring racist employees, particularly for positions of power. An inclusive culture is best harnessed top-down, with teams and organizations led by ethical, open-minded, altruistic, and compassionate individuals who show uncompromising commitment to equality and fairness, practice what they preach, and put their money where their mouth is. Throughout most of human history, we lived in small groups where everybody knew each other well, and our models for understanding and predicting others were bullet-proof: if you systematically misbehaved, you simply ended up with a terrible reputation. Fast forward thousands of years to the typical demands of modern work, where we are forced to make high-stakes decisions about hiring, promoting, trusting, and following people we barely know. Such are the complexities and ambiguities of work today that we have no way to know whether the person in front of us is truly the person we think we see. We know that you cannot judge a book by its cover, yet the only way to make seemingly logical evaluations of others with the rather limited information we have on them (for example, during a one-off job interview) is to rely heavily on our intuition, which is how we end up making prejudiced and racist decisions in the first place, even when we try to avoid it or persuade ourselves that this isn't the case.
Imagine, instead, if we could access a candidate's entire online footprint, consisting of everything they have done online in the past. The process would not be manual, of course, but algorithms could be trained to translate people's digital history into quantitative estimates of their open-mindedness, tolerance, authoritarianism, empathy, and so on. In some instances, you wouldn't even need algorithms to detect whether someone has a prejudiced profile, because their behaviors would simply signal racism; this is what happens when human recruiters run a Google search on a potential CEO candidate to gauge their reputation and assess fit. They may not be explicitly looking for indicators of prejudice, but they may still want to exclude candidates (or at least one would hope so) who don't have a strong reputation for integrity or ethics. As one would expect, there is a booming business in online reputation management, but these digital spin doctors are focused on helping you fool human assessors rather than machine-learning algorithms. And while prejudiced individuals may always find a way to fool both humans and computers, academic research suggests that if a well-trained AI were to access our complete digital footprint (everything from our Uber ratings to our Netflix and Hulu choices, our Facebook Likes, and of course our Twitter and WhatsApp exchanges), it would be able to predict with great accuracy whether we are likely to display any kind of prejudice or discrimination, at work and beyond, and with what frequency. This could and should be deployed in an ethical way, asking candidates to opt in and put their data to the test. In fact, it may even be useful developmental feedback for them to find out whether they resemble more or less prejudiced individuals, which the algorithm could report.
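A hedged sketch of what such an algorithm might look like, on synthetic data: footprint features feed a simple classifier whose accuracy is estimated by cross-validation. The features, labels, and numbers are invented for illustration; nothing here is a validated instrument, and any real deployment would require the opt-in consent noted above.

```python
# Synthetic illustration: mapping digital-footprint features to a
# prejudice-related criterion. All features and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: e.g., ratings given to others, media-diversity
# index, share of hostile posts (columns 0-2).
X = rng.normal(size=(n, 3))
# Synthetic criterion: verified past incidents of discriminatory conduct,
# generated here to depend on two of the features plus noise.
y = ((0.8 * X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)) > 1.0).astype(int)

model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")  # any real accuracy claim needs validation
```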
(4) Exposing bias in performance ratings (eliminating the politics and nepotism in promotions and performance management systems): The most pervasive form of bias and discrimination people suffer at work, and one of the hardest to detect, is being unfairly evaluated and rated on their performance, whether consciously or not. This bias occurs even in well-meaning organizations, including ethical companies with mature diversity and inclusion policies and meritocratic talent management intentions. Interestingly, this is an area where AI has attracted a great deal of popular criticism. For instance, when companies attempt to train AI to predict whether an employee is likely to be promoted, or algorithms are used to rank-order internal candidates for potential, the likely result is that certain profiles, such as middle-aged White male engineers, over-index, while others, such as Black, Latino, or female employees, are underrepresented. However, in these examples, the problem is neither the algorithms nor the data scientists who develop them. If you train an algorithm to predict an outcome that is itself influenced by systemic bias or prejudice, it will not just reproduce human bias, but also augment it. In other words, middle-aged White male engineers got promoted before AI was trained to reveal this, and they will continue to get promoted even if AI is not used.
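The label-bias point can be demonstrated in a few lines: if historical promotions penalized one group at equal skill, a model trained on those decisions learns the penalty. The data and the bias mechanism below are entirely synthetic.

```python
# Toy demonstration of label bias: a model trained on biased promotion
# decisions reproduces the bias. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
skill = rng.normal(size=n)                 # true merit, identical across groups
group = rng.integers(0, 2, size=n)         # 0 = favored group, 1 = disfavored
# Historical promotions: same skill threshold, but a penalty for group 1.
promoted = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), promoted)
print("learned group coefficient:", model.coef_[0][1])  # negative: bias reproduced
```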
Unlike humans, computers don't really care about your race, gender, or religion: they don't have a fragile self-esteem they need to boost by bringing other people down, and they don't need to bond with other computers by stigmatizing certain classes or groups of humans (or computers). One thing they can do, however, and usually rather well, is imitate the prejudiced preferences of humans. And yet, they can also be trained not to imitate them. Artificial intelligence may never match the breadth and scope of human intelligence, but it can certainly avoid replicating the vast collection of biases encapsulated in human stupidity.
If organizations are able to measure employees' job performance objectively, then AI will not only predict it better than humans, it will also identify any distortion or interference introduced by bias. For instance, an Uber driver can be judged on the basis of (a) how many trips she makes, (b) how much money she brings in, (c) how many car accidents she has, and (d) what customer ratings she gets, all relative to other drivers working in the same location, and all done through AI. If two drivers with identical scores on (a), (b), and (c) differed significantly on (d), then a simple analysis would suffice to reveal whether the drivers' demographic background (e.g., White vs. Black) inflated or depressed their scores on (d). So, if businesses, and particularly managers, were able to quantify output (what an employee actually contributes to the team and organization), and there were a formula to predict what someone is likely to produce, then algorithms wouldn't just be better than humans at applying this formula; they would also be better than humans at ignoring the wide range of extraneous variables (including race) that distract human managers from focusing on it. In judgments of talent, humans are not great at paying attention to the stuff that matters, or at ignoring the stuff that doesn't.
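Here is a sketch of that audit on synthetic data: regress ratings on the objective metrics (a) through (c) plus a demographic indicator; a significant coefficient on the indicator flags inflated or depressed ratings. All numbers below are invented to illustrate the method, including the injected penalty the regression is meant to recover.

```python
# Synthetic audit of rating bias: does demographic group still predict
# customer ratings after controlling for objective performance?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
trips = rng.poisson(200, n).astype(float)       # (a) trips completed
revenue = trips * rng.normal(12.0, 2.0, n)      # (b) money brought in
accidents = rng.poisson(0.5, n).astype(float)   # (c) accidents
group = rng.integers(0, 2, n).astype(float)     # demographic indicator
# Synthetic ratings: objective performance plus an injected group penalty.
rating = (4.6 + 0.001 * trips - 0.10 * accidents
          - 0.15 * group + rng.normal(0.0, 0.1, n))

X = sm.add_constant(np.column_stack([trips, revenue, accidents, group]))
fit = sm.OLS(rating, X).fit()
print(f"group coefficient: {fit.params[-1]:.3f}, p-value: {fit.pvalues[-1]:.3g}")
```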
OK, so if all this sounds so simple, what's stopping organizations from adopting these measures? After all, technology keeps getting cheaper, organizations are sitting on a growing volume of data (just think about the last three months of videoconferencing data), and there's no shortage of data scientists who can help with this.
Three things, which do warrant a reasonable degree of skepticism, or at least a non-trivial amount of realistic pessimism.
The first is the double standard that makes most people adopt the highest moral principles when they judge AI (or any new tech), while being much more lenient, and morally relaxed, when they judge humans. No technology will ever be perfect at predicting or detecting any form of human behavior, whether good or evil. That's not the point. The point is that technology could be more accurate, and less biased, than humans, or at least that it could help humans be less biased. So, if we can accept that even the latest, most advanced tech will get things wrong, then let's focus on what really matters: whether that tech can at least minimize human bias and discrimination, even by 1%. Given humanity's historical record here, it is safe to say that the bar is pretty low, and that even rudimentary technologies may represent a hopeful improvement over the status quo: pervasive prejudice and ubiquitous bias, courtesy of the human mind.
The second is that diagnosing or detecting the problem is necessary, but not sufficient, to fix it. So, suppose that some of the emerging technologies mentioned here reveal racism, or any other form of discrimination, in a team or organization; what then? Will leaders genuinely act on these data to sanction it, especially if doing so creates conflict or leads to negative short-term outcomes, particularly for themselves? Will they change the rules, the system, and those who oversee it, in order to create a fairer, more meritocratic organization? In general, people have little incentive to change the status quo when they are part of it; how comfortable will they be acknowledging that it was rigged? Clearly, many leaders may prefer to avoid any new technology that has the potential to expose the hitherto invisible toxic forces governing the power dynamics in their teams and organizations. AI, not just in HR but in any other area of application, is like a powerful X-ray machine: it reveals unwanted forces and socially undesirable phenomena, opening a can of worms. It is always painful to make the implicit explicit, and the foundational grammar of any culture, in companies as in societies, is largely made of silent and subliminal rules, which makes it resistant to change. One of the disadvantages of living in a liberal world is that domination is rarely asserted explicitly, but hidden under the pretext of equality.
The third is that employers must be realistic about what can actually be achieved. Discussions of race and inclusion often focus on bias and prejudice, but it may be unrealistic to change the way people think, especially if they are part of a non-conformist minority; and how ethical would this be anyway? Crucially, people's attitudes and beliefs are surprisingly weak predictors of their actual behavior. This means that discrimination, the behavioral side of prejudice, often happens in the absence of strong corresponding beliefs (conscious or not). Likewise, most people who are prejudiced, holding derogatory attitudes towards individuals because of their race, gender, or other group membership, will rarely engage in overt discriminatory behavior. In short, we should focus less on changing people's views and more on ensuring that they behave in respectful and civil ways. Containing the expressed and public manifestations of racism at scale may be the best leaders can hope for, since they will not be able to change the way people think and feel.
Finally, we should not forget that not all leaders will be interested in reducing racism in their organizations, particularly if the process is slow and painful and the ROI isn't clear. Such leaders may not just neglect the potential value of new technologies for reducing workplace discrimination, but use them to perpetuate existing practices, amplifying bias and prejudice. In the long run, every decision leaders make will shape the culture of their organizations, and employees and customers will gravitate towards the cultures and brands that best represent their own values and principles.
Link: Technology Can Help Organizations Reduce Racism, But Will It? - Forbes