Category Archives: Human Behavior

What dogs do when humans are not around, according to experts – Salon

The human-dog bond is ancient: we have co-evolved together since before writing existed. Our long cohabitation with dogs has granted both species a unique insight into the other's feelings: dogs, for instance, know when you are looking into their eyes, unlike wolves and other animals. And dogs can understand human language to some extent: one Guinness-worthy dog knows over 1,000 nouns.

Yet for all our mutual insights, we can't truly see inside the mind of a dog, nor can we know for sure what they're thinking or what they do when we're not looking. And while pet cameras can reveal what dogs are doing, it's harder to know what they're thinking in private. What can dog owners know for sure?


First, we know that they do indeed miss their humans. MRI tests of dogs' brains confirm that dogs associate the sounds and smells of their preferred humans with positive rewards. Because dogs are intelligent and perceptive about their environment, they quickly figure out patterns that indicate a human is about to leave (picking up their keys, walking toward the door) and clearly communicate feelings of distress when that happens. When secretly recorded, dogs who are alone in their homes often spend time at the door where their preferred human left, quite likely hoping they will soon return.

Yet if your heart aches at the thought that your dog does nothing but emotionally suffer while you are gone, rest at ease. There is plenty of research on domestic canine behavior and we know that, in addition to missing you, dogs routinely take naps.

"Previous research has demonstrated that dogs mostly spend their time resting when the owner is gone," Dr. Erica N. Feuerbacher, an Associate Professor at Virginia Tech's Department of Animal & Poultry Science, told Salon by email. When they are not peacefully snoozing, dogs may also engage in what is known as "vigilant behavior," performing their self-assigned duty of guarding your home, "likely when they hear or see something outside, like a car or someone walking down the sidewalk."

When they are neither tired nor on alert, dogs may occupy themselves with play. This is why humans may return home to find their property damaged.


"Of course some dogs engage in behaviors that are probably less desirable to their owners, like counter surfing or getting into the trash or vocalizing," Feuerbacher explained. "Some dogs do develop separation anxiety which is a severe behavioral issue; other dogs are simply bored or take advantage of the owner not being there to explore places (like the counter) where they are usually forbidden from. But if they find something good up there to eat, that behavior will continue to happen."

It is important to remember that dogs, like humans, have quirks specific to their individual personalities. As such, their solitary behavior can be hard to predict.

"What dogs do when we are not around also depends on the individual, age, location and even the quality of relationship we share with them," Dr. Monique Udell, an associate professor who specializes in human-animal bonding at Oregon State University, told Salon by email. Puppies, for instance, are more likely to get into mischief because they are biologically programmed to spend more of their time in activities like exploring and teething. Younger dogs can also experience more frequent bathroom problems, as can older dogs.

"Puppies, whose bodies are still developing, as well as older dogs who may be experiencing health problems or cognitive decline, are often less likely to be able to avoid urinating or defecating when left alone for longer periods of alone time," Udell pointed out. "Dogs with separation anxiety experience greater than normal distress when left alone, and may panic or try to escape, which can result in injury or damage to property." Like Feuerbacher, however, Udell emphasized that dogs spend most of their solitary time sleeping, and that this is healthy as long as the rest of their environment is sufficiently stimulating.


"One important thing concerned humans can do, is make sure that the time they do spend with their dogs is quality time," Udell explained. "Dogs with secure attachment bonds to their owner are also less likely to display separation anxiety when their owner is away. Owners who have high expectations of their dogs (engage in positive reinforcement training, have consistent rules) and are highly responsive to their dog's needs (provide attention, recognize and respond when their dog is scared or sick) are more likely to raise secure dogs."

While dogs need their rest and therefore benefit from some time away from their humans, that does not mean all dogs will naturally accept that isolation. Fortunately, as Feuerbacher tells Salon, there are ways to train dogs to be as okay with temporary separation from you as you are from them.

"First, owners should work on their dog tolerating being left alone," Feuerbacher explained. "Dogs are social animals so the owner leaving can be upsetting to the dog. You can do this by practicing lots of short departures, like running out to check the mail and coming back in, gardening for a few minutes and coming back in, taking a quick trip to the grocery store. This is especially useful when you bring a new dog home."

It can also be helpful to leave dogs with toys and other enrichment items: bones, stuffed animals, chew devices, and so on. Finally, owners should either paper train their dogs or ask someone to take the dog out for a walk periodically if they will be gone for a while. It is cruel to expect a dog to hold in its excrement for too long. After all, while "The Secret Life of Pets" is not scientifically accurate, the essential point of the story (that dogs lead rich lives separate from their humans, and should be respected as such) is certainly true.

"While [the movie] might be fictional, I hope it does help folks recognize that their animals lead very rich lives, with their own interests like smelling certain smells, or getting to visit a dog friend," Feuerbacher told Salon. "This also comes into play when we are interacting with our dogs: we might want them to sit or do some other behavior we want, but it's worth remembering they have their own interests (such as smelling a certain patch of grass!) that doesn't align with what I want them to do."


Animal personalities can trip up science, but there's a solution – The Hindu

Several years ago, Christian Rutz started to wonder whether he was giving his crows enough credit. Rutz, a biologist at the University of St. Andrews in Scotland, and his team were capturing wild New Caledonian crows and challenging them with puzzles made from natural materials before releasing them again. In one test, birds faced a log drilled with holes that contained hidden food, and could get the food out by bending a plant stem into a hook. If a bird didn't try within 90 minutes, the researchers removed it from the dataset.

But, Rutz says, he soon began to realize he was not, in fact, studying the skills of New Caledonian crows. He was studying the skills of only a subset of New Caledonian crows, the ones that quickly approached a weird log they'd never seen before, maybe because they were especially brave, or reckless.

The team changed their protocol. They began giving the more hesitant birds an extra day or two to get used to their surroundings, then trying the puzzle again. "It turns out that many of these retested birds suddenly start engaging," Rutz says. "They just needed a little bit of extra time."

Scientists are increasingly realizing that animals, like people, are individuals. They have distinct tendencies, habits and life experiences that may affect how they perform in an experiment. That means, some researchers argue, that much published research on animal behavior may be biased. Studies claiming to show something about a species as a whole (that green sea turtles migrate a certain distance, say, or how chaffinches respond to the song of a rival) may say more about individual animals that were captured or housed in a certain way, or that share certain genetic features. That's a problem for researchers who seek to understand how animals sense their environments, gain new knowledge and live their lives.

"The samples we draw are quite often severely biased," Rutz says. "This is something that has been in the air in the community for quite a long time."

In 2020, Rutz and his colleague Michael Webster, also at the University of St. Andrews, proposed a way to address this problem. They called it STRANGE.

Why STRANGE? In 2010, an article in Behavioral and Brain Sciences suggested that the people studied in much of published psychology literature are WEIRD (drawn from Western, Educated, Industrialized, Rich and Democratic societies) and are among the least representative populations one could find for generalizing about humans. Researchers might draw sweeping conclusions about the human mind when really they've studied only the minds of, say, undergraduates at the University of Minnesota.

A decade later, Rutz and Webster, drawing inspiration from WEIRD, published a paper in the journal Nature called "How STRANGE are your study animals?"

They proposed that their fellow behavior researchers consider several factors about their study animals, which they termed Social background, Trappability and self-selection, Rearing history, Acclimation and habituation, Natural changes in responsiveness, Genetic makeup, and Experience.

"I first began thinking about these kinds of biases when we were using mesh minnow traps to collect fish for experiments," Webster says. He suspected, and then confirmed in the lab, that more active sticklebacks were more likely to swim into these traps. "We now try to use nets instead," Webster says, to catch a wider variety of fish.

That's Trappability. Other factors that might make an animal more trappable than its peers, besides its activity level, include a bold temperament, a lack of experience or simply being hungrier for bait.

Other research has shown that pheasants housed in groups of five performed better on a learning task (figuring out which hole contained food) than those housed in groups of only three; that's Social background. Jumping spiders raised in captivity were less interested in prey than wild spiders (Rearing history), and honeybees learned best in the morning (Natural changes in responsiveness). And so on.

It might be impossible to remove every bias from a group of study animals, Rutz says. But he and Webster want to encourage other scientists to think through STRANGE factors with every experiment, and to be transparent about how those factors might have affected their results.

"We used to assume that we could do an experiment the way we do chemistry, by controlling a variable and not changing anything else," says Holly Root-Gutteridge, a postdoctoral researcher at the University of Lincoln in the United Kingdom who studies dog behavior. But research has been uncovering individual patterns of behavior (scientists sometimes call it personality) in all kinds of animals, from monkeys to hermit crabs.

"Just because we haven't previously given animals the credit for their individuality or distinctiveness doesn't mean that they don't have it," Root-Gutteridge says.

This failure of human imagination, or empathy, mars some classic experiments, Root-Gutteridge and coauthors noted in a 2022 paper focused on animal welfare issues. For example, experiments by psychologist Harry Harlow in the 1950s involved baby rhesus macaques and fake mothers made from wire. They allegedly gave insight into how human infants form attachments. But given that these monkeys were torn from their mothers and kept unnaturally isolated, are the results really generalizable, the authors ask? Or do Harlow's findings apply only to his uniquely traumatized animals?

"All this individual-based behavior, I think this is very much a trend in behavioral sciences," says Wolfgang Goymann, a behavioral ecologist at the Max Planck Institute for Biological Intelligence and editor-in-chief of Ethology. The journal officially adopted the STRANGE framework in early 2021, after Rutz, who is one of the journal's editors, suggested it to the board.

Goymann didn't want to create new hoops for already overloaded scientists to jump through. Instead, the journal simply encourages authors to include a few sentences in their methods and discussion sections, Goymann says, addressing how STRANGE factors might bias their results (or how they've accounted for those factors).

"We want people to think about how representative their study actually is," Goymann says.

Several other journals have recently adopted the STRANGE framework, and since their 2020 paper Rutz and Webster have run workshops, discussion groups and symposia at conferences. "It's grown into something that is bigger than we can run in our spare time," Rutz says. "We are excited about it, really excited, but we had no idea it would take off in the way it did."

His hope is that widespread adoption of STRANGE will lead to findings in animal behavior that are more reliable. The problem of studies that can't be replicated has lately received much attention in certain other sciences, human psychology in particular.

Psychologist Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia and a coauthor of the 2022 paper "Replicability, Robustness, and Reproducibility in Psychological Science" in the Annual Review of Psychology, says animal researchers face similar challenges to those who focus on human behavior. "If my goal is to estimate human interest in surfing and I conduct my survey on a California beach, I am not likely to get an estimate that generalizes to humanity," Nosek says. "When you conduct a replication of my survey in Iowa, you may not replicate my finding."

The ideal approach, Nosek says, would be to gather a study sample that's truly representative, but that can be difficult and expensive. "The next best alternative is to measure and be explicit about how the sampling strategy may be biased," he says.

That's just what Rutz hopes STRANGE will achieve. If researchers are more transparent and thoughtful about the individual characteristics of the animals they're studying, he says, others might be better able to replicate their work and be sure the lessons they're taking away from their study animals are meaningful, and not quirks of experimental setups. That's the ultimate goal.

In his own crow experiments, he doesn't know whether giving shyer birds extra time has changed his overarching results. But it did give him a larger sample size, which can mean more statistically robust results. And, he says, if studies are better designed, it could mean that fewer animals need to be caught in the wild or tested in the lab to reach firm conclusions. Overall, he hopes that STRANGE will be a win for animal welfare.

In other words, what's good for science could also be good for the animals: seeing them "not as robots," Goymann says, "but as individual beings that also have a value in themselves."


Droughts bring disease: Here are four ways they do it – Phys.org


Countries in the Horn of Africa have been hit by a multiyear drought. Ethiopia, Kenya, Somalia and Uganda are expected to continue getting below-normal rainfall in 2023. Excluding Uganda, 36.4 million people are affected and 21.7 million are in need of food assistance.

Climate change projections show changes in temperature and rainfall extremes, especially without emissions reductions. Some parts of Africa are projected to become wetter and others drier. Prolonged dry spells, particularly in semi-arid and arid regions, may have serious impacts, particularly if people aren't prepared.

Droughts can have wide-ranging implications for the affected population. The decreased availability of water, often accompanied by high temperatures, can increase the risk of contamination, cause dehydration and result in an inability to wash and maintain hygiene practices.

Droughts can have an impact on non-resistant crops and livestock, causing malnutrition and food insecurity. The economic implications of agricultural losses can go on to affect mental health, gender-based violence and poverty.

The changes to the environment and human behavior caused by drought can also lead to higher exposure to disease-causing organisms, increasing the risk of infections and disease outbreaks. Diseases that are spread through food, water, insects and other animals can all break out during times of drought, and these outbreaks often overlap. Understanding and managing the known risk factors for these outbreaks, and how drought can exacerbate them, is important in preventing infectious disease mortality during drought.

During droughts there can be changes in what kinds of food are accessible, as less water is available to produce and process it. Food insecurity can lead to malnutrition, which has an impact on immunity. Certain foods may become less available and it may not be possible to reduce food contamination via traditional methods of acidification such as lemon juice, curdled milk, tamarind and vinegar.

Food insecurity can lead to an increased reliance on roadside food vendors. Food vendors are often linked to food-borne disease outbreaks as hygiene standards can vary widely and are often poorly regulated. Cooking fuel, particularly wood, may be in short supply, so food may be eaten cold, raw or without re-heating, increasing the chances of contamination.

Food-borne diseases linked to droughts include cholera, dysentery, salmonella and hepatitis A and E. But any food-borne pathogen can be a risk during times of water scarcity.

The impact of drought on water availability also affects water-borne pathogens. It can change the environment and human behavior in ways that increase transmission risks, similar to food-borne diseases.

During times of limited water resources, a pathogen can become more concentrated in the environment, particularly when higher temperatures suit its growth.

Risky water use behaviors may increase. People might use water sources they would normally avoid, and reduce hand-washing.

Water-borne diseases linked to droughts include cholera, dysentery, typhoid and rotavirus.

Breeding sites for vectors such as mosquitoes may be reduced during drought because there is less groundwater for females to lay their eggs in. But new areas may be created. Droughts can lead to an increase in stored potable water, due to stockpiling or the delivery of water aid to households from the government or NGOs. If water containers are open, this can create ideal vector breeding grounds. Open containers may also move the vector breeding ground, and therefore the vector, closer to the household.

Changes in temperature and water can affect egg and larval survival and intermediate or animal host transmission, helping the pathogen to survive longer. Higher temperature can affect vector behavior, mainly biting frequency and timing of feeding, altering transmission.

Vector-borne diseases linked to droughts include West Nile virus, St Louis encephalitis, Rift Valley fever, chikungunya and dengue.

Zoonotic diseases are those that can be transmitted from animals to humans. Water scarcity increases the pressure on water sources, and so water is used for several purposes and may be shared by livestock, wildlife and people. Interactions between humans, livestock and wildlife increase, expanding the opportunity for contact and disease transmission. Food supply issues and agricultural losses may also increase reliance on bushmeat for food and income, which can be a risk for zoonotic disease spillover.

Recent examples of zoonotic disease spillover include Nipah virus, Ebola and monkeypox (recently renamed mpox).

At an individual level, education around disease risks is important. This will allow people to make informed choices to protect their health to the best of their abilities. Household water should be covered. And personal and food hygiene should be maintained as much as possible.

To prevent drought-related disease outbreaks, pre-existing vulnerability (poverty, access to water, education) needs to be addressed. It is not the drought that causes the outbreak, but instead how society deals with these dry conditions.

Better water resource management is needed at a regional and international level, to treat large water sources as a common resource for all. Authorities need to act to provide drought assistance. This includes safe water to prevent the use of poor quality water sources, and agricultural and food aid to mitigate dehydration and malnutrition.


Annual Shaw Biology Lecture to feature New York Times best … – University of Southern Indiana

The University of Southern Indiana will host its 9th annual Shaw Biology Lecture at 7 p.m. Monday, April 17 in Mitchell Auditorium, located in the Nursing and Health Professions Building. Frans de Waal, New York Times bestselling author, will present "Politics, Cognition, Morality: You Name It, Our Fellow Primates Have It All." The presentation is open to the public at no charge.

De Waal is a C.H. Candler Professor Emeritus of Psychology at Emory University, and is former Director of Living Links, a division of the Yerkes National Primate Research Center, established for primate studies to shed light on human behavioral evolution. A Dutch/American biologist, de Waal is known for his work on the behavior and social intelligence of primates.

In 2011, Discover Magazine named him among the 47 (All Time) Great Minds of Science, and in 2019, Prospect Magazine ranked him fourth among the World's Top Thinkers. His scientific work has been published in hundreds of articles in journals such as Science and Nature, as well as in specialized volumes on animal behavior. His dozen popular books, translated into over 20 languages, have made him one of the world's most visible primatologists.

De Waal's bestsellers include "Are We Smart Enough to Know How Smart Animals Are?" and "Mama's Last Hug." His latest book is titled "Different: Gender Through the Eyes of a Primatologist." Following his presentation, de Waal will be available for a book signing.

The Shaw Lecture Series is funded by a USI Foundation endowment with support by the USI Biology Department and the Pott College of Science, Engineering, and Education.

For questions, contact Dr. Marlene Shaw, Professor Emerita of Biology,at mshaw@usi.edu.


Use This Powerful Theory to Be a Better Leader – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Adept and nimble leadership is essential in today's fast-paced and ever-changing business world. Those in such positions are responsible for setting the tone, driving innovation and inspiring others to achieve. This is a heady mix of tasks, but how to perfect them? One powerful way is by leveraging Rene Girard's mimetic theory.

Girard, a French historian, literary critic and philosopher, developed a theory of human behavior that emphasizes the role of imitation and desire in social interactions. His concepts were based on the idea that, from a very young age, human beings are fundamentally imitative creatures, and that our desires and behaviors are largely shaped by the desires and behaviors of those around us. The resulting theory has gained a significant amount of attention in recent years, particularly among business leaders and entrepreneurs, not least because it provides a powerful framework for understanding both employee and consumer behavior.

The process plays out simply: when we see someone else achieve or acquire something we desire, we are more likely to imitate their behavior in the hopes of doing the same. Leaders would be well advised to apply this insight in the process of motivating and inspiring teams.


In a sense, we are always in competition with others, trying to outdo them in our pursuit of shared desires. However, this competition can often lead to conflict and rivalry, especially in a business setting where individuals may have different goals and aspirations. Mimetic theory helps leaders understand this, and ideally to find ways of channeling it positively, such as promoting healthy competition and collaboration in which team members work together to achieve shared goals. In such a culture of camaraderie and innovation, employees can feel valued, engaged and motivated to achieve their full potential.

To leverage Girard's theory, leaders can choose from several strategies (or apply them all):

Lead by example and demonstrate the behaviors and attitudes that they want others to emulate in an organization.

Identify shared desires and goals, and align those with the goals of the organization as a whole.

Create a culture of collaboration that values teamwork, open communication and shared ownership.

Encourage innovation and creativity by creating an environment that values pioneering ideas.


To put these strategies into action, follow these steps:

1: Evaluate the current company culture and identify areas for improvement.

2: Set goals and objectives that align with the company's vision and mission.

3: Communicate this new approach to employees and provide training and resources to support their success.

4: Monitor progress and make adjustments as needed.

To illustrate a few key aspects of mimetic theory, consider the example of Microsoft. In 2014, the company's new CEO, Satya Nadella, adopted a "growth mindset" that emphasized collaboration, creativity and innovation. He encouraged employees to work together to achieve shared goals and provided platforms for them to exchange ideas. Under Nadella's leadership, Microsoft's stock price nearly tripled, and the company's market capitalization grew to more than $2 trillion.

An example of a different kind can be found in F. Scott Fitzgerald's classic novel, The Great Gatsby. The character of Jay Gatsby, who supposedly embodies the American Dream, becomes the object of desire for many other characters in the novel, including narrator Nick Carraway and Gatsby's former lover, Daisy Buchanan. They imitate his behaviors and embrace similar desires, hoping to achieve the same success and happiness. Ultimately, however, the desire for imitation and competition leads to conflict and tragedy, which highlights the dangerous potential of unchecked mimetic desire. Business leaders can learn from this, too, by finding ways to channel desire positively, fostering healthy competition and collaboration.


Giraud's theory offers a roadmap for understanding the power of imitation, and so achieving success. With the right strategies, leaders can leverage it to their teams to achieve greatness and take companies to the next level.


What are bio-computers? How can they help us dive deep into the human brain? – Jagran Josh

Science never stops evolving, and this time it has come up with a novel research area known as organoid intelligence. Can science and technology read the human mind? Let's find out.

Johns Hopkins University scientists recently brought forward a plan for a novel area of research known as organoid intelligence. This field of study intends to create biocomputers: a blend of brain cultures developed and grown in laboratories, input and output devices, and real-world sensors. The aim is to harness the brain's processing power and dive deep into the biological basis of cognition, learning, and a myriad of neurological disorders.

Humans have always been inquisitive about the human mind. However, unlike all other parts of the body, the brain has never been easy to study. Earlier, methods like ablation were used on animals, especially rats, to study brain structures that are similar in rats and humans. Harming animals in the process of advancing our understanding of human behavior has always been controversial. Moreover, while the rat brain was an easier and more accessible proxy for the human brain, one cannot ignore the massive differences in structure and function between the two. Later came advanced methods like EEG, MEG, and fMRI to study the human brain directly.

Now, the technology is perhaps at its best, and 3D cultures of the brain could be the next big thing. Modern-day scientists are designing brain organoids: 3D cultures of brain tissue grown in laboratories. These organoids, sometimes called "mini-brains," are built from human stem cells and can exhibit a myriad of structural and functional features of a developing human brain. Who thought mankind would be able to create a mini human brain in the 21st century?

Human behavior is based on internal or external stimulation. The human brain requires various sensory inputs (vision, smell, touch, and more), and that's part of what makes it a complex yet incredible organ. A science still in its infancy cannot compete with nature: brain organoids not only lack the sensory inputs of a normal human brain but also have no blood circulation.

Bio-computers would be designed and created by combining brain organoids with modern computing methods. Machine learning would be used to couple with the organoids, which would be grown inside flexible structures affixed with various electrodes, similar to the ones used for electroencephalogram readings.

Such structures would be able to record and study the firing patterns of neurons. They would also deliver electrical stimuli to mimic sensory input. Machine learning techniques would then be used to analyze human behavior and biology.

Not long ago, scientists grew human neurons on top of a microelectrode array that could both record and stimulate those neurons. With the help of positive or negative electrical feedback from the sensors, the neurons could be trained to generate the electrical activity pattern that would be produced if they were playing a game like table tennis.


Miscalibration of Trust in Human Machine Teaming – War On The Rocks

A recent Pew survey found that 82 percent of Americans are as wary as, or more wary than, excited about the use of artificial intelligence (AI). This sentiment is not surprising: tales of rogue or dangerous AI abound in pop culture. Movies from 2001: A Space Odyssey to The Terminator warn of the dire consequences of trusting AI. Yet, at the same time, more people than ever before are regularly using AI-enabled devices, from recommender systems in search engines to voice assistants in their smartphones and automobiles.

Despite this mistrust, AI is becoming increasingly ubiquitous, especially in defense. It plays a role in everything from predictive maintenance to autonomous weapons. Militaries around the globe are significantly investing in AI to gain a competitive advantage, and the United States and its allies are in a race with their adversaries for the technology. As a result, many defense leaders are concerned with ensuring these technologies are trustworthy. Given how widespread the use of AI is becoming, it is imperative that Western militaries build systems that operators can trust and rely on.

Enhancing understanding of human trust dynamics is crucial to the effective use of AI in military operational scenarios, typically referred to in the defense domain as human-machine teaming. To achieve trust and full cooperation with AI teammates, militaries need to ensure that human factors are considered in system design and implementation. If they do not, military AI use could be subject to the same disastrous and deadly errors that the private sector has experienced. To avoid this, militaries should ensure that personnel training educates operators both on the human and AI sides of human-machine teaming, that human-machine teaming operational designs actively account for the human side of the team, and that AI is implemented in a phased approach.

Building Trust

To effectively build human-machine teams, one should first understand how humans build trust, specifically in technology and AI. AI here refers to models with the ability to learn from data, a subset called machine learning. Thus far, almost all efforts to develop trustworthy AI focus on addressing technology challenges, such as improving AI transparency and explainability. The human side of the human-machine interaction has received little attention. Dismissing the human factor, however, risks limiting the positive impacts that purely technology-focused improvements could have.

Operators list many reasons why they do not trust AI to complete tasks for them, which is unsurprising given the wary cultural attitude toward the technology captured in the Pew survey above. However, research shows that humans often do the opposite with new software technologies. People trust websites with their personal information and use smart devices that actively gather that information. They even engage in reckless activity in automated vehicles that is not recommended by the manufacturer and can pose a risk to one's life.

Research shows that humans struggle to accurately calculate appropriate levels of trust in the technology they use. Humans, therefore, will not always act as expected when using AI-enabled technology; often they may put too much faith in their AI teammates. This can result in unexpected accidents or outcomes. Humans, for example, have a propensity toward automation bias, which is the tendency to favor information shared by automated systems over information shared by non-automated systems. The risk of this occurring with AI, a notorious black-box technology with frequently misunderstood capabilities, is even higher.
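A toy calculation makes the cost of miscalibration concrete. All numbers below are hypothetical: assume an AI aid that is highly accurate on routine cases but unreliable on rare edge cases where an unaided human still does reasonably well. An operator who defers on everything inherits the machine's edge-case failures, while a calibrated operator who overrides only in those situations does strictly better.

```python
def team_error_rate(edge_fraction, defer_on_edge):
    """Expected error rate of a human-machine team in a toy setting.

    Hypothetical numbers: the AI errs 5% of the time on routine cases
    but 70% on edge cases; the unaided human errs 30% everywhere.
    defer_on_edge=True models over-trust: the operator accepts the
    AI's output even on the cases it handles badly.
    """
    ai_routine_err, ai_edge_err, human_err = 0.05, 0.70, 0.30
    edge_err = ai_edge_err if defer_on_edge else human_err
    return (1 - edge_fraction) * ai_routine_err + edge_fraction * edge_err

# With 20% edge cases, blind deference nearly doubles the error rate:
overtrusting = team_error_rate(0.2, defer_on_edge=True)   # 0.18
calibrated   = team_error_rate(0.2, defer_on_edge=False)  # 0.10
```

The point of the sketch is not the particular numbers but the shape of the result: whenever the machine has failure modes the human can recognize, uniform deference is strictly worse than calibrated trust.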

Humans often engage in increasingly risky behavior with new technology they believe to be safe, a phenomenon known as behavioral adaptation. This is a well-documented occurrence in automobile safety research. A study conducted by University of Chicago economist Sam Peltzman found no decrease in the death rate from automobile accidents after the implementation of safety measures. He theorized this was because drivers, feeling safer as the result of the new regulations and safety technology, took more risks while driving than they would have before the advent of measures made to keep them safe. For example, drivers with anti-lock brakes were found to drive faster and closer behind other vehicles than those without. Even using adaptive cruise control, which maintains a set distance from the car in front of you, leads to an increase in risk-taking behavior, such as looking at a phone while driving. While it was later determined that the correlation between increased safety countermeasures and risk-taking behavior was not necessarily as binary as Peltzman initially concluded, the theory and the concept of behavioral adaptation itself have gained renewed attention in recent years to explain risk-taking behavior in situations as diverse as American football and the COVID-19 pandemic. Any human-machine teaming should be designed with this research and knowledge in mind.

Accounting for the Human Element in Design

Any effective human-AI team should be designed to account for human behavior that could negatively affect the team's outcomes. There has been extensive research into accidents involving AI-enabled self-driving cars, which have led some to question whether human drivers can be trusted with self-driving technology. A majority of these crashes involving driver assistance or self-driving technology have occurred with Tesla's Autopilot system in particular, leading to a recent recall. While the incidents are not exclusively a product of excessive trust in the AI-controlled vehicles, videos of these crashes indicate that this outsized trust plays a critical role. Some videos showed drivers asleep at the wheel, while others pulled off stunts like putting a dog in the driver's seat.

Tesla says its Autopilot program is meant to be used by drivers who are also keeping their eyes on the road. However, studies show that once Autopilot is engaged, humans tend to pay significantly less attention. There have been documented examples of deadly crashes with no one in the driver's seat or while the human driver was looking at their cell phone. Drivers made risky decisions they would not have made in a normal car because they believed the AI system was good enough to go unmonitored, despite what the company says and the myriad examples to the contrary. A report published as part of the National Highway Traffic Safety Administration's ongoing investigation into these accidents recommends that important design considerations include "the ways in which a driver may interact with the system or the foreseeable ranges of driver behavior, whether intended or unintended, while such a system is in operation."

The military should take precautions when integrating AI to avoid a similar miscalibration of trust. One such precaution could be to monitor the performance not only of the AI, but also of the operators working with it. In the automobile industry, video monitoring to ensure drivers are paying attention while the automated driving function is engaged is an increasingly popular approach. Video monitoring may not be an appropriate measure for all military applications, but the concept of monitoring human performance should be considered in design.

A recent Proceedings article framed this dual monitoring in the context of military aviation training. Continuous monitoring of the health of the AI system is like aircraft pre-flight and in-flight system monitoring. Likewise, aircrew are continuously evaluated on their day-to-day performance. Just as aircrew are required to undergo ongoing training on all aspects of an aircraft's employment throughout the year, so too should AI operators be continuously trained and monitored. This would not only ensure that military AI systems were working as designed and that the humans paired with those systems were not inducing error, but also build trust in the human-machine team.

Education on Both Sides of the Trust Dynamic

Personnel should also be educated about the capabilities and limitations of both the machine and human teammates in any human-machine teaming situation. Civilian and military experts alike widely agree that a foundational pillar of effective human-machine teaming is going to be the appropriate training of military personnel. This training should include education on the AI system's capabilities and limitations, incorporating a feedback loop from the operator back into the AI software.

Military aviation is deeply rooted in a culture of safety through extensive training and proficiency through repetition, and this military aviation safety culture could provide a venue for necessary AI education. Aviators learn not just to interpret the information displayed in the cockpit but also to trust that information. This is a real-life demonstration of research showing that humans will more accurately perceive risks when they are educated on how likely they are to occur.

Education specifically relating to how humans themselves establish and maintain trust through behavioral adaptation can also help operators become more self-aware of their own, potentially damaging, behavior. Road safety research and other fields have repeatedly shown that this kind of awareness training helps to mitigate negative outcomes. Humans are able to self-correct when they realize they're engaging in undesirable behavior. In a human-machine teaming context, this would allow the operator to react to a fault or failure in that trusted system while retaining the benefit of increased situational awareness. Therefore, implementing AI early in training will give future military operators confidence in AI systems, and through repetition the trust relationship will be solidified. Moreover, a better understanding not only of the machine's capabilities but also of its constraints will decrease the likelihood of the operator incorrectly inflating their own level of trust in the system.

A Phased Approach

Additionally, a phased approach should be taken when incorporating AI to better account for the human element of human-machine teaming. Often, new commercial software or technology is rushed to market to outpace the competition and ends up failing in operation. This often costs a company more than if it had delayed the rollout to fully vet the product.

In the rush to build military AI applications for a competitive advantage, militaries risk pushing AI technology too far, too fast, to gain a perceived advantage. A civilian sector example of this is the Boeing 737 Max software flaws, which resulted in two deadly crashes. In October 2018, Lion Air Flight 610 crashed, killing all 189 people on board, after the pilots struggled to control rapid and un-commanded descents. A few months later, Ethiopian Airlines Flight 302 crashed, killing everyone on board, after pilots similarly struggled to control the aircraft. While the flight-control software that caused these crashes is not an example of true AI, these fatal mistakes are still a cautionary tale. Misplaced trust in the software at multiple levels resulted in the deaths of hundreds.

The accident investigations for both flights found that erroneous input from an angle-of-attack sensor to the flight computer caused a cascading and catastrophic failure. These sensors measure the angle of the wing relative to the airflow and give an indication of lift, the ability of the aircraft to stay in the air. In this case, the erroneous input caused the Maneuvering Characteristics Augmentation System, an automated flight control system, to put the plane into repeated dives because it thought it needed to gain lift quickly. These two crashes resulted in the grounding of the entire 737 Max fleet worldwide for 20 months, costing Boeing over $20 billion.
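The accident findings point to a broader design lesson: safety-critical logic should vote across redundant inputs rather than act on a single sensor. The sketch below is a deliberately simplified illustration of that principle, not the actual Boeing flight-control logic; the threshold and readings are invented.

```python
import statistics

STALL_THRESHOLD_DEG = 15.0  # illustrative angle-of-attack limit, not Boeing's

def trim_command(aoa_readings, vote=False):
    """Decide whether to command nose-down trim from angle-of-attack data.

    With vote=False the logic trusts only the first sensor, so a single
    faulty reading can trigger an unwanted dive; with vote=True it takes
    the median of redundant sensors, masking one bad input.
    """
    aoa = statistics.median(aoa_readings) if vote else aoa_readings[0]
    return "nose_down" if aoa > STALL_THRESHOLD_DEG else "hold"

# One faulty sensor reads 45 degrees while the aircraft is actually fine:
readings = [45.0, 5.0, 5.2]
single_sensor = trim_command(readings)             # "nose_down": a spurious dive
voted = trim_command(readings, vote=True)          # "hold": the fault is outvoted
```

Median voting over three inputs tolerates any single faulty sensor; with only two sensors, as on the aircraft in question, the system can detect a disagreement but cannot tell which reading to believe.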

This was all caused by a design decision and a resultant software change that was assumed to be safe. Boeing, in a desire to stay ahead of its competition, updated a widely used aircraft, the base model 737. Moving the engine location on the wing of the 737 Max helped the plane gain fuel efficiency but significantly changed its flight characteristics. These changes should have required Boeing to market it as a completely new airframe, which would have meant significant training requirements for pilots to remain in compliance with the Federal Aviation Administration. This would have cost significant time and money. To avoid this, the flight-control software was programmed to make the aircraft fly like an older model 737. While flight-control software is not new, this novel use allowed Boeing to market the 737 Max as an update to an existing aircraft, not a new airframe. There were some issues noted during testing, but Boeing trusted the software due to previous flight control system reliability and pushed the Federal Aviation Administration for certification. Hidden in the software, however, was erroneous code that caused the cascading failures seen on the Ethiopian and Lion Air flights. Had Boeing not put so much trust in the software, or had the regulator not put such trust in Boeing's certification of the software, these incidents could have been avoided.

The military should take this as a lesson. Any AI should be phased in gradually to ensure that too much trust is not placed in the software. In other words, when implementing AI, militaries need to consider cautionary tales such as the 737 Max. Rather than rushing an AI system into operation to achieve a perceived advantage, it should be carefully worked into training and other events before full certification to ensure operator familiarity and transparency into any potential issues with the software or system. This is currently being demonstrated by the U.S. Air Force's 350th Spectrum Warfare Wing, which is tasked with integrating cognitive electromagnetic warfare into its existing aircraft electromagnetic warfare mission. The Air Force has described the ultimate goal of cognitive electromagnetic warfare as establishing a distributed, collaborative system that can make real-time or near-real-time adjustments to counter advanced adversary threats. The 350th, the unit tasked with developing and implementing this system, is taking a measured approach to ensure that warfighters have the capabilities they need now while also developing the algorithms and processes needed for the future success of AI in the electromagnetic warfare force. The goal is to first use machine learning to speed up the aircraft software reprogramming process, which can sometimes take several years. The use of machine learning and automation will significantly shorten this timeline while also familiarizing engineers and operators with the processes necessary to implement AI in any future cognitive electromagnetic warfare system.

Conclusion

To effectively integrate AI into operations, more effort needs to be devoted not only to optimizing software performance but also to monitoring and training the human teammates. No matter how capable an AI system is, if human operators miscalibrate their trust in the system, they will be unable to effectively capitalize on AI's technological advances and may make critical errors in design or operation. In fact, one of the strongest and most repeated recommendations to come out of the Federal Aviation Administration's joint investigation of the 737 Max accidents was that human behavior experts needed to play a central role in research and development, testing, and certification. Likewise, research has shown that in all automated vehicle accidents, operators did not monitor the system effectively. This means that operators need to be monitored as well. Militaries should account for the growing body of evidence that human trust in technology and software is often miscalibrated. By incorporating human factors into AI system design, building relevant training, and using a carefully phased approach, the military can establish a culture of human-machine teaming that avoids the failures seen in the civilian sector.

John Christianson is an active-duty U.S. Air Force colonel and current military fellow at the Center for Strategic and International Studies. He is an F-15E weapons systems officer and served as a safety officer while on an exchange tour with the U.S. Navy. He will next serve as vice commander of the 350th Spectrum Warfare Wing.

Di Cooke is a visiting fellow with the International Security Program at the Center for Strategic and International Studies, exploring the intersection of AI and the defense domain. She has been involved in policy-relevant research and work at the intersection of technology and security across academia, government, and industry. Prior to her current role, she was seconded to the U.K. Ministry of Defence from the University of Cambridge to inform the U.K. defense AI operationalization approach and ensure alignment with its AI Ethical Principles.

Courtney Stiles Herdt is an active-duty U.S. Navy commander and current military fellow at the Center for Strategic and International Studies. He is an MH-60R pilot and just finished a command tour at HSM-74 as part of the Eisenhower Carrier Strike Group. Previously, he has served in numerous squadron and staff tours, as an aviation safety and operations officer, and in various political-military posts around Europe and the western hemisphere discussing foreign military sales of equipment that utilized human-machine teaming.

The opinions expressed are those of the authors and do not represent the official position of the U.S. Air Force, U.S. Navy, or the Department of Defense.

Image: U.S. Navy photo by John F. Williams

Continued here:
Miscalibration of Trust in Human Machine Teaming - War On The Rocks

Students share perspectives on new design and data science majors – The Stanford Daily

In September, Stanford announced two major changes to its undergraduate education offerings: the former product design major was rebranded to the new design major, and the former data science minor would now be offered as both a B.A. and B.S. degree.

Current and prospective students from the programs shared their thoughts with The Daily.

New Design Major

The design major now belongs under the d.school's interdisciplinary programs (IDPs), and is categorized as a Bachelor of Science (B.S.) degree in Design. Previously, the product design major resulted in the conferral of a B.S. in Engineering. However, students may still choose to complete the product design engineering subplan if they matriculated before the 2022-2023 academic year.

The design major now has three methods tracks: Physical Design and Manufacturing, AI and Digital User Experience, and Human Behavior and Multi-stakeholder Research. From there, students also select one Domain Focus area, which may be Climate and Environment, Living Matter, Healthcare and Health Technology Innovation, Oceans and Global Development, and Poverty. While not possible in the 2022-23 academic year, students will be able to propose their own Domain Focus area as an honors option in the future.

Sydney Yeh '26 said that the major is "a great way to use my creative skills, apply it to technology and move with the current times."

She also believes that the shift from product design to broader design offerings is beneficial. "[While] people are pretty split [on this issue], I think it's a good change because there's more variety in what you can specialize in," Yeh said. "Before, it was mostly physical design and designing products."

Yeh intends to pursue the digital design track, as she is interested in designing apps and interfaces. She says the design major effectively weaves together her interests in art and computer science. "Originally, I was going to combine art and CS and design my own major, but found that the design major fits my goals," Yeh said.

Hannah Kang '26, another prospective design major, echoed Yeh's sentiments about combining interests in computer science and art. "[The major allows me] to integrate the art aspect and the STEM aspect that I know for sure that Stanford is excelling in," Kang said.

Kang also expressed her appreciation for the CS requirements of the design major, saying, "I'm trying to take more CS classes so that I can have at least the most fundamental CS knowledge [and can] seek ways to use my engineering skills to create something."

Sosi Day '25, a design major on the human behavior track, praised the collaborative and multidisciplinary aspects of design. "There's a lot of communal learning," she said. "It's also very creative, and it engages a lot of different parts of my brain. A lot of it is artistic, but there's also problem solving skills involved."

Day said that as someone who seeks to apply design thinking to issues beyond manufacturing, the change in major has been a positive one for her. "I never considered doing a product design major last year, but now that they've added two new tracks, it's changed my mind," she said.

New Data Science Major

The new data science major was also announced this year. Whereas previously, students could only minor in data science, undergraduates now have the option of majoring on either the B.S. or B.A. track.

Professor Chiara Sabatti, associate director of Data Science's B.S. track, said that the B.A. has similar foundational requirements to the B.S., but has a concentration of interest in applying data science methods to solve problems in the social sciences.

According to Sabatti, the B.S. track is closely aligned with the former mathematical and computational science (MCS) major, which was phased out this year. She explained that the change to a data science major with broader offerings was made to more closely match MCS graduates' career paths, saying that "[the changes] are in response to the needs of the students and the demands of society."

Professor Emmanuel Candès, the Barnum-Simons Chair of math and statistics, said that the formal name change from MCS to data science occurred last spring, though the process of changing the curriculum and developing the B.S. and B.A. paths began in 2019.

Candès echoed Sabatti's reflections about students' career paths, saying, "we realized that more and more of our graduates [of Mathematical and Computational Science] were entering the workforce as data scientists, and it seems like the [new] name represents more of a reality."

The major program has shifted to accommodate this growing interest in data, according to Sabatti.

"The structure of the program has changed to make sure that we prepare students for this sustained interest in data science," Sabatti said. "For example, there's some extra requirements in computing, because the data sets that people need to work with require substantial use of computational devices, [and] there's some extra classes on inference and how you actually extract information from this data."

Similar to the new design major, many prospective data science majors say the interdisciplinary offerings of the major are enticing.

"I like [data science] because it's an intersection between technical fields and humanities-focused fields," said Caroline Wei '26, a prospective B.A. data science major on the Technology and Society pathway. "What makes data science so powerful is it gives you the option to draw conclusions about society and present that to the rest of the world."

Similarly, Savannah Voth '26, another prospective data science major, described the humanities and technical skills she feels the major helps her build. "The data science B.A. allows me to use quantitative skills and apply it to the humanities and social sciences," she said.

Voth expressed some concerns regarding the ability to connect required coursework with data science more directly.

"One issue is that the requirements include classes in statistics and classes in areas you want to apply data science to, but there aren't as many opportunities to connect them," Voth said. "It would be cool if for each pathway, there was at least one class that is about data science applied to that topic."

Despite this concern, Voth praised the openness of the major's coursework. "I like how [the requirements] are very flexible and you can choose which area to focus on through the pathways."

Wei highlighted the effectiveness of the core requirements in building skills and perspectives, saying, "The ethics [requirement] is relevant since you have to know how to handle data in an ethical way, the compsci core combines the major aspects of technical fields ... and the social science core helps you see why those technical skills are important."

Go here to read the rest:
Students share perspectives on new design and data science majors - The Stanford Daily

How disgust-related avoidance behaviors help animals survive – Phys.org

Overlooked species in risk perception research and how disease avoidance and disgust may be used in different contexts of conservation and wildlife management. Credit: Dr Cecile Sarabian

Animals risk getting sick every day, just like humans, but how do they deal with that risk? An international team led by Dr. Cecile Sarabian from the University of Hong Kong (HKU) examines the use of disgust-related avoidance behaviors amongst animals and their role in survival strategy.

The feeling of disgust is an important protective mechanism that has evolved to shield us from disease risks. Triggered by sensory cues, such as the sight of an infected wound, disgust sets off a suite of behavioral, cognitive and/or physiological responses that enable animals to avoid pathogens and toxins.

An international team, led by Dr. Cecile Sarabian from the University of Hong Kong, has turned its attention to the emotion's role in animal disease avoidance, an area of study that has typically been neglected. The team developed a framework to test disgust and its associated disease avoidance behaviors across species, social systems and habitats.

Characteristics such as whether a species lives in groups or alone are important when analyzing their response to disease. The paper, published in Journal of Animal Ecology, highlights the positives and negatives of experiencing disgust to avoid disease.

According to previous reports, over 30 species use disease avoidance strategies in the wild; the authors, however, provide predictions for seven additional species that were previously overlooked. These include the common octopus, a species native to Hong Kong, and the red-eared slider, an invasive species.

Species exhibit varying levels of disease avoidance behavior depending on their social systems and ecological niches. Solitary species can be less vulnerable to socially transmitted diseases, and thus less adapted to recognize and avoid that risk. Group-living species, by contrast, are more exposed, but also more likely to recognize and avoid sick animals.

However, species living in colonies like rabbits or penguins may be more likely to tolerate infected mates. As the species depend on each other to survive, collective immunity can be less costly than having to isolate. This model could also apply to human diseases, for instance, the COVID-19 pandemic.

Furthermore, the authors suggest five practical applications of disgust-related avoidance behaviors in wildlife management and conservation, including endangered species rehabilitation, crop damage and urban pests. For example, disgust-related behaviors could be used to modulate the space use and food consumption of crop-damaging species. This could involve creating an environment that is unappealing to pests.

"Given the escalation of conflicts between humans and wildlife, the translation of such knowledge on disease risk perception and avoidance into relevant conservation and wildlife management strategies is urgent," says Dr. Sarabian.

More information: Cécile Sarabian et al, Disgust in animals and the application of disease avoidance to wildlife management and conservation, Journal of Animal Ecology (2023). DOI: 10.1111/1365-2656.13903

Journal information: Journal of Animal Ecology

Original post:
How disgust-related avoidance behaviors help animals survive - Phys.org

Microbiomes Connected More than Ever to Psychological Well Being – Greenwich Sentinel

New research has shown that the microbiome, the vast communities of microbes in our digestive tract, can affect our emotions and cognition. Studies have suggested that the microbiome plays a role in influencing moods and the state of psychiatric disorders, as well as information processing. However, the mechanisms behind how the microbiome interacts with the brain have remained elusive.

Recent research has built on earlier studies that demonstrate the microbiome's involvement in responses to stress. Focusing on fear and how it fades over time, researchers have identified differences in cell wiring, brain activity, and gene expression in mice with depleted microbiomes. The study also identified four metabolic compounds with neurological effects that were far less common in the blood serum, cerebrospinal fluid, and stool of the mice with impaired microbiomes.

The researchers were intrigued by the concept that microbes inhabiting our bodies could affect our feelings and actions. The study's lead author, Coco Chu, a postdoctoral associate at Weill Cornell Medicine, set out to examine these interactions in detail with the help of psychiatrists, microbiologists, immunologists, and scientists from other fields.

The research has pinpointed a brief window after birth when restoring the microbiome could still prevent adult behavioral deficits. The microbiome appeared to be critical in the first few weeks after birth, which fits into the larger idea that circuits governing fear sensitivity are impressionable during early life.

The research on microbial effects on the nervous system is a young field, and there is even uncertainty around what the effects are. Previous experiments reached inconsistent or contradictory conclusions about whether microbiome changes helped animals to unlearn fear responses.

The findings from the recent study have given extra weight to the specific mechanism causing the behavior observed, pointing to the possibility of predicting who is most vulnerable to disorders like post-traumatic stress disorder.

Although the interactions of the brain and the gut microbiome differ in humans and mice, the study has identified potential interventions targeting the microbiome that might be most effective in infancy and childhood when the microbiome is still developing, and early programming takes place in the brain.

The study's findings could have significant implications for the future of potential therapies and deepen scientific knowledge of the mechanisms that influence core human behaviors.

Read this article:
Microbiomes Connected More than Ever to Psychological Well Being - Greenwich Sentinel