Human Behavior And Group Dynamics Can Be Reshaped By AI Use, Including Via AI Self-Driving Cars – Forbes

AI will substantively impact human behavior including social group interactions.

Suppose you interact with an AI system, such as a robot, and in so doing your behavior changes based on that interaction.

This makes sense in that today we already interact with the likes of Alexa and Siri, AI systems employing a limited capability of Natural Language Processing (NLP), and find ourselves perhaps changing what we do next as a result of the AI interaction (I'll go ahead and put on my raincoat and take my umbrella, after discussing the forecasted weather with Alexa).

Let's rev this up a notch.

Suppose you and your buddies opt to interact with an AI system collectively, as a group, and some form of substantive interaction takes place.

Would the group dynamics and social interaction be potentially altered as a result of having the AI system engaged in the interaction with you all?

Yes, indeed, and furthermore the manner in which the AI interacted and what it had to say could also impact the viewpoints and perceptions of the humans in the group, along with affecting the human-to-human group or social dialogue (the cohesion of the group, its tenor and tone, its focus, the engagement within the group, etc.).

A recent Yale study conducted an experiment in which humans in small groups of three interacted with an AI system, deployed as a likable-looking robot, to play a game, and the robot was pre-programmed to provide one of three experimental treatments: (1) the robot expresses self-deprecating commentary that ostensibly reveals a sense of personal vulnerability to the group, (2) the robot is neutral in its commentary, and (3) the robot is silent.

The researchers reported that the vulnerable utterances by the robot not only influenced groups to talk more, but also positively shaped the directionality of the utterances to be more evenly balanced between the other two human members of the group (see this link for the research paper; the authors of the study are Margaret L. Traeger, Sarah Strohkorb Sebo, Malte Jung, Brian Scassellati, and Nicholas A. Christakis).

Having conducted similar AI research that explores the impact of AI on human behavior, and likewise having deployed AI systems in industry, I've found it useful to characterize these efforts as follows.

We'll use the letter R to represent robots, and the letter H to represent humans.

The nomenclature of 1R <-> 1H means that we have one robot that is interacting with one human.

This is a commutative expression, in that we'll say that 1R <-> 1H is equal to and no different than if we were to indicate 1H <-> 1R.

Next, we'll introduce intentionality and the changing of behavior.

If we have 1R -> 1H, it means that one robot is interacting with one human and that the end result is some form of behavioral change exhibited by the human (which can arise via intentional actions of the R, or unintentionally so).

Of course, in the real world, we could have more than one human involved, having the humans participating as a group, so the group aspect is: 1R -> nH.

This means that we have one robot that is changing the behavior of a group of humans, wherein n is some number of 2 or greater.

To make it clear that group dynamics are involved, this is included too: 1R -> nH : nH <-> nH.

The latter portion of nH <-> nH helps to remind us that the humans in the group are interacting with each other (since otherwise, it could be that the humans are told not to interact with each other or for some reason decide purposely not to interact, which, admittedly, could also be shaped via the R, but that's an additional variant for another day).
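To make the nomenclature concrete, here is a minimal sketch in Python (my own illustrative encoding, not anything drawn from the research) that represents the R and H parties and their interactions as simple data types:

```python
# A minimal illustrative encoding (my own, not from the research) of the
# R/H nomenclature: parties are robots (R) or humans (H), singly (1) or as
# a group (n), and interactions are either mutual (<->) or directed (->).
from dataclasses import dataclass

@dataclass(frozen=True)
class Party:
    kind: str   # "R" for a robot/AI system, "H" for a human
    count: str  # "1" for a single actor, "n" for a group (n >= 2)

    def __str__(self) -> str:
        return f"{self.count}{self.kind}"

@dataclass(frozen=True)
class Interaction:
    source: Party
    target: Party
    mutual: bool  # True for "<->" (commutative), False for "->" (behavior change)

    def __str__(self) -> str:
        return f"{self.source} {'<->' if self.mutual else '->'} {self.target}"

# 1R -> nH : nH <-> nH, expressed as two linked interactions.
robot, humans = Party("R", "1"), Party("H", "n")
case = [Interaction(robot, humans, mutual=False),
        Interaction(humans, humans, mutual=True)]
print(" : ".join(str(i) for i in case))  # prints: 1R -> nH : nH <-> nH
```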

One other important point is that even though the R is used to represent a robot, a fuller way to envision this aspect is to think of the R as any AI system that is reasonably intelligent-like; it decidedly does not need to be the kind of space-age robot that we often have in mind, i.e., there doesn't necessarily need to be a slew of mechanical arms, legs, and other such human-like mechanizations.

Why care about all of this?

Because we are soon enough going to have widespread advanced AI systems that interact with humans beyond just a one-on-one basis (the 1R <-> 1H), though even one-on-one they can still impact human behavior (the 1R -> 1H), and they will certainly ratchet up to impacting human behavior that affects social interaction as a group (the 1R -> nH : nH <-> nH).

Developers and those fielding AI systems ought to be thinking carefully about how their AI is going to potentially impact humans and the inner group dynamics among humans during the interactions that those AI systems undertake with us.

In addition, humans need to be mindful that the AI system can potentially change our behavior, for the good or possibly for the bad, along with changing how we behave in a group setting with our fellow humans.

If we don't make sure that we are on our toes, the AI can cleverly lead us down a primrose path, getting groups of humans to become incensed, perhaps take to violent action, or otherwise produce untoward outcomes (getting humans to furtively work ourselves into a tizzy among ourselves).

Of course, you can also take the glass-is-half-full viewpoint, and suggest that perhaps the AI system might stoke humans in a group setting to be more productive, more open to each other, and otherwise spur humans to be more, well, humane with each other.

This is why the recent spate of AI Ethics guidelines is so important, and why I keep pounding away at having the AI community be mindful of how they are designing, developing, and fielding the myriad AI systems that are appearing in a dizzying fashion and are going to become integral to our daily lives.

For my analysis of the Rome Call For AI Ethics, see this link; for my analysis of the Department of Defense principles of AI Ethics, see this link.

The AI genie is being let out of the bottle so quickly and with so little scrutiny and caution that we might either be shooting ourselves in the foot as humanity or boosting ourselves to new heights, yet right now it is all taking place with little thought as to which way this is going to go.

I'd prefer that things end up on the side of enhancing mankind, the so-called AI For Good, and avoid or mitigate what we know will just as certainly emerge, which is the AI For Bad.

On the topic of research studies, there are ways to further explore this question about AI and human behavior encompassing group dynamics.

For example, first consider this: 1H -> 1R

This use case looks at how the human can potentially change the behavior of the robot or AI system, perhaps convincing the robot to take actions that might not otherwise have taken place without the human interaction.

Amplifying that further, consider this: 1H -> nR : nR <-> nR.

In this use case, there is a group of robots or AI systems that are interacting jointly as a group (that's the nR <-> nR), and the human is impacting the robots, both at the individual robot level and in terms of how and what the robots are doing as a federated or group interaction.

Many are caught off-guard by that formulation, not realizing that, yes, we are gradually going to have multiple robots that interact with each other, doing so in a manner of human-like group dynamic interactions (for my discussion of federated AI, see this link here).

For those that like twisters and puzzles, here's something you might enjoy: 1R -> nR : nR <-> nR

That's the case of a robot that is interacting with a multitude of other robots, and for which the group dynamics of the other robots are being changed as a result of the robot that is initiating or potentially leading the interaction.

Finally, we can also reflect on humans in the same manner, namely this: 1H -> nH : nH <-> nH.

No robots are in that equation; it's a human-only instance.

We experience this every day.

Your boss comes into a conference room and announces to you and your fellow employees that the company is going to provide a bonus to those that exceed their quota (that's a behavior spark of the 1H -> nH). The group of employees engage in a discussion among themselves about what each will do (the nH <-> nH) in order to earn that potential bonus.

That's the happy-face version.

Revise the example somewhat for a sad-face version.

Your boss comes into the conference room and announces to you and your fellow employees that the company is going to start laying off people, namely those rated as subpar by their employee colleagues. Imagine what would happen next in the group dynamics among the employees: a potential nightmare of alliances, backstabbing, and the like.

For those of you who want to pursue the whole enchilada, consider these:

nR -> nH : nH <-> nH

nH -> nR : nR <-> nR

(nR -> nH) + (nH -> nR) : nR <-> nR ; nH <-> nH

Plus other variations.
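To see just how many variations there are, here is a small self-contained sketch (again, purely my own illustration) that enumerates the directed configurations, appending the within-group dynamics term whenever the target is a group:

```python
# An illustrative enumeration (my own construction) of the directed
# interaction variants; whenever the target is a group (nR or nH), the
# within-group dynamics term is appended, per the notation above.
from itertools import product

parties = ["1R", "nR", "1H", "nH"]

for source, target in product(parties, repeat=2):
    if source == target:
        continue  # skip degenerate self-pairings
    expr = f"{source} -> {target}"
    if target.startswith("n"):
        expr += f" : {target} <-> {target}"
    print(expr)
```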

I'll leave working through those dynamics as an exercise for those of you at home or in your research labs.

As mentioned earlier, the R is not merely the traditional kind of robot that comes to mind; it can be any intelligent-like AI system, which includes, for example, AI-based self-driving cars.

Here's the question then for today: Can AI-based self-driving cars potentially impact human behavior both on an individual basis and in the social dynamics or group interactions among humans?

I'd like to keep you in suspense and gradually reveal the answer, though I realize you are undoubtedly anxiously perched on the edge of your seat, so, yes, AI-based self-driving cars can indeed have such impacts.

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Human Behavior

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Some people perceive the AI driving system as nothing more than a simple machine. It is easy for us as human drivers to say that driving is a mundane task and readily undertaken.

Indeed, it is somewhat staggering to realize that in the United States alone there are about 220 million licensed drivers (see driver stats link here). Obviously, the driving of a car must be relatively simple if you can get that many people to presumably be able to do it (as some suggest, it isn't rocket science).

Yet, also consider how many life-or-death risks and consequences there are in the act of driving a car.

There are about 40,000 deaths per year due to car crashes in the U.S., and around 2.5 million bodily injuries to people involved in car crashes.

Turns out that getting an AI system to drive a car could be said to be easy, but the trick is getting it to drive a car safely, and to do so in the midst of the raucous and dangerous wilds of human drivers and everyday driving circumstances (in essence, getting AI to drive a car on a closed, utterly controlled track is readily viable, but once you put that same AI self-driving car into the real world with the rest of us, all bets are off, for now, and it's a doozy of a problem).

Once you put an AI self-driving car onto the public roadways, you've essentially added a new social actor into our midst.

Social actor, you might ask?

Yes, the AI system is now contending with all the same roadway social interactions that we humans do.

Think about your actions as a human driver.

Is that pedestrian going to suddenly dart into the street, and if so, should I slam on my brakes or instead keep going to scare them back onto the sidewalk in a game of chicken?

That's social interaction.

Now, with the advent of self-driving cars, rather than having a human driver in the driver's seat, the social actor becomes the AI system that's driving the self-driving car.

But there isn't anyone or anything sitting in the driver's seat anymore (though, as I've posted here, some are working on robot drivers that look and act like a traditional robot, which would sit inside the car and drive the vehicle, but this is not likely in the near-term and certainly not prior to the advent of today's version of self-driving cars).

I've exhorted that we are going to find ourselves confronted with a head-nod problem (see my analysis here), whereby as pedestrians we can no longer look at the head of the driver to get subtle but telling clues about what the driver is intending to do.

Thus, this vital social interaction is going to be broken, meaning that the pedestrian won't know what the AI driving system is thinking (there's not as yet a theory-of-mind that we can have about AI driving systems), and likewise the AI, if not properly developed, won't be gauging what the pedestrian might do.

There are various technological solutions being explored to deal with this social interaction, including, for example, putting LED displays on the exterior of the car to provide info to pedestrians, and there is the hope that V2P (vehicle-to-pedestrian) electronic messaging will help, though all of this has yet to be figured out.
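As a purely hypothetical illustration (no V2P standard is settled, and the field names below are my own invention), the kind of intent broadcast involved might look something like this:

```python
# A purely hypothetical sketch of an intent message a self-driving car might
# broadcast to nearby pedestrians; no actual V2P standard is assumed, and
# the field names are invented for illustration.
import json
import time

def build_intent_message(vehicle_id: str, maneuver: str, yielding: bool) -> str:
    """Package the AI driving system's declared intent as a JSON payload."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "maneuver": maneuver,  # e.g., "right_turn", "proceed_straight"
        "yielding_to_pedestrians": yielding,
        "advisory": "safe to cross" if yielding else "please wait",
    })

# An exterior LED display or a pedestrian's phone app could render this.
print(build_intent_message("sdc-042", "right_turn", yielding=False))
```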

Let's tie this together with the equations presented earlier.

A self-driving car is coming down the street, and meanwhile a pedestrian is getting ready to jaywalk.

We are on the verge of a social interaction, namely a 1R <-> 1H situation.

The AI of the self-driving car wants to stand its ground and intends to proceed unabated, so it somehow communicates this to the pedestrian, attempting a 1R -> 1H.

In what way will the communication occur, and will the human pedestrian acquiesce or resist and opt to jaywalk?

That's yet to be well-formulated.

Let's bump things up.

A group of strangers is standing on a street corner, waiting to cross the street (this is nH).

As a self-driving car reaches the corner, it wants to try to make sure that those pedestrians stand away from the corner, since the AI system is going to make that right turn without pausing.

We have this: 1R -> nH

It could be that the pedestrians do nothing and stand still.

Or, they might look at each other and try to figure out who has the greater will; as a pack of humans, they might decide to flow off the curb into the street, doing so basically to tell the self-driving car to back off and let them cross, though it could also be that they briefly confer and decide that it is better to let the AI do its thing and make the turn.

In essence, this happened: 1R -> nH : nH <-> nH.

Suppose the AI system had proffered a gentle, friendly indication, asking the group to remain out of the way; how might that have played out among the group in a social interaction about what to do?

Or, suppose the AI system had been stern, essentially threatening the group to stay put; what might the group dynamics have been in that case?
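As a toy way to think about those questions (entirely my own construction, not a validated behavioral model), one could simulate how the AI's tone shifts each pedestrian's inclination to step off the curb and how peer influence then pulls the group toward a shared decision, the nH <-> nH portion:

```python
# A toy simulation (my own construction, not a validated behavioral model)
# of 1R -> nH : nH <-> nH: the AI's tone shifts each pedestrian's inclination
# to step off the curb, and peer influence pulls the group toward consensus.
import random

def group_decision(tone: str, n: int = 5, rounds: int = 3) -> str:
    # Each pedestrian starts with a random inclination to cross (0..1).
    inclinations = [random.random() for _ in range(n)]
    # Assumed effect sizes: a stern warning discourages more than a gentle ask
    # (whether a stern tone instead provokes defiance is exactly the open question).
    shift = -0.3 if tone == "stern" else -0.1
    inclinations = [max(0.0, x + shift) for x in inclinations]
    # The nH <-> nH part: each round, individuals move toward the group mean.
    for _ in range(rounds):
        mean = sum(inclinations) / n
        inclinations = [0.5 * x + 0.5 * mean for x in inclinations]
    return "group crosses" if sum(inclinations) / n > 0.5 else "group waits"

random.seed(7)
for tone in ("gentle", "stern"):
    print(tone, "->", group_decision(tone))
```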

For more on the use of social reciprocity by AI in human-AI interactions, see my discussion at the link here.
