Category Archives: Human Behavior

Human Behavior ‘Trumps’ Battery Tech in Solar Energy Future – Inverse

Solar energy is the future, says the president of California's Public Utilities Commission. The only question is how to get people to realize the future can be right now.

"Solar is not a boutique product anymore; it's cheaper than just about any other fossil fuel on the market," commission president Michael Picker tells Inverse. "And one way we bring the cost down is enlisting people to step in and dance with the grid."

Dancing with the grid means charging electric cars during the day, when the grid is brimming with solar energy. It also means generally being cognizant about energy use in homes. One-quarter of energy use in households is being gulped by things that aren't being used but are simply plugged in, like toasters and rarely used televisions.

Picker wants every Californian to become a part of the state's modern energy infrastructure. The technology has arrived, works, and will be an integral part of California's mission to quit using fuel for power.

"People's behavior trumps everything," says Picker. "It's valuable to get people to stop being passive subjects in this dynamic world and provide more reliability on the grid. The solar eclipse shows people how to do it."

Californians will get a chance to adapt to the grid on August 21, when the coming solar eclipse will darken skies in the usually sun-drenched state. This will dramatically reduce the output from solar farms, which produce a quarter of California's renewable energy. Foreseeing the cut in energy production, the California Public Utilities Commission is asking folks to turn up the thermostat, switch to LED lights, and not charge electronics during the dimming event. The campaign, called "Do Your Thing For The Sun," reveals the importance of people's behavior in making solar energy a resilient, reliable energy source, even more so than burgeoning battery technologies, which could supply power when the solar farms can't.

For two hours, the moon will block up to three-quarters of the sun's rays, deflecting that valuable light back into space. These darkened skies mean a loss of 5,600 megawatts, enough to power 900,000 homes. If Californians don't reduce their energy use, the state will have to ignite backup fuel-burning engines to provide adequate power.

This energy-saving campaign, however, prompted Tesla CEO Elon Musk, who in 2014 directed his company to begin constructing the 1.9 million-square-foot battery-producing Gigafactory in the Nevada desert, to respond to an article about the campaign with a succinct tweet: "Batteries!" Musk's implication is that California's renewable future requires batteries to provide backup power when solar farms aren't generating enough electricity.

Musk, a battery and electric car innovator, has obvious incentives to promote state investment in battery projects. But Picker also acknowledges that batteries are important for California's energy future, noting that they're small, can fit in neighborhoods, and that the state plans to have 1.3 gigawatts of battery storage capacity by 2020. (One gigawatt can light 100 million LED bulbs or charge 12,500 Nissan Leafs, according to the Department of Energy.)
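The megawatt and gigawatt figures quoted above are easy to sanity-check with a few lines of arithmetic. The inputs are the article's and the DOE's stated numbers, taken at face value rather than independently verified:

```python
# Sanity-check the quoted figures. All inputs come from the article and the
# Department of Energy as quoted, not from independent measurement.
GW = 1_000_000_000  # watts in one gigawatt

# 5,600 MW of lost solar output vs. the 900,000 homes it is said to power
kw_per_home = 5_600 * 1_000 / 900_000
print(f"{kw_per_home:.1f} kW per home")  # implied average demand per home

# One gigawatt vs. the DOE's two rules of thumb
watts_per_bulb = GW / 100_000_000   # watts per LED bulb
kw_per_leaf = GW / 12_500 / 1_000   # kilowatts per charging Nissan Leaf
print(watts_per_bulb, kw_per_leaf)
```

The conversions come out at roughly 6.2 kW per home, 10 W per bulb (a typical LED), and 80 kW per car (a DC fast-charging rate), so the quoted figures are at least internally consistent.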

Still, Picker says people's actions, not batteries, will ultimately enable the state's renewable energy future, a future the California government is hell-bent on achieving. California is legally bound to produce a whopping 50 percent of its energy from renewable sources by 2030, a figure that doesn't include hydroelectric power generated from rivers pouring through dams. Picker says that if hydroelectricity is included in the equation, the state could achieve 70 percent renewable energy use by 2025 or 2026.

And the current governor, Jerry Brown, is fully intent on abandoning fossil fuels. While arguing for a climate cap-and-trade bill (which passed) earlier this month, the 79-year-old Brown made it clear that renewable energy, in which solar farms will loom large, will be a crucial part of California's future. "This isn't about some cockamamie legacy. This isn't for me, I'm going to be dead. It's for you, and it's damn real," he said.

Battery facilities will still spring up around the state as part of California's grand renewable energy effort, and they're likely to look similar to Elon Musk's local battery projects in the state. In January, Tesla completed an energy storage facility for the utility Southern California Edison in just 90 days. The facility is capable of storing enough energy for 2,500 homes.

But human behavior, not battery technology, will ultimately make the world's sixth-largest economy run largely on the sun, says Picker. "It's an entirely different creature than we've had in our electric supply and we have to think about it differently," he says.


Johns Hopkins center receives $300M from USAID to encourage healthy behaviors in developing countries – The Hub at Johns Hopkins

By Stephanie Desmon

The Johns Hopkins Center for Communication Programs has received a five-year award with a $300 million ceiling from the United States Agency for International Development to lead its social and behavior change programs around the world.

Breakthrough-ACTION will use evidence-based tools to encourage people in developing countries to adopt healthy behaviors, from using modern contraceptive methods to sleeping under bed nets to being tested for HIV.


Much of the work will harness the power of communication, from mass media campaigns to TV and radio dramas to simple posters in a health clinic, to inspire long-lasting change. It will be led by CCP, which is based at the Johns Hopkins Bloomberg School of Public Health.

"American security is advanced by supporting social and behavioral interventions, which improve health and promote social stability for people living in low- and middle-income countries," says Michael J. Klag, dean of the Bloomberg School. "Such evidence-based, innovative, and creative interventions should be part and parcel of every international health development program. This new award emphasizes the value of investing in social and behavior change programs."

The program builds on a prior five-year, $144 million, 31-country project called the Health Communications Capacity Collaborative, or HC3, and is expected to be double the size.

CCP will partner with:

Breakthrough-ACTION will also be supported in the field by ActionSprout, the International Center for Research on Women, and Human Network International.

"Harmful social norms and behavioral challenges stand in the way of better health, education, and livelihood for far too many people around the globe," says Susan Krenn, CCP's executive director. "With this investment, we have an incredible opportunity to test and scale new approaches, increase efficiency, and to serve more people. We can't wait to get started."

The Breakthrough-ACTION agreement was effective July 21, with work expected to begin immediately. While the exact geographic scope of the project has not yet been finalized, CCP expects to work in dozens of countries, primarily in Africa and Asia. CCP will build on previous successes in some countries and establish new partnerships in others.

While communication is at the heart of Breakthrough-ACTION, the project will also use other behavioral science approaches such as human-centered design and behavioral economics to create social and behavior change at the global, regional, and country level. CCP will use the expertise it gained during the recent West African Ebola outbreak to do similar emergency response work if needed.

Krenn says the award reflects new understandings about what works in international development.

"People are now appreciating that you need to do more than just build a health clinic and expect people to come," she says. "You have to motivate them, give them a reason to go. People need the information to make decisions for themselves and their families, especially when you're asking them to do something that isn't common practice such as sleeping under bed nets or accessing modern contraception. This kind of work provides the missing link, helping to motivate people to make better health decisions."

USAID administers the U.S. foreign assistance program, providing economic and humanitarian assistance in 100 countries worldwide.

David Holtgrave, professor and chair of the Bloomberg School's Department of Health, Behavior and Society, says the work of CCP shows the vital role behavior change can play in saving lives.

"Too often, when people think of development, they think only of food aid or drugs for health clinics and the like," he says. "What our work proves is that communication is an essential part of any comprehensive, effective development program."


Athletics: AIU must recruit more investigators – Reuters

(Reuters) - The Athletics Integrity Unit needs more investigators to deal with violations in the sport, the chair of the organization said.

The AIU replaced the International Association of Athletics Federations' former anti-doping department in April and is an independent body that handles aspects including testing, intelligence, and investigations related to misconduct within the sport.

While David Howman said he was pleased with the progress of the organization ahead of the World Athletics Championships, which begin in London on Friday, he underlined the need to hire the right investigators.

"We have a huge remit, we have a mandate which covers things from anti-doping to age manipulation. We need to have investigators to look at issues when they arise," Howman told Reuters TV.

"We need to make sure that we've got the right people in those places to conduct those investigations."

It was not immediately clear how many investigators the unit already has.

The AIU will collect more than 600 blood and urine samples prior to the championships as part of its anti-doping campaign. Howman could not guarantee that all athletes will be clean, but he expressed his confidence in the program.

"You can't guarantee that human behavior is such that won't happen," he said.

"So what we can do is say we're going to have best practice, best anti-doping program you could possibly have at the moment based on information and intelligence gathering and we'll see the outcome following the event."

Howman said the unit's existing measures could help it expose multiple athletes and officials in a single investigation, and said that an indicator of the organization's success would be clean athletes acknowledging its progress.

Reporting by Reuters TV; Writing by Aditi Prakash in Bengaluru; Editing by Alison Williams


Geckos rapidly evolve bigger heads in response to human activity – New Atlas

Life is adept at adapting to changes in the environment, and the environment is changing faster than ever, thanks to us. Evolution is normally thought of on the scale of millions of years, but a new study has observed how human activity has directly driven separate populations of geckos to evolve new attributes in the space of just 15 years.

The human activity in question began in 1996, with the building of the Serra da Mesa Hydroelectric Plant in Brazil. An artificial reservoir was created by flooding 656 sq mi (1,700 sq km), and in the process almost 300 new islands were cut off from the "mainland."


Researchers from the University of Brasilia and the University of California, Davis studied the newly-separated populations of animals on these islands, focusing on the most common gecko species in the area, Gymnodactylus amarali. The team found that over 15 years, G. amarali on the islands had grown bigger heads on average than those of the same species found on the mainland.

Before the dam was built, the geckos in the area had lived mostly off termites, with larger lizard species eating the bigger bugs and leaving the smaller ones to G. amarali. But it turns out that flooding the valley had wiped out those larger lizards, and with less inter-species competition for food, G. amarali adapted to fill the niche they left behind. The geckos grew larger mouths and heads to help them chow down on the newfound bounty of bigger termites.

It's a great "Petri dish" example of natural selection at work. Essentially, those G. amarali with bigger heads had access to more food, leading to them being more successful at survival and reproduction. Over time, the big-head genes were passed down to later generations in higher numbers, until it became a common characteristic of the island-dwelling geckos. Those still on the mainland, meanwhile, still faced competition from the larger lizards and so saw no change in head size, making them a perfect control group.

While their heads grew, the lizards' bodies stayed more or less the same size. The researchers say this is most likely a matter of efficiency: bigger bodies require more energy to run, which would offset the advantage of a larger head. And as further evidence that a bigger head relative to body size was the most efficient evolutionary path, the researchers found that the trait independently became common among populations on five islands isolated from each other.

The story of G. amarali isn't necessarily a sad one, but it does highlight just how much influence human behavior has on the environment, both directly and indirectly.

The research was published in the journal PNAS.

Source: Keele University via The Conversation


Here’s How Pheromones Are Driving Your Sex Life – The Alternative Daily (blog)

Cupid's arrow has long symbolized the mysteries of sexual attraction. But what factors really drive romantic interest? Scientists speculate that airborne chemical signals known as pheromones may explain the biochemistry of love and lust.

The existence of human pheromones remains controversial. It's clear that many plant and animal species use hormonal secretions to communicate information relating to reproduction. For example, in 1959 researchers discovered that female silkworms secrete a powerful aphrodisiac, called bombykol, that can attract male silkworms from miles away. To date, however, ironclad evidence that human behavior is governed by pheromones remains elusive.

Nevertheless, a number of intriguing studies suggest surprising ways that scents, secretions, and body odors containing pheromones may unconsciously influence human behavior.

According to psychologist Bettina Pause, "We've just started to understand that there is communication below the level of consciousness. My guess is that a lot of our communication is influenced by chemosignals."

Scientists explain that pheromones in animals are released in sweat, urine and saliva. These chemical messengers appear to have both an emotional and physical effect on other members of their species.

In mammals, for instance, pheromones are detected by a structure in the nose called the vomeronasal organ, which relays signals to the hypothalamus, a region of the brain that controls emotional states, hormonal regulation, and sexual arousal.

Some of the most important evidence for the existence of human pheromones comes from a 1998 study by Dr. Martha McClintock, who found that women who live in close proximity (the same dorm, for example) tend to have synchronized menstrual cycles. Scientists believe that chemical messages in sweat are responsible for this harmonization of periods.

One powerful form of evidence that pheromones exist comes from PET scanning technology, which can examine the effect of chemical odors on male and female brains. In one study, researchers found that certain hormone-like smells activated specific areas in the hypothalamus related to sexuality, which are not triggered by other odors.

In the words of Dr. David Berliner, "These findings corroborate that human pheromones do exist, and that women can communicate chemically with men and vice versa. This is a very important finding because it shows specific areas of the brain that are activated by these chemicals."

As you might expect, the brains of heterosexual men and women respond very differently to specific chemical messengers. For example, the brain regions in the female hypothalamus are highly active when women are exposed to testosterone-like chemicals (while exposure to estrogen-like messengers has no effect). Conversely, the brain areas in the male hypothalami light up like a Christmas tree when men are exposed to estrogen-like hormones.

Scientists believe this gender-specific response to chemical secretions shapes the way men and women perceive each other on an unconscious level.

If pheromones govern sexual arousal, then can they be harnessed to make people more attractive? More specifically, could pheromones be added to perfumes, which could be used to lure desired mates?

One study from the University of Chicago found that a pheromone-like chemical can heighten the heart rate, increase body temperature, and change mood. As yet, however, scientists have been unable to isolate the specific chemicals that trigger attraction and sexual desire.

Of course, many perfume manufacturers claim that their fragrances can spark desire. In fact, most of these products contain pheromones from animals. However, most scientists insist that pheromones are species-specific. In other words, until researchers can isolate specific human pheromones or develop synthetic analogs, a true love potion will remain elusive.

Nevertheless, scientists are continuing to investigate pheromones for their scientific, commercial, and therapeutic potential. For example, a company called Pherin Pharmaceuticals is looking into ways to use pheromone messengers to alleviate stress, anxiety, and menstrual cramps.

The science of pheromones is still very unsettled. However, let's look at some ways researchers believe these chemical signals may be influencing you and driving your sex life:

Research by Wysocki and others indicates that women prefer the musky scent of men who happen to have gene characteristics that match up well with their own DNA. In other words, the nose knows. That is, odor prints may be a huge driver of attractiveness in so far as they help people pick mates with DNA that complements their own. This unconscious form of selection benefits offspring.

Scientists are still a long way off from unraveling the mysteries of attraction and the role that pheromones may play in influencing sexual behavior. For centuries, people have used expressions like "love is in the air" and "love is a matter of chemistry." The emerging science of pheromones suggests that these proverbial adages may be far truer than anyone imagined.

Scott O'Reilly


The path of the solar eclipse is already altering real-world behavior – Washington Post

The upcoming solar eclipse is poised to become "the most photographed, most shared, most tweeted event in human history," in the words of one astronomer. Millions of people will watch it, potentially overwhelming the cities and towns along the eclipse's path of totality.

According to Google, interest in the eclipse has exploded nationwide in the past few months, mirroring national media attention. The county-level search data above, provided by Google, paints a striking picture: Interest in the eclipse is concentrated in the path of totality that cuts through the middle of the country, receding sharply the farther you go from that path.

The searches are an uncanny virtual reflection of the eclipse itself. Experts say the difference between a total eclipse (viewable only in the path of totality) and a partial one (everywhere else) is quite literally the difference between night and day. Web users in counties within the path of totality are looking up information on the eclipse five to 10 times more often than those well outside, according to Google's data.

In the past week, interest was highest in rural Clark County, Idaho, which lies directly in the eclipse's path. Nearby Idaho Falls plans to hold a four-day outdoor country music festival it's calling Moonfest.


Nebraska's Pawnee and Banner counties, situated at opposite ends of the state, show the next-highest amount of interest. Banner County lies just outside the path of totality, while Pawnee is directly within it.

Rounding out the top five counties are Rabun and Towns counties in northeast Georgia, both squarely within the eclipse's path.

In the past week, people searching the Web for the event are mostly looking up basic facts: a map of the eclipse's path, its exact time, and information on the special glasses you'll need to avoid burning your eyeballs while looking at it.

The physical world asserts itself in our virtual lives in myriad ways. Searches for seasonal affective disorder follow a north-south gradient, for instance, and you can use Google searches to track everything from flu season to mosquito hatchings.

The eclipse searches are perhaps the most striking example of this phenomenon yet, as millions of Americans along an invisible celestial path tap their keyboards together, unknown to one another.

Capital Weather Gang's Angela Fritz breaks down what will happen when a total solar eclipse crosses the U.S. on Aug. 21. (Claritza Jimenez, Daron Taylor, Angela Fritz/The Washington Post)



Celebrity Twitter accounts display ‘bot-like’ behavior – Phys.Org

'Celebrity' Twitter accounts - those with more than 10 million followers - display more bot-like behaviour than users with fewer followers, according to new research.

The researchers, from the University of Cambridge, used data from Twitter to determine whether bots can be accurately detected, how bots behave, and how they impact Twitter activity.

They divided accounts into categories based on total number of followers, and found that accounts with more than 10 million followers tend to retweet at similar rates to bots. In accounts with fewer followers however, bots tend to retweet far more than humans. These celebrity-level accounts also tweet at roughly the same pace as bots with similar follower numbers, whereas in smaller accounts, bots tweet far more than humans. Their results will be presented at the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) in Sydney, Australia.

Bots, like people, can be malicious or benign. The term 'bot' is often associated with spam, offensive content or political infiltration, but many of the most reputable organisations in the world also rely on bots for their social media channels. For example, major news organisations, such as CNN or the BBC, who produce hundreds of pieces of content daily, rely on automation to share the news in the most efficient way. These accounts, while classified as bots, are seen by users as trustworthy sources of information.

"A Twitter user can be a human and still be a spammer, and an account can be operated by a bot and still be benign," said Zafar Gilani, a PhD student at Cambridge's Computer Laboratory, who led the research. "We're interested in seeing how effectively we can detect automated accounts and what effects they have."

Bots have been on Twitter for the majority of the social network's existence; it's been estimated that anywhere between 40 and 60 percent of all Twitter accounts are bots. Some bots have tens of millions of followers, although the vast majority have fewer than a thousand; human accounts have a similar distribution.

In order to reliably detect bots, the researchers first used the online tool BotOrNot (since renamed Botometer), which is one of the only available online bot detection tools. However, their initial results showed high levels of inaccuracy. BotOrNot showed low precision in detecting bots that had bot-like characteristics in their account name, profile info, content, tweeting frequency, and especially redirection to external sources. Gilani and his colleagues then decided to take a manual approach to bot detection.

Four undergraduate students were recruited to manually inspect accounts and determine whether they were bots. This was done using a tool that automatically presented Twitter profiles, and allowed the students to classify the profile and make notes. Each account was collectively reviewed before a final decision was reached.

In order to determine whether an account was a bot (or not), the students looked at different characteristics of each account. These included the account creation date, average tweet frequency, content posted, account description, whether the user replies to tweets, likes or favourites received, and the follower-to-friend ratio. A total of 3,535 accounts were analysed: 1,525 were classified as bots and 2,010 as humans.

The students showed very high levels of agreement on whether individual accounts were bots. However, they showed significantly lower levels of agreement with the BotOrNot tool.

The bot detection algorithm they subsequently developed achieved roughly 86% accuracy in detecting bots on Twitter. The algorithm uses a type of classifier known as Random Forests, which uses 21 different features to detect bots, and the classifier itself is trained by the original dataset annotated by the human annotators.
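The pipeline described above, a Random Forest classifier trained on human-annotated accounts, can be sketched in a few lines of scikit-learn. Everything here is illustrative: the four features and the synthetic labels are made-up stand-ins for the study's 21 features and its 3,535 hand-labelled accounts, which the article does not reproduce.

```python
# Illustrative sketch of the described approach: a Random Forest trained on
# annotated accounts. Features and data below are invented stand-ins for the
# study's actual 21 features and hand-labelled dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic per-account features: tweets/day, retweet ratio, URL ratio,
# follower-to-friend ratio (all scaled to [0, 1] for simplicity)
X = rng.random((n, 4))

# Synthetic labels loosely echoing the reported pattern: heavy retweeting
# and frequent redirection to external sites look more bot-like
y = ((X[:, 1] + X[:, 2]) / 2 + 0.15 * rng.standard_normal(n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

On this toy data the classifier recovers the planted retweet-and-redirect signal; the real system's 86 percent figure of course reflects its richer feature set and human-annotated ground truth.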

The researchers found that bot accounts differ from humans in several key ways. Overall, bot accounts generate more tweets than human accounts. They also retweet far more often, and redirect users to external websites far more frequently than human users. The only exception to this was in accounts with more than 10 million followers, where bots and humans showed far more similarity in terms of the volume of tweets and retweets.

"We think this is probably because bots aren't that good at creating original Twitter content, so they rely a lot more on retweets and redirecting followers to external websites," said Gilani. "While bots are getting more sophisticated all the time, they're still pretty bad at one-on-one Twitter conversations, for instance - most of the time, a conversation with a bot will be mostly gibberish."

Despite the sheer volume of tweets produced by bots, humans still produce better-quality and more engaging tweets: tweets by human accounts receive on average 19 times more likes and 10 times more retweets than tweets by bot accounts. Bots also spend less time liking other users' tweets.

"Many people tend to think that bots are nefarious or evil, but that's not true," said Gilani. "They can be anything, just like a person. Some of them aren't exactly legal or moral, but many of them are completely harmless. What I'm doing next is modelling the social cost of these bots - how are they changing the nature and quality of conversations online? What is clear though, is that bots are here to stay."



Using Numbers To Comprehend And Control Human Behavior – NPR

Since the Enlightenment, champions of progress have urged us to break free of the chains of tradition.

Just because "we've always done it this way" is no reason to keep doing it this way. It is irrational, it is dumb, indeed, it is frequently dishonest, to cling to traditions, they say. If we aim to understand the world and control it, the abiding ambition of all empirically minded thinkers, then surely we can dispense with the baggage of inherited convention.

Keith Law has just published a book that explores this question. The book is opinionated and it is sparked by fury. Indeed, Law writes as one who speaks truth to power. It is written by someone who thinks of himself as at the vanguard, the revolutionary forefront.

It is now possible, he insists, indeed, it is now mandatory, that we use mathematical analysis and statistics not only to evaluate human achievement, but also to learn how to predict it in the future.

I exaggerate maybe just a little bit.

Law's book is about the use of statistics in baseball. And while his assault on the Old Ways is driven by a real sense of outrage at the way irrational tradition shackles progressive thinking, he confines himself, by and large, to bad thinking in the domain of baseball. It is baseball he wants us to learn to think right about.


Law is a writer at ESPN, and his book, published in April, is called Smart Baseball: The Story Behind The Old Stats That Are Ruining The Game, The New Ones That Are Running It, And The Right Way To Think About Baseball.

For Law, the "old stats" are ruining the game. Batting average, for example, is a terrible measure of a batter's offensive "value," since it considers only hits per at-bat. This is doubly wrong-headed, he contends: It ignores the fact that not all hits are created equal (a home run is worth more than a single), and it disregards the batter's offensive achievements (e.g., walks) that don't happen during at-bats (since not all plate appearances count as at-bats). Likewise, runs batted in is not only uninformative about how good a player is offensively, it is dishonest, for it confuses his accomplishments with those of his teammates, Law says. You can only drive runners in, after all, if there are runners on base to be driven in.
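The point about walks is easy to see with the standard formulas: batting average is hits divided by at-bats, while on-base percentage also credits walks and hit-by-pitches. The two stat lines below are invented for illustration:

```python
# Two invented hitters with identical batting averages but very different
# on-base skills once walks are counted, illustrating Law's complaint.
def batting_average(hits, at_bats):
    return hits / at_bats

def on_base_pct(hits, walks, hbp, at_bats, sac_flies):
    # Standard OBP: times reached base / plate appearances that count toward it
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

free_swinger = dict(hits=150, walks=20, hbp=2, at_bats=500, sac_flies=5)
patient_hitter = dict(hits=150, walks=80, hbp=2, at_bats=500, sac_flies=5)

print(batting_average(150, 500))                # 0.3 for both hitters
print(round(on_base_pct(**free_swinger), 3))    # 0.326
print(round(on_base_pct(**patient_hitter), 3))  # 0.395
```

Both hitters bat .300, but the patient hitter reaches base far more often, exactly the difference batting average hides.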

Or consider the evaluation of pitching performance by wins; this is even more outrageous, he says. You can only win if your team scores, and the pitcher has no control over that. The idea that it is the pitcher who wins is premised on the idea that good pitchers have a kind of magic that leads their teams to victory. And that, Law is certain, is so much nonsense. Praising an individual player for results over which he has nothing resembling control isn't very bright. It isn't going to help you figure out what's really going on on the field, and might very well lead you to make bad baseball decisions.

We use statistics, Law holds, to evaluate performance. We want to understand what a player actually does on the field, and we want to predict likely performance going forward. We need objectivity to do this. We need data. We need metrics that cut through the noise to the reality. The last thing we need is old-fashioned prejudices about pitchers winning games and RBI being a measure of a player's offensive value to his team, he says.

Can we do what Law and his fellow "quants" demand? Can we use numbers to assign value, to sort through praise and blame, and to ground baseball decisions in matters of value-neutral fact? I get it that this is something baseball executives want. Michael Lewis explained in Moneyball that the new statistics make it possible to discover sources of baseball value that traditional thinking has tended to ignore. And I get it that if you're a player, or a manager, or a fan, the problem of evaluating and predicting is of the greatest importance.

But is it actually possible, in baseball, or in life, so to regiment, comprehend and control human behavior?

I think there are reasons to doubt this.

One of the things that particularly bugs Law about the RBI stat is that there are cases, as he notes, where the official scorer has discretion over whether to award the RBI. He continues:

"[A]ny stat that involves such human objectivity [I think he meant to write "subjectivity"] is immediately reduced in value as a result. People are prone to so many cognitive biases and are so inconsistent in their judgments..."

But in fact, I would argue, all baseball stats rest, finally, on just this sort of subjectivity. Consider that, at the lowest level, baseball is about hits and outs. For example, Law argues that the basic job of a batter is to not make an out, that is, to get on base.

But are outs determined in a value-free, objective way? Not really. Very frequently, at least, the question of whether an out was made is a judgment call. Instant replay hasn't changed this. It's just removed the required judgment call to a remote location.

And the same is true of hits themselves. When is a hit a hit, and when is it the result of a fielder's error? Nothing determines this other than the decision of the official scorer.

And let's not even get into balls and strikes!

However you look at it, the low-level facts on the ground, the smallest units of meaningful baseball (hits, outs, balls, strikes, foul or fair) are themselves intrinsically soft, squishy, value-laden matters of interpretation.

Bring the biggest quantificational cannon you can find. It won't shoot straight if you set it down on shifting sands.

But maybe this is not a bad thing. Maybe this is what we love about baseball. We are called on to evaluate, to make choices, to make predictions, to lay odds, precisely when there are no algorithms or mathematical rules to do this for us.

I don't advocate a return to tradition. I think Law and his colleagues are right that there is a value in new analytical tools for thinking about baseball. But that's a far cry from accepting his idea that it is possible to use numbers, by themselves, to identify and control value, in baseball, or anywhere else.

Want to know what happened on the field? You'd better take a look.

Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe

Continue reading here:
Using Numbers To Comprehend And Control Human Behavior - NPR

Antisocial bees share genetic profile with people with autism – Science Magazine

Honey bee workers tending an egg-laying queen.

Zachary Huang, beetography.com and Michigan State University

By Elizabeth Pennisi | Jul. 31, 2017, 3:00 PM

Most honey bees are as busy as, well, a bee, tending to the queen and her young, guarding the hive, and generally buzzing and flitting around in near constant motion. But some bees just sit around and rarely interact with their comrades. A new study reveals that these antisocial insects share a genetic profile with people who have autism spectrum disorders, which can affect how well they respond to social situations.

The work speaks to how evolution may tap the same molecular pathways in very different animals, even for traits as complex as social behavior, says Hans Hofmann, an evolutionary neuroscientist at the University of Texas in Austin who was not involved with the study. The neural circuits underlying social behavior must be very different for humans and honey bees, yet it appears that, at the molecular level, the genes are employed in a similar manner, he says. That's kind of striking.

To look for variation in honey bee social behavior, Hagai Shpigler, a postdoctoral fellow at the University of Illinois (UI) in Urbana, designed two tests in which he and colleagues video recorded a group of bees and analyzed each individual's reaction to a social situation. In one test, he stuck an unfamiliar bee in with the group. Bees instinctively guard their hive and typically react by mobbing the stranger and sometimes harming it. In the second test, Shpigler put an immature queen larva in with the group. Queen larvae bring out mothering instincts, and worker bees tend to feed the larva. He subjected 245 groups of bees from seven different colonies, 10 bees per group, to these tests multiple times, then ranked how eagerly the bees responded to these situations.

Most bees reacted to at least one situation, but about 14% were unresponsive to both, he and his colleagues report today in the Proceedings of the National Academy of Sciences. The team sacrificed some of the bees and isolated the genes active in the insects' mushroom bodies, a part of the brain responsible for complex actions such as social behavior. They found a distinctive subset of genes was active in the nonresponsive bees. Then they compared that set of genes to sets of genes implicated in autism spectrum disorder, schizophrenia, and depression. Even though bees and people are very different evolutionarily, they have many genes in common.

There was a good match only between the gene activity of the nonresponsive bees and genes associated with autism, the team reports. Some of the genes involved help regulate the flow of ions in and out of the cells, particularly nerve cells; others code for so-called heat shock proteins that are typically induced during stress.

The researchers don't yet know exactly how these genes influence social behavior in either bees or people, but manipulating the genes in honey bees may shed light on what they do in humans, says Alan Packer, a geneticist at the Simons Foundation in New York City, which funds autism research, including this bee work. Packer was not involved with this project but has been compiling a list of genes implicated in autism spectrum disorders.

Claire Rittschof, an entomologist at the University of Kentucky in Lexington who was not involved with the work, cautions that the nonresponsive bees might prove to be responsive in a different social situation. It's difficult to separate social responsiveness from behavioral variation in general, she notes. But she's fascinated by the idea that similar genes shape social behavior in different species.

No one is drawing exact parallels between honey bee and human behaviors, Packer notes. We do not want to give the impression that bees are little people or humans are big bees, says team leader Gene Robinson, a behavioral genomicist and director of the UI Carl R. Woese Institute for Genomic Biology. But, says Packer, if you want to understand how these genes interact, the honey bee might be a useful model. He's eager to know whether this same set of genes is involved in the social responsiveness of other animals. The more models that are available to study how these genes give rise to these behaviors, the better.

It's not clear why these asocial bees are tolerated by the rest of the hive. Rittschof thinks these individuals are considered part of the group despite their unusual behavior. Both human and bee societies contain and accommodate a range of different personality types, strengths, and weaknesses, she suggests.

Read this article:
Antisocial bees share genetic profile with people with autism - Science Magazine

What a nerdy debate about p-values shows about science and how to fix it – Vox

There's a huge debate going on in social science right now. The question is simple, and strikes near the heart of all research: What counts as solid evidence?

The answer matters because many disciplines are currently in the midst of a replication crisis where even textbook studies aren't holding up against rigorous retesting. The list includes: ego depletion, the idea that willpower is a finite resource; the facial feedback hypothesis, which suggested if we activate muscles used in smiling, we become happier; and many, many more.

Scientists are now figuring out how to right the ship, to ensure scientific studies published today won't be laughed at in a few years.

One of the thorniest issues on this question is statistical significance. It's one of the most influential metrics for determining whether a result is published in a scientific journal.

Most casual readers of scientific research know that for results to be declared statistically significant, they need to pass a simple test. The answer to this test is called a p-value. And if your p-value is less than .05, bingo: you've got yourself a statistically significant result.

Now a group of 72 prominent statisticians, psychologists, economists, biomedical researchers, and others want to disrupt the status quo. A forthcoming paper in the journal Nature Human Behavior argues that results should only be deemed statistically significant if they pass a higher threshold.

We propose a change to P < 0.005, the authors write. This simple step would immediately improve the reproducibility of scientific research in many fields.

This may sound nerdy, but it's important. If the change is accepted, the hope is that fewer false positives will corrupt the scientific literature. It's become too easy, using shady techniques known as p-hacking and outcome switching, to find some publishable result that reaches the .05 significance level.

There's a major problem using p-values the way we have been using them, says John Ioannidis, a Stanford professor of health research and one of the authors of the paper. It's causing a flood of misleading claims in the literature.

Don't be mistaken: This proposal won't solve all the problems in science. I see it as a dam to contain the flood until we make sure we have the more permanent fixes, Ioannidis says. He calls it a quick fix. Though not everyone agrees it's the best course of action.

At best, the proposal is an easy change to implement to protect the academic literature from faulty claims. At worst, it's a patronizing decree that avoids addressing the real problem at the heart of science's woes.

There is a lot to unpack and understand here. So we're going to take it slow.

Even the simplest definitions of p-values tend to get complicated. So bear with me as I break it down.

When researchers calculate a p-value, they're putting to the test what's known as the null hypothesis. First thing to know: This is not a test of the question the experimenter most desperately wants to answer.

Let's say the experimenter really wants to know if eating one bar of chocolate a day leads to weight loss. To test that, they assign 50 participants to eat one bar of chocolate a day. Another 50 are commanded to abstain from the delicious stuff. Both groups are weighed before the experiment, and then after, and their average weight change is compared.

The null hypothesis is the devil's advocate argument. It states: There is no difference in the weight loss of the chocolate eaters versus the chocolate abstainers.

Rejecting the null is a major hurdle scientists need to clear to prove their theory. If the null stands, it means they haven't eliminated a major alternative explanation for their results. And what is science if not a process of narrowing down explanations?

So how do they rule out the null? They calculate some statistics.

The researcher basically asks: How ridiculous would it be to believe the null hypothesis is the true answer, given the results we're seeing?

Rejecting the null is kind of like the "innocent until proven guilty" principle in court cases, Regina Nuzzo, a mathematics professor at Gallaudet University, explains. In court, you start off with the assumption that the defendant is innocent. Then you start looking at the evidence: the bloody knife with his fingerprints on it, his history of violence, eyewitness accounts. As the evidence mounts, that presumption of innocence starts to look naive. At a certain point, jurors get the feeling, beyond a reasonable doubt, that the defendant is not innocent.

Null hypothesis testing follows a similar logic: If there are huge and consistent weight differences between the chocolate eaters and chocolate abstainers, the null hypothesis that there are no weight differences starts to look silly. And you can reject it.
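As a sketch of that logic (with invented weight-change numbers, and assuming NumPy and SciPy are available), the chocolate comparison boils down to a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented data: weight change in kg for 50 chocolate eaters and
# 50 abstainers. Both groups are drawn from the same distribution,
# so the null hypothesis happens to be true here by construction.
eaters = rng.normal(loc=-0.5, scale=2.0, size=50)
abstainers = rng.normal(loc=-0.5, scale=2.0, size=50)

# The t-test asks: how surprising would this difference in group means
# be if the null hypothesis (no real difference) were true?
t_stat, p_value = stats.ttest_ind(eaters, abstainers)
print(p_value)
```

The single number that comes back is the p-value the rest of this piece is about.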

Rejecting the null hypothesis is indirect evidence of an experimental hypothesis. It says nothing about whether your scientific conclusion is correct.

Sure, the chocolate eaters may lose some weight. But is it because of the chocolate? Maybe. Or maybe they felt extra guilty eating candy every day, and they knew they were going to be weighed by strangers wearing lab coats (weird!), so they skimped on other meals.

Rejecting the null doesn't tell you anything about the mechanism by which chocolate causes weight loss. It doesn't tell you if the experiment is well designed, or well controlled for, or if the results have been cherry-picked.

It just helps you understand how rare the results are.

But (and this is a tricky, tricky point) it's not how rare the results of your experiment are. It's how rare the results would be in the world where the null hypothesis is true. That is, it's how rare the results would be if nothing in your experiment worked, and the difference in weight was due to random chance alone.

Here's where the p-value comes in: The p-value quantifies this rareness. It tells you how often you'd see the numerical results of an experiment, or even more extreme results, if the null hypothesis is true and there's no difference between the groups.

If the p-value is very small, it means the numbers would rarely (but not never!) occur by chance alone. And so, when the p is small, researchers start to think the null hypothesis looks improbable. And they take a leap to conclude their [experimental] data are pretty unlikely to be due to random chance, Nuzzo explains.

And here's another tricky point: Researchers can never completely rule out the null (just like jurors are not firsthand witnesses to a crime). So scientists instead pick a threshold where they feel pretty confident they can reject the null. That's now set at less than .05.

Ideally, a p of .05 means that if you ran the experiment 100 times (assuming the null hypothesis is true), you'd see these same numbers, or more extreme results, five times.
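That frequency reading of p = .05 can be checked by simulation. This sketch (assuming NumPy and SciPy) runs many experiments in which the null hypothesis really is true and counts how often p still dips below .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group = 10_000, 50

false_positives = 0
for _ in range(n_sims):
    # Both groups come from the identical distribution: the null is true.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# By construction this hovers around 0.05: roughly 5 "significant"
# results per 100 experiments, every one of them a false alarm.
print(false_positives / n_sims)
```

That's the sense in which .05 is a promise about worlds where nothing is going on, not about your particular experiment.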

And one last, super-thorny concept that almost everyone gets wrong: A p<.05 does not mean there's less than a 5 percent chance your experimental results are due to random chance. It does not mean there's only a 5 percent chance you've landed on a false positive. Nope. Not at all.

Again: A p of .05 means there's a less than 5 percent chance that, in the world where the null hypothesis is true, the results you're seeing would be due to random chance. This sounds nitpicky, but it's critical. It is the misunderstanding that leads people to be unduly confident in p-values. The false-positive rate for experiments at p=.05 can be much, much higher than 5 percent.
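A quick back-of-the-envelope calculation shows how the false-positive rate among "discoveries" can balloon. The numbers here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical research field: only 10% of tested hypotheses are true,
# and studies have 50% power to detect a true effect at p < .05.
prior_true = 0.10
power = 0.50
alpha = 0.05

true_hits = prior_true * power          # real effects that reach p < .05
false_hits = (1 - prior_true) * alpha   # true nulls that reach p < .05 anyway

# Share of "significant" findings that are actually false positives.
false_discovery_rate = false_hits / (true_hits + false_hits)
print(round(false_discovery_rate, 2))   # 0.47, far above 5 percent
```

Under these assumptions, nearly half of all statistically significant results would be false positives, even though every individual test used the .05 threshold correctly.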

Okay. Still with me? It's okay if you need to take a break. Grab a soda. Catch up with Mom. She's wondering why you haven't called in a while. Tell her about your summer plans.

Because now we're going to dive into...

Generally, p-values should not be used to make conclusions, but rather to identify possibilities, like a sniff test, Rebecca Goldin, the director of Stats.org and a math professor at George Mason University, explains in an email.

And for a long while, a sniff of a p that's less than .05 smelled pretty good. But over the past several years, researchers and statisticians have realized that a p<.05 is not as strong evidence as they once thought.

And to be sure, evidence for this is abundant.

Here's the most obvious, easy-to-understand piece of evidence: Many papers that have used the .05 significance threshold have not replicated with more methodologically rigorous designs.

A famous 2015 paper in Science attempted to replicate 100 findings published in a prominent psychological journal. Only 39 percent passed. Other disciplines have fared somewhat better. A similar replication effort in economic papers found 60 percent of findings replicated. There's a reproducibility crisis in biomedicine too, but it hasn't been so specifically quantified.

The 2015 Science paper on psych studies offered some clues to which papers were more likely to replicate. Studies that yielded highly significant results (less than p=.01) are more likely to reproduce than those that are just barely significant at the .05 level.

Reporting effects that really aren't there undermines the credibility of science, says Valen Johnson, a co-author of the Nature Human Behavior proposal who heads the statistics department at Texas A&M. It's important that science adopt these higher standards, before they claim they have made a discovery.

Elsewhere, researchers find evidence of an epidemic of statistical significance. Practically everything that you read in a published paper has a nominally statistically significant result, says Ioannidis. The large majority of these p-values of less than .05 do not correspond to some true effect.

For a long while, scientists thought p<.05 represented something rare. New work in statistics shows that it's not.

In a 2013 PNAS paper, Johnson used more advanced statistical techniques to test the assumption researchers commonly make: that a p of .05 means there's a 5 percent chance the null hypothesis is true. His analysis revealed that it didn't. In fact there's a 25 percent to 30 percent chance the null hypothesis is true when the p-value is .05, Johnson said.

Remember: The p-value is supposed to assure researchers that their results are rare. Twenty-five percent is not rare.

For another way to think about all this, let's flip the question around: What if, instead of assuming the null hypothesis is true, we assume an experimental hypothesis is true?

Scientists and statisticians have shown that if experimental hypotheses are true, it should actually be somewhat uncommon for studies to keep churning out p-values of around .05. More often, assuming an effect is true, the p-value should come in lower.

Psychology PhD student Kristoffer Magnusson has designed a pretty cool interactive calculator that estimates the probability of obtaining a range of p-values for any given true difference between groups. I used it to create the following scenario.

Let's say there's a study where the actual difference between two groups is equal to half a standard deviation. (Yes, this is a nerdy way of putting it. But think of it like this: It means 69 percent of those in the experimental group show results higher than the mean of the control group. Researchers call this a medium-sized effect.) And let's say there are 50 people each in the experimental group and the control group.

In this scenario, you should only be able to obtain a p-value between .03 and .05 around 7.62 percent of the time.

If you ran this experiment over and over and over again, you'd actually expect to see a lot more p-values with a much lower number. That's what the following chart shows. The x-axis shows the specific p-values, and the y-axis the frequency you'd find them when repeating this experiment. Look how many p-values you'd find below .001.

(And from this chart you'll see: Yes, you can obtain a p-value of greater than .05 while your experimental hypothesis is true. It just shouldn't happen as often. In this case, around 9.84 percent of all p-values should fall between .05 and .1.)

This is a specific, hypothetical scenario. But in general, it's weird when so many p-values in the published literature don't match this distribution. Sure, a few studies on a question should get a p-value of .05. But more should find lower numbers.
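The skewed distribution behind that chart is easy to reproduce by simulation. This sketch (assuming NumPy and SciPy) uses the same scenario: a true effect of half a standard deviation, with 50 people per group:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n_per_group, effect = 20_000, 50, 0.5   # d = 0.5, a "medium" effect

pvals = np.empty(n_sims)
for i in range(n_sims):
    control = rng.normal(0.0, 1.0, size=n_per_group)
    treated = rng.normal(effect, 1.0, size=n_per_group)  # the effect is real
    pvals[i] = stats.ttest_ind(treated, control).pvalue

# Only a small slice of p-values lands between .03 and .05...
print(np.mean((pvals > 0.03) & (pvals < 0.05)))
# ...while far more land below .001.
print(np.mean(pvals < 0.001))
```

The first fraction comes out close to the 7.62 percent quoted above, while the share below .001 is several times larger, which is exactly the lopsidedness the chart describes.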

The biggest change the paper is advocating for is rhetorical: Results that currently meet the .05 level will be called suggestive, and those that reach the stricter standard of .005 will be called statistically significant.

Journals can still publish weak (and of course null) results just like they always could, says Simine Vazire, a personality psychologist who edits Social Psychological and Personality Science (though she is not speaking on behalf of the journal). The language tweak will hopefully trickle down to press releases and news reports, which might avoid buzzwords such as breakthroughs.

The change, Vazire says, should make it so that authors need stronger results before they can make strong claims. That's all.

Historians of science are always quick to point out that Ronald Fisher, the UK statistician who invented the p-value, never intended it to be the final word on scientific evidence. To Fisher, statistical significance merely meant the hypothesis was worthy of a follow-up investigation. In a way, we're proposing to return to his original vision of what statistical significance means, says Daniel Benjamin, a behavioral economist at the University of California and the lead author of the proposal.

If labs do want to publish statistically significant results, it's going to be much harder.

Most concretely, it means labs will need to increase the number of participants in their studies by 70 percent. The change essentially requires six times stronger evidence, Benjamin says.
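The 70 percent figure follows from standard power arithmetic. Here is a sketch using the normal approximation for a two-sample t-test (the medium effect size d = 0.5 and 80 percent power are assumptions for illustration; SciPy supplies the normal quantiles):

```python
from scipy.stats import norm

def n_per_group(alpha, power, d):
    """Normal-approximation sample size per group for a two-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # quantile for the desired power
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

d = 0.5  # assumed "medium" effect size
n_old = n_per_group(alpha=0.05, power=0.80, d=d)    # roughly 63 per group
n_new = n_per_group(alpha=0.005, power=0.80, d=d)   # roughly 107 per group

print(round(n_new / n_old, 2))   # about 1.7: roughly 70% more participants
```

Holding power fixed, tightening alpha from .05 to .005 inflates the required sample by about the factor the proposal's authors cite.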

The increased burden of proof, the proposal authors hope, would nudge labs into adopting other practices science reformers have been calling for, such as sharing data with other labs to reach consensus conclusions and thinking more long-term about their work. Perhaps their first experiment doesn't reach this new threshold. But a second experiment might. The higher threshold encourages labs to reproduce their own work before submitting to a publication.

The proposal has critics. One of them is Daniel Lakens, a psychologist at Eindhoven University of Technology in the Netherlands, who is currently organizing a rebuttal paper with dozens of authors.

Mainly, he says the significance proposal might work to stifle scientific progress.

A good metaphor is driving a car and setting a maximum speed, Lakens says. You can set the maximum speed in your country to 20 miles an hour, and no one is going to get killed. You hit someone, they won't die. So that's pretty good, right? But we don't do this. We set the maximum speed a little higher, because then we actually get somewhere a little bit quicker. ... The same is for science.

Ideally, Lakens says, the level of statistical significance needed to prove a hypothesis depends on how outlandish the hypothesis is.

Yes, you'd want a very low p-value in a study that claims mental telepathy is possible. But do you need such an extreme level when testing out a well-worn idea? The high standards could impede young PhDs with low budgets from testing out their ideas.

Again, a p-value of .05 doesn't necessarily mean the experiment will be a false positive. A good researcher would know how to follow up and suss out the truth.

Another critique of the proposal: It keeps scientific communities fixated on p-values, which, as discussed in the sections above, don't really tell you much about the merits of a hypothesis.

There are better, more nuanced approaches to evaluating science.

Ioannidis admits that statistical significance [alone] doesn't convey much about the meaning, the importance, the clinical value, utility [of research].

Ideally, he says, scientists would retrain themselves not to rely on null-hypothesis testing. But we don't live in the ideal world. In the real world, p-values are a quick and easy tool any scientist can use to run their tests. And in our real world, p-values still carry a lot of weight in deciding what gets published.

With the proposal, you don't need to train all these millions of people in heavy statistics, Ioannidis says. And it would work. It would help.

Redefining statistical significance is not an ideal solution to the problem of replication. It's a solution that nudges people to adopt the ideal solution.

Though no one I spoke to said it directly, I wouldn't be surprised if some scientists find that a bit patronizing. Why couldn't they learn advanced statistics? Or come to appreciate a more nuanced way of evaluating results?

There's a critique of the proposal that the authors I spoke to agree with completely: Changing the definition of statistical significance doesn't address the real problem. And the real problem is the culture of science.

In 2016, Vox sent out a survey to more than 200 scientists, asking, If you could change one thing about how science works today, what would it be and why? One of the clear themes in the responses: The institutions of science need to get better at rewarding failure.

One young scientist told us: "I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter."

The biggest problem in science isn't statistical significance. It's the culture. She felt torn because young scientists need publications to get jobs. Under the status quo, in order to get publications, you need statistically significant results. Statistical significance alone didn't lead to the replication crisis. The institutions of science incentivized the behaviors that allowed it to fester.

Keep in mind, this is all just a proposal, something to spark debate. To my knowledge, journals are not rushing to change their editorial standards overnight.

This will continue to be debated.

But if it remains the case that it's still hard to publish suggestive results, and if it's still difficult to secure grant money off suggestive results, then the institutions of science will not have learned their lesson. Yes, a lot of this is just tweaking the language of how we talk about science. But we have to make words like suggestive, and null results, matter.

Failures, on average, are more valuable than positive studies, Ioannidis says.

Scientific institutions and journals know this. They don't always act like they do.

See original here:
What a nerdy debate about p-values shows about science and how to fix it - Vox