
Stop saying Facebook’s bots ‘invented’ a new language – Mashable


Tesla CEO Elon Musk made headlines last week when he tweeted his frustration that Mark Zuckerberg, ever the optimist, doesn't fully understand the potential danger posed by artificial intelligence.

So when media outlets began breathlessly re-reporting a weeks-old story that Facebook's AI-trained chatbots "invented" their own language, it's not surprising the story caught more attention than it did the first time around.

Understandable, perhaps, but it's exactly the wrong thing to be focusing on. The fact that Facebook's bots "invented" a new way to communicate wasn't even the most shocking part of the research to begin with.

A bit of background: Facebook's AI researchers published a paper back in June, detailing their efforts to teach chatbots to negotiate like humans. Their intention was to train the bots not just to imitate human interactions, but to actually act like humans.

You can read all about the finer points of how this went down over on Facebook's blog post about the project, but the bottom line is that their efforts were far more successful than they anticipated. Not only did the bots learn to act like humans, but actual humans were apparently unable to discern the difference between bots and humans.

At one point in the process, though, the bots' communication style went a little off the rails.

Facebook's researchers trained the bots so they would learn to negotiate in the most effective way possible, but they didn't tell the bots they had to follow the rules of English grammar and syntax. Because of this, the bots began communicating in a nonsensical way, saying things like "I can can I I everything else," Fast Company reported in the now highly cited story detailing the unexpected outcome.

This, obviously, wasn't Facebook's intention, since their ultimate goal is to use their learnings to improve chatbots that will eventually interact with humans, who, you know, communicate in plain English. So they adjusted their algorithms to "produce humanlike language" instead.

That's it.

So while the bots did teach themselves to communicate in a way that didn't make sense to their human trainers, it's hardly the doomsday scenario so many are seemingly implying. Moreover, as others have pointed out, this kind of thing happens in AI research all the time. Remember when an AI researcher tried to train a neural network to invent new names for paint colors and it went hilariously wrong? Yeah, it's because English is difficult, not because we're on the verge of some creepy singularity, no matter what Musk says.

In any case, the obsession with bots "inventing a new language" misses the most notable part of the research in the first place: that the bots, when taught to behave like humans, learned to lie even though the researchers didn't train them to use that negotiating tactic.

Whether that says more about human behavior (and how comfortable we are with lying), or the state of AI, well, you can decide. But it's worth thinking about a lot more than why the bots didn't understand all the nuances of English grammar in the first place.


Is This Dog Dangerous? Shelters Struggle With Live-or-Die Tests – New York Times

The 10- to 20-minute tests, developed by behaviorists and tweaked by practitioners, ask two basic questions: Will the dog attack humans? What about other dogs?

Evaluators may observe the dog react to a large doll (a toddler surrogate); a hooded human, shaking a cane; an unfamiliar leashed dog or a plush toy dog.

But these tests have never been rigorously validated.

Dr. Bennett's 2012 study of 67 pet dogs, which compared results of two behavior tests with owners' own reporting, found that in the areas of aggression and fearfulness, the tests showed high percentages of false positives and false negatives. A 2015 study of dog-on-dog aggression testing showed that shelter dogs responded more aggressively to a fake dog than a real one.

Janis Bradley of the National Canine Research Council, co-author with Dr. Patronek of the analysis published last fall, suggested that shelters should instead devote limited resources to observing the many interactions that happen between dogs and people in the daily routine of the shelter.

But Kelley Bollen, a behaviorist and shelter consultant in Northampton, Mass., maintained that a careful evaluation can identify potentially problematic behaviors. Much depends on the assessor's skill, she added.

In fact, no qualifications exist for administering evaluations. Interpreting dogs, with their diverse dialects and complex body language (wiggling butts, lip-licking, semaphoric ears and tails), often becomes subjective.

Indianapolis Animal Care Services, which admitted 8,380 dogs to its municipal shelter in 2016, is often overcrowded and understaffed, yet faces intense scrutiny to save dogs while protecting the public. Last year it euthanized 718 dogs for behavior, based on testing and employee interactions. The agency consulted Dr. Bennett, a shelter specialist, to better manage that difficult balance.

Even as she demonstrated assessments for staff members, Dr. Bennett noted another factor that renders results suspect: the unquantifiable impact of shelter life on dogs.

Dogs thrive on routine and social interaction. The transition to a shelter can be traumatizing, with its cacophony of howls and barking, smells and isolating steel cages. A dog afflicted with kennel stress can swiftly deteriorate: spinning, pacing, jumping like a pogo stick, drooling, and showing a loss of appetite. It may charge barriers, appearing aggressive.

Conversely, some dogs shut down in self-protective, submissive mode, masking what may even be aggressive behavior that only emerges in a safe setting, like a home.

Little dogs can become more snippy. But no matter what evaluations may show, they always seem to get a pass. "I'll warn, 'He nips and snarls,'" recounted Laura Waddell, a seasoned trainer who does volunteer evaluations for Liberty Humane Society in Jersey City, N.J. "And I get back: 'I don't care! I'm in love!'"

One way to reduce kennel stress, Ms. Sadler, the shelter consultant, said, is through programs like hers, Dogs Playing for Life, which matches dogs for outside playgroups. Shelter directors say it is a more revealing and humane way to evaluate behavior. The approach is used at many large shelters, including in New York City, Phoenix and Los Angeles.

The most disputed of the assessments is the food test. Research has shown that shelter dogs who guard their food bowls, as Bacon did, do not necessarily do so at home.

The exercise purports to evaluate resource guarding: how viciously a dog will protect a possession, such as food, toys, or people. Common-sense owners wouldn't grab a dog's food while it is eating. But shelters worry about children.

Dr. Bennett suggested that Bacon's bite of the fake hand didn't necessitate a draconian outcome. With counseling, she said, a household without youngsters would be fine.

The shelter workers dearly wanted to save Bacon. But they were so overwhelmed that they did not have the capability to match him appropriately and counsel new owners.

So Bacon remained at the shelter for several weeks, waiting. Finally, Lindas Camp K9, an Indiana pet-boarding business that also rescues dogs, took him on. He settled right down and recently was adopted. Linda Candler, the director, placed him in a home without young children, teaching the owners how to feed him so he wouldnt be set up to fail.

"His potential made him stand out," Ms. Candler said. "Bacon is amazing."


Lab automation and Six Sigma levels: Here’s what we learned – GlobeNewswire (press release)


Salt Lake City, Utah, July 31, 2017 (GLOBE NEWSWIRE) -- Attaining Six Sigma Levels in the Laboratory: Here's What We Learned

SALT LAKE CITY, July 26, 2017 -- This month, ARUP Laboratories published a report in the Journal of Applied Laboratory Medicine (JALM) detailing its 25-year journey toward achievement of a Six Sigma score for lost specimens.

"We found that to achieve this level, a laboratory needs automation," says Charles Hawker, PhD, MBA, who coauthored the article in JALM. The Six Sigma quality method seeks to achieve error rates of no more than 3.4 defects per million opportunities.
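
For readers unfamiliar with the arithmetic, the sigma level implied by a given defect rate can be computed from the standard normal inverse CDF plus the conventional 1.5-sigma long-term shift. A minimal sketch (an illustration of the standard Six Sigma convention, not taken from the ARUP report):

```python
from statistics import NormalDist

def sigma_level(defects: float, opportunities: float) -> float:
    """Short-term sigma level for a given defect rate, using the
    conventional 1.5-sigma long-term shift."""
    dpmo = defects / opportunities * 1_000_000
    first_pass_yield = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(first_pass_yield) + 1.5

# 3.4 defects per million opportunities is the Six Sigma benchmark:
print(round(sigma_level(3.4, 1_000_000), 1))  # → 6.0
```

The 1.5-sigma shift is the industry convention for allowing long-term process drift; without it, 3.4 DPMO would correspond to about 4.5 sigma.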

"To my knowledge, ARUP is the first clinical laboratory in the country to achieve Six Sigma quality for any metric," adds Hawker. For nearly two decades, Hawker has helped develop ARUP's highly sophisticated automation system.

While the ultimate goal is perfection, particularly in healthcare, making incremental progress toward this goal is the focus of ARUP's continuous improvement system. In clinical laboratories, mistakes in the analytic area are generally minor contributors to poor laboratory quality and diagnostic error. The majority of mistakes, including lost or misplaced specimens, happen in the realm of nonanalytic processes.

Some 55,000 specimens, destined for testing in 70 specialized laboratories, are processed daily at ARUP, so tracking the precise location of a single specimen is a herculean task. From time to time, one of these samples may lose its way.

The JALM article homes in on lost-sample solutions that involve automation and human behavior controls, but the corporate culture is another important consideration. "It's a patient-centric culture here; each specimen is a patient," says David Rogers, who oversees specimen processing and also coauthored the article.

"We want this report to show other laboratories that they too can attain this level of quality," emphasizes Hawker. Readers learn how the automation of nonanalytic processes decreases the number of lost specimens. In addition, the article covers a variety of engineering and behavioral controls, which relate to how humans work, that have played a role in this remarkable achievement.

"Every time a human touches a sample it creates an opportunity for error," explains Bonnie Messinger, ARUP's process improvement manager and the article's lead author. She estimates that a specimen could be handled 20 or more times from the point it leaves the client until it is discarded.

Automation Improvements

Using data spanning the 25-year period, the authors show the correlation between lost specimens and the implementation dates for eight major phases of automation, along with 16 process improvements and engineering controls. While implementation of process improvements, engineering controls, and automation all contributed to overall reduction in the lost specimen rate, the data shows that automation was the most significant contributor.

With each automation enhancement, lost specimen rates decreased. It did not happen immediately, but over the succeeding months, each new level of automation led to improvement. Because the automation stages and various process improvements overlapped, it was not possible to look at any particular stage or process enhancement in isolation, but collectively, the various changes have produced a nearly 100-fold improvement in the lost-sample Six Sigma metric.

Error-Proofing and Human Behavior Management

Human behaviors are influenced by process and engineering controls. Working with ARUP's in-house engineering team, the authors found that zeroing in on relatively small modifications to the work environment proved to be quite effective.

"We have 18 different behavioral management strategies: ways of encouraging certain behaviors and preventing others," says Messinger. Such changes can be very simple, such as encouraging people to keep their work areas uncluttered or establishing a lost-sample checklist.

Sharing with Others

The article attributes the remarkable decrease in the frequency of lost specimens not to a single intervention, but to a multifaceted, cumulative approach. "Our results demonstrate that two approaches, automation and designed behavioral controls, working together can yield remarkable results," says Messinger.

The article's coauthors emphasize that even if a laboratory doesn't have the same level of automation as ARUP, any degree of automation that replaces an error-prone process will help reduce error. They also assert that the main purpose of the article is to share stories of success and spread healthcare improvement ideas.

About ARUP Laboratories

ARUP Laboratories is a national reference laboratory with more than 90 medical experts who are available for consultation. These experts are faculty at the University of Utah School of Medicine, and many participate in care teams at the Huntsman Cancer Hospital and Primary Children's Hospital. In addition, ARUP is a worldwide leader in innovative laboratory research and development, led by the efforts of the ARUP Institute for Clinical and Experimental Pathology.

Attachments:

A photo accompanying this announcement is available at http://www.globenewswire.com/NewsRoom/AttachmentNg/14937de1-f9c8-497c-81db-daa8b6efb866


Should Self-Driving Cars Make Ethical Decisions Like We Do? – Singularity Hub

An enduring problem with self-driving cars has been how to program them to make ethical decisions in unavoidable crashes. A new study has found it's actually surprisingly easy to model how humans make them, opening a potential avenue to solving the conundrum.

Ethicists have tussled with the so-called trolley problem for decades. If a runaway trolley, or tram, is about to hit a group of people, and by pulling a lever you can make it switch tracks so it hits only one person, should you pull the lever?

But for those designing self-driving cars the problem is more than just a thought experiment, as these vehicles will at times have to make similar decisions. If a pedestrian steps out into the road suddenly, the car may have to decide between swerving and potentially injuring its passengers or knocking down the pedestrian.

Previous research had shown that the moral judgements at the heart of how humans deal with these kinds of situations are highly contextual, making them hard to model and therefore replicate in machines.

But when researchers from the University of Osnabrück in Germany used immersive virtual reality to expose volunteers to variations of the trolley problem and studied how they behaved, they were surprised at what they found.

"We found quite the opposite," Leon Sütfeld, first author of a paper on the research in the journal Frontiers in Behavioral Neuroscience, said in a press release. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."

The implication, the researchers say, is that human-like decision making in these situations would not be that complicated to incorporate into driverless vehicles, and they suggest this could present a viable solution for programming ethics into self-driving cars.
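
As a toy illustration of what such a value-of-life-based rule might look like (the entity names and weights below are invented for the example, not taken from the study), each person, animal, or object is assigned a scalar value, and the vehicle picks the trajectory that minimizes total value lost:

```python
# Hypothetical sketch of a value-of-life decision rule. Names and
# weights are illustrative assumptions, not the study's actual model.
VALUES = {"adult": 1.0, "child": 1.2, "dog": 0.3, "traffic_cone": 0.01}

def choose_trajectory(options: dict) -> str:
    """Return the trajectory whose affected entities sum to the
    smallest total assigned value."""
    return min(options, key=lambda t: sum(VALUES[e] for e in options[t]))

# An unavoidable-crash scenario with two possible maneuvers:
print(choose_trajectory({
    "stay_course": ["adult", "adult"],
    "swerve": ["dog"],
}))  # → swerve
```

The appeal of such a model is exactly what the researchers note: it is simple enough to fit from observed human choices, rather than requiring ethicists to hand-specify every case.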

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," Peter König, a senior author of the paper, said in the press release. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans."

There are clear pitfalls with both questions. Self-driving cars present an obvious case where a machine could have to make high-stakes ethical decisions that most people would agree are fairly black or white.

But once you start insisting on programming ethical decision-making into some autonomous systems, it could be hard to know where to draw the line.

Should a computer program designed to decide on loan applications also be made to mimic the moral judgements a human bank worker most likely would if face-to-face with a client? What about one meant to determine whether or not a criminal should be granted bail?

Both represent real examples of autonomous systems operating in contexts where a human would likely incorporate ethical judgements in their decision-making. But unlike the self-driving car example, a person's judgement in these situations is likely to be highly colored by their life experience and political views. Modeling these kinds of decisions may not be so easy.

Even if human behavior is consistent, that doesn't mean it's necessarily the best way of doing things, as König alludes to. Humans are not always very rational and can be afflicted by all kinds of biases that could feed into their decision-making.

The alternative, though, is hand-coding morality into these machines, and that is fraught with complications. For a start, the chances of reaching an unambiguous consensus on what particular ethical code machines should adhere to are slim.

Even if you can, though, a study in Science I covered last June suggests it wouldn't necessarily solve the problem. A survey of US residents found that most people thought self-driving cars should be governed by utilitarian ethics that seek to minimize the total number of deaths in a crash, even if it harms the passengers.

But it also found most respondents would not ride in these vehicles themselves or support regulations enforcing utilitarian algorithms on them.

In the face of such complexities, programming self-driving cars to mimic people's instinctive decision-making could be an attractive alternative. For a start, building models of human behavior simply required the researchers to collect data and feed it into a machine learning system.

Another upside is that it would prevent a situation where programmers are forced to write algorithms that could potentially put people in harm's way. By basing the behavior of self-driving cars on a model of our collective decision making, we would, in a way, share the responsibility for the decisions they make.

At the end of the day, humans are not perfect, but over the millennia we've developed some pretty good rules of thumb for life-and-death situations. Faced with the potential pitfalls of trying to engineer self-driving cars to be better than us, it might just be best to trust those instincts.



Donald Trump’s fishy behavior on Russia is bigger than possible email collusion – Vox

On June 18, 2013, when he was already well-known in political circles for his birther attacks on then-President Barack Obama, Donald Trump made an exciting announcement.

"The Miss Universe Pageant will be broadcast live from MOSCOW, RUSSIA on November 9th," he tweeted. "A big deal that will bring our countries together!"

Doing business with Russia was in no way illegal at the time (this was before the invasion of Ukraine that triggered the current level of Western sanctions) and wasn't even particularly unusual. The stated aspiration that a tacky pageant would help bring the countries together was somewhat odd, especially given the then-overwhelming consensus in Republican Party circles that the Obama administration was too soft on Russia. But Trump is nothing if not a self-promoter, and pretending that his upcoming television special would have important diplomatic ramifications seems like a bit of harmless puffery.

But the follow-up tweet was genuinely weird.

Do you think Putin will be going to The Miss Universe Pageant in November in Moscow - if so, will he become my new best friend?

By this time, the Putin regime was already infamous for its crackdown on domestic dissent, brutal war in Chechnya, the murders of journalists Anna Politkovskaya and Paul Klebnikov at home and Alexander Litvinenko in London, and the ultimately failed poisoning of former Ukrainian leader Viktor Yushchenko.

That year's State Department human rights report documented "several reports that the government or its agents committed arbitrary or unlawful killings," while Human Rights Watch concluded that Russia's cooperation with international institutions on human rights "appears perfunctory."

There's nothing particularly unusual about the United States enjoying cordial diplomatic or even business ties with authoritarian regimes that are also geopolitical allies. But Russia was not an ally of the United States, and Putin wasn't someone average Americans (especially average Republicans) tended to like. For Trump to express his desire for a friendly, personal relationship with the brutal and autocratic ruler of a hostile foreign country was odd.

But it proved to be the beginning of what's become, over the years, a signature element of Trump's thinking. He's attached, much more stubbornly than to any of his various heterodoxies on domestic policy, to the idea of a Russia-friendly foreign policy that almost nobody else (including Republican lawmakers and key members of his own administration) believes in.

That's the great mystery looming over all of the growing Trump/Russia scandals. Firmly disavowing Putin would be just about the lowest-hanging political fruit imaginable. Why won't Trump pluck it?

Soon after Election Day, it became clear that the question of Russian meddling in the 2016 election was going to be a substantial political problem for Trump. It also became clear that, as president, he was going to have to find a way to work with Republican Russia hawks in Congress and with an American military and intelligence community thats profoundly skeptical of Russia.

But before the election, he was considerably less restrained, and claimed to have a direct line to the Kremlin back in 2013, 2014, and even through much of 2015:

Later, of course, Trump's story changed. The current line from the president and his team is that any talk of him having anything to do with Russia is fake news and that he never met Putin before taking office. And, of course, Trump has lied about many things over the years. It's entirely possible that the year he spent insisting that he'd been in contact with Putin and the broader Russian governing elite was just another example of Trump lying.

But it's a strange thing to do. Stranger still is Trump's willingness to publicly defend Putin's dismal human rights record.

Lots of American businessmen make money in countries with deplorable human rights records, and lots of American politicians are advocates for strategic alliances and commercial ties with countries that have deplorable human rights records.

But while overlooking abuses is common, it's fairly unusual to straightforwardly deny them, and especially to do so in a situation where there isn't any clear political, business, or strategic rationale for doing so. But Trump spent a good deal of time acting as a Putin spokesperson in the American press:

The eagerness to make excuses for Putin's conduct seemed linked, rhetorically, to a somewhat half-baked notion that under Trump the United States and Russia would enjoy warmer relations.

"There's nothing I can think of that I'd rather do than have Russia friendly," he said in a July 27, 2016, news conference. "As opposed to the way they are right now, so that we can go and knock out ISIS with other people."

Later that day at a campaign rally, Trump said, "Wouldn't it be a great thing if we could get along with Putin?" During the October 9 presidential debate, Trump returned to the theme: "I think it would be great if we got along with Russia because we could fight ISIS together, as an example."

Shortly before Inauguration Day, on January 11, 2017, Trump said, "If Putin likes Donald Trump, I consider that an asset, not a liability, because we have a horrible relationship with Russia. Russia can help us fight ISIS."

Trump's early personnel and policy moves matched up with this desire.

He quickly tapped retired Lt. Gen. Michael Flynn, known as an outlier among American military and intelligence professionals for his pro-Russian views, to serve as his national security adviser. And he bypassed the entire range of conventionally qualified candidates to serve as secretary of state in favor of Exxon executive Rex Tillerson, a former recipient of Russia's Order of Friendship award. Early in his administration, Trump aimed to relax sanctions on Russia, only to back down in the face of congressional opposition.

In the end, Trump's Russia policy has landed in a more conventional place than these early moves would have suggested. Tillerson toed the standard American foreign policy line during his confirmation hearings, Defense Secretary James Mattis is a very normal Republican Russia hawk, and Flynn got fired and replaced with the much more widely respected Lt. Gen. H.R. McMaster as national security adviser.

But Trump himself has acted in many ways like an outsider to his own administration's Russia policy. While he's simply detached from the details on many issues, he has pushed back forcefully and repeatedly against both Congress and his own advisers on Russian matters.

This oddness begins with simply the way that Trump talks about Putin.

Obama called him a "thug." So did Mitt Romney. Paul Ryan called him a "devious thug." Marco Rubio called him both a "thug" and a "gangster."

Trump fairly consistently declines to adopt this conventional language among American politicians, and he does so even though he is clearly aware at this point, and has been for some time, that suspicions about the nature of his relationship with the Russian government are a key point of political vulnerability. It would be the easiest thing on the planet for Trump to have his communications team draw up some standard-issue US-politician Putin-bashing rhetoric (he's a thug, he murders journalists, he invades his neighbors) and at a minimum assure Republican Party foreign policy elites that he's now down with the program.

After all, Trump used to espouse very unconventional views on things like tax cuts for the rich, Medicaid, and the importance of establishing universal health insurance coverage.

But in order to consolidate his position as leader of the GOP, Trump has dropped those ideologically heterodox views even though the heterodox position was more popular. On Russia, however, he insists on flying in the face of bipartisan consensus.

Watch: Trump is asked if he believes Russia interfered in our election, instead attacks Obama and the media. https://t.co/IrfviRPwru

He's reluctant to even acknowledge that Russian hacking took place, resorting even to ridiculous lies about G20 conversations to change the subject.

Everyone here is talking about why John Podesta refused to give the DNC server to the FBI and the CIA. Disgraceful!

Perhaps most shockingly, Trump's own team of advisers had to drag him kicking and screaming into affirming America's commitment to upholding Article V of the North Atlantic Treaty. And he did it only after humiliating those very same advisers by letting them brief the media that an affirmation was coming, only to cut it on the fly from the prepared text of his speech.

It was a bizarre thing to do, it clearly benefitted Russian foreign policy objectives, and it offered nothing but political downside for Trump.

The intersection of politics and law is a funny thing.

Politicized investigations into potential presidential scandals often end up turning on charges of perjury, obstruction of justice, making false statements to investigators, and other fine-grained ways in which people can get legally tripped up when they're trying to cover embarrassing information. The much-discussed possibility of collusion with Russian election hacking is vaguely defined, unproven at this point, and, even if it happened, may not have involved the president personally in any way.

These things end up hinging on small details, and the small details can be crucially important.

But the big picture also matters, and the big picture here is that Trump remains stubbornly unwilling to break with Putin and the Kremlin. The president used to regularly brag about his contacts with the leaders of the Russian government. The president won the election with the helping hand of the Russian government. The president repeatedly expressed his desire to change US foreign policy in a more pro-Russian direction. And though the president has, so far, been largely stymied in his efforts to do this, he seems to be straining against constraints imposed by the leadership of his own party and his own foreign policy team.

Perhaps Trump was lying about the contacts, ignorant about the campaign proposals, and his current attitudes reflect nothing more than bull-headedness.

Certainly that's what his Republican collaborators on the Hill seem to be telling themselves even as the White House works to get House Republicans to block a Russia sanctions bill that passed the Senate with 97 votes. But the mystery remains. Trump has been willing to reverse himself on other policy issues, gets no political benefit from pursuing such a pro-Russian course in the face of bipartisan opposition, and could score easy points by doing a little formulaic Putin-bashing. The fact that he refuses to tells you a lot about why Trump's presidency remains mired in scandal and why the worst may still be to come.


Israeli startup tracks behavior to outsmart hacker bots – The Times of Israel

You might think of hackers as people sitting at computers, but custom software applications, or bots, can be the ones doing the dirty work. Bots automate the business of hacking, tearing through massive troves of stolen account data, for example, or bombarding website login pages with passwords, probing for hits.

Enter Unbotify, an Israeli tech startup that analyzes human behavior patterns to differentiate between bots and humans and weed out the fakers.

"Our claim is we are not raising the bar a little bit and waiting for the fraudsters to catch up as others do," said Eran Magril, vice president of product and operations. "We are looking at the data points which are the hardest for them to fake in order to go undetected."

The company took first place at the 2017 Cyberstorm competition last month at Tel Aviv University. It was also ranked first among Israel's most innovative companies in 2017 by Fast Company magazine. Its product uses behavioral biometrics (like how long keys are held down, how a mouse is moved and how a device is held) to determine whether the user is a person or a bot.

"We know if you are holding your device at a specific angle, and what happens if you tap your mobile device, how does this angle change?" Magril said. "This is a very granulated kind of data that even if you're just putting your phone on the table, it will still be sending data about the x, y, z [axes] of your machine and how it changes all the time from very small vibrations in the room."
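
A minimal sketch of how such behavioral signals might be checked. This is a deliberately simplified heuristic with invented thresholds, not Unbotify's actual model: humans show natural jitter in key-hold times, and a hand-held phone shows tiny accelerometer vibrations, while a bot replaying fixed inputs tends to show neither.

```python
import statistics

def looks_human(key_hold_ms: list, accel_z: list) -> bool:
    """Illustrative heuristic (assumed thresholds, not a real product's
    logic): flag a session as human only if both key-hold timings and
    z-axis accelerometer readings show natural variation."""
    hold_jitter = statistics.stdev(key_hold_ms)  # milliseconds
    vibration = statistics.stdev(accel_z)        # m/s^2, z axis
    return hold_jitter > 5.0 and vibration > 0.001

# A scripted bot with perfectly uniform timings and a flat accelerometer:
print(looks_human([80.0] * 10, [9.81] * 50))  # → False
```

A real system would combine many such features statistically rather than rely on two hard thresholds, which is precisely why the article describes the signals as hard for fraudsters to fake.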

Bots are the preferred method for committing the most common kinds of online fraud, which can cost industries millions of dollars or sway public opinion on important issues.


Account data stolen in attacks on major corporations can be bought on the dark web and used to take over other accounts that use the same credentials. Those accounts can then be abused in myriad ways to cash out, including buying products with saved payment methods and stealing stored gift cards or air miles.

In one case, a bot was attempting to register new accounts with an online retailer. It continuously entered emails to see if any were already registered and built a database of those that were. Then it tested common passwords on each in order to take over any accounts it gained access to.

With an average success rate of two percent, Magril said, a hacker with one million sets of credentials can take over 20,000 accounts. "That's the power of automation for fraudsters," he said. "If they have automation, they can operate on a big scale."
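Magril's arithmetic is simple linear scaling; a back-of-the-envelope check, using only the figures quoted in the article:

```python
# Credential stuffing scales linearly: leaked credential pairs are tried
# against other sites, and even a small password-reuse rate yields a
# large absolute number of compromised accounts.
credentials = 1_000_000
success_rate = 0.02  # the two percent average Magril cites

compromised_accounts = round(credentials * success_rate)
print(compromised_accounts)  # 20000
```

This is why automation matters to fraudsters: at a two percent hit rate, scale alone turns a marginal attack into tens of thousands of hijacked accounts.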

Other common tactics include content scraping and advertising fraud. Scraping is when a website uses bots to scan for competitors' price changes and deals to get an unfair competitive advantage, or copies content, like an airline's flight prices and availability, in order to sell airline tickets on a separate platform, diverting valuable traffic from the original seller's website.

Online ad fraud takes many forms, including bots simulating traffic to websites so that advertisers pay to run ads that aren't being seen or clicked on by real people. Some bots will download and install games and programs that advertisers pay platforms for. Such tactics cost the industry billions of dollars each year.

That money goes to hackers instead, who keep getting more sophisticated, said Magril. "This is also where the funding comes from for developing new attack tools, for developing new bots," he said. "Bots are always evolving because they have the incentive to evolve."

Bots are also used to create fake social media profiles that can flood specific countries and locales with legitimate or hoax news stories to influence public opinion. Fake profiles can ratchet up a public figure's or company's popularity on a given platform, then disappear on command, creating the illusion that the subject lost support.

"It's a huge problem and everyone is talking about it, especially in the last year with the elections in the United States and France and other places," Magril said.

Unbotify's technology goes well beyond the leading detection and protection measures, he said, because machines can't fake human behavior in all its diversity and complexity. The company's 12 employees are also constantly adding new characteristics to what they analyze to keep hackers from knowing what needs to be mimicked.

Founded two years ago by Yaron Oliker and Alon Dayan, the company has raised some $2 million from Israel-based Maverick Ventures. Its chief data scientist is Yaacov Fernandess, whom Magril called a world-class expert in machine learning, "of which there are only a handful," he said. The company is headquartered in the northern Israeli town of Ramat Yishai.

Company founder Yaron Oliker. (Courtesy)

While the current product targets automation only, the company has noticed that there are specific behavioral indicators that can identify a person who is creating fake accounts. Certain keystroke habits, for instance, might be common among people who repeatedly register new accounts without the help of a bot. "We saw that analysis of behavioral biometrics can also be used to differentiate between different groups of people with different intentions," Magril said.

The company is focused on its core technology for now, though it also wants to break into new markets. It has customers in the US and Europe, and wants to expand its clientele to China.

Visit link:
Israeli startup tracks behavior to outsmart hacker bots - The Times of Israel

Trump supporters know Trump lies. They just don’t care. – Vox

During the campaign and into his presidency, Donald Trump repeatedly exaggerated and distorted crime statistics. "Decades of progress made in bringing down crime are now being reversed," he asserted in his dark speech at the Republican National Convention in July 2016. But the data here is unambiguous: FBI statistics show crime has been going down for decades.

CNN's Jake Tapper confronted Trump's then-campaign manager, Paul Manafort, right before the speech. "How can the Republicans make the argument that somehow it's more dangerous today, when the facts don't back that up?" Tapper asked.

"People don't feel safe in their neighborhoods," Manafort responded, and then dismissed the FBI as a credible source of data.

This type of exchange, in which a journalist fact-checks a powerful figure, is an essential task of the news media. And for a long time, political scientists and psychologists have wondered: Do these fact-checks matter in the minds of viewers, particularly those whose candidate is distorting the truth? Simple question. Not-so-simple answer.

In the past, research has found that not only do facts fail to sway minds, but they can sometimes produce what's known as a "backfire effect," leaving people even more stubborn and sure of their preexisting beliefs.

But there's new evidence on this question that's a bit more hopeful. It finds backfiring is rarer than originally thought and that fact-checks can make an impression on even the most ardent of Trump supporters.

But there's still a big problem: Trump supporters know their candidate lies, but that doesn't change how they feel about him. Which prompts a scary thought: Is this just a Trump phenomenon? Or can any charismatic politician get away with being called out on lies?

In 2010, political scientists Brendan Nyhan and Jason Reifler published one of the most talked about (and most pessimistic) findings in all of political psychology.

The study, conducted in the fall of 2005, split 130 participants into groups who read different versions of a news article about President George W. Bush defending his rationale for engaging in the Iraq War. One version merely summarized Bush's rationale: "There was a risk, a real risk, that Saddam Hussein would pass weapons or materials or information to terrorist networks." Another version of the article offered a correction that, no, there was not any evidence Saddam Hussein was stockpiling weapons of mass destruction.

The results were stunning: Staunch conservatives who saw the correction became more likely to believe Hussein had weapons of mass destruction. (In another experiment, the study found a backfire on a question about tax cuts. On other questions, like on stem cell research, there was no backfire.)

"Backfire is a pretty radical claim if you think about it," says Ethan Porter, a political scientist at George Washington University. Not only do attempts to correct information not sink in, but they can actually make conflicts even more intractable. It means earnest attempts to educate the public may actually make things worse. So in 2015, Porter and a colleague, Thomas Wood of Ohio State University, set out to try to replicate the effect for a paper (which is currently undergoing peer review for publication in the journal Political Behavior).

And among 8,100 participants, on the sort of political questions that tend to bring out hardline opinions, Porter and Wood found hardly any evidence of backfire. (The one exception, interestingly, was the question of weapons of mass destruction in Iraq. But even on that, the backfire effect went away when they tweaked the wording of the question.)

"There's no evidence that backfire describes a common reflex of Americans when it comes to facts," Porter assures me. (Nyhan, for his part, never asserted that backfire was ubiquitous, just that it was a possible and particularly consequential result of fact-checking.)

Stories of failed replications in social psychology often grow ugly, with accusations of bullying and scientific misconduct flying in both directions. But in this story, researchers decided to team up to test the idea again.

The fact that Nyhan and Reifler's breakthrough study didn't replicate isn't a shocker. This happens all the time in science. One group of researchers publishes a breakthrough finding. Another lab tries to replicate it, and fails.

But instead of feuding, Nyhan, Reifler, Porter, and Wood came together to conduct a new study.

"If you believe in social science, this is an ideal way to resolve a dispute," Porter says. "If we can devise an experiment together, then the results are going to have something meaningful to say about our differing understandings of the world."

So the four researchers collaborated on two experiments with a wide range of people as subjects, including Trump and Hillary Clinton supporters.

The first experiment drew on Trump's exaggerations of crime statistics.

In the experiment, participants read one of five news articles. One was a control article about bird watching. Another just contained a summary of Trump's message without a correction. The third was an article that included a correction. The fourth included a correction, but then also a line of pushback from onetime Trump campaign manager Paul Manafort, who said the FBI's statistics were not to be trusted. The fifth included a line where Manafort really laid into the FBI, saying, "The FBI is certainly suspect these days after what they just did with Hillary Clinton."

The thinking here: If anyone should be able to incite a backfire effect among Trump supporters, it's Trump's campaign manager. Manafort gives Trump supporters cover. They can reject the correction and cite one of the most influential figures in the campaign. And if there's a time backfire ought to occur, it's during a presidential campaign, when our political identities are fully activated.

But it didn't happen. On average, all the study's participants were more likely to accept the correction when they read it. Trump supporters were more hesitant to accept it than Clinton supporters. But that's not backfire; that's reluctance. Manafort's assertion that the FBI statistics were not to be trusted didn't make much of a difference either.

"Everyone's beliefs about changing crime over the last 10 years became more accurate in the face of a correction," Nyhan says.

The research group then conducted a second experiment during the presidential debates. This one was conducted in near-real time: On the night of the first presidential debate, the group ran an online study with 1,500-plus participants.

The study focused on one Trump claim in particular. Trump said "thousands of jobs [are] leaving Michigan, Ohio ... they're just gone."

This, again, isn't true. The Bureau of Labor Statistics actually finds both states created 70,000 new jobs in the previous year. Half of the participants saw the correction; the other half did not.

Again, the researchers found no evidence of backfire. It's worth underscoring: This was on the night of the first presidential debate. It's the Super Bowl of presidential politics. If corrections aren't going to backfire during a debate, when will they?

In both experiments, the researchers couldn't find any instance of backfire. Instead, they found that corrections did what they were intended to do: nudge people toward the truth. Trump supporters were more resistant to the nudge, but they were nudged all the same.

But here's the kicker: The corrections didn't change their feelings about Trump (when participants in the corrections conditions were compared with controls).

"People were willing to say Trump was wrong, but it didn't have much of an effect on what they felt about him," Nyhan says.

So facts make an impression. They just don't matter for our decision-making, a conclusion that's abundant in psychological science.

(And if you're thinking, "How could one short experimental manipulation really change how much participants like Trump?" know that other research shows it's possible. Notably, studies conducted during the election found that just reminding white voters they may be a racial minority one day increased support for Trump.)

"The big question is: To what extent do those results generalize beyond Trump himself?" says Nyhan. Many of his supporters may have had to come to terms with his record of misstatements by the time this study was conducted.

Nyhan is reluctant to place the blame on Trump supporters themselves; it's just human nature to stand by our political party's candidates. But he says there's something wrong with our institutions, norms, and party leaders who enable the rise of candidates who constantly lie.

At least it's nice to know that facts do make an impression, right? On the other hand, we tend to avoid confronting facts that run hostile to our political allegiances. Getting partisans to confront facts might be easy in the context of an online experiment. It's much harder to do in the real world.

These results have not yet been peer-reviewed or published in an academic journal, so treat them as preliminary. But I did run them by several political science and psychology researchers for a sniff test.

"These two experiments are well done, and the data analysis appears to be straightforward and correct: we observe clear movement in subjects' beliefs as a result of factual corrections," Alex Coppock, who researches political decision-making at Yale, writes in an email. "This piece is nice because it adds to the (small but growing) consensus that backfire effects, if they exist at all, are rare."

Others commended the researchers for collaborating in the face of conflicting results. "I think this is exactly how the scientific process should operate as we try to explain human behavior," writes Asheley Landrum, who researches politically motivated reasoning at Texas Tech. "Social scientists, arguably, should be even more aware of motivated reasoning, recognizing that it also occurs in scientists."

Nyhan's research is about seeing if attitude change is possible. And this research often comes to frustrating ends. In one study, he and Reifler tested out four different interventions to try to nudge vaccine skeptics away from their beliefs. None made a difference. Though attitude change is elusive, he at least found a little of it within himself.

"Jason [Reifler] and I have definitely updated our beliefs about the prevalence of the backfire effect," Nyhan says. He won't say it's been debunked. But he's moving in that direction.

Link:
Trump supporters know Trump lies. They just don't care. - Vox

From Trump Tweets to Kardashian saga, how online behavior affects kids in real life – Chicago Tribune

Young children know that name-calling is wrong. Tweens are taught the perils of online bullying and revenge porn: It's unacceptable and potentially illegal.

But celebrities who engage in flagrant attacks on social media are rewarded with worldwide attention. President Donald Trump's most popular tweet to date is a video that shows him fake-pummeling a personification of CNN. Reality TV star Rob Kardashian was trending last week after attacking his former fiancée on Instagram in a flurry of posts so explicit his account was shut down. He continued the attacks on Twitter, where he has more than 7.6 million followers.

While public interest in bad behavior is nothing new, social media has created a vast new venue for incivility to be expressed, witnessed and shared. And experts say it's affecting social interactions in real life.

"Over time, the attitudes and behaviors that we are concerned with right now in social media will bleed out into the physical world," said Karen North, a psychologist and director of the University of Southern California's Digital Social Media Program. "We're supposed to learn to be polite and civil in society. But what we have right now is a situation where a number of role models are acting the opposite of that ... And by watching it, we vicariously feel it, and our own attitudes and behaviors change as a result."

Catherine Steiner-Adair, a psychologist and author of "The Big Disconnect: Protecting Childhood and Family Relationships in the Digital Age," said she's already seeing the effects.

She said she's been confronted by students across the country asking why celebrities and political leaders are allowed to engage in name-calling and other activities for which they would be punished.

On some middle-school campuses, "Trumping" means to grab a girl's rear end, she said.

And teenagers have killed themselves over the kind of slut-shaming and exposure of private images Kardashian leveled at Blac Chyna, with whom he has an infant daughter.

"We are normalizing behaviors, and it's affecting some kids," Steiner-Adair said. "And what's affecting kids that is profound is their mistrust of grown-ups who are behaving so badly. Why aren't they stopping this?"

Social media satisfies a human need for connection. Users bond over common interests and establish digital relationships with their favorite public figures, following and commenting on their lives just like they do their friends'.

Gossip is a bonding activity, and it doesn't take a Real Housewife to know people love to share dirt about others' perceived misdeeds. Collective disapproval creates a feeling of community, regardless of which side you're on. Having a common enemy is "one of the strongest bonding factors in human nature," North said.

With 352,000 retweets, Trump's CNN-pummeling post isn't in the realm of Ellen DeGeneres' Oscar selfie (3.4 million retweets). And Kardashian's rant against Chyna paled in popularity with Beyonce's Instagram pregnancy announcement, which collected 8 million likes.

Still, Trump's attack tweets have proven his most popular, according to a new study by Ohio State University Professor Jayeon "Janey" Lee.

"Attacks on the media were most effective," Lee said of her analysis of tweets posted during the presidential campaign. "Whenever Trump criticized or mocked the media, the message was more likely to be retweeted and 'favorited.' "

Trump, who has 33.4 million Twitter followers, has defended his social-media approach as "modern day presidential."

Cyber incivility, particularly when practiced by cultural leaders, can have a profound impact on human relations, North said.

Studies show that young people who witness aggressive behavior in adults model and expand on that behavior. She pointed to Stanford University psychologist Albert Bandura's famous "Bobo Doll Experiment," which found that kids who saw adults hit a doll in frustration not only hit the doll as well, but attacked it with weapons.

Social media is an atmosphere devoid of the social cues that mitigate behavior in real life, she said. When violating social norms in person, there's immediate feedback from others through body language and tone of voice. No such indicators exist online, and retweets can feel like validation.

Cruel and humiliating posts often become "an instant hit online," Steiner-Adair said. "It's one of the best ways to become popular."

Viral posts then get mainstream media attention, spreading digital nastiness into everyday conversation.

By not expressly rejecting cruel or hateful online behavior, "we are creating a bystander culture where people think this is funny," she said.

"When we tolerate leaders in the popular media like a Kardashian, or a president behaving in this way, we are creating a very dangerous petri dish for massive cultural change," Steiner-Adair said.

Young people, who may be the most plugged in, are getting mixed messages as they form their moral concepts.

"It behooves us all to question why we are participating in this mob of reactivity," Steiner-Adair said, "and what are the character traits we need to model for our children."

Here is the original post:
From Trump Tweets to Kardashian saga, how online behavior affects kids in real life - Chicago Tribune

NEW: Man accused of lewd behavior, faces human trafficking charge – Palm Beach Post

BOYNTON BEACH

Authorities are investigating a suspected case of human trafficking involving a 26-year-old suburban West Palm Beach man and a teenage girl.

Boynton Beach police arrested Steven Snipe on June 30 on three counts of lewd and lascivious battery against a person under the age of 18. Additional charges for human trafficking and production of child pornography are pending, awaiting the completion of a search warrant for a hotel room where Snipe is suspected of selling the teen into prostitution, and for his cellphone devices, according to a police report.

As of Monday morning, Snipe remained in custody at the Palm Beach County Jail after a judge set his bail bond amount at $45,000.

At least 10 people in Palm Beach County and one in Martin County have been arrested for human trafficking this year as local law enforcement agencies have increasingly focused on a crime described by many as modern-day slavery.

Police say that Snipe met a juvenile runaway about two months ago through the Backpage website and forced her into prostitution.

This is a developing story. Check back later for more details.

Read the original post:
NEW: Man accused of lewd behavior, faces human trafficking charge - Palm Beach Post

How a Syrian Writer Takes on War – New Republic

Many of the stories are about power, and the violence, both implicit and explicit, imbued in its existence. The entirety of "Ants" reads: "When I crushed a large number of ants by accident with my feet, I realized that weakness is punishment without wrongdoing." It has that special quality that gives allegories their power: It seems obvious, but only after you finish it. Many of the stories use animals or household objects as a window into the human. Later, in "Greatest Creatures," a mother ant and a son ant are discussing which species is better, humans or ants. The story ends when the mother points out that, though humans have many geniuses among them, they've been unable to prevent the catastrophic from occurring, and the fact that ants have prevented it makes them better. Like Alomar's best work, it makes a point that is equal parts silly and compelling: By most metrics, humans seem a great deal more important than ants, but it also seems obvious that whichever species finds a way to avoid destroying itself is the better one.

In "Who Deserves a Muzzle?" a dog watches his owners shout at one another and considers whether it makes sense that he be required to wear a muzzle and collar when his behavior is so much better than theirs. Later, in "They Don't Know How to Bark," two dogs reflect with sympathy and pity on humans' poor sense of smell and ugly language. Again, Alomar is being fundamentally ridiculous while making an odd sort of sense: He's writing against the arrogance that can come from a limited perspective. But these are not children's fables: Alomar often centers greed, arrogance, cruelty, and above all, folly. When inanimate objects attempt to replicate what they see from humans as a means of self-determination, it has disastrous consequences, as in the collection's title story:

Some of the teeth of the comb were envious of human class differences. They strived to increase their height, and, when they succeeded, began to look with disdain on their colleagues below. After a little while, the comb's owner felt a desire to comb his hair. But when he found it in this state, he threw it in the garbage.

In Alomar's world, human behavior seems destructive, even to people, as long as they're not the ones they're observing. The change in perspective is what reveals human beings as ridiculous: If it is ridiculous for a comb to be vain, how ridiculous is it for a person?

Throughout the stories, humanity is often portrayed as the enemy of everything within its striking distance. But the harm is often inflicted in the background, as if it's just something that happens. When Alomar turns his attention to the elements that make that harm possible, things begin to feel much less silly. In "A Taste," the devil tastes a drop of human hatred, is poisoned, and dies. Alomar hits this note again in "Human Malice," where an argument between a nuclear bomb and a grenade over which is more evil is ended when human malice intervenes and points out that it created them both. Alomar posits hatred and malice as elements of human nature, not its sum total, but in emphasizing their destructive powers, he recognizes their control over the way huge swaths of the world live. The effect is that Alomar's stories give brief flashes of insight into the magnitude of human evil, like staring directly into the sun for a moment before having to look away.

It's not that Alomar is cynical; he's exhausted. "Journey of Life," the first story in The Teeth of the Comb, follows a nameless, sexless character as they pore over maps and walk through crowds shouting for a beloved they never find. The true object of the search is only revealed in the last line: "I stood on my shaking legs and continued my journey, searching for humanity until the last moment." The character maintains hope because they are willing to continue searching, but the reader can see the truth: they'll be looking forever. Importantly, Alomar does not denigrate his character for their wrongheadedness; instead, he casts the quest as noble, in spite of its futility.

Read the rest here:
How a Syrian Writer Takes on War - New Republic