Category Archives: Human Behavior

Scanalytics wins Cisco award – BizTimes.com (Milwaukee)

Milwaukee-based startup Scanalytics Inc. has earned a top award from technology giant Cisco in an IoT for Business development competition.

Joe Scanlin and Matt McCoy founded Milwaukee startup Scanalytics.

Scanalytics, which developed smart flooring technology, earned first place in the Best Smart Building technology category in the contest, which was held this month at Viva Technology 2017 in Paris. More than 5,000 startups and 68,000 people attended Viva Technology.

Scanalytics has developed a sensor-based engagement and analytics platform that monitors human behavior through foot traffic and predictive analytics. Among the uses the company has tested are retail product sales, in which a salesperson could be alerted if a customer is standing near a product, and business conferences, in which organizers can track booth traffic. The company deployed sensors to more than 100 clients globally in 2016.

Cisco and Scanalytics have now collaborated to create a new use for the smart flooring technology. Joe Scanlin, chief executive officer of Scanalytics, and Petr Bambasek, director of product, worked with Cisco engineers to link the company's technology to Cisco's Spark cloud collaboration program.

The team created a product through which a chat bot pulls data from the Scanalytics application programming interface (API) in real time and communicates it to a building's occupants via Spark. Among the potential uses are sending a reminder to employees to get up and move during the work day, and informing building maintenance staff whether a bathroom needs cleaning based on usage.
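The decision logic behind such a bot can be sketched in a few lines. The sketch below is purely illustrative: the zone records, field names, and thresholds are invented, and the actual Scanalytics API calls and Spark message delivery are omitted.

```python
def reminders(zones):
    """Decide which Spark messages to send from floor-sensor readings.

    zones: list of dicts like {"name": ..., "kind": "desk" or "restroom",
           "occupied_min": minutes continuously occupied, "uses": visits
           since last cleaning}. All fields are hypothetical.
    Returns a list of (recipient, message) pairs.
    """
    messages = []
    for z in zones:
        # Nudge employees who have been seated too long (threshold invented).
        if z["kind"] == "desk" and z["occupied_min"] >= 60:
            messages.append((z["name"], "You've been seated for an hour - time to stretch!"))
        # Alert maintenance once a restroom has seen heavy use.
        if z["kind"] == "restroom" and z["uses"] >= 50:
            messages.append(("maintenance", f"{z['name']} has seen {z['uses']} uses - schedule cleaning"))
    return messages

zones = [
    {"name": "desk-14", "kind": "desk", "occupied_min": 75, "uses": 0},
    {"name": "restroom-2F", "kind": "restroom", "occupied_min": 0, "uses": 62},
    {"name": "desk-03", "kind": "desk", "occupied_min": 20, "uses": 0},
]
msgs = reminders(zones)
```

In a real deployment the returned pairs would be posted through Spark's messaging API rather than collected in a list.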

"We spend a majority of our lives indoors and these environments have a huge impact on everything from business efficiencies and productivity to our overall wellness," Scanlin said. "Physical environments need to be properly equipped to capture, store and access information on how we interact with them, so they can operate like an autonomous nervous system and adjust themselves accordingly."


Facebook Makes a Step Towards Messenger Monetization – nwitimes.com

In order for Messenger to take off financially, Facebook (NASDAQ: FB) needs to "get a lot of businesses using it organically and build the behavior for people that they reach out to businesses for different things," according to CEO Mark Zuckerberg on the last earnings call. Therein lies the real opportunity for Messenger to transform the way that consumers interact with companies.

Earlier this year at its F8 developer conference, Facebook announced a new Discover section for Messenger, which is intended to showcase "amazing experiences" for people and businesses. Yesterday, the social network announced that the Discover section is now rolling out across the U.S. There are different types of things in Discover, like reading articles or getting sports news, but by far the most meaningful from an investing perspective is the potential to introduce users to branded chatbots.

Changing established human behaviors is hard, but there's been a growing trend of users turning to social media to reach customer support. That movement mostly started on Twitter (NYSE: TWTR), but Facebook has a real opportunity here to steal the show and run with it, particularly when it comes to creating a business out of the trend.

When this shift started nearly a decade ago, there was speculation that Twitter would start charging businesses that were using its service for customer service. Co-founder Biz Stone penned a blog post shooting down the idea, promising that Twitter's existing services would remain free for all accounts, corporate accounts and dedicated support accounts included. At the time, the company was still trying to brainstorm new services that it could offer companies for a fee, but Stone (who just recently returned to Twitter) said Twitter had no announcements to make then. There is now a small "Provides support" indicator next to official support accounts, but that's about it. Indirectly, support accounts may garner some user data from their interactions that could perhaps be used for ad targeting.

In the years since, Twitter has touted itself as an effective customer service platform, but it has not announced any new revenue-generating products, which is a huge missed opportunity considering the simple fact that there's always more money in enterprise offerings than consumer ones. So while Twitter has seemingly hit a wall in terms of monetizing the growing trend of social media-based customer service, Facebook not only has an opportunity to become a leader, it also has a more viable route to monetization.

What's less clear is whether Facebook is currently charging companies a fee to be included in the new Discover section. Considering how new it is, it wouldn't make much sense to charge. In His Zuckness' words, building that behavior is "the first thing that we need to do on Messenger and WhatsApp." Sending a message directly to a company is but a small behavioral step from tweeting at a company.

Automation has long been the hardest part about scaling up customer service for any organization. The chatbots that Facebook has been developing hope to solve that conundrum once and for all, much to the dismay of the roughly 2.7 million customer service representatives in the U.S. (as of May 2016, according to the Bureau of Labor Statistics) who could see their jobs threatened by chatbots.

Rudimentary chatbots have been around for decades, since the dawn of computing in the '50s and '60s. You've probably heard of the Turing test. The big difference between then and now is that the modern generation of chatbots hopes to carry conversations that are more organic and intuitive. They need to be able to follow conversations, understand context, and more. This is no easy task: Facebook's chatbot failure rate was recently estimated at 70%. Let's also not forget Microsoft's experimental chatbot Tay from last year, which was immediately commandeered into a genocidal, racist, sexist murderbot by Twitter trolls.

These are the two critical pieces to this puzzle: Facebook needs to both build up the consumer behavior while tackling the technical challenge of creating compelling chatbots for companies to use. Neither one is easy, but Facebook can work on both concurrently. Importantly, Facebook is not under financial pressure, since its core ad business is booming and shares are trading at all-time highs. Facebook can take its time to make sure it executes. You can't say the same about Twitter.


Evan Niu, CFA owns shares of Facebook. Evan Niu, CFA has the following options: long January 2018 $120 calls on Facebook. The Motley Fool owns shares of and recommends Facebook and Twitter. The Motley Fool has a disclosure policy.


Documentary ‘Food Evolution’ turns to reason to discuss GMO controversy – Los Angeles Times

Calm, careful, potentially revolutionary, "Food Evolution" is an iconoclastic documentary on a hot-button topic. Persuasive rather than polemical, it's the unusual issue film that deals in counterintuitive reason rather than barely controlled hysteria.

As directed by Scott Hamilton Kennedy, "Food Evolution" wades into the controversy that makes the term GMO (genetically modified organisms) what Jon Stewart once called "the three most terrifying letters in the English language."

For what right-thinking citizen hasn't quailed at the thought of armies of artificially conceived zombie fruits and vegetables marching in lockstep under the command of monster corporation Monsanto until they take over the world?

As environmental activist Mark Lynas says, "it's difficult to pay Monsanto a compliment. It's like praising witchcraft."

But taking as his theme a quote attributed to Mark Twain that posits, "It's easier to fool people than to convince them they have been fooled," filmmaker Kennedy wants us to consider the notion that much of what we feel about GMOs may be wrong.

Previously responsible for the splendid "OT: Our Town" and the Oscar-nominated "The Garden," about the plight of a 14-acre community garden in South Los Angeles, Kennedy is a veteran documentarian.

Here he's engaged the mellifluous voice of science celebrity Neil deGrasse Tyson as narrator and made sure to talk to people on both sides of the issue, partisans who, ironically, all have the same goal: safe, abundant food for everyone without the use of excessive toxic chemicals.

It is in fact the question of how to feed the staggering number of people in the world (more than 7 billion now, 9 billion by 2050) that was one of the stimuli that started Kennedy on this project. And he wants you to remember that trying to modify plants to emphasize desirable aspects is something farmers have been doing for a long time.

"Food Evolution" begins in Hawaii in 2013 when the big island's Hawaii County Council held hearings on whether to make the location into the world's first GMO-free zone.

That was ironic because Hawaii turns out to be a state with a major GMO success story, the rainbow papaya, which enabled papaya farming to come back from the dead after a devastating attack of disease in the 1990s.

While anti-GMO activists like Jeffrey Smith talk darkly of GMOs as "thoughtless, invasive species," the other side wrings its hands about pervasive doomsday tactics and distrust of scientific data.

"It's so much easier to scare people than reassure them," says writer Mark Lynas, with food authority Michael Pollan adding, "I don't believe fear-mongering has helped. I'm careful never to say GMOs are dangerous."

One statistic the film cites reveals the considerable gap (88% versus 37%) between what scientists and laypeople say about whether GMOs are safe to eat.

"Food Evolution" takes time to carefully parse several issues that arise in the debate, like tumors in rats that eat GMO food (they get tumors no matter what they eat) and poundage versus toxicity in pesticide use.

The film also emphasizes that decisions made in the developed world can have global implications, exploring difficulties farmers in Uganda are having gaining access to the GMO bananas they want to combat decimation by disease.

"Food Evolution" certainly understands the larger factors that put GMO foods in the crosshairs: societal fury at corporate lying and greed, and distrust of Monsanto in particular as the developer of DDT and Agent Orange.

But finally the film is more troubled by the erosion of trust in science and by anti-GMO activists like Zen Honeycutt, who says on camera that she trusts the personal experiences of mothers more than the conclusions of scientists. As writer Lynas says, "If you throw science out, there is nothing."

Though it ultimately sides with the pro-GMO camp, "Food Evolution" makes some fascinating points about human behavior along the way, about how we don't make decisions based on facts as often as we think we do. This documentary may not change your mind, but it will make you consider what caused you to decide in the first place.

-------------

Food Evolution

Not rated

Running time: 1 hour, 32 minutes

Playing: Laemmle Monica, Santa Monica


kenneth.turan@latimes.com

@KennethTuran


Want to understand Russia’s economy? Try reading Tolstoy. – Marketplace.org

By David Brancaccio

June 28, 2017 | 5:00 AM

Economics is fundamentally the study of human behavior. Yes, it's steeped in equations and math, but some argue it's equally based on philosophy and the arts. A new book by Morton Schapiro and Gary Saul Morson looks at what insight economists can gain from reading classic literature.

Northwestern University President Morton Schapiro joined Marketplace Morning Report host David Brancaccio to discuss "Cents and Sensibility: What Economics Can Learn from the Humanities."

Click on the audio player above to hear their conversation.


A leading Silicon Valley engineer explains why every tech worker needs a humanities education – Quartz

In 2005, the late writer David Foster Wallace delivered a now-famous commencement address. It starts with the story of the fish in water, who spend their lives not even knowing what water is. They are naively unaware of the ocean that permits their existence, and the currents that carry them.

The most important education we can receive, Wallace goes on to explain, isn't really about the capacity to think, but rather about the choice of what to think about. He talks about finding appreciation for the richness of humanity and society. But it is the core concept of meta-cognition, of examining and editing what it is that we choose to contemplate, that has fixated me as someone who works in the tech industry.

As much as code and computation and data can feel as if they are mechanistically neutral, they are not. Technology products and services are built by humans who build their biases and flawed thinking right into those products and services, which in turn shapes human behavior and society, sometimes to a frightening degree. It's arguable, for example, that online media's reliance on clickbait journalism, and Facebook's role in spreading fake news or otherwise sensationalized stories, influenced the results of the 2016 US presidential election. This criticism is far from outward-facing; it comes from a place of self-reflection.

I studied engineering at Stanford University, and at the time I thought that was all I needed to study. I focused on problem-solving in the technical domain, and learned to see the world through the lens of equations, axioms, and lines of code. I found beauty and elegance in well-formulated optimization problems, tidy mathematical proofs, clever time- and space-efficient algorithms. Humanities classes, by contrast, I felt to be dreary, overwrought exercises in finding meaning where there was none. I dutifully completed my general education requirements in ethical reasoning and global community. But I was dismissive of the idea that there was any real value to be gleaned from the coursework.

Upon graduation, I went off to work as a software engineer at a small startup, Quora, then composed of only four people. Partly as a function of it being my first full-time job, and partly because the company and our product, a question-and-answer site, were so nascent, I found myself for the first time deeply considering what it was that I was working on, and to what end, and why.

I was no longer operating in a world circumscribed by lesson plans, problem sets and programming assignments, and intended course outcomes. I also wasn't coding to specs, because there were no specs. As my teammates and I were building the product, we were also simultaneously defining what it should be, whom it would serve, what behaviors we wanted to incentivize amongst our users, what kind of community it would become, and what kind of value we hoped to create in the world.

I still loved immersing myself in code and falling into a state of flow: those hours-long intensive coding sessions where I could put everything else aside and focus solely on the engineering tasks at hand. But I also came to realize that such disengagement from reality and societal context could only be temporary.

The first feature I built when I worked at Quora was the block button. Even when the community numbered only in the thousands, there were already people who seemed to delight in being obnoxious and offensive. I was eager to work on the feature because I personally felt antagonized and abused on the site (gender isn't an unlikely reason as to why). As such, I had an immediate desire to make use of a blocking function. But if I hadn't had that personal perspective, it's possible that the Quora team wouldn't have prioritized building a block button so early in its existence.

Our thinking around anti-harassment design also intersected a great deal with our thinking on free speech and moderation. We pondered the philosophical question, also very relevant to our product, of whether people were by default good or bad. If people were mostly good, then we would design the product around the idea that we could trust users, with controls for rolling back the actions of bad actors in the exceptional cases. If they were by default bad, it would be better to put all user contributions and edits through approval queues for moderator review.

We debated the implications for open discourse: If we trusted users by default, and then we had an influx of low-quality users (and how appropriate was it, even, to be labeling users in such a way?), what kind of deteriorative effect might that have on the community? But if we didn't trust Quora members, and instead always gave preference to existing users who were known to be high quality, would we end up with an opinionated, ossified, old-guard, niche community that rejected newcomers and new thoughts?

In the end, we chose to bias ourselves toward an open and free platform, believing not only in people but also in positive community norms and our ability to shape those through engineering and design. Perhaps, and probably, that was the right call. But we've also seen how the same bias in the design of another, pithier public platform has empowered and elevated abusers, harassers, and trolls to levels of national and international concern.
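The two moderation philosophies under debate can be reduced to a tiny routing function. This is a sketch of the design space, not Quora's actual implementation; the policy names and states are invented.

```python
def route_contribution(author_trusted, policy):
    """Where a new user contribution goes under two moderation philosophies.

    "optimistic"  - trust users by default: publish immediately, and rely
                    on rollback tools for the exceptional bad actor.
    "pessimistic" - assume bad faith: untrusted users' contributions wait
                    in a moderator review queue before appearing.
    """
    if policy == "optimistic":
        return "published"
    if policy == "pessimistic":
        return "published" if author_trusted else "pending_review"
    raise ValueError(f"unknown policy: {policy}")
```

The whole free-speech trade-off lives in that one branch: the optimistic policy pays for openness with cleanup work after the fact, while the pessimistic one pays with friction for every newcomer.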

At Quora, and later at Pinterest, I also worked on the algorithms powering their respective homefeeds: the streams of content presented to users upon initial login, the default views we pushed to users. It seems simple enough to want to show users good content when they open up an app. But what makes for good content? Is the goal to help users discover new ideas and expand their intellectual and creative horizons? To show them exactly the sort of content that they know they already like? Or, most easily measurable, to show them the content they're most likely to click on and share, and that will make them spend the most time on the service?
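Those three definitions of "good content" amount to three different sort keys over the same candidates. The per-item scores below (p_click, novelty, affinity) are invented illustrative features, not any company's real ranking signals.

```python
def rank_feed(candidates, objective):
    """Rank candidate feed items under different definitions of 'good content'.

    Each candidate carries hypothetical per-user scores: p_click (predicted
    click/share likelihood), novelty (distance from the user's usual
    interests), and affinity (similarity to known tastes).
    """
    keys = {
        "engagement": lambda c: c["p_click"],    # maximize clicks and time spent
        "discovery": lambda c: c["novelty"],     # expand horizons with new ideas
        "familiarity": lambda c: c["affinity"],  # more of what they already like
    }
    return sorted(candidates, key=keys[objective], reverse=True)

candidates = [
    {"id": "how-to-bake", "p_click": 0.9, "novelty": 0.1, "affinity": 0.8},
    {"id": "intro-to-opera", "p_click": 0.3, "novelty": 0.9, "affinity": 0.2},
]
```

The same two items swap places depending on the objective, which is exactly the editorial question hidden inside the algorithm.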

Ruefully, and with some embarrassment at my younger self's condescending attitude toward the humanities, I now wish that I had strived for a proper liberal arts education. That I'd learned how to think critically about the world we live in and how to engage with it. That I'd absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I'd had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I'd realized that these were worthwhile thoughts to fill my mind with: that all of my engineering work would be contextualized by such subjects.

It worries me that so many of the builders of technology today are people like me: people who haven't spent anywhere near enough time thinking about these larger questions of what it is that we are building, and what the implications are for the world.

But it is never too late to be curious. Each of us can choose to learn, to read, to talk to people, to travel, and to engage intellectually and ethically. I hope that we all do so, so that we can come to acknowledge the full complexity and wonder of the world we live in, and be thoughtful in designing the future of it.

Follow Tracy on Twitter.


Siri Needs Her Own Personal Assistant – TheStreet.com

Apple Inc (AAPL) is searching for a "Siri Event Maven" to help its digital personal assistant keep track of holidays, events and trending pop culture.

As Apple prepares to launch the HomePod later this year, it's focusing on improving Siri's understanding and strategic awareness. As of yesterday, the company is looking for a full-time employee to help Siri keep track of social media trends, culture and human behavior as the Siri Event Maven.

The company said the new hire will work with engineers, analysts and designers to "provide strategic awareness of cultural happenings in the collective zeitgeist" and assist in developing responses for Siri to events its original creators might have missed (Talk Like a Pirate Day, for example).

Apple stock was slightly down during midday trading.

What's Hot On TheStreet

Another bank is bullish on Alibaba: JP Morgan initiated Chinese e-commerce giant Alibaba (BABA) with an overweight rating and $190 price target in a new note Tuesday, representing more than 30% growth over Monday's closing price of $142.73. In JP Morgan's eyes, Alibaba is entering a transformation from a pure play e-commerce company to a data-driven beast that stands to power its bottom line more than most expect.

"We believe Alibaba's core commerce is expanding from traffic monetization to data monetization and such trend will quickly expand to its media/cloud businesses," writes JP Morgan analyst Alex Yao. "Such expansion not only allows Alibaba to tap into non-transaction-based corporate budget (e.g. market research, brand awareness, and customer service), but also supports our investment thesis based on sustainable revenue/earnings growth."


W(h)ither the Humanities? – HuffPost

Review of Cents and Sensibility: What Economics Can Learn From the Humanities. By Gary Saul Morson and Morton Schapiro. Princeton University Press. 307 pp. $29.95

Although they make lots of mistakes, economists are in demand. By contrast, the humanities are in deep trouble. In 2014, President Obama opined that "folks can make a lot more with skilled manufacturing or the trades than they might with an art history degree." A year later, Jeb Bush acknowledged that "liberal arts is a great thing," only to remind potential philosophy majors to realize "you're going to be working at Chick-fil-A."

More and more college students seem to agree. In the late 1960s, almost 20 percent of recipients of bachelor's degrees majored in a humanities discipline; in 2010, the figure was 8 percent. Little wonder that many taxpayers and state legislators have concluded that philosophy, literature, linguistics, history, art history, anthropology, and gender studies are luxuries we can no longer afford.

In Cents and Sensibility, Gary Morson, a professor of Slavic languages and literature at Northwestern University, and Morton Schapiro, a professor of economics and the president of Northwestern, maintain that humanistic disciplines contribute essential ingredients (the role of contingency and context and the limitations of abstract one-size-fits-all models) to studies of human behavior and the complex challenges of our time. In this book, Morson and Schapiro identify concrete ways in which economists studying higher education, the family, and the development of poor countries can benefit from three fundamental humanistic capabilities: an appreciation of people as inherently cultural; stories as an essential form of explanation; and ethics in all its irreducible complexity.

Cents and Sensibility offers one argument, among many, on behalf of the humanities. Their argument is often, but not always, persuasive. That said, the authors' call for a dialogue between economists and humanists is welcome. Their indictment of humanists for being spectacularly inept and clueless in making the case for their disciplines is urgently necessary. As is their claim that quantitative rigor and a focus on policy can and should be supplemented with the empathy, judgment and wisdom that define the humanities at their best.

The authors use fresh and fascinating examples to bolster the oft-repeated claim that ethical considerations should be incorporated into the analysis of economists and policy makers. To bolster the standing of their institutions in the highly influential national rankings of colleges and universities, Morson and Schapiro point out, some administrators cross ethical lines. To increase the yield (the percentage of accepted students who matriculate), they reject excellent students who they have reason to believe will go elsewhere. They count students who send in a postcard expressing interest (but don't submit essays and recommendations) as applicants. They ignore the standardized test scores of international students in English, but include scores in mathematics. They cook the books about the percentage of alumni who make an annual gift to their alma mater. Worst of all, Morson and Schapiro report that rating agencies do not fact-check the data provided by colleges and universities.

Cents and Sensibility also documents the failure of rational choice and behavioral economists and psychologists to consider the culture, traditions, and values of the people they are investigating. Although cultural evidence cannot be quantified, Morson and Schapiro show how it helps explain why such a small percentage of African-American students with high grade point averages and test scores attend selective colleges and universities (even when they are offered financial aid).

While acknowledging that income and family backgrounds are important variables in predicting decisions about marriage, divorce, and family planning, the authors make a compelling case that social and cultural context matters as well.

In important respects, Cents and Sensibility reminds us of the capaciousness of the humanities. A recent study, the authors reveal, found that readers of fiction did better on tests measuring empathy, social perception, and emotional intelligence. One reason, Morson and Schapiro suggest, is that fiction, more than real life, connects inner states to outward behavior, and encourages intimacy between characters and readers.

In other ways, however, Cents and Sensibility provides a rather narrow view of the humanities. Although Morson and Schapiro put culture front and center, they barely mention the discipline of history. They limit their discussion of literature to realistic novels. They do not emphasize sufficiently the unique capacity of the humanities to teach students how to analyze texts, conduct research, and write clear and persuasive essays.

Despite these caveats, Cents and Sensibility sends a powerful and timely message. The humanities, the authors conclude, if humanists will only believe in them, have a critical role to play in education, nurturing in students of all ages truths about human beings that other disciplines have not attained, a respect for diverse points of view, culture, and ethics, and an escape from the prison house of self and the limitations of time and place.

The humanities are in danger. Americans inside and outside the academy need to act before it's too late.



Why Fake News Goes Viral: Science Explains – Live Science

People's limited attention spans, plus the sheer overload of information on social media, may combine to make fake news and hoaxes go viral, according to a new study.

Understanding why and how fake news spreads may one day help researchers develop tools to combat its spread, the researchers said.

For example, the new research points toward curbing the use of social bots (computer programs that automatically generate messages, such as tweets, that inundate social media with low-quality information) to prevent the spread of misinformation, the researchers said.

However, "Detecting social bots is a very challenging task," said study co-author Filippo Menczer, a professor of informatics and computer science at the Indiana University School of Informatics and Computing.

Previous research has shown that some of people's cognitive processes may help to perpetuate the spread of misinformation such as fake news and hoaxes, according to the study, published today (June 26) in the journal Nature Human Behaviour. For example, people tend to show "confirmation bias," paying attention to and sharing only the information that is in line with their beliefs, while discarding information that is not. Studies show that people do this even if the information that confirms their beliefs is false.

In the new study, the researchers looked at some other potential mechanisms that may be at play in spreading misinformation. The researchers developed a computer model of meme sharing to see how individual attention and the information load that social media users are exposed to affect the popularity of low-quality versus high-quality memes. The researchers considered memes to be of higher quality if they were more original, had beautiful photos or made a claim that was true.

The investigators found that low- and high-quality memes were equally likely to be shared because social media users' attention is finite and people are simply too overloaded with information to be able to discriminate between low- and high-quality memes. This finding explains why poor-quality information such as fake news is still likely to spread despite its low quality, the researchers said.
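
The mechanism the researchers describe -- finite feeds in which memes compete for limited attention -- can be illustrated with a toy simulation. This is a minimal sketch, not the study's actual model: the ring network, feed size and mild quality preference are all invented for the example.

```python
import random

def simulate(n_agents=50, feed_size=5, steps=2000, p_new=0.3, seed=1):
    """Toy meme-diffusion model: memes compete for space in bounded feeds."""
    rng = random.Random(seed)
    feeds = [[] for _ in range(n_agents)]
    shares = {}  # meme id -> (quality, share count)
    next_id = 0
    neighbors = lambda i: [(i - 1) % n_agents, (i + 1) % n_agents]  # ring network
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if rng.random() < p_new or not feeds[i]:
            meme = (next_id, rng.random())  # new meme with quality in [0, 1]
            next_id += 1
        else:
            # Finite attention: reshare from whatever happens to be in the
            # feed, with only a mild preference for higher quality.
            pool = rng.sample(feeds[i], k=min(2, len(feeds[i])))
            meme = max(pool, key=lambda m: m[1])
        mid, q = meme
        shares[mid] = (q, shares.get(mid, (q, 0))[1] + 1)
        for j in neighbors(i):
            feeds[j].append(meme)
            if len(feeds[j]) > feed_size:
                feeds[j].pop(0)  # older memes fall out of the finite feed
    return shares
```

Even with the mild quality preference, memes that happen to land in many feeds early can out-share better memes, which is the equal-likelihood effect the study reports.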

One way to help people better discriminate between low- and high-quality information on social media would be to reduce the extent of information load that they are exposed to, the researchers said. One key way to do so could involve decreasing the volume of social media posts created by social bots that amplify information that is often false and misleading, Menczer said.

Social bots can act as followers on social media sites like Twitter, or they can be run as fake social media accounts that have their own followers. The bots can imitate human behavior online and generate their own online personas that can in turn influence real, human users of social media.

"Huge numbers" of these bots can be managed via special software, Menczer said.

"If social media platforms were able to detect and suspend deceptive social bots, there would be less low-quality information in the system to crowd out high-quality information," he told Live Science.

However, both detecting and suspending such bots is challenging, he said. Although machine-learning systems for detecting social bots exist, these systems are not always accurate. Social media platforms have to be conservative when using such systems, because the cost of a false-positive error (in other words, suspending a legitimate account) is generally much higher than that of missing a bot, Menczer said.
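
The conservative stance Menczer describes amounts to a threshold choice: flag an account as a bot only when the classifier's score is high enough to keep the false-positive rate (legitimate accounts wrongly suspended) under a hard cap. A minimal sketch, using made-up classifier scores:

```python
def pick_threshold(human_scores, bot_scores, max_fpr=0.01):
    """Return the lowest score cutoff whose false-positive rate
    (humans wrongly flagged as bots) stays at or under max_fpr."""
    best = None
    for t in sorted(set(human_scores) | set(bot_scores), reverse=True):
        fpr = sum(s >= t for s in human_scores) / len(human_scores)
        if fpr > max_fpr:
            break      # lowering the bar further flags too many humans
        best = t       # still acceptable; keep lowering the bar
    return best

# Hypothetical scores: most humans score low, these bots score high.
humans = [0.1, 0.2, 0.3, 0.4] * 25   # 100 legitimate accounts
bots = [0.9] * 10
threshold = pick_threshold(humans, bots, max_fpr=0.01)
```

When the human and bot score distributions overlap, the same cap forces the detector to miss many bots -- the accuracy trade-off described above.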

More research is needed to design fast and more accurate social bot detection systems, he said.

Originally published on Live Science.


Training the cyber Sherlocks – The Herald Bulletin

With cyberattacks on the rise, so too is the need for experts to protect companies, government agencies and individuals from those attacks and the damage they can cause.

That need has prompted Ivy Tech Community College student Dave Houchin to pursue a degree in cybersecurity/information assurance at the college's Terre Haute campus.

"It is an exponentially growing career choice," said the 34-year-old, who will earn his degree later this year. Demand for services, such as securing and maintaining networks, will only increase, as will job opportunities, he said.

Many cybercrimes go unreported, he said, often because businesses are worried news of such crimes could hurt their reputation.

"The internet as we know it is still a wide-open frontier filled with lawlessness," much as was seen in the early days of pioneers and cattle drives of the Wild West, he said, and cybercriminals are taking advantage of the security lapses.

While one of his goals is career advancement, he also believes being educated in cybersecurity is important "to protect our economy from theft, our citizens from harm and our nation from discord," he wrote in an email. His I.T. internship is with ThyssenKrupp Presta, where he has worked in production for several years.

Ivy Tech has offered a two-year degree in cybersecurity/information assurance since 2013, and it offers a number of related certificate programs.

Purdue and Indiana universities have several well-established programs and research initiatives, and now, Indiana State University is working on a cybersecurity program that focuses on the human missteps that can lead to security breaches.

Indiana State faculty member Bill Mackey has a cybersecurity firm that employs ISU interns.

A growing need

According to the National Security Agency, "The newest threats we face, and perhaps the fastest growing, are those in cyberspace. Cyber threats to U.S. national and economic security increase each year in frequency, scope and severity of impact. Cyber criminals, hackers and foreign adversaries are becoming more sophisticated and capable every day in their ability to use the internet for nefarious purposes."

The issue came to the forefront with Russia's hacking of Democratic National Committee emails, an act intended to influence the U.S. presidential election.

The FBI's website describes the collective impact of cybercrime as "staggering." Billions of dollars are lost every year repairing systems hit by such attacks. Some attacks take down vital systems, disrupting and sometimes disabling the work of hospitals, banks and 9-1-1 services around the country.

Who is behind such attacks? "It runs the gamut from computer geeks looking for bragging rights to businesses trying to gain an upper hand in the marketplace by hacking competitor websites, from rings of criminals wanting to steal your personal information and sell it on black markets to spies and terrorists looking to rob our nation of vital information or launch cyber strikes," according to fbi.gov.

Earlier this month, the NSA and the Department of Homeland Security designated Ivy Tech as a National Center of Academic Excellence in its cyber defense education program. According to the NSA, its goal is to reduce vulnerability in the country's information infrastructure by promoting higher education and research in cyber defense.

The recognition is "kind of a big deal," said Charles Peebles, department chair of the School of Computing and Informatics at Ivy Tech's Wabash Valley Region.

The two-year program is "pretty thorough," he said. "It covers all major areas you need to know to prevent a hack."

Students must know networks, software and server administration. "They have to know a little of everything to be a good cyber agent," he said.

The program is a popular one, "especially with all the breaches we've had that are getting publicized and with all the ransomware, where people are clicking on links that end up taking control of their network and they have to pay someone money to get access back to their files and information," Peebles said.

"Everybody should be concerned, with today's criminals out there," he said. "Everybody should have some kind of protection on their computer."

Those who earn the degree "can do just about anything," he said, such as working as a network or server administrator. The mean salary for cyber information analysts in Indiana is about $37.50 per hour, which translates into about $78,000 annually, he said.

On average, there are 629 annual job openings in cybersecurity in Indiana, according to the 2014-2024 Department of Workforce Development/Bureau of Labor Statistics Occupational Demand Report.

New offering at ISU

At Indiana State, a new cybersecurity studies program is in the works that focuses on the human missteps that can lead to security breaches; it will be offered through the Department of Criminology and Criminal Justice.

Faculty member Bill Mackey said the new program will be focused on human behavior.

"We already have a lot of people that know how to work with computers and code and create and analyze viruses and malware," he said. "But reports from recent years show us that human exploits are 90 percent plus of the actual cybercrime intrusion."

Rather than trying to infiltrate a company's expensive computer technology systems, hackers find it's easier to just get the administrative assistant's name and password ... "Then they don't need to hack into the system," he said.

Students in the future ISU program will learn to analyze employee behavior, determine who is vulnerable and look at training programs to change the behavior so those employees are not the weak link that ends up creating a security breach. "We're teaching them how to be a human anti-virus," he said.

For example, if some employees are vulnerable to phishing emails, "How do we train employees to not click on things?" Mackey said.

Four ISU students have interned at his cybersecurity business, called Alloy Cybersecurity.

"Everyone in every workplace needs to be concerned about cybersecurity because it takes just one person to not care and it's all gone," Mackey said. "This is not slowing down. This is not going to stop. It's getting worse every year."

The average person should be concerned, but not paranoid, he said. He suggests people can do a lot to protect themselves by taking five seconds before responding to an email if they are not sure who it came from, and taking 10 minutes once a year to learn about new frauds and scams out there.

Madison Meyer, an ISU senior and criminology major, has been working with Mackey for about six months on cybercrime research and with Alloy.

Prior to that, she had no experience with cybersecurity. What she's learned has been eye-opening, she said.

At Alloy, students created phishing emails to assess a business's employee vulnerabilities. "We were more successful than we expected," she said. Students monitored what happened but never actually hacked the system.

The Sellersburg native said her career interests include law enforcement and the FBI.

Sue Loughlin writes for the Tribune-Star in Terre Haute and can be reached at sue.loughlin@tribstar.com.


The Ethics of Using AI in Advertising – AdAge.com


As an industry, advertising has long been obsessed with understanding human behavior. The ability of artificial intelligence (AI) systems to transform vast amounts of complex, ambiguous information into insight is driving increasingly personal analysis of market behavior. There are nearly 2 billion Facebook users globally. About 200 billion tweets are shared on Twitter every year. Google processes 40,000+ searches every second. We can now assess the entirety of an individual's social activity: every word, every picture, every emoji.

Add to that location-based data from mobile phones, transactional data from credit cards and adjacent data sets like news and weather. When machine learning and advanced algorithms are applied to these oceans of digital information, we can intimately understand the motivations of almost every consumer.

These are undeniably powerful tools, and no one can blame the advertising industry for rapidly adopting them.

But AI also introduces troubling ethical considerations. Advertisers may soon know us better than we know ourselves. They'll understand more than just our demographics. They'll understand our most personal motivations and vulnerabilities. Worrisomely, they may elevate the art of persuasion to the science of behavior control.

Aside from these fears, there are more practical considerations around the use of AI in advertising: inherently biased data, algorithms that make flawed decisions and violations of personal privacy.

For these reasons, we need a code of ethics that will govern our use of AI in marketing applications, and ensure transparency and trust in our profession.

A system of trust

The more complete our understanding of an individual, the more persuasive our marketing can be. But each new insight into a consumer raises new questions about our moral obligations to that individual -- and to society at large.

For example, most would agree it's acceptable to leverage AI to target a consumer who shows interest in sports cars. But what if you also knew that consumer was deep in debt and lacked impulse control, had multiple moving violations, and had a history of drug and alcohol abuse? Is it still okay to market a fast car to this person, in a way that would make it nearly irresistible?

Rather than judging each case on its moral merits, it's more effective to establish guidelines that remove the guesswork. A system of transparency -- in which the consumer is more of a partner in his or her marketing, rather than an unwitting target of it -- is the ethical way forward.

Such a system would include three primary aspects: data, algorithms and consumer choice.

Data -- AI is fueled by data, which is used to train algorithms and sustain the system. If data is inaccurate or biased in any way, those weaknesses will be reflected in decisions made by the AI system.

Often, these data sets reflect preexisting human biases. Microsoft's unfortunate experience with Tay, the conversation bot that reproduced the hateful speech of those who engaged it, is probably the most infamous case study.

Algorithms -- AI engines contain codes that refine raw data into insight. They dictate how the AI system operates, but are designed and developed by humans. Which means that their instructions should be "explainable."

Some call this "algorithmic transparency." Transparency, however, is not realistic in this context, because the most valuable intellectual property of an AI lives in the algorithm, and agencies aren't eager to share that code openly. In addition, sophisticated machine-learning systems can be a black box, unable to adequately explain their rationale for any particular choice. When consumers don't know how a system works or how it benefits them, the ingredients for authentic trust aren't there. Explainability means ensuring the ability to clearly explain the decisions an AI makes and why.
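
Explainability in this sense -- attributing a decision to its inputs -- is easiest to see in a simple linear scoring model, where each feature's contribution is just its weight times its value. The weights and feature names below are hypothetical, invented for illustration:

```python
def explain_score(weights, features, bias=0.0):
    """Break a linear model's score into per-feature contributions,
    ranked by absolute impact -- a fully explainable decision."""
    contribs = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contribs.values())
    ranked = sorted(contribs.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical ad-relevance score for one consumer
weights = {"sports_car_searches": 0.8, "recent_purchases": 0.3, "ad_fatigue": -0.5}
features = {"sports_car_searches": 5, "recent_purchases": 2, "ad_fatigue": 4}
score, reasons = explain_score(weights, features)
# `reasons` lists the largest drivers of the decision first
```

A deep model offers no such built-in decomposition, which is why the distinction here between transparency (publishing the algorithm) and explainability (justifying each decision) matters.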

Consumer choice -- Simply put, consumers should be aware of the techniques being used to market to them, and have the option of participating in those campaigns. In order to make an informed choice, consumers need a clear explanation of the value exchange in any given campaign. What are they giving up? What are they getting in return? And they should be allowed to opt out if they are uncomfortable with the transaction.

We are advertisers, not ethicists. However, that doesn't excuse us from considering the social impact of the work we do. We know there's a line that can -- and probably will -- be crossed with AI. Therefore, we must establish best practices for the use of AI in advertising, and understand the differences between what we can know, should know, and shouldn't know.
