The star of the show at Tesla's annual AI Day (AI for artificial intelligence) on Sept. 30 was a humanoid robot introduced by Tesla Chief Executive Elon Musk as Optimus.
The robot could walk, if gingerly, and perform a few repetitive mechanical tasks such as waving its arms and wielding a watering can over plant boxes. The demo was greeted enthusiastically by the several hundred engineers in the audience, many of whom hoped to land a job with Tesla.
"This means a future of abundance," Musk proclaimed from the stage. "A future where there is no poverty. ... It really is a fundamental transformation of civilization as we know it."
Robotics experts watching remotely were less impressed. "Not mind-blowing" was the sober judgment of Christian Hubicki of Florida State University.
Some AI experts were even less charitable. "The event was quite the dud," Ben Shneiderman of the University of Maryland told me. Among other shortcomings, Musk failed to articulate a coherent use case for the robot; that is, what would it do?
To Shneiderman and others in the AI field, the Tesla demo embodied some of the worst qualities of AI hype: its reduction to humanoid characters, its exorbitant promises, its promotion by self-interested entrepreneurs and its suggestion that AI systems or devices can function autonomously, without human guidance, to achieve results that outmatch human capacities.
"When news articles uncritically repeat PR statements, overuse images of robots, attribute agency to AI tools, or downplay their limitations, they mislead and misinform readers about the potential and limitations of AI," Sayash Kapoor and Arvind Narayanan wrote in a checklist of AI reporting pitfalls posted online the very day of the Tesla demo.
"When we talk about AI," Kapoor says, "we tend to say things like 'AI is doing X'" (artificial intelligence is grading your homework, for instance). "We don't talk about any other technology this way; we don't say 'the truck is driving on the road' or 'a telescope is looking at a star.' It's illuminating to think about why we consider AI to be different from other tools. In reality, it's just another tool for doing a task."
That is not how AI is commonly portrayed in the media or, indeed, in announcements by researchers and firms engaged in the field. There, the systems are described as having learned to read, to grade papers or to diagnose diseases at least as well as, or even better than, humans.
Kapoor believes that the reason some researchers may try to hide the human ingenuity behind their AI systems is that it's easier to attract investors and publicity with claims of AI breakthroughs, in the same way that dot-com was a marketing draw around the year 2000 or crypto is today.
What is typically left out of much AI reporting is that the machines' successes apply in only limited cases, or that the evidence of their accomplishments is dubious. Some years ago, the education world was rocked by a study purporting to show that machine- and human-generated grades of a selection of student essays were similar.
The claim was challenged by researchers who questioned its methodology and results, but not before headlines appeared in national newspapers such as "Essay-Grading Software Offers Professors a Break." One of the study's leading critics, Les Perelman of MIT, subsequently built a system he dubbed the Basic Automatic B.S. Essay Language Generator, or Babel, with which he demonstrated that machine grading couldn't tell the difference between gibberish and cogent writing.
"The emperor has no clothes," Perelman told the Chronicle of Higher Education at the time. "OK, maybe in 200 years the emperor will get clothes. ... But right now, the emperor doesn't."
A more recent claim was that AI systems may be as effective as medical specialists at diagnosing disease, as a CNN article asserted in 2019. The diagnostic system in question, according to the article, employed "algorithms, big data, and computing power to emulate human intelligence."
"Those are buzzwords that promoted the false impression that the system actually did emulate human intelligence," Kapoor observed. Nor did the article make clear that the AI system's purported success was seen in only a very narrow range of diseases.
AI hype is not only a hazard to laypersons' understanding of the field; it poses the danger of undermining the field itself. One key to human-machine interaction is trust, but if people come to see the field as having overpromised and underdelivered, the route to public acceptance will only grow longer.
Oversimplification of achievements in artificial intelligence evokes scenarios familiar from science fiction: futurescapes in which machines take over the world, reducing humans to enslaved drones or leaving them with nothing to do but laze around.
A persistent fear is that AI-powered automation, supposedly cheaper and more efficient than humans, will put millions of people out of work. This concern was triggered in part by a 2013 Oxford University paper estimating that future computerization placed 47% of U.S. employment at risk.
Shneiderman rejected this forecast in his book "Human-Centered AI," published in January. "Automation eliminates certain jobs, as it has ... from at least the time when Gutenberg's printing presses put scribes out of work," he wrote. "However, automation usually lowers costs and increases quality. ... The expanded production, broader distribution channels, and novel products lead to increased employment."
Technological innovations may render older occupations obsolete, according to a 2020 MIT report on the future of work, but also bring new occupations to life, generate demands for new forms of expertise, and create opportunities for rewarding work.
A common feature of AI hype is the drawing of a straight line from an existing accomplishment to a limitless future in which all the problems in the way of further advancement are magically solved, and therefore success in reaching human-level AI is just around the corner.
Yet "we still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do," Yann LeCun, chief AI scientist at Meta Platforms (formerly Facebook) and a professor of computer science at NYU, observed recently on Facebook. "The solution is not just around the corner. We have a number of obstacles to clear, and we don't know how."
So how can readers and consumers avoid getting duped by AI hype?
"Beware of the sleight of hand that asks readers to believe that something that takes the form of a human artifact is equivalent to that artifact," counsels Emily Bender, a computational linguistics expert at the University of Washington. That includes claims that AI systems have written nonfiction, composed software or produced sophisticated legal documents.
The system may have replicated those forms, but it doesn't have access to the multitude of facts needed for nonfiction, or the specifications that make a software program work or a document legally valid.
Among the 18 pitfalls in AI reporting cited by Kapoor and Narayanan are the anthropomorphizing of AI tools through images of humanoid robots (including, sadly, the illustration accompanying this article) and descriptions that invoke human-like intellectual qualities such as "learning" or "seeing"; these tend to be simulations of human behavior, far from the real thing.
Readers should beware of phrases such as "the magic of AI" or references to superhuman qualities, which imply that an AI tool is doing something remarkable, they write. "It hides how mundane the tasks are."
Shneiderman advises reporters and editors to take care to "clarify human initiative and control. ... Instead of suggesting that computers take actions on their own initiative, clarify that humans program the computers to take these actions."
It's also important to be aware of the source of any exaggerated claims for AI. When an article "only or primarily has quotes from company spokespeople or researchers who built an AI tool," Kapoor and Narayanan advise, "it is likely to be over-optimistic about the potential benefits of the tool."
The best defense is healthy skepticism. Artificial intelligence has progressed over recent decades, but it is still in its infancy, and claims for its applications in the modern world, much less into the future, are inescapably incomplete.
To put it another way, no one knows where AI is heading. It's theoretically possible that, as Musk claimed, humanoid robots may eventually bring about "a fundamental transformation of civilization as we know it." But no one really knows when or if that utopia will arrive. Until then, the road will be pockmarked by hype.
As Bender advised readers of an especially breathless article about a supposed AI advance: "Resist the urge to be impressed."
Read more from the original source:
Hiltzik: The overhyping of AI - Los Angeles Times