AI’s dirty little secret: It’s powered by people

SAN FRANCISCO (AP) — There’s a dirty little secret about artificial intelligence: It’s powered by hundreds of thousands of real people.

From makeup artists in Venezuela to women in conservative parts of India, people around the world are doing the digital equivalent of needlework — drawing boxes around cars in street photos, tagging images, and transcribing snatches of speech that computers can’t quite make out.

Such data feeds directly into “machine learning” algorithms that help self-driving cars wind through traffic and let Alexa figure out that you want the lights on. Many such technologies wouldn’t work without massive quantities of this human-labeled data.
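As a rough illustration, here is what a single piece of that human-labeled data might look like for a self-driving system. The format and field names below are assumptions made for the sketch, not any company’s actual schema: a labeler looks at a street photo and records a class name plus the pixel coordinates of the box drawn around each object.

```python
# Illustrative only: one human-labeled street photo, roughly the kind of
# "ground truth" record a detection model would be trained against.
labeled_image = {
    "image": "street_scene_00042.jpg",
    "annotations": [
        # Each box is drawn by a person: a class label plus pixel corners
        # (x_min, y_min, x_max, y_max).
        {"label": "car",           "box": (112, 310, 298, 455)},
        {"label": "pedestrian",    "box": (401, 288, 438, 402)},
        {"label": "traffic_light", "box": (605,  95, 628, 150)},
    ],
}

def to_training_target(record):
    """Convert one labeled record into (boxes, class names) for a detector."""
    boxes = [ann["box"] for ann in record["annotations"]]
    labels = [ann["label"] for ann in record["annotations"]]
    return boxes, labels

print(to_training_target(labeled_image))
```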

These repetitive tasks pay pennies apiece. But in bulk, this work can offer a decent wage in many parts of the world — even in the U.S. This burgeoning but largely unseen cottage industry represents the foundation of a technology that could change humanity forever: AI that will drive us around, execute verbal commands without flaw, and, possibly, one day think on its own.

———

This human input industry has long been nurtured by search engines Google and Bing, which for more than a decade have used people to rate the accuracy of their results. Since 2005, Amazon’s Mechanical Turk service, which matches freelance workers with temporary online jobs, has also made crowd-sourced data entry available to researchers worldwide.

More recently, investors have poured tens of millions of dollars into startups like Mighty AI and CrowdFlower, which are developing software that makes it easier to label photos and other data, even on smartphones.

Venture capitalist S. “Soma” Somasegar says he sees “billions of dollars of opportunity” in servicing the needs of machine learning algorithms. His firm, Madrona Venture Group, invested in Mighty AI. Humans will be in the loop “for a long, long, long time to come,” he says.

Accurate labeling could determine whether a self-driving car can distinguish the sky from the side of a truck — a distinction Tesla’s Model S failed to make in the first known fatality involving self-driving systems in 2016.

“We’re not building a system to play a game, we’re building a system to save lives,” says Mighty AI CEO Daryn Nakhuda.

———

Marjorie Aguilar, a 31-year-old freelance makeup artist in Maracaibo, Venezuela, spends four to six hours a day drawing boxes around traffic objects to help train self-driving systems for Mighty AI.

She earns about 50 cents an hour, but in a crisis-wracked country with runaway inflation, just a few hours’ work can pay a month’s rent in bolivars.

“It doesn’t sound like a lot of money, but for me it’s pretty decent,” she says. “You can imagine how important it is for me getting paid in U.S. dollars.”

Aria Khrisna, a 36-year-old father of three in Tegal, Indonesia, says doing things like adding word tags to clothing pictures on websites such as eBay and Amazon pays him about $100 a month, roughly half his income.

And for 25-year-old Shamima Khatoon, her job annotating cars, lane markers and traffic lights at an all-female outpost of data-labeling company iMerit in Metiabruz, India, represents the only chance she has to work outside the home in her conservative Muslim community.

“It’s a good platform to increase your skills and support your family,” she says.

———

Major automakers like Toyota, Nissan and Ford, ride-hailing companies like Uber and tech giants like Alphabet Inc.’s Waymo are paying legions of labelers, often through third-party vendors.

The benefits of greater accuracy can be immediate.

At InterContinental Hotels Group, every call its digital assistant Amelia handles instead of a human saves $5 to $10, says information technology director Scot Whigham.

When Amelia fails, the program listens while a call is rerouted to one of about 60 service desk workers. It learns from their response and tries the technique out on the next call, freeing up human employees to do other things.

“We’ve transformed those jobs,” Whigham says.

When a computer can’t make out a customer call to the Hyatt Hotels chain, an audio snippet is sent to AI-powered call center Interactions in an old brick building in Franklin, Massachusetts.

There, while the customer waits on the phone, one of a roomful of headphone-wearing “intent analysts” transcribes everything from misheard numbers to profanities and quickly directs the computer how to respond.

That information feeds back into the system. “Next time through, we’ve got a better chance of being successful,” says Robert Nagle, Interactions’ chief technology officer.
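The loop Nagle describes, in which the system answers when it is confident, hands off to a person when it is not, and keeps the human’s answer as new training data, can be sketched in a few lines. Everything below (the confidence threshold, the stand-in recognizer, the analyst hand-off) is a hypothetical outline, not Interactions’ or IHG’s actual software.

```python
# Hypothetical sketch of a human-in-the-loop call flow. The recognizer,
# threshold, and analyst queue are stand-ins, not a real vendor API.
CONFIDENCE_THRESHOLD = 0.85
training_examples = []  # corrections fed back into the next model update

def recognize_intent(audio_snippet):
    """Pretend speech model: returns (intent, confidence)."""
    return "book_room", 0.42  # low confidence forces a human hand-off

def ask_intent_analyst(audio_snippet):
    """Stand-in for routing the snippet to a human 'intent analyst'."""
    return "cancel_reservation"

def handle_call(audio_snippet):
    intent, confidence = recognize_intent(audio_snippet)
    if confidence < CONFIDENCE_THRESHOLD:
        # A human transcribes and labels the snippet while the caller waits ...
        intent = ask_intent_analyst(audio_snippet)
        # ... and the correction becomes training data for next time.
        training_examples.append((audio_snippet, intent))
    return intent

print(handle_call(b"garbled audio bytes"))
print(training_examples)
```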

———

Researchers have tried to find workarounds to human-labeled data, but the results are often inadequate.

In a project that used Google Street View images of parked cars to estimate the demographic makeup of neighborhoods (http://www.pnas.org/content/114/50/13108), then-Stanford researcher Timnit Gebru tried to train her AI by scraping Craigslist photos of cars for sale that were labeled by their owners.

But the product shots didn’t look anything like the car images in Street View, and the program couldn’t recognize them. In the end, she says, she spent $35,000 to hire auto dealer experts to label her data.

The need for human labelers is “enormous” and “dynamic,” says Robin Bordoli, CEO of labeling technology company CrowdFlower. “You can’t trust the algorithm 100 percent.”

———

At the moment, figuring out how to get computers to learn without so-called “ground truth” data provided by humans remains an open research question.

Trevor Darrell, a machine learning expert at the University of California, Berkeley, says he expects it will be five to 10 years before computer algorithms can learn to perform without the need for human labeling.

His group alone spends hundreds of thousands of dollars a year paying people to annotate images. “Right now, if you’re selling a product and you want perfection, it would be negligent not to invest the money in that kind of annotation,” he says.

Several companies like Alphabet’s Waymo and game-maker Unity Technologies are developing simulated worlds to train their algorithms in controlled scenarios where every object comes pre-defined.
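Simulation sidesteps the labeling problem because the program that builds the scene already knows where every object is; the annotations come for free rather than from a person with a mouse. The toy generator below illustrates the idea and is not modeled on Waymo’s or Unity’s actual tooling.

```python
import random

# Toy synthetic-scene generator: because the program places every object,
# it already knows the exact label and bounding box; no human annotator needed.
OBJECT_CLASSES = ["car", "pedestrian", "cyclist", "traffic_light"]

def render_synthetic_scene(scene_id, num_objects=5):
    annotations = []
    for _ in range(num_objects):
        x, y = random.randint(0, 1200), random.randint(0, 700)
        w, h = random.randint(30, 200), random.randint(30, 200)
        annotations.append({
            "label": random.choice(OBJECT_CLASSES),
            "box": (x, y, x + w, y + h),  # exact by construction
        })
    # A real simulator would also return the rendered image itself.
    return {"image": f"sim_scene_{scene_id}.png", "annotations": annotations}

print(render_synthetic_scene(1))
```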

For the most part, even companies trying to push humans out of the loop still rely on them.

CloudSight, for instance, offers website and app developers a handy tool for uploading a photo and getting a few words back describing it. The retailer Kohl’s uses the service for a “Snap and Shop” visual search feature on its app.

But it’s not just a fancy computer program spitting back responses. If the algorithm doesn’t have a good answer, one of its 800 employees in places like India, Southeast Asia or Africa types in the answer in real time.

“We want to be the ones that can label any image without any human involvement,” says Ian Parnes, CloudSight’s head of business development. “How long that will take is anyone’s guess.”

———

Associated Press writers Matt O’Brien in Franklin, Massachusetts, Yuri Kageyama in Tokyo, and Dee-Ann Durbin in Detroit contributed to this report.
