California’s new bot law forces companies to tell you when you’re interacting with a machine
The law goes into effect on July 1, 2019 and will likely have national implications.
Earlier this year at Google I/O, the company’s developer conference, Google demonstrated an AI technology called Duplex. In recorded human-computer phone interactions, the AI-powered conversational bot sounded indistinguishable from a human. This was the Turing test realized.
When I later asked Google how it would handle the ethical issue of disclosure to the person on the other end of the phone, the company told me informally that this was still being worked out. Now the state of California has apparently worked it out for Google — and everyone else deploying bots.
New law seeks to prevent consumer deception. Last Friday, California governor Jerry Brown signed into law a bill that requires bots to disclose (over the phone, online or otherwise) that they are not humans in their interactions with consumers. The move is directed at protecting consumers and voters from deception by bots posing as real people. The law goes into effect next year:
This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The bill would define various terms for these purposes. The bill would make these provisions operative on July 1, 2019.
The new law requires the disclosure to be “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.” However, the bill does not specify precise disclosure language.
The California law may motivate other states to pass their own disclosure rules, raising the prospect of federal intervention, as with the recent state-based effort to restore net neutrality.
Bots will become much more conversational. As the Duplex demonstration illustrated, bots are poised to become increasingly sophisticated and less distinguishable from humans over time. Bots have many beneficial uses in the enterprise and for small business; however, if abused, they could manipulate or deceive people online or over the phone.
According to a recent Accenture Digital report (.pdf), bots are already used by enterprises for customer service. And a majority of consumers prefer interacting with chatbots over humans for certain basic types of informational inquiries.
Why it matters. The new law will likely have a limited impact on the deployment and use of bots. But in certain contexts it may make them less “effective” for sales or marketing purposes, especially over the phone. For example, a person receiving a call from a robot that says up front “I’m a robot” might be inclined to hang up or be more resistant to the content of the message.
The required disclosure language will need to be worked out in context. One can imagine, however, that it could easily become the subject of litigation. Many companies seeking to use bots for automated calls or outbound selling are going to be disinclined to call too much attention to the fact that you’re talking to a machine. For example, a call initiated with “I’m Greg, a conversational assistant” may be insufficiently clear to meet the standard in the new law.
Online, by comparison, it’s possible that “bot warnings” will become as perfunctory and ignored as software terms and conditions.
Opinions expressed in this article are those of the guest author and not necessarily those of MarTech Today.