One recent Mobile Monday event here in San Francisco got me thinking about how businesses can make chatbots more useful. We hear plenty about how bots bombard social media with nonsense. We don't hear enough about how they can serve customers' needs.
Any bot business model must answer a basic question: what services can a bot offer that an app or an algorithm can't already accomplish? The AI that enterprises need to deploy bots effectively still struggles to parse free-form text, so UX interactivity with a bot is limited for now. Human customer service reps must be cued to pick up from a bot once it gets stuck on a customer's inquiry.
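That handoff cue can be made concrete. Here is a minimal sketch, assuming the bot's language-understanding layer returns a confidence score for each parsed inquiry; the names (`handle_inquiry`, `CONFIDENCE_THRESHOLD`) are illustrative, not any real vendor's API:

```python
# Hypothetical bot-to-human escalation logic. Assumes the NLU layer
# scores each inquiry with a confidence between 0 and 1.

CONFIDENCE_THRESHOLD = 0.6   # below this, the bot can't answer reliably
MAX_FAILED_TURNS = 2         # how many misses before cueing a live rep

def handle_inquiry(confidence: float, failed_turns: int) -> str:
    """Decide whether the bot answers, retries, or cues a human rep."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "bot_answers"
    if failed_turns + 1 >= MAX_FAILED_TURNS:
        # The bot is stuck: hand the thread to a human customer service rep.
        return "escalate_to_human"
    return "ask_clarifying_question"

print(handle_inquiry(0.9, 0))  # bot_answers
print(handle_inquiry(0.3, 1))  # escalate_to_human
```

The point of the threshold-plus-turn-count design is that the customer never loops indefinitely with a confused bot; after a fixed number of failed turns, a human takes over.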
The untapped potential of bots raises questions about how to monetize them. Interstitial advertisements during pauses between a bot's actions may degrade its appeal, unlike in apps or online games, where users have come to accept them. Small sidebar ads in a bot's pop-up window may be the way to go. Inline ads certainly work with captchas, which take up about as much screen space as a typical chatbot window.
I see enormous potential for chatbot abuse by organized religion. I'm serious here. American evangelicals pushing the prosperity gospel could easily employ bots in their online solicitations for money. Unsuspecting believers may think they're interacting with some religious personage when they're really just talking to an AI that's using machine learning to improve its pitch. Bots that learn how to push people's emotional triggers and prompt more donations are going to bankrupt lots of low-information donors, while unscrupulous preachers get filthy rich.
Enterprises should consult the FTC Business Center for emerging regulatory guidance on bot deployment. Its policies on marketing and data security are probably the most immediately relevant to bot developers. Users must see a disclosure when they are interacting with a chatbot rather than a live human. Bots that generate unwanted communication (ads, emails, text messages) can run afoul of FTC rules and prompt enforcement actions. FTC guidance also covers non-profit organizations, so those rip-off prosperity preachers cannot plead ignorance if they use bots to fleece their flocks.
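The disclosure requirement is simple to honor in practice. A minimal sketch, assuming the disclosure wording and function name of my own invention (this is not official FTC text):

```python
# Illustrative only: prepend a bot disclosure to the first message of a
# chat session so the user knows a human is not on the other end.

DISCLOSURE = ("You're chatting with an automated assistant, not a live "
              "person. You can ask for a human agent at any time.")

def open_session(greeting: str) -> str:
    """Return the first message a user sees, disclosure first."""
    return f"{DISCLOSURE}\n{greeting}"

print(open_session("Hi! How can I help you today?"))
```

Putting the disclosure in the session opener, rather than burying it in terms of service, keeps it where the user actually sees it.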
There is very little private-sector governance of chatbot use at present, which is why FTC guidance on customer service will have to stretch to cover the gap until the tech industry adopts self-regulatory standards of its own. The Friends of ChatBot Coalition is Jeff Pulver's nascent effort to self-organize the developer community around commercial bot use. Once the tech community puts some industry rules in place, it can have a constructive dialogue with the FTC. I expect to read about these future regulatory developments in Chatbot Magazine.
I used to think bots were an automated swarm of bees, programmed to multiply themselves for attacks against unsuspecting targets online. That's the malicious side of bot development. In the hands of competent programmers, bots should be UX enhancements for brand engagement. The future of AI-driven chatbots will be as good as we make it, starting now.