Interesting read on AI: Succor borne every minute

This post on the blog of the US Federal Trade Commission is bang on in its discussion of AI, IMHO. Do give it a full read, but here are a few of its points:

  • Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods. We’ve warned companies about making false or unsubstantiated claims about AI or algorithms. And we’ve followed up with action, including recent cases against WealthPress, DK Automation, Automators AI, and CRI Genetics. We’ve also repeatedly advised companies – with reference to past cases – not to use automated tools to mislead people about what they’re seeing, hearing, or reading.
  • Don’t offer these services without adequately mitigating risks of harmful output. It’s a deliberate design choice to offer bots and avatars that perform as if human. We’ve discussed how that choice has inherent risks in terms of manipulation and inducing consumers, even if inadvertently, to make harmful choices. The risks may be greater for certain audiences, such as children. And we’ve warned AI companies repeatedly to assess and mitigate risks of reasonably foreseeable harm before and after deploying their tools. Our recent case against Rite Aid for its unfair use of facial recognition technology is instructive.
  • Don’t insert ads into a chat interface without clarifying that it’s paid content. More and more advertising will likely creep into the output that consumers get when interacting with various generative AI services. It will be tempting for firms offering simulated humans for companionship and the like to do the same, especially given the ability to target ads based on what these services gather or glean about their users. We’ve explained that any generative AI output should distinguish clearly between what is organic and what is paid. The Commission has also explored the wider problem of blurred digital advertising to children, advising marketers to steer clear of it altogether.
  • Don’t use consumer relationships with avatars and bots for commercial manipulation. A company offering an anthropomorphic service also shouldn’t manipulate people via the attachments formed with that service, such as by inducing people to pay for more services or steering them to affiliated businesses. That notion applies equally to how such a service is designed to react if people try to cancel their subscription. Consistent with the FTC’s rulemaking proposal to make it easier for people to “click to cancel” subscriptions, a bot shouldn’t plead, like HAL 9000 in 2001: A Space Odyssey, not to be turned off.
