AI can make a digital clone of a politician say whatever its creator wants, opening the door to a host of potential problems this election season.
That’s why Google is taking steps to mitigate the political dangers of the technology by requiring advertisers to disclose when they use deepfakes, or realistic AI versions of people, in election ad campaigns.
In a Monday update to its political content policy, Google asked election advertisers to “prominently disclose” when their ads inaccurately portray people or events, if these advertisers are located “in regions where [election ads] verification is required.”
[Image: Digital face scanning. Credit: Getty Images]
This policy applies to the U.S.
→ Continue reading at Entrepreneur