Article: “A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be” - https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/
This month we saw the first weaponized use of deepfakes during an armed conflict. A deepfake of an imitation Zelensky, nearly motionless and speaking with an unusual-sounding voice, was released telling Ukrainians to lay down their weapons. The video spread not only on Facebook and YouTube but also on Telegram and the Russian social network VKontakte. The TV channel Ukraine 24 was hacked, and the deepfake video along with a summary of the fake news appeared on its website. Minutes after the fake news appeared on TV, Zelensky posted a Facebook video stating that the video was fake. Soon after, Meta's head of security policy tweeted that the company had removed the deepfake for violating its policy against “misleading manipulated media.” A Twitter spokesperson said the company was tracking the video and removing it wherever it violated rules banning “deceptive synthetic media.” A YouTube spokesperson likewise confirmed that uploads of the deepfake had been removed.
This unfolding of events shows that, under the right conditions, deepfakes can be defeated. However, it was in a sense “easy” for Zelensky to defeat the deepfake: his government had prepared for such a scenario, he is one of the highest-profile people in the world, and the deepfake was of poor quality. Other political leaders in conflicts may be less fortunate and thus more vulnerable to deepfakes.
In class, we have evaluated the responsibility of social
media in regulating fake news and the role of advertising and machine learning
in skewing reality. While work is underway on automatic deepfake detectors, the technology is not yet fully developed. How do we ensure that the tools of digital marketing (from search algorithms to social media) do not allow deepfakes to become political weapons?