The Displaced Plainsman directs his attention to a quick Axios report that links to a Center for a New American Security report on the impact of artificial intelligence on the economic, social, and military components of international security. The original report includes a heading that catches the attention of us two bloggers: The End of Truth… under which the report’s authors contend that the forgeries, fake news, and strategic propaganda artificial intelligence facilitates could make it impossible for us to sort truth from malevolent fiction:
AI technology could weaken, if not end, recorded evidence’s ability to serve as proof. Some technologies, such as blockchain, may make it possible to authenticate the provenance of video and audio files. These technologies may not mature quickly enough, though. They could also prove too unwieldy to be used in many settings, or simply may not be enough to counteract humans’ cognitive susceptibility toward “seeing is believing.” The result could be the “end of truth,” where people revert to ever more tribalistic and factionalized news sources, each presenting or perceiving their own version of reality.
AI-enabled forgeries are becoming possible at the same time that the world is grappling with renewed challenges of fake news and strategic propaganda. During the 2016 U.S. presidential election, for example, hundreds of millions of Americans were exposed to fake news. The Computational Propaganda Project at Oxford University found that during the election, “professional news content and junk news were shared in a one-to-one ratio, meaning that the amount of junk news shared on Twitter was the same as that of professional news.”79 A common set of facts and a shared understanding of reality are essential to productive democratic discourse. The simultaneous rise of AI forgery technologies, fake news, and resurgent strategic propaganda poses an immense challenge to democratic governance [Michael Horowitz et al., “Artificial Intelligence and International Security,” Center for a New American Security, 2018.07.10].
I should note that “quick Axios report” is redundant: everything Axios posts is quick. I would suggest that another threat to our grasp on truth in public discourse is our dwindling attention, to which Axios responds with its new Reader’s Digest, which considers 520 words to be “going deeper.”
Defeating the “deep fakes” the authors foresee will take deep reading, a practice for which the report authors’ findings and Axios’s brevity suggest we as a society are not prepared. One tiny drop in that bucket of preparation for and resistance to robopropaganda is my rejection of anonymity. With at least one hostile power interfering in our elections with fake Twitter accounts, I have an obligation to put my name to my content, regularly assert my authenticity, and face the public consequences of getting any facts wrong.
Of course, asserting authenticity is meaningless if I don’t also practice authenticity—i.e., always tell the truth. While I will continue to offer great gobs of homegrown analysis, commentary, argumentation, speculation, and occasional bits of satire, I will also continue to offer factual information, to tell you where I get those facts, and to provide hyperlinks so you can check those facts yourself.
In the comment section, the least I can do is maintain my moderation filter, sending any new name or e-mail address to the queue for my review and an inquiry into the commenter’s authenticity. Comments remain in limbo until commenters respond with their names and demonstrate that minimal level of mutual trust and authenticity: I know you, you know me, let’s talk. Otherwise, it’s not a big leap to imagine the Russian fakers upgrading from Twitter bots to blog-swampers sowing misinformation and discord with disruptive, off-topic propaganda.
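For readers curious what such a first-time-commenter filter amounts to in practice, here is a minimal sketch in Python. This is a hypothetical illustration, not this blog’s actual software: it assumes a simple set of already-vetted name/e-mail identities, holds any unknown identity’s comment in a queue, and releases the queue only after the moderator approves that identity.

```python
# Hypothetical sketch of a first-time-commenter moderation filter.
# Assumption: commenters are identified by a (name, e-mail) pair.

approved = set()   # identities the moderator has already vetted
queue = []         # (identity, text) pairs awaiting review
published = []     # (name, text) pairs cleared for display

def submit_comment(name, email, text):
    """Publish comments from known commenters; hold everyone else."""
    identity = (name.strip(), email.strip().lower())
    if identity in approved:
        published.append((identity[0], text))
        return "published"
    queue.append((identity, text))
    return "held for moderation"

def approve(identity):
    """Moderator confirms authenticity; release that identity's held comments."""
    approved.add(identity)
    for held in [item for item in queue if item[0] == identity]:
        queue.remove(held)
        published.append((held[0][0], held[1]))
```

A new commenter’s first remark lands in `queue`; once the moderator is satisfied the person is who they claim to be, `approve` publishes the backlog and all future comments from that identity go straight through.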
Hard-wiring Asimov’s Three Laws of Robotics, with a supplemental corollary about truth, into every device using artificial intelligence might help. Until the programmers catch up, I’ll keep speaking clearly by name, so you know I’m Kilroy, not Mr. Roboto, and I’ll ask my sources and commenters to do the same… because who wants to spend all day talking to a machine?