
Editor’s note: This guest post is tied to the release of Bombarded, a new book by Cyrus Krohn, a former Microsoft, Yahoo, and Cheezburger executive who co-founded Seattle startup Element Data. 

Guest Commentary: If the digital political landscape during Election 2020 gives you the creeps, imagine an election a few years — and presidents — from now where hackable digital tech, unmanaged media, and micro-targeted marketing have rampaged forward without brakes or scruples. While 2032 may seem like the distant future, Moore’s Law is already delivering the computing power behind those risks.

In Ernest Hemingway’s “The Sun Also Rises,” Mike Campbell is asked how he went bankrupt. He replies, famously: “Two ways. Gradually and then suddenly.” The collapse of the information environment in the U.S. is proceeding along similar lines, as domestic partisan tribes and international influences such as Russia, Iran, and the Chinese Communist Party (CCP) weaponize the web.

Mara Hvistendahl of MIT Technology Review recently interviewed Samantha Hoffman of the Australian Strategic Policy Institute, one of the leading experts on the Chinese surveillance state. Hoffman detailed how collecting large datasets can reveal patterns and trends in human behavior, which help the CCP with intelligence and propaganda as well as surveillance. She pointed to TikTok as a good example of a seemingly benign app that can give the CCP a lot of useful data.

Now consider how more robust artificial intelligence and machine learning — the same technologies that enable today’s chats with Apple’s Siri or Amazon’s Alexa — can leverage the wealth of data about you, stored in the cloud by scads of private interests, to design intimate, unique virtual conversations between you and a perfectly simulated politician who lives on-screen, or even as a holographic projection.


Recently Andrew Yang played the hologram card, and during the pandemic lockdown Israeli President Reuven Rivlin addressed his people as an augmented reality hologram beamed to mobile phones.

As 5G cellular network standards proliferate, holograms will become commonplace, fed by data gleaned from your IoT devices. That will lead to PoT, or Politicians on Things, and a maddening ubiquity of suspect, often vicious content.

When the 2016 Star Wars movie “Rogue One” employed a CGI version of actor Peter Cushing, reprising his role as Grand Moff Tarkin, it was an oh-wow moment for fans, and a safe one. Cushing died in 1994. His simulated comeback came with context. CGI Cushing lived only within the well-understood framing device of a familiar fictional narrative.

People might prove less receptive to a dead Peter Cushing popping up suddenly on their phones or gas pump screens, calling them by name, demonstrating deep knowledge of their Netflix queues, ex-boyfriends, or favorite cheeses, and trying to sell them things or blackmail them.

Yet such interactions are possible in the political arena. If it proves impossible to tell who produces political deepfakes or verify what they say, the effect will be traumatizing.

The Cushing CGI work prompted a thoughtful ethical debate in the movie industry. “[T]he moment is underpinned by some quite terrifying existential questions,” wrote British filmmaker Christopher Hooton. Deepfakes raise the specter of a whole new class of identity theft. As for the kinds of forces who would misuse this technology to mislead, and further destabilize politics and elections? Chaos agents tend not to be preoccupied with ethics.

Neuralink, an Elon Musk company, is planning trials for a device to be implanted in people’s brains that could read and transmit thoughts. The first primitive expressions of the technology will likely have medical angles, such as helping paralyzed people control physical devices with mental effort alone.

But Musk is said to believe that in order to keep pace with artificial intelligence, we will all inevitably get these implants. His thinking on this subject is blocks ahead of government regulators (though perhaps not government surveillance agencies). There are no laws preventing the National Security Agency from collecting your thoughts, surreptitiously or not, or a future Jumpshot-type outfit packaging up your brain data and selling it to third parties with political motives.

Musk isn’t even first on the moon here. China is said to be already deploying brain-reading technology to monitor the mental and emotional states of people in mission-critical jobs, like train operators. As part of its autonomous-vehicle push, automaker Nissan is incubating a technology that lets its cars read brain signals from their drivers; it “delivers more excitement and driving pleasure by detecting, analyzing and responding to driver’s brainwaves in real time.”

Never mind peeking at your browsing history. Now we’re talking about third parties getting access to your entire consciousness — not just your preferences in wheelbarrows, but your loves, fears, dreams, desires, and political leanings.

If legal brain spying is perfected, then hacked — and every digital system is hackable — the implications are profound. Knit these new technologies with agenda-driven politics, and the result looks like an unprecedented, disruptive form of societal chaos.

We risk driving millions of potential voters out of the political arena, leaving it in the hands of elites, hobbyists, and comparatively small knots of aggrieved grudge-nursers.

It’s time for our legislators to pass a federal data privacy law, establish a new regulatory body to oversee algorithms, and apply more oversight to Big Tech. The clock is tik-toking.

Arizona Sen. Kyrsten Sinema has been vocal about data privacy legislation and could be influential in this effort. In Washington state, lawmakers have failed in each of the past two legislative sessions to pass SB 6281, the Washington Privacy Act, which would have given Washington residents the right to access, correct or delete data collected on them by commercial entities, as well as the right to opt out of certain forms of data processing.

It is not legal to take out a credit card, rent a car, or apply for a passport using a false name. The internet is awash in two-factor authentication protocols to confirm your identity before you can check your bank balance or pay your insurance premium. Financial providers in particular have made it harder to commit identity fraud online.

Yet there is not much to stop you from creating fake Facebook or Twitter accounts and harassing an ex, a classmate, or any public figure. The silo-style, cellular structure of the digital infosphere encourages delusions and bad behavior, and so does the cloaking effect of anonymity. I think it’s time for social media account applications to be assessed with at least the same scrutiny the TSA gives you at LaGuardia.

Users seeking to join Facebook et al. should have to present a valid form of identification. (Minors can piggyback on the ID of a parent or guardian, just as they sometimes get credit cards backed by a parent’s FICO score.) The big social media platforms could co-manage a joint verification system linked to DMV and law enforcement databases. There would obviously be an appeal process for those flagged by the system, but a vetted, approved user could receive an equivalent of the TSA’s Known Traveler Number, the code that confirms you’ve been screened and gets you into the PreCheck queue.

Call it the KDC code, for Known Digital Citizen. When applying to post on more platforms, enter your KDC code for faster service. Maybe Twitter’s push for cross-platform standards, Bluesky, leads to a world where your KDC is propagated for you as you hopscotch around the web.
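To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of how a shared KDC registry might issue and check such a code. The names (KDCRegistry, issue_kdc_code, verify_kdc_code) and the HMAC-signing scheme are my own illustration, not anything the platforms or regulators have proposed; a real system would also need audited identity checks, revocation, the appeal process mentioned above, and serious privacy engineering.

```python
# Hypothetical sketch of a "Known Digital Citizen" (KDC) registry.
# Nothing here reflects a real API; the class and method names are invented
# purely to illustrate the idea described in the essay.

import hashlib
import hmac
import secrets


class KDCRegistry:
    """Hypothetical registry co-managed by platforms and identity authorities."""

    def __init__(self) -> None:
        # Server-side secret used to sign issued codes. A real system would
        # need far more: audited ID checks, revocation, appeals, privacy safeguards.
        self._signing_key = secrets.token_bytes(32)
        self._issued = {}  # maps kdc_code -> hashed identity record

    def issue_kdc_code(self, government_id_hash: str) -> str:
        """Issue a signed KDC code after an (assumed) out-of-band identity check."""
        kdc_code = secrets.token_urlsafe(16)
        signature = hmac.new(self._signing_key, kdc_code.encode(), hashlib.sha256).hexdigest()
        self._issued[kdc_code] = government_id_hash
        return f"{kdc_code}.{signature}"

    def verify_kdc_code(self, presented: str) -> bool:
        """Let a platform confirm a code is genuine without ever seeing the ID itself."""
        try:
            kdc_code, signature = presented.split(".")
        except ValueError:
            return False
        expected = hmac.new(self._signing_key, kdc_code.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature) and kdc_code in self._issued


# Example: a platform checks the code at account-creation time.
registry = KDCRegistry()
code = registry.issue_kdc_code(hashlib.sha256(b"applicant-id-document").hexdigest())
print("Legitimate signup allowed?", registry.verify_kdc_code(code))        # True
print("Forged code allowed?", registry.verify_kdc_code("fake.signature"))  # False
```

The point of the sketch is simply that a platform could confirm a code is genuine without ever handling the underlying ID document, which is the property any real KDC scheme would have to preserve.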
