Zero-sum frames
Welcome to Reality Check. Thanks for reading!
In the first post, I mentioned false dichotomies and the danger of reducing a tech conversation to efficiency vs. harm, or too-confusing vs. plug-and-play. If you’ve ever tried to help a parent send an email or, say, a 100-year-old institution adopt social media, you know that any real progress happens between leaping in too fast and walking away too soon.
But we love it when things are simplified to binaries. (It’s almost as if we run on 1s and 0s.) Especially right-or-wrong choices and ideological battles.
It’s incredibly useful to explain things using distinctions. Our instinct to differentiate is part of our ability to adapt and survive. But when did we decide that useful distinctions were life-and-death choices, rather than tools for orienting ourselves and then making up our own minds? Making a better decision, not choosing a side.
There’s brain science about this, and plenty of documentation of how fear and anxiety in traumatic times make people more tribalistic and less capable of good decision-making.
Some of the common binaries in the technology conversation include:
droid vs. android (i.e., “it responds” vs. “it thinks”)
help vs. harm (i.e., “it will fix things” vs. “it will make things worse”)
future harms vs. current harms (i.e., “killer robots are coming” vs. “irresponsible adoption already is hurting people”)
incomprehensible vs. class-in-session (i.e., “it’s mysterious tech, so let’s rely on CEOs and their lobbyists to set the terms” vs. “let’s keep educating ourselves so that rulemakers and journalists can set and enforce new norms”)
If we can make these distinctions navigation points in an ongoing process of learning and norm-setting, they will serve the same purpose that analogies and hypotheticals serve in training or policy development: as scenarios to consider, not creeds to defend (or fundraise on).
But if we let tech companies set the terms, or allow our uncertainty to be the limit of our rule-setting, then we will be ceding the safest and most efficient futures to the forces of profit and entropy.
Notes and Afterthoughts
This means continuing to correct the biased decisions about who’s in the room when rules are considered or tools are planned.
And it means not tolerating false equivalence in the public discourse, since that usually helps entrenched power more than fact-finders.
See also this post from Emily Bender about a subtler form of false equivalence in the AI threats discourse, where the word “schism” implies a rift in a once-cohesive group, when the reality is that one group has an evidence base (more links in the post) while the other side, in Professor Bender’s apt words, is “pulling numbers out of their asses.”
