Concrete examples of people-centered process
Welcome to Reality Check. Thanks for reading!
The AI era’s call for greater inclusivity in governance and design decisions is not new. Many of us in civic tech still quote the phrase “Build With, Not For” that Cuán McCann popularized in 2014. I learned this week that “Nothing About Us Without Us” entered the contemporary policy lexicon via disability rights activists in South Africa and Eastern Europe in the 1990s—and that they were referencing a 16th century Polish law.
In recent days, two expert friends released valuable resources on participation in AI governance and design.
Tim Davies and his colleagues at Connected by Data posted findings from the People’s Panel on AI, a citizen consultation held “to address the glaring absence of public voice” at the UK AI Safety Summit last November, and Lina Srivastava published an excellent introduction to “Building Community Governance for AI,” based on principles that center community leaders and narrative power to help broker an “inclusive, anti-racist, and feminist” future with “access and opportunity for all.”
In their open letter to Prime Minister Rishi Sunak, Connected by Data and almost 150 groups and experts said “the communities and workers most affected by AI have been marginalised” from the November 2023 summit. The People’s Panel was remarkable not only as an act of advocacy, but as an embodiment of “Show, don’t tell.” Though the event was merely parallel to the UK AI summit, it offers a model of how engagement, education and inclusion could build a bridge between elite decision-makers and a more representative public. Highlights from the event summaries include:
The importance—and feasibility—of real deliberation by community members: A key part of the Panel was the chance to learn about the issues during the AI Fringe event held alongside the UK summit. This kind of information sharing is fundamental to deliberative practice. And while we can’t expect every AI decision process to incorporate a multi-day learning phase for community members, the Panel demonstrated that “members were able to enter into ad-hoc conversations with experts at the Fringe … as informed interlocutors.”
The power of tying community engagement to public events, public decisions and the news cycle: Connected by Data says the decision to hold the People’s Panel within the AI Fringe “brought significant energy and dynamism to the process,” providing the chance for people to engage with the “live issues” driving the expert discussion. They also worked to ensure the People’s Panel and its members were “visible” at the AI Fringe, creating a chance for two-way conversation among experts and participants.
Publics participate more and differently when the issues in question are splashed across their screens and consciences. MoveOn would not have scaled up without the Clinton/Lewinsky affair and the Iraq invasion. Apple shareholders would not have voted for a company civil rights audit without the police killing of George Floyd. Public narratives fuel participation.
The People’s Panel summary concludes with a call for anyone “making decisions about AI, from policy design to AI model releases [to] embed a deliberative review into your decision making.”
Lina’s article in SSIR provides grounding principles and a range of examples for how that embedding can work. “We need a new roadmap,” she says, “… to elevate the voices, perspectives, and solutions of communities who directly experience the harms of AI.” Key points include:
The need for “public education” to address the AI knowledge gap. “Whether integrated into public school systems or facilitated by civil society, educational initiatives are pivotal,” Lina says, “and should prioritize plain language and cultural relevance.” See Pew’s finding from a year ago, for example, that few Americans can identify where AI is in use in their everyday lives; one hopes this number is at least a little higher today.
Lina emphasizes the need for an enabling environment if inclusive practices for AI decisions are to take root. Echoing the People’s Panel findings (and the recommendations by Deloitte and Miceli et al. mentioned previously), she calls for investment in “cooperative structures,” that emphasize “shared ownership, democratic control, and collaboration.”
The piece also offers a great list of organizations grounded in cross-disciplinary design principles and challenges to dominant narratives—practices that can build a more ethical tech future: See Electric South, Brown Girls Doc Mafia, Bitchitra Collective, the Center for Cultural Power, and Metalabel, as well as the bridging work being done by Promising Trouble/Careful Industries, Black in AI and TechSalon.
Institutions working to internalize practices of deliberation and participation must also remain vigilant against, and accountable for, participation-washing and other forms of performative inclusion. And, as Rachel Coldicutt of Careful Industries pointed out to me this week, “even a well-executed co-design process risks being meaningless if it is not accompanied with purposeful routes to redress and accountability.”
Notes and Afterthoughts
Seeking to illustrate the idea of inclusive participation, I used Midjourney to ask for images of regular folks and “an architect” working together on civic plans. My favorite of the resulting images is the thumbnail for this post, but see below for several more generated as I experimented. (It took hours, btw. AI would not be the answer even if it weren’t cribbing from uncompensated human creatives.)
A post about participatory design would be incomplete without a mention of Porto Alegre, the Brazilian city that inaugurated its residents as budget decision-makers in the 1990s—a process tradition that lasted nearly 30 years. There’s more about participatory budgeting here, and also in Hollie Gilman’s ongoing work.
Here is a full list of those who attended the UK AI Safety Summit. While several contributors to the global AI accountability conversation are there, the ratio of community advocates (much less representatives of impacted communities) to tech industry and government groups is noticeably low.
On closing AI knowledge gaps, the AI Education Project is an interesting example of how to change the civic dynamic by building AI literacy at the K-12 stage.
“Explainability” is a vital ingredient in successful digital transformation. It was encouraging to see that one of Senator Chuck Schumer’s AI “Insight Forums” was devoted to “Transparency, explainability and alignment.” But based on this summary from Tech Policy Press, that forum focused primarily on the threats to copyright and artists’ rights, and the need for transparency and accountability in AI training.
And with that, here are those AI images I made using prompts referencing famous artists. I have avoided AI imagery because of this obvious dilemma and expect that I’ll continue to minimize my use of it.





AI-generated images to illustrate the idea of “participatory design.” From Midjourney. The basic prompt I used was: “In a sunlit atrium of a municipal building, an architect is sketching on a very large drawing board. A group of people who are African-American and Hispanic surround the architect, looking at the drawing board and pointing at the drawing board; the group of people is 1/3 children. View from above. Style of Hayao Miyazaki.” Then I tried variations that included asking for the style of a 1920s political drawing (3), Norman Rockwell (4), “Peanuts” (5), and the Brothers Hildebrandt (6).
