Community-Centered AI

My Response to America’s AI Action Plan.


Image generated by ChatGPT

In July 2025, the White House released "America's AI Action Plan", outlining the federal government's strategy to "win the race" for AI dominance through three pillars: accelerating innovation, building AI infrastructure, and leading in international diplomacy and security.

America’s AI Action Plan is ambitious, promising big advances for government tech. But without safeguards, it could deepen the digital divide, expanding access for the digitally privileged while leaving vulnerable communities behind. I’ve seen this gap up close, both in my federal research and in my own family’s struggles as immigrants navigating systems not built for us. Still, I believe AI can do the opposite: close gaps and expand access, if we design it with vulnerable communities at the center.

Leaving People Behind

When I was 10 years old, I became my immigrant parents' primary translator, not just for language, but for navigating digital government systems. They were seeking asylum, fleeing oppression, and the technology meant to help them often created additional barriers. Their limited digital literacy meant they relied completely on my interpretation of complex forms and processes that could determine their fate. Today, looking at my six-year-old son, I cannot imagine putting so much responsibility on him.

During my time at the Department of Homeland Security (DHS) Office of the Chief Information Officer as a Customer Experience (CX) Strategist, I've been struck by how much more complex and fragmented federal systems have become. High-speed internet, secure devices, and digital literacy are now the baseline for accessing government services, yet millions of Americans, particularly low-income, rural, and elderly populations, still lack one or more of these.

The plan’s push to “accelerate AI adoption” and cut “red tape” marks a new era for civic tech. OpenAI’s move to give every federal worker access to ChatGPT is framed as reducing administrative burden so civil servants can focus on real work. That could unlock resources and speed up innovation. But it also raises a critical question: as AI races into government, will these systems truly serve everyone, or just the privileged?

In my fieldwork with non-English speakers and low-digital-literacy communities, I’ve seen how quickly AI can mislead. Asylum seekers often rely on family or WhatsApp for guidance, or trust AI translation apps without question, sometimes with dangerous results. During discovery, I observed one tool mistranslate a court document and invent legal assumptions that left someone with completely wrong expectations about their case. This isn’t rare. A Reuters investigation found immigration officials using AI to evaluate applications, multiplying errors in life-altering decisions. Without safeguards, we risk building systems that exclude the very people government is meant to serve, while overwhelming the workforce tasked with helping them.

I imagine how AI could have changed my own story as a 10-year-old. If we had access to a trauma-informed AI assistant, one that could translate legal forms accurately, explain requirements in plain Spanish, and flag when interpretation confidence was low and human help was needed, my story would have looked different. Maybe I wouldn't have been pulled into adult matters, and could have simply lived out my 10-year-old experience.

Innovation Guardrails, Not Shortcuts

The Action Plan's emphasis on innovation and infrastructure creates real opportunities for civic technologists. The focus on "enable AI adoption" and building "world-class scientific datasets" could provide the resources we need to develop more sophisticated, helpful government services. The plan's commitment to "AI interpretability, control, and robustness breakthroughs" suggests recognition that we need AI systems we can understand and trust.

However, the plan's silence on equity and inclusion concerns me. While it mentions "empowering American workers" and ensuring AI "protects free speech and American values," it doesn't explicitly address how we'll ensure AI serves vulnerable communities like BIPOC communities, rural families, seniors, immigrants, or people with disabilities, or prevent algorithmic bias from perpetuating existing inequities.

The emphasis on "removing red tape and onerous regulation" could be particularly problematic. While excessive bureaucracy does slow innovation, many regulations exist to protect vulnerable communities from discrimination and harm. As we streamline AI deployment, we need to distinguish between helpful regulations and bureaucratic barriers.

My Magic Wand

If I had a magic wand to address these gaps, I'd start by building a multilingual, culturally aware AI assistant designed with low-digital-literacy and low-literacy users as the baseline. During my time at DHS, I co-led the design of a trauma-informed communication framework for frontline officers, and I imagine scaling that work with AI: an assistant that not only translates but also explains legal documents and proceedings in culturally contextualized plain language, and flags any sections requiring human review. An overwhelmed individual dealing with trauma would be able to understand deadlines, rights, and next steps without relying on a 10-year-old interpreter.

With another swing of the magic wand, I'd create a proactive verification loop for high-stakes decisions. Thinking back to the individual who used an AI image-capture app to summarize their legal document: with proper design, AI could cross-check outputs against validated legal templates and processes, trigger alerts when confidence is low, and auto-route the individual to human support before harm occurs.
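As a sketch, that verification loop might look like the following. Everything here is hypothetical: the threshold, the field names, and the template check are illustrative stand-ins for real policy and engineering decisions, not an actual DHS system.

```python
from dataclasses import dataclass

# Hypothetical cutoff; a real system would tune this per document type
# and validate it with affected communities.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Translation:
    text: str
    confidence: float        # model-reported score in [0, 1]
    matches_template: bool   # does the output align with a validated legal template?

def route(translation: Translation) -> str:
    """Decide whether an AI translation can be shown directly,
    or must first be escalated to a human reviewer."""
    if not translation.matches_template:
        return "escalate: output deviates from validated legal template"
    if translation.confidence < CONFIDENCE_THRESHOLD:
        return "escalate: low confidence, route to human support"
    return "deliver: show translation with plain-language summary"

# A low-confidence output is held for human review, not presented as fact.
risky = Translation(text="...", confidence=0.6, matches_template=True)
print(route(risky))  # escalate: low confidence, route to human support
```

The point of the sketch is the ordering: structural checks (does this even look like the right document?) run before the confidence gate, and the default on any failure is a human, not the user alone.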

The Action Plan's success depends on how we implement it at the ground level. This is where community-centered design becomes essential, not as a barrier to innovation, but as a pathway to building AI systems that truly work for everyone.

Below, I explore ways in which technologists can align with America’s AI Action Plan and ideas on how to implement it responsibly.

 

Protecting Communities While Building the Future


Image generated by ChatGPT

Here's how I think civic technologists can align with America’s AI Action Plan while protecting vulnerable communities.

Leverage the Innovation Push Responsibly.

The plan's focus on "creative and transformative application" of AI systems creates space for innovative approaches to serving diverse populations. We can use this momentum to develop AI tools that specifically address language barriers, digital literacy gaps, and cultural differences, turning inclusion from an afterthought into a competitive advantage.

Build Inclusive Datasets.

The commitment to "world-class scientific datasets" presents an opportunity to ensure training data represents all Americans, not just digitally privileged populations. I love how Mak-CAD is empowering communities across Sub-Saharan Africa with the AI knowledge to build and train their own models. They show that it's possible not only to advocate for, but also to intentionally build, datasets that include multilingual content, diverse cultural contexts, and input from historically excluded communities.

Champion Interpretable AI.

The plan's investment in "AI interpretability, control, and robustness" aligns perfectly with vulnerable communities' needs. When AI systems can explain their reasoning in plain language, users, regardless of their technical background, can better understand and verify the assistance they're receiving.

Scale Community Partnerships.

As government accelerates AI adoption, we can demonstrate that community engagement accelerates rather than slows development. By involving affected communities in research and co-design from the start, we identify use cases, surface potential problems, and build trust that makes deployment smoother and more successful.

 

From Research to Action: Designing AI for All


Image generated by ChatGPT

Based on my research and America’s AI Action Plan's priorities, here's how civic technologists can implement AI responsibly.

1) Start with community research before building.

Use the Action Plan's innovation resources to fund deep community engagement. Understanding real-world use cases and existing support networks isn't just ethical, it's strategic intelligence for building better systems.

2) Co-design with affected communities.

Bring community members into the development process as partners. Their insights should shape everything from user interface design to underlying algorithms. This collaborative approach can accelerate development by preventing costly mistakes and rebuilds.

3) Build verification systems for high-stakes decisions.

For vulnerable communities who may not recognize AI errors, create multiple verification paths. This might include human review processes, integration with trusted community organizations, or clear escalation procedures.

4) Implement continuous monitoring and feedback loops.

Deploy with robust systems that can catch errors and biases in real-time. Make it easy for users and community advocates to report problems and see that feedback leads to improvements.

5) Document and share approaches.

Create replicable models that other teams can adapt, contributing to the Action Plan's goal of American leadership by demonstrating that inclusive AI development works.
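The monitoring and feedback loop in step 4 can be made concrete with a minimal sketch. This is illustrative only; the `FeedbackMonitor` class and its alert threshold are hypothetical stand-ins for a real pipeline with triage, audit trails, and community reporting channels.

```python
from collections import Counter

class FeedbackMonitor:
    """Minimal sketch of a feedback loop: users and community advocates
    file reports, and repeated reports of the same issue trigger a review flag."""

    def __init__(self, alert_threshold: int = 3):
        self.alert_threshold = alert_threshold
        self.reports: Counter = Counter()

    def report(self, issue: str) -> bool:
        """Record a report; return True once the issue needs human review."""
        self.reports[issue] += 1
        return self.reports[issue] >= self.alert_threshold

monitor = FeedbackMonitor()
monitor.report("mistranslated deadline")
monitor.report("mistranslated deadline")
flagged = monitor.report("mistranslated deadline")
print(flagged)  # True: repeated reports of the same issue trigger review
```

The design choice worth noting is that escalation is driven by community reports, not only by internal metrics, so the people most likely to notice culturally specific errors are the ones who can surface them.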

 

The AI Action Plan represents a significant investment in America's technological future. As civic technologists, we have the opportunity, and responsibility, to ensure this investment serves all Americans, not just those with digital privilege. When we design for the most vulnerable users, we create solutions that are more robust, more trustworthy, and more innovative.

The communities we serve are ready to partner with us; I know this personally through professional and lived experience. They have insights, experiences, and solutions we need. The question is whether we're ready to listen, learn, and build together. The future of civic technology doesn't have to replicate the exclusionary patterns of the past. By embracing community-centered design and approaching AI deployment with both excitement and humility, we can create systems that truly live up to the promise of technology serving everyone.

The choice is ours.
