Designing a 911 voice agent for people who can't make a 911 call
911 is built for voice. I led product design for an AI voice agent that calls 911 on the user's behalf, delivering a structured emergency report in the one format every dispatcher in the country is trained to trust.
ROLE
Product & Design Lead
TIMELINE
2024-2025
TEAM
CEO/Founder, Product Designer, 4 Engineers,
2 AI/ML Experts, UX Researcher, Govt Partners
SCOPE
Pilot Prototype
DOMAIN
Public Safety
TOOLS
Figma

PROJECT OVERVIEW
How AccesSOS works today: Locate, Tap, Send
AccesSOS is the only accessible 911 platform built for people who can't communicate verbally.
Among them are people who are Deaf, hard of hearing, or nonverbal, and people in situations where speaking isn't safe. The interface is icon-based and language-independent.
When I joined, the app was live in four counties across California and New Mexico. Over the next two years I scaled it to 45 states and shipped User Profiles — a release that saved users critical seconds in emergencies by passing their medical and accessibility information to dispatchers in a single tap.
Locate
In an emergency, it can be hard to know exactly where you are.
AccesSOS uses your phone's GPS to pinpoint your location automatically. Your exact address and coordinates are shared with 911 — so dispatchers can send help to the right place, even if you don't know where you are.
Tap
Describe what's happening, without saying a word.
Select the type of emergency, add details about the situation, and attach your pre-saved medical profile, all by tapping. No typing, no speaking, no explaining under pressure. The app asks the right questions so you don't have to think about what to say.
Send
One tap. Help is on the way.
Hit send and your full emergency report — GPS location, incident details, photos, and medical information — reaches 911 instantly. No voice call needed. No relay service. No waiting for a bystander. You stay informed with real-time updates as your report is received.


THE PROBLEM
Profiles Solved One Side Of The Conversation And Broke The Other
User profiles were a clear win on the user side. But they got loaded with information dispatchers couldn't efficiently process.
Before profiles, someone in an emergency had to type their location, medical conditions, and disability accommodations from scratch, under stress, often one-handed, sometimes in the dark. After profiles, all of that traveled with a single tap.
User Side: Multiple steps before user profiles


After user profiles: One tap

The dispatcher side became a wall of text.
Before profiles, the SMS to 911 was tight: location, emergency type, weapon status, a short description. A dispatcher could scan it in seconds. After profiles, that same SMS now carried medical conditions, a full medication list, allergies, emergency contacts, and saved addresses — all bolted onto the front of the emergency itself. A focused message had become a wall of text.
Dispatcher work is triage. They need three things, fast:
What Is Happening?
Where?
Is There A Threat?
Dispatcher Side: Before user profiles

After user profiles

This wasn't just a content problem. PSAP text-to-911 interfaces are basic monochrome terminals — they aren't built to render structured data, hierarchy, or visual emphasis. A dispatcher scanning that screen had no way to separate the emergency from the medical history. The useful information was technically there. It was functionally invisible.
EXPLORING ALTERNATIVES
Three Obvious Fixes, None Of Them Worked
We looked at three directions before landing on the one that worked.
1. Split the SMS
The SMS-to-911 protocol isn't like regular texting. Standard SMS character limits apply, and many PSAP systems don't handle multi-message threading well — the second message can arrive at a different dispatcher position, arrive out of order, or not be linked to the first message at all. We could end up creating what looked like two separate emergencies on the dispatcher's screen.
2. Link to a Profile Page
Send a short SMS with a URL dispatchers could open for the full profile. We brought this to our government PSAP partners and learned it was a non-starter. Most 911 dispatch workstations run locked-down legacy systems that can't open external links. The few that technically could wouldn't do it mid-call — it's a workflow disruption and a liability.
3. Send Less Profile Data
Trim the profile to "critical fields only." But critical relative to what? A medication list seems secondary until the patient is unconscious and paramedics need to know what they're on. An allergy seems irrelevant until someone is about to administer treatment. We couldn't pre-decide relevance for an emergency we hadn't seen yet.
All three options treated the problem as a formatting challenge. The real problem was the delivery medium itself.
THE INSIGHT FROM DISPATCHER INTERVIEWS
The 911 System Is Built For Voice Calls
Dispatchers are trained to process information through voice calls. It's the format they're fastest in, trust most, and the only one supported nationwide.

Bounce Back SMS
Even though many U.S. counties support text-to-911, the service is unreliable and dispatchers aren't trained to process texts.
No matter how we format the SMS, it may not be delivered.
We ran interviews with 911 dispatchers across our pilot regions. Three things came up in every conversation.
Trained For Voice
Dispatchers are fastest in the medium they were trained on. Voice is the channel their muscle memory, their software, and their protocols are all built around.
Voice 100% Supported
Every 911 call center in the country accepts voice calls. But only 56% support text-to-911, and even there the software isn't federally certified, dispatchers aren't trained on it, and texts can bounce back undelivered.
Costs To Accept Texts
Adding new channels to a PSAP means new software, new integrations, new training, and new procurement cycles. We couldn't ask 911 to change for us. We had to meet them where they already were.
How might we deliver our information via a voice call?
If we could deliver the user's information through a phone call — structured, prioritized by the specific emergency, and responsive to follow-up questions — we could give dispatchers the richness of profile data without the overload of a text dump. The AI wouldn't replace the dispatcher. It would speak to them in their preferred medium, surfacing the right information at the right time.
THE AHA MOMENT
Why an AI voice agent, specifically
What's at stake when emergency response misreads the user?
Across the U.S., people with autism, mental health conditions, and cognitive disabilities are disproportionately harmed during police encounters, often because critical information about their condition never reaches the responding officer.
This is what AccesSOS profiles exist to prevent. It's also why getting the dispatcher handoff right is mandatory: it's the entire point of the product.

A text or pre-recorded message couldn't answer follow-up questions. A human relay operator couldn't scale to 45 states. An AI voice agent could do both.
Richer Information
911 can't accept media files, but the AI can narrate photos and videos aloud, improving dispatcher situational awareness.
Personalized Help
With emergency contacts, fuller address information, and disability details on hand, the AI can help dispatchers send faster, better-matched help.
Lower Infrastructure Costs
No new infrastructure for 911 at all; the agent uses their existing voice channel. It also lowers costs for AccesSOS by removing the need for human relay agents in areas without text-to-911.
DEFINING SCOPE
Drawing the line: what the agent does, and what it must never do
In an emergency context, what an AI refuses to do is as important as what it does.
Identify itself
Opens every call by stating that it is an AI calling on behalf of an AccesSOS user.
Deliver the emergency first
Type, location, weapon status, concisely, in the order dispatchers are trained to expect.
Answer from the profile
Answers dispatcher questions about allergies, medications, or address instructions directly from the user's profile.
Stay grounded in the source
If the data isn't in the profile or attached media, the agent says so.

Interpret symptoms
It does not interpret symptoms, suggest diagnoses, or offer triage advice. That's the dispatcher's job.
Volunteer outside information
The agent works only with what the user submitted. It doesn't bring outside knowledge, general medical context, or guesses.
Interpret what images mean
It describes what's in the image. It does not infer cause, severity, or intent. "Smoke from a second-floor window" should not turn into "this looks like an electrical fire."
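The do/don't rules above can be read as a single guardrail: the agent only speaks from what the user submitted, and declines otherwise. A minimal sketch of that idea in Python — the `EmergencyReport` structure and field names here are illustrative assumptions, not the product's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class EmergencyReport:
    """Everything the agent is allowed to speak from: the user's submission."""
    profile: dict = field(default_factory=dict)   # e.g. {"allergies": "penicillin"}
    incident: dict = field(default_factory=dict)  # e.g. {"type": "fire", "weapon": "none"}

def answer_from_report(report: EmergencyReport, field_name: str) -> str:
    """Answer a dispatcher question ONLY from the submitted report.

    If the data isn't there, the agent says so explicitly instead of
    guessing -- it never volunteers outside knowledge or inferred details.
    """
    for source in (report.incident, report.profile):
        if field_name in source:
            return str(source[field_name])
    return "That information is not in the user's report."
```

The point of the sketch is the final line: a hard refusal path is part of the design, not an error state.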
DEFINING AI CAPABILITIES
Independent, relay, or hybrid: who's actually on the call?
Independent and relay are both cleaner architecturally. Hybrid is harder to build. But it's the only mode that doesn't fail when the emergency changes.
Once the agent existed, the next question was who was driving.
Our AI/ML engineers initially leaned toward independent mode — it was the cleanest architecture, the most predictable, and the easiest to test. The agent works from structured data it already has, no live dependencies, no race conditions.
I pushed back. In an AccesSOS emergency, the situation changes faster than the initial report. Someone takes a photo of their attacker. The fire spreads to another room. The user remembers a medication they forgot to add. An independent agent is locked to a snapshot in time — and in emergencies, the snapshot goes stale fast.
But I also couldn't argue for pure relay. Our users include people with intellectual disabilities, people who are injured, people whose phones get taken. A relay-only model fails the moment the user goes silent — which is exactly the moment dispatchers most need information to keep flowing.
We needed an architecture that survived both.

Who's in control?
The AI delivers the initial report independently. The user can add details mid-call. The AI incorporates them live.
What if the user goes silent?
The AI continues from the pre-submitted report and tells the dispatcher the user is unresponsive.
What if new info emerges?
The AI monitors for new input and weaves it into the conversation with the dispatcher.
What does the user see?
Live progress view, plus the ability to add photos, location, or details at any moment. The user is a co-pilot, not a passenger.
The hybrid system lets the AI independently manage the high-pressure dispatcher handoff while staying flexible enough to ingest new updates mid-call. The following diagram maps how the system maintains source-grounding while adapting to a changing reality.
Architecture chart:
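One way to picture the hybrid control flow is as a small priority loop: the agent always has the pre-submitted report to narrate from, new user input jumps the queue each turn, and silence is announced once rather than stalling the call. A minimal sketch under those assumptions (class and method names are hypothetical, not our actual system):

```python
from collections import deque

class HybridCallAgent:
    """Sketch of hybrid control: AI-led delivery, user as co-pilot."""

    def __init__(self, report_items):
        self.report_items = deque(report_items)  # pre-submitted report, in dispatcher order
        self.user_updates = deque()              # live additions from the app, mid-call
        self.user_silent_announced = False

    def add_user_update(self, update: str):
        """The user adds a photo caption, detail, or correction mid-call."""
        self.user_updates.append(update)

    def next_utterance(self) -> str:
        # New user input takes priority: the snapshot must not go stale.
        if self.user_updates:
            return f"Update from the caller: {self.user_updates.popleft()}"
        # Otherwise continue the structured report the user already submitted.
        if self.report_items:
            return self.report_items.popleft()
        # User silent and report exhausted: say so once instead of stalling.
        if not self.user_silent_announced:
            self.user_silent_announced = True
            return "The user is currently unresponsive; all submitted details have been relayed."
        return "Standing by for dispatcher questions."
```

The priority ordering encodes the design decision from the debate above: independent delivery is the floor, not the ceiling, and live input always preempts the snapshot.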
TESTING THE PROTOTYPE
What We Know and What We Don't
The agent consistently delivers accurate emergency narratives and maintains coherent, natural conversation flow.
These findings come from internal scenario testing across a library of simulated emergencies: different emergency types, varying levels of profile completeness, and diverse dispatcher interaction patterns.
Source-grounding works as designed. In testing, the agent correctly declined to answer questions when the underlying data wasn't present, rather than generating plausible-sounding but unsupported responses. This was the behavior I most wanted to validate, because it's the foundation of safety in this system.
WHAT'S NEXT (ALREADY HAPPENING)
The Pilot: 3 Questions Only Dispatchers Can Answer
We're entering joint validation with three government PSAP partners: the Berkeley Police Department, the New Mexico 911 Bureau, and Fairfax County Department of Public Safety Communications.
Internal testing can validate accuracy. Only real dispatchers, in real shifts, can validate trust.
Trust
Do dispatchers trust the AI agent enough to act on the information it provides?
Time
Does the system reduce emergency intake time compared to current alternatives (relay services, text-to-911)?
Fallback
How often does the fallback system need to engage, and does it work seamlessly when it does?
We're developing a formal evaluation framework built on three safety metrics: narrative accuracy (is the information correct?), response latency (is it fast enough?), and fallback reliability (does escalation work when needed?). Dispatcher feedback will directly shape the AI's communication style — tone, pacing, how information is ordered and prioritized.
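As a sketch of how a per-call record in such a framework might look — the field names and scoring scheme here are illustrative assumptions, not the real evaluation schema:

```python
from dataclasses import dataclass

@dataclass
class CallEvaluation:
    """One pilot call scored on the three safety metrics."""
    facts_expected: int       # facts present in the submitted report
    facts_delivered: int      # facts the agent relayed correctly
    intake_seconds: float     # time from call pickup to complete report
    fallback_triggered: bool  # did escalation engage?
    fallback_succeeded: bool  # ...and did it work when it did?

    @property
    def narrative_accuracy(self) -> float:
        # Is the information correct? Share of report facts relayed faithfully.
        return self.facts_delivered / self.facts_expected if self.facts_expected else 1.0

def fallback_reliability(calls: list) -> float:
    """Share of fallback engagements that worked; 1.0 if never needed."""
    triggered = [c for c in calls if c.fallback_triggered]
    if not triggered:
        return 1.0
    return sum(c.fallback_succeeded for c in triggered) / len(triggered)
```

Even at this level of simplification, the structure makes the pilot questions measurable: accuracy per call, latency per call, and reliability across the calls where the fallback fired.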