The Six Principles of Good Crisis Technology Design

“Design is not just what it looks and feels like. Design is how it works.” – Steve Jobs

I was recently having coffee with a friend in the crisis technology community (hi, Willow!), and we were bemoaning the lack of attention to design in many crisis technology circles. Whether supporting public safety (emergency management, police, fire, EMS and other government functions) or working in the international humanitarian sector (the world of NGOs, IGOs and the like), a common thread I’ve seen over the last 25 years is that the value of good design in crisis support technology is deeply under-appreciated.

You see it all the time: applications with horrible user interfaces, confusing terminology, and high-consequence actions that are too easy to get wrong. Disaster support equipment that is too heavy to transport by the methods most likely to be available. Teams trying to shoehorn technology designed for one scenario into another, without a full appreciation of the differences between those scenarios.

Burn it into your minds: good design is an integral part of having an ethical approach to technology in crisis response.

How to destroy a brand-new disaster kit.

At JIFX 14-4:  About three minutes after I explained the Rapid Response Kit to the US Marines, I totally fried it. (Cisco photo)

I’ll use one of my own experiences as an example of bad design. A few years ago, I designed an emergency networking kit (it became known as the Rapid Response Kit). The earliest version of the kit contained a small router and a switch. The switch supported Power over Ethernet (PoE) for VoIP phones and other devices, and its power supply provided 48 volts DC. The router needed only 16 volts DC from its power supply. By sheer coincidence, however, the physical plug on both power supplies was exactly the same. I initially thought that labeling the plugs and documenting the difference would suffice. Surely nobody would plug the higher-voltage power supply into the wrong device, right? After all, I documented it!

Yeah, no. You can see where this is going, right?

It’s not enough to test something in a lab under optimal conditions. You have to take things out into the field to make sure they’re ready for the real world. In the case of this kit, its first field test was at the JIFX exercise at Camp Roberts, CA. Camp Roberts is a hot, dry, miserable place. In other words, it’s perfect for simulating a post-disaster environment. I took my brand-new kit and set it up for the first time in the field. I got to talk about it to humanitarian organizations and the US military. Talk is one thing, but then you gotta actually turn the thing on, right?

Short on sleep, in a hot, dusty environment (and probably a bit dehydrated, truth be told), I plugged the kit in and, of course, cross-connected the power supplies. The router was fried almost immediately – totally bricked, actually. The rest of the test could not proceed. I was quite embarrassed! It was a clear signal that the solution wasn’t ready for the real world. Back to the drawing board.

(In case you were wondering, I did eventually get the problems worked out, and the kit has since been deployed at disasters in the United States and humanitarian emergencies around the world.)

Designing for Emergencies

I am not aware of any class, course, or training that teaches technologists how to design for emergencies. Unless the techies designing the system come from the emergency or crisis world themselves (as a former firefighter or cop, say), they may never have worked in the field. Well-meaning but untrained engineers at a hackathon for the humanitarian crisis of the day are an extreme example of this: engineers creating solutions without understanding the realities of the environment their solution is intended to support.

I have a few principles that I submit to you, dear reader, to help bridge the gap between good intentions and effective technological action on the ground in emergencies and crises.

Know the context and history of the underlying crisis.  Design accordingly.

I once went on an observational ride-along with a battalion chief in a large metropolitan fire department. I wanted to see how they used technology day-to-day in their responses. While I was interviewing the chief, the station got a call for a reported fire in a nursing facility. The whole station and part of another nearby one were dispatched: two engines, a truck, and the battalion chief got the assignment. The chief and I jumped into his Suburban, and we followed the engine and truck out of the station with lights flashing and sirens screaming.

As we headed to the emergency, I was struck by how overloaded the chief was. He was driving, talking on the radio, and using his computer to build situational awareness of the emergency at the nursing home – all at the same time. He was visibly frustrated by the application on his laptop. At one point, he sighed and said that the application used by the fire department had originally been created for law enforcement. Talk about shoehorning!

This is one small example of how bad design can compromise safety. The battalion chief’s first job was to drive safely to the emergency (and as many people know, one of the most dangerous parts of any first responder’s job is rolling code 3 to a scene – traffic accidents are all too common). The poor design of the laptop application he needed to use needlessly added to his workload. You can see where this can end badly, right?

In the international humanitarian context, there is a desire to standardize technology solutions for humanitarian response, and there’s good reason for this: fewer solutions to support, fewer one-off solutions. All to the good. But the other side of that coin is that Haiti isn’t Japan isn’t Hurricane Katrina isn’t the Syrian refugee crisis isn’t Puerto Rico. A solution that is appropriate for a refugee crisis in Europe may put people in danger in the Rohingya crisis in Southeast Asia.

This is an expression of a very human desire to see new challenges through the lens of previous ones, and it can create “blind spots” in our thinking. During the 2011 Arab Spring, I was invited by a university to attend a “hackathon” around the humanitarian crisis arising from the Libyan revolution. At this event, I met many volunteer mappers who had recently worked on the Haiti response. A humanitarian emergency arising from a conflict is a very different animal from one arising from an earthquake. In this case, the volunteer mappers were enthusiastically mapping supply distribution points, feeding areas, refugee camps, and logistics lines without any consideration of the security situation in North Africa.

When I asked the leads of the mapping project how they proposed to protect this information and determine who should have access to it, I got blank stares. “This is meant to be open source data.” My response was that in the wrong hands, this carefully curated and generated information could be used as a targeting list by the combatants on the ground. In a real sense, their success on the Haiti quake (where the threat model was very different) had created a kind of tunnel vision that obscured how different the reality in North Africa was. The key takeaway here: one size does not fit all. If you have previous knowledge and experience, by all means use it – but don’t be encumbered by it.

As crisis technologists, we need to research the underlying emergency or crisis environment to make sure that our technology solutions don’t add to the “fog of war” or inadvertently put people at risk.

Don’t Fight The Last War

Many volunteer disaster techs got their start during the 2010 Haiti earthquake response.  I’ve argued elsewhere that Haiti was the first true “data-driven” response, and many people who had previously never been involved in disaster response got their initial experience working on that crisis.  It truly was a breakthrough moment for our community.

Keep in mind, though, that technology continually evolves. What crisis-affected communities and first responders expect from technology evolves too. Think about the last ten years and the explosion of smartphones and apps. If you are designing solutions in 2018 that assume the Haiti scenario of 2010, you probably aren’t incorporating the lessons learned in the intervening years – much less understanding the needs of emergencies today. What was the right answer eight years ago is probably not the right answer now. And in a crisis in 2023, you probably shouldn’t be responding with exactly the solution you created in 2018. If your solutions are static, you’re not keeping pace with what responders and the public need from you.

Challenge your implicit bias

Everybody has biases.  It’s human nature.  But as crisis technologists, our mission requires us to serve what former FEMA head Craig Fugate called “the whole community.”  Every ethnic and sectarian group, all genders, those who are disabled, those who have pets.  People who have different politics or come from a different culture than you do.  Everybody.  (Start with localization:  your users probably expect content in their language, not yours!)
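To make that localization point concrete, here’s a minimal sketch of language fallback for user-facing messages. Nothing here is from a real deployment – the catalog, keys, and language codes are all hypothetical, and a real system would use gettext, ICU, or a proper translation workflow – but it shows the principle: serve the user’s language when you have it, and fall back to a default rather than failing outright.

```python
# Hypothetical message catalog for illustration only.
MESSAGES = {
    "en": {"water_point": "Nearest drinking water point"},
    "fr": {"water_point": "Point d'eau potable le plus proche"},
}

def translate(key, lang, default_lang="en"):
    """Return the message in the user's language, falling back to the
    default language if the language or the key is missing."""
    catalog = MESSAGES.get(lang, MESSAGES[default_lang])
    return catalog.get(key, MESSAGES[default_lang][key])

print(translate("water_point", "fr"))  # French catalog exists -> French
print(translate("water_point", "so"))  # Somali not loaded -> English fallback
```

The design choice worth noticing is the fallback: a missing translation should degrade to something readable, never to an error screen, because the person reading it may be making a high-consequence decision.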

As an engineer who happens to be male, I’ve had to really push myself to ensure that my technical solutions for mass communications are equally available to both men and women in the refugee camps in Europe.  If your model user is somebody who looks, acts and thinks like yourself, you are risking excluding everyone who isn’t you.

Anticipate Unintended Consequences.  Reduce the Risk.

As you are designing your solution, put your “bad guy” hat on.  What are the risks to security, safety, and privacy from your solution?  What happens if your solution fails, or is subject to fraud or cyberattack?

During my deployment to the 2015 Syrian Refugee Crisis, we had to design advanced security into the networks to detect malicious activity.  We challenged ourselves:  “Great. We’ve given the refugees a voice to the outside world … but now what are we letting in?”  In creating these networks, we were also opening up a new attack surface.  We had to mitigate the risks so that we could maximize the benefits.

You should assume that errors will happen.  You should assume that malicious users are out there, ready to exploit the humanitarian crisis for their own ends.  You should assume technical or logistical failure modes.  A good technology solution will anticipate these risks and deal with them as gracefully as possible. Fragility is a bad thing.
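As an illustration only – the function names and data source here are hypothetical, not from any actual deployment – here’s a minimal sketch of what “dealing with failure gracefully” can look like in code: retry the live source a few times, then fall back to stale-but-useful cached data instead of failing hard.

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("field-kit")

def fetch_situation_report(fetch_live, read_cached, retries=3, backoff_s=2.0):
    """Fetch the latest report, degrading gracefully when the link is down.

    fetch_live  -- callable that pulls fresh data (may raise on failure)
    read_cached -- callable that returns the last known-good local copy
    """
    for attempt in range(1, retries + 1):
        try:
            report = fetch_live()
            log.info("Live report retrieved on attempt %d", attempt)
            return report, "live"
        except Exception as exc:  # broad on purpose: any failure means degrade
            log.warning("Attempt %d failed: %s", attempt, exc)
            time.sleep(backoff_s * attempt)  # simple linear backoff

    # All retries exhausted: serve stale-but-useful data rather than nothing.
    log.error("Live source unreachable; serving cached copy")
    return read_cached(), "cached"
```

A field application built this way keeps working – with clearly labeled stale data – when the uplink drops, rather than presenting a responder with a spinner or a stack trace. Fragility is a bad thing.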

Maximize utility and accessibility

Success in crisis technology is not defined by how “buzzword compliant” your solution is.  Your success is measured by how effective your solution is in helping the situation on the ground.  That means you should consider the needs of people on the margins, people that others would rather forget about.  How accessible or useful is your solution to those who are overworked, or disabled, or otherwise under considerable stress?

If you find yourself pushing a solution forward because of buzzword novelty (you know what I’m talking about, right? Blockchain! UAVs! Biometrics!), sit back and have a rethink. Are you advancing a real solution to an identified problem, or are you contributing to tech hype that doesn’t provide a material benefit on the ground?

Remember: the reality on the ground is ultimately the only reality that matters.  Everything else is secondary.  This includes whatever fancy tech is being hyped at the moment.

Design for Supportability

The humanitarian sector is littered with far too many “pilot projects” that are paths to nowhere. Once the grant runs out, what becomes of the solution? Design for the long term. Document your code! Write documentation! Make sure that someone other than you can support the solution once you move on to your next project.
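What does “design for supportability” look like in code? Here’s a hypothetical sketch – the file, constants, and scenario are invented for illustration – of the kind of header and comments that let the next responder, possibly years later and without you, understand and maintain what you left behind.

```python
"""field_wifi_config.py -- hypothetical handover example.

Purpose:  Generates the access-point configuration for a camp Wi-Fi
          deployment.
Support:  Anyone with basic Python can maintain this; no vendor
          certification required.
Handover: Keep this header current. The next person to read it may
          arrive two years from now, long after the original team left.
"""

# Explain *why*, not just *what*, for every non-obvious value.
CHANNEL = 6          # 2.4 GHz: the oldest client phones in camp only do 2.4
SSID = "CAMP-INFO"   # short, Latin-script, readable on any phone keyboard

def render_config():
    """Return the configuration as plain text so it can be applied by
    hand if the automation tooling is unavailable in the field."""
    return f"ssid={SSID}\nchannel={CHANNEL}\n"

if __name__ == "__main__":
    print(render_config())
```

The point isn’t the Wi-Fi specifics; it’s that the explanation travels with the artifact.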

There’s also an element of reducing complexity here. During the 2014–2015 Ebola crisis in West Africa, I reviewed a proposal to deploy connectivity in the affected communities. The intended solution leveraged hardware and technology that had first shipped less than a year earlier. There was literally nobody in West Africa certified to support it. But since it was seen as the latest and greatest, of course it had to be the right answer, right?

No.

You may have the right people in place to do the initial deployment, but you need to consider the entire lifecycle of the solution. The initial deployment is actually the least significant part of that lifecycle. Design support and sustainability into your technology. If that means using something less cutting-edge but more supportable over the long duration of the crisis, so be it.

We’re early in the journey. Fasten your seat belts.

In my opinion, we are still early in our understanding of the role of technology in disasters and humanitarian crises. The often-hyped word “disruption” may very well apply here – but disruption cuts both ways, and not always in a good way. Technology enables new opportunities, but it also brings new challenges – this shouldn’t surprise any of you, and I feel a little weird writing what feels like a cliché here. Until engineers are also trained in international relations or public safety, we carry an under-acknowledged burden: closing the gap between technical realities and the communities of practice we intend to support with our solutions. We owe it to the people trying to survive a crisis, and to those responding to it, to make sure that what we create is finely tuned to the purpose at hand.
