“The plural of ‘anecdote’ is not ‘data’.”
A disaster happens somewhere in the world. Disaster technologists and digital humanitarians mobilize. Maps are crowdsourced, satellite dishes and networks are deployed, UAVs are flown, apps are hacked in marathon sessions, social media is mined. All these things happen incredibly rapidly because of the army of passionate individuals and organizations that fundamentally believe the rapid flow of information helps to save lives and speed recovery for affected communities.
Eventually, as things move from response to recovery, most of these individuals and organizations will document what they did and lessons to be learned for the next time around. Pictures will be shared on social media and in press releases. The community of technology humanitarians will prepare for the next event… knowing there is always a next event.
But something fundamental is missing from our community: how do we actually know we made a difference in the outcome of the response?
Many of us (including myself!) have plenty of anecdotes and war stories about how something we did made a difference, but anecdotes are just that. Let’s face it… I make my living working in this space. Of course I want to believe that my work, my time away from my family, all those long hours in hardship conditions actually make a difference. Organizations want to tell the most positive story about their efforts, to ensure future funding, volunteers, and missions.
It’s called “confirmation bias.” As humans, we naturally seek out information that supports our beliefs and positions, and tend to avoid information that would call those beliefs and positions into question – or discredit them entirely! The photo above is an example… it’s certainly visually compelling, and I’d like to think that good things came from being out in that rain that day, but we need data to really measure effectiveness!
Our community of emergency techies is relatively small, and the whole multi-disciplinary sphere is relatively young. By way of analogy, this reminds me of where emergency medicine was in its first few decades. During the dawn of the Emergency Medical Services (EMS) field, most treatments were based on assumptions (“backboard all suspected C-spine patients”). Eventually, evidence-based medicine came into the field, with the results that we don’t backboard anywhere near as often as we used to, that we know how incredibly important chest compressions are in CPR, that high-flow oxygen can actually harm a heart attack patient, and that therapeutic hypothermia for a cardiac arrest patient actually makes sense.
So, while anecdotes make for great memories, they aren’t terribly useful for the long-term evolution of our field. Just like EMS and many other emergency disciplines, a collection of stories cannot be considered evidence of efficacy because of confirmation bias. One positive story and one negative story about disaster technology response simply cancel each other out.
We need to move beyond anecdotes and move towards true evidence-based disaster technology response.
So what does that look like?
What technology interventions actually help the situation on the ground? How will we know this?
What technology interventions don’t help, and should be avoided? How will we know this?
While the field demands innovation and exploration, how do we ensure that we aren’t just enamored with our own technology at the expense of other meaningful activities that support disaster response?
We have to get beyond the feels. Many digital volunteers are moved by crisis to “do something.” But the goal of “doing something” should not be the social reinforcement that a volunteer or disaster worker gets on their Facebook page. It should not be to demonstrate some whiz-bang technology in crisis. After all, a fancy map that isn’t used by anybody to make a decision is merely a graphical representation of disaster trivia. A UAV that gathers video that isn’t used by anybody to make a decision is just a pretty YouTube video. In both cases, the intention and the tech are great, but the outcome is lousy.
If those other things happen along the way, so much the better. But our first duty should be to focus on the outcome… verifiable, evidence-based outcomes.
I think there’s a strong role for academia in this effort.
We should look to the field of crisis informatics to help inform practitioners about these questions of efficacy. Researchers can help us develop the metrics, engage in peer review, and move us to the next level that our increasing workload and set of expectations demand. We in turn can influence the research questions and work to connect the results of academia to the work being done by practitioners.
By driving towards evidence-based operations, we are helping to mature our work and minimize wasted effort and cost. But most importantly, we enshrine the beneficiary of our work at the absolute center of the digital humanitarian universe. The disaster victims and responders who need information to make smarter, better decisions for themselves deserve nothing less.
The move towards evidence-based response will take all of us … let me know your thoughts in the comments.