On Emergencies, Wifi, Gender and Social Dynamics

Syrian refugees get online at a camp on Chios Island, Greece.  (NetHope photo)

 

“It’s no longer a luxury. This is serious. It’s really a social justice issue. It’s a 21st century civil rights issue.” – Cheptoo Kositany-Buckner on the Digital Divide

Once upon a time, there was no connectivity in disasters and humanitarian emergencies.  The Internet was not really regarded as an essential service in the midst of crisis.  Emergency workers communicated predominantly through push-to-talk radio.  Victims of disasters and emergencies might line up for blocks to use a payphone to call friends and family.

#CommIsAid

Fast forward to today.  As put forward by the CDAC Network and others, “communication is aid.”  The ability to communicate and share information is seen as a vital humanitarian service on equal footing with food, shelter, medical care and other essential human needs.  The 2013 UN OCHA paper Humanitarianism in the Network Age was a major milestone in winning recognition for this concept.  Today, in 2017, it is nearly impossible to go to any humanitarian technology conference and not hear this message as a core concern of the crisis technology community.

With this pivot, we as emergency technologists are no longer being asked to connect just the first responders, government workers, and NGO staff who comprise the professional response.  To be sure, I think this will always be a part of our core mission, but the harder lift for us now is to connect the disaster- or crisis-affected population.  We aren’t just talking about a few dozen or a few hundred aid workers – we’re now talking about true mass communication.  How big?  Our recent work in Europe has seen us provide connectivity to more than 600,000 unique devices as a part of the Syrian refugee response.

OSI Layers 8 and 9

The responsibilities we have to face when connecting a mass population of vulnerable people are substantial, and I don’t think we (the entire community in the broadest sense) have really thought through what our ethical requirements are in this brave new world.  To be sure, I have spent several years advocating for security, data protection and privacy for affected populations – and I’ll probably write more than a few more blog articles on those topics in the months and years ahead.  But today I wanted to think about the next challenges on the radar.

When I was learning network engineering, one of the basic things we had to learn was the OSI model – it’s a part of every basic data communications class.  The seven layers describe different technical boundaries of a communication system – from the lowest, physical elements (layer 1) to the highest, application elements (layer 7).  In learning the OSI model, you quickly learn that there are also the “unofficial” layers 8 and 9 – money and people.  It’s the latter that I want to focus on now.
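For readers who haven’t sat through that data communications class, here is the model as a simple lookup table, with the two unofficial layers included.  This is just an illustration; the layer names are the standard ones.

```python
# The seven official OSI layers, plus the informal "layers" 8 and 9
# that network engineers joke about. Illustration only.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
    8: "Money",    # unofficial
    9: "People",   # unofficial -- the layer this post is about
}

if __name__ == "__main__":
    for number, name in sorted(OSI_LAYERS.items()):
        status = "official" if number <= 7 else "unofficial"
        print(f"Layer {number}: {name} ({status})")
```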

We have learned that connectivity is a social force in an emergency.  During Hurricane Sandy in the United States, people would migrate to seek wifi access and power to recharge their mobile phones.  In Europe, Syrian refugees wanted the Internet so badly that they’d literally stop a riot in the camp so our teams could work.  The ability to get online (or lack thereof) has social ramifications.  There is a human reality in play here.

Except no network engineering class I’m aware of ever teaches that.  And I’ve taken plenty!  You can take all the CCIE bootcamps you want, and learn the ins and outs of routing, switching, and so on, but as network engineers, our view of the world is really driven by making the bits and bytes get to where they need to go.  Generally speaking, we don’t care what those bits represent.

I would argue that when we are trying to connect a population in crisis, we need to care about those social dynamics, at least a little bit.  We need to care about what our networks mean for people on the ground.

The Gender Divide

What does this look like?  Take gender for example.

In the refugee camps in Europe, it is not uncommon for there to be gender segregation.  Women and children in one part of the camp, men in another. Unaccompanied children or other especially vulnerable people in still another part of the camp where they can have greater security.  In many of these communities, there is a great disparity between men and women when it comes to smartphone access.  For example, in one crisis we are looking at, the ratio is eight men with phones for every one woman with a phone.

So as a network engineer in a refugee camp, if you decide to merely put connectivity into the most obvious places where you see people congregate, you will most likely connect the men disproportionately, because as one researcher recently told me, “Public spaces are male.”  It takes conscious thought and intention to make sure that everybody gets the opportunity to benefit from the connection and information that Internet access can bring.

At the same time, I’m not a sociologist or anthropologist.  I have exactly zero professional training on gender-in-tech issues.  I also think that there may be some ethical issues when a third party (us) designs a network to influence the social dynamics of a crisis-affected population without the express consent and partnership of the agency that has primary authority over that facility.  The social and human reality on the ground is really the responsibility of the aid organizations running that facility, and of the crisis-affected population themselves.  Getting it wrong can be damaging and erode trust in the response community.

A special area for vulnerable people at a refugee intake facility. Chios island, Greece. How do we ensure the most vulnerable have access too?

That said, I think we can at least bring the conversation to the table.  These are the questions I’m starting to ask myself in the early stages of network design…

  1. Who are the people that need to be connected?
  2. What social or cultural differences are there between people when it comes to access to endpoints?  Do some ethnic or gender groups tend to have more devices than others?
  3. What gender differences are there in the facility (or is it well integrated)?
  4. What languages predominate?  Consent and informational messages should be in languages the population understands.
  5. Are there special populations or especially vulnerable people who could greatly benefit from access but might otherwise be overlooked?

These are questions we should answer in partnership with the people who are running the refugee camp, or the evacuation center – the people who can best tell us about the human element and articulate priorities.
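One way to keep these questions from getting lost in the rush of a deployment is to treat them like any other part of the site survey.  Here is a minimal sketch of what that could look like; the field names and example values are hypothetical, not a standard form.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SiteAssessment:
    """Hypothetical pre-deployment survey record for a camp or facility.

    Captures the social questions alongside the usual site-survey data,
    so they get answered before any access points are placed.
    """
    facility_name: str
    populations_to_connect: List[str]            # who needs to be connected?
    device_access_disparities: str               # e.g. gender/ethnic gaps in phone ownership
    gender_segregated_areas: bool                # is the facility segregated by gender?
    predominant_languages: List[str]             # languages for consent/informational messages
    vulnerable_groups_needing_access: List[str]  # groups that could otherwise be overlooked
    reviewed_with_camp_management: bool = False  # answered in partnership with the camp managers?

# Example usage -- the values are illustrative only.
assessment = SiteAssessment(
    facility_name="Example intake facility",
    populations_to_connect=["families", "unaccompanied minors", "aid workers"],
    device_access_disparities="phone ownership heavily skewed toward men",
    gender_segregated_areas=True,
    predominant_languages=["Arabic", "Farsi", "English"],
    vulnerable_groups_needing_access=["unaccompanied minors", "people with disabilities"],
    reviewed_with_camp_management=True,
)
print(assessment)
```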

The digital divide is a real issue across many societies.  But in a crisis, the digital divide can complicate who gets access to information and the ability to communicate.  In my opinion, if communications is aid, we owe it to disaster-affected populations to make sure that as much as possible, everybody has a chance to realize the benefit of information and communication that the Internet brings without bias or favor to any group.  Every individual life has worth and dignity, and as network engineers who are delivering humanitarian aid with our connectivity, our responsibility is to make sure that is reflected in our technical designs and implementation.

 


Basic Information Security for Digital Humanitarians

A key part of any modern disaster or crisis response is that of the volunteer technical community (VTC).  VTCs have emerged in the last few years to enable the collection, analysis, fusion and dissemination of maps, imagery, social media and other products.  These products are in turn used by responding humanitarian organizations and governments on the ground to better inform their emergency response operations.  VTCs are largely volunteer-driven, and rely upon the goodwill and energy of individuals collaborating and coordinating their crowdsourced efforts.

Because of the inherently critical nature of the work of these VTCs, attacks against individual volunteers or entire VTC communities have the potential to degrade or disrupt critical emergency response activities during a crisis.  Complicating the picture is the fact that most crowdsourced volunteers operate on an ad-hoc basis, often using their personal computers and technology, without any requirements or enforcement of good security practices.

Of course, it’s not reasonable to expect enterprise information security protections within a crowdsourced volunteer community – but neither should security be left entirely unaddressed.

In order to better mitigate the risks, I suggest VTCs adopt the following basic principles and consider a core set of information security controls:

Basic security principles for VTCs…

  1.  Assume you are a target.  Organizers and leaders of volunteer technical communities should begin with the underlying assumption that their organization and volunteers will be subject to attack exactly when it is most inconvenient for their mission.  With that assumption, they can start to consider policies and practices to mitigate that risk.
  2. “Do no harm.”  All VTCs have the ethical duty to “do no harm” – that requires consideration of unintended consequences of their activities, and especially of the use or misuse of the information and data that they gather and analyze.  Consider the misuse of the VTC’s technology in the context of other digital responders, emergency workers on the ground, and the victims of the crisis.  “Do no harm” also compels VTCs to appropriately secure and manage the information and technical resources they use.
  3. Security postures may be dynamic.  While an organization may adopt a basic security stance, certain types of crises may require additional security measures because of the types of possible threat actors (organized crime, government-sponsored attacks), the nature of the crisis (natural disaster vs. conflict situation), or the types of data that are to be handled (personally identifying information, healthcare records).  The leadership of the VTC must be able to re-evaluate basic security assumptions and adjust posture as needed.

 

Security practices for VTCs…

  1.  Vet your volunteers.  Have a process for on-boarding/credentialing volunteers into the community – especially during times of crisis when spontaneous volunteers are more likely to emerge.  This need not be a “background check” in the traditional sense, but even having two existing and trusted members vouch for a would-be new member may be sufficient.
  2. Know your data.  What sort of data are you working with?  Is some of that data particularly sensitive?  What about the products from the VTC?  Is it for the public, or does the product need to be kept confidentially for the use of specific organizations?  Consider the development of an information classification and handling policy.
  3. Patch your systems and applications.  All individual members of a VTC should have the responsibility for ensuring appropriate and current anti-virus, software patches, etc. are on their devices.  VTCs should consider establishing minimum technical criteria based on security for participation (e.g. volunteers who still run Windows XP in 2017 may be excluded from working in the community due to inherent security risks).  Consider requiring volunteers to demonstrate their current patch levels against established standards set by the VTC.
  4. Communicate and collaborate securely.  VTCs should organize and collaborate using tools and applications that are currently supported (not end-of-life), and that include security and audit (e.g. support applications that use SSL/TLS or other best-practice encryption and avoid legacy applications that send traffic unencrypted across the Internet such as plain email, telnet, FTP or Internet Relay Chat [IRC]).  Applications or tools that are homegrown (including hackathon apps), or not regularly updated, or otherwise unsupportable should be avoided.
  5. Consider the cloud.  Because of the highly distributed nature of VTCs, it may be more advantageous to centrally store and manipulate data in a trusted location in the cloud, instead of having individuals manipulate and store data on their personal devices.
  6. Enforce good credential and password practices.  Applications that support VTC operations should be configured to enforce strong password properties.  Individual volunteers should not re-use passwords used for VTC activities on other sites or applications (professional or personal).
  7. Have an incident response process.  All VTCs should establish at a bare minimum a basic incident response process that designates whom to alert in the case of a suspected security incident, and roles and responsibilities for dealing with that incident.
  8. Know how to manage and revoke access.  Access to applications, tools and data should follow the principle of least access.  Individuals should only be given the access necessary to perform their tasks.  Administrative or privileged access should be conferred only on a trusted subset of individuals.  Administrators should know how to revoke access when individuals leave or become inactive in the VTC.  Periodically re-assess access to ensure that only current, trusted members of the community have access to the data, and that individuals who have left the community or haven’t been active in it no longer have access (a minimal sketch of such a review follows this list).
  9. Use two-factor authentication on everything. Wherever possible, members of VTCs should use two-factor authentication for VTC applications as well as other common social media and applications (e.g., Facebook, Gmail).  Remember that users may use their personal email or social media accounts while supporting the mission of the VTC – two-factor authentication should be encouraged across the board.
  10. Educate your volunteers.  Volunteers coming into a VTC should be educated about their information security responsibilities and expectations.  All members of the VTC should know where to access any relevant security policies and procedures, as well as when and how to activate an incident response process.  Training materials on social engineering, phishing awareness and other common attack methods are freely available online and should be used.
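To make item 8 concrete, here is a minimal sketch of a periodic access review.  The account data is made up; in practice it would come from the collaboration tool’s admin export and the VTC’s own membership roster.

```python
from datetime import date, timedelta

# Hypothetical data: accounts in a VTC collaboration tool, plus the current
# roster of active, vouched-for members.
accounts = [
    {"user": "alice", "last_active": date(2017, 3, 1),  "admin": True},
    {"user": "bob",   "last_active": date(2016, 6, 15), "admin": False},
    {"user": "carol", "last_active": date(2017, 2, 20), "admin": False},
]
active_members = {"alice", "carol"}

INACTIVITY_LIMIT = timedelta(days=90)   # illustrative policy choice
today = date(2017, 3, 15)

def access_review(accounts, active_members, today, limit):
    """Return accounts that should be revoked or re-confirmed, with reasons."""
    flagged = []
    for acct in accounts:
        reasons = []
        if acct["user"] not in active_members:
            reasons.append("no longer on the member roster")
        if today - acct["last_active"] > limit:
            reasons.append("inactive beyond the limit")
        if reasons:
            flagged.append((acct["user"], acct["admin"], reasons))
    return flagged

for user, is_admin, reasons in access_review(accounts, active_members, today, INACTIVITY_LIMIT):
    priority = "HIGH (admin account)" if is_admin else "normal"
    print(f"Review access for {user} [{priority}]: {', '.join(reasons)}")
```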

 

Remember that good security is never a one-time activity.  VTCs should work on instilling a “culture of security” in the work that they do, so that security controls and processes are just incorporated into the day-to-day work of the community.  Security management in VTCs should be seen as an ongoing, cyclical activity of identifying risks, mitigating hazards, responding to incidents, and then incorporating lessons learned into the organization so that the security posture of the VTC continues to evolve over time.

As the humanitarian and public safety communities grow increasingly comfortable with the use of VTCs, it will become increasingly important for those VTCs to maintain the trust that their supported agencies and the public have placed in them.  Breaches of confidentiality, integrity or availability of VTC data and resources may have dire consequences – up to and including physical harm – for people on the ground who are already inherently vulnerable due to the overarching crisis or emergency they find themselves in.

VTCs may be limited in funding and highly distributed – both of which complicate the security challenge for these organizations.  However, many things can be done at low or no cost to reduce the attack surface that VTCs present.  The time for VTCs to incorporate good security practices is before the next disaster or emergency strikes, as it will obviously become much harder to introduce new policies and procedures during a time of crisis.  By incorporating security into the day-to-day operations of the VTC, it further ensures that security doesn’t become an afterthought when the emergency eventually does strike.  If volunteers are trained to follow secure processes beforehand (and trust that they can do their work while still maintaining security), they are much less likely to abandon them when faced with a stressful emergency situation.

How should the crowdsourced community tackle information security?  Share your thoughts in the comments below!

Humanitarian Information Security and the Obligation to Protect

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”  – Article 12, Universal Declaration of Human Rights

Humanitarian organizations – NGOs, governments and organs of the United Nations system – are increasingly challenged to adopt new Information and Communications Technologies (ICT) to execute the core humanitarian mission. Traditional ways of gathering and disseminating data and information – paper forms, offline spreadsheets and other manual methods of collection – struggle to keep up with the tremendous workload the humanitarian community is under.  As such, many of these organizations are moving towards digitization – that is, “taking manual or offline business processes and converting them to online, networked, computer-supported processes.”

Along with this transformation in the use of technology and information come new risks.  As these organizations successfully move to new ways of doing business, they become dependent upon the integrity of the technology underpinning those systems.  It is therefore natural that the humanitarian community start to consider the information security risks inherent in their ICT dependencies.  This is no different from the security challenges any enterprise business has to face.

Beyond the inherent need for information security that exists in any “digitized” organization, I argue that the humanitarian community has a special Obligation to Protect which compels strong information security, data protection and privacy responses based on the core mission of humanitarian organizations and the Humanitarian Principles as established in United Nations General Assembly Resolutions 46/182 and 58/114.

Humanitarianism in the Age of Cyber-Warfare

Humanitarian responses to modern disasters, conflicts, and other crises are incredibly technology dependent.  From Ebola to Haiti to Syria, crisis responders and aid workers rely upon computers, tablets, smartphones, and all manner of technologies to get the job done.  With increasing use of emerging technologies such as cloud-based solutions, UAVs, and data analytics, humanitarians are better able to get aid to where it is needed in more efficient, cost-effective ways.  Emerging applications also complicate the kinds of information that are being entrusted to the community: “Humanitarians are also collecting entirely new types of information, such as bank accounts and financial data for cash programming, and biometrics, such as fingerprints and iris scans, in Kenya, South Sudan, Malawi and elsewhere.”

These essential digital dependencies can also prove to be a tempting target for organized crime, hackers, or sophisticated combatants.  The increasingly large amount of sensitive data being collected by the humanitarian community often outstrips the ability of organizations to effectively identify and mitigate infosec, data protection and privacy risks.  In my unofficial survey of 30 major international humanitarian NGOs, only about four or five had dedicated information security headcount, and even in those organizations the ability to influence the security of data being used by country offices or out in the field was extremely limited.

High Vulnerability Meets Low Capacity

The humanitarian community therefore finds itself in the unenviable place of having a high degree of vulnerability to security threats, but a low capacity to address those threats. Unlike businesses or governments, where various security standards exist and can be audited against (such as ISO 27001:2013, PCI DSS or NIST 800-53), there are no existing standards for humanitarian information security. The three standards I just referred to are widely used, but lack a grounding in the humanitarian principles that are essential to the organizations in our community.

The need for security won’t be driven by beneficiaries, however.  Imagine a hypothetical scenario of a refugee family fleeing a war or disaster and arriving at a camp run by an international aid organization.  At the entrance to the camp is an aid worker with a laptop who is requiring all arrivals to register into a database.  No refugee will respond to this request by demanding assurance as to whether the data will be encrypted or protected against theft.  In this transaction, the refugee lacks agency to advocate for appropriate information security controls — if one wants a place for their family to sleep tonight and eat, one will most likely hand over whatever information is being requested.

Contrast this to my rights as a consumer in the United States or Europe – if Amazon (for example) were to lose my credit card or other sensitive PII to a hacker, there are laws that define what responsibilities fall to Amazon, and what remedies are available to me.  None of this currently exists in the humanitarian space.

Protecting the Vulnerable is What We Do

Humanitarian organizations long ago defined and adopted the humanitarian principles as universal values:

  •  Humanity: Human suffering must be addressed wherever it is found. The purpose of humanitarian action is to protect life and health and ensure respect for human beings.
  • Neutrality: Humanitarian actors must not take sides in hostilities or engage in controversies of a political, racial, religious or ideological nature.
  • Impartiality: Humanitarian action must be carried out on the basis of need alone, giving priority to the most urgent cases of distress and making no distinctions on the basis of nationality, race, gender, religious belief, class or political opinions.
  • Independence: Humanitarian action must be autonomous from the political, economic, military or other objectives that any actor may hold with regard to areas where humanitarian action is being implemented.

 

The Obligation to Protect is consistent with these principles:  Organizations that have the mission of addressing human suffering and protecting vulnerable people from further harm in the physical space – assuring basic physical security along with food, shelter, medical care and other essential human needs – also have an equivalent duty to protect people from digital harm, whether it’s identity theft, financial fraud, or physical harm resulting from the loss of confidentiality, integrity or availability of humanitarian ICT systems.

First, Do No Harm

The precautionary principle of “do no harm” should be the underlying touchpoint of all humanitarian ICT efforts.  It requires technologists and humanitarians alike to consider the risks of all technology use and to work to minimize such risks to aid workers, donors and most importantly beneficiary populations.  In short, information security is essential to the humanitarian ICT mission because compromise puts the core mission and rationale of humanitarian action at risk.


Conclusion: Information Security is Inherent in the Mission

Humanitarian action is increasingly dependent upon ICT.  In the absence of legislation and standards within the community, humanitarian organizations must recognize the Obligation to Protect as it applies to information security, data protection and privacy as an essential part of the humanitarian mission. All humanitarian actors – whether they work for a humanitarian agency, are crowd-sourced volunteers on the Internet, or come from the private sector – must be educated on the Obligation to Protect and how all parties must ensure appropriate and secure use of ICT and datasets.

Aid workers and beneficiary populations are often among the most vulnerable people on earth — they exist in crisis, with little or no ability to identify and minimize risk on their own.  The goal of good security should be to minimize the risk that they will be further victimized by electronic malfeasance.

What do you think is the best way to drive information security into the humanitarian community?  Let me know in the comments below!

Further reading:

“Applying Humanitarian Principles to Current Uses of Information Communication Technologies: Gaps in Doctrine and Challenges to Practice,” Harvard Humanitarian Initiative, July 2015

“Humanitarianism In The Age of Cyber-Warfare: Towards the Principled and Secure Use of Information in Humanitarian Emergencies,” UN OCHA, October 2014

The ETC 2020 vision requires smarter humanitarian networks.

The United Nations recently set out the ETC 2020 strategy, calling for a remarkable change in how humanitarian communications will happen in future disasters and other humanitarian crises.  The ETC 2020 strategy does something I’ve previously advocated for: a focus on delivering communications not just to the humanitarian responder community, but directly to the population affected by a crisis.  (Disclosure note: I was one of the contributors to ETC 2020.)  In our modern world, even remote populations depend on connectivity across a number of different media to be informed and engaged in their own humanitarian response.  The term you’ll hear now within the UN humanitarian system is “Connecting with Communities,” or CwC.  CwC is a relatively broad term that refers to the focus on delivering communications to affected populations through all means, from community broadcast radio all the way to wireless networks that disaster victims can use.

As emergency communicators, the scale and scope of the kind of communications we will have to deliver must change to meet this new expanded role and mandate.  Simply put, the kind of networking that the ICT response community has typically deployed in the past will not easily scale to the much greater demands of supporting affected populations.  We aren’t just providing communications to a few dozen to a few hundred humanitarian responders in an emergency.  We’re now talking about providing connectivity to thousands to hundreds of thousands of affected individuals, while still retaining the ability to support humanitarian operations too.

The kinds of networks we will need to design and deploy will have to change.

The End of “Dumb Pipes”

Traditional humanitarian networks have relied upon what I am terming “dumb pipes” – relatively simple networks that could be deployed quickly and paired up with a VSAT or other form of backhaul to provide service.  While such networks (sometimes built with consumer-grade gear) would work, they also lacked the cybersecurity, network optimization and network management capabilities essential for communications at scale.

Once deployed, these networks are often left in place for an extended period of time throughout the incident, and provide an undifferentiated level of network access to a small number of humanitarian responders at a scene.  But since these networks are often unmanaged, the overall health of the network is usually left unexamined – a technical issue is only investigated when a user complains that she isn’t able to get to the Internet.  Malware-compromised hosts, or power users who know what BitTorrent is, are free to suck up a disproportionate amount of network bandwidth and displace legitimate mission traffic – because hey, aren’t all networks in disaster zones slow?  *insert eyeroll here*

Defining the Next Generation of Humanitarian Networking

Syrian refugees in Greece using a high-density next generation network designed for security, quality and manageability. (Cisco photo)

So given that the dumb pipe method doesn’t scale, doesn’t protect and doesn’t assure network quality, we have to start defining the capabilities of the next generation of humanitarian networking (hereafter NGN) that will ultimately realize the ETC 2020 vision.  What does that look like?

Advanced Cybersecurity: Legacy networks may (at best) have a layer 3/4 stateful firewall between the users of the network and the Internet.  But this is woefully inadequate for the threat environment that we face in 2016 and into the future.  Humanitarian organizations that have the duty to support and protect affected populations must realize that that very same duty extends to the electronic realm as well — especially if they are providing humanitarian connectivity to that same group of people!

The cyberwarfare elements of current conflicts in Ukraine and Syria are well documented – and the targets of these campaigns often include humanitarian aid workers and vulnerable civilians.  This intersection between information security and military conflict will only increase as the toolsets become more accessible and the attack surface of potential targets grows.  This is because of the inexorable growth of smartphones and other devices even among the most vulnerable populations on the planet.

Networks supporting humanitarian workers and CwC in an NGN must embed the kind of cryptography and zero-day intrusion prevention that identifies and mitigates a range of threats: from traditional phishing attacks to sophisticated malware created by a nation-state or combatant. This protection must be in the network, because the ability to enforce policy on any of the devices joining the network will be minimal to nonexistent.

This kind of capability must exist on every network regardless of whether the emergency is a conflict situation or natural disaster.  After all, we saw nation-state malware attacks against humanitarians on the ground in Nepal after the 2015 earthquake!

Content Management:  Are there types of Internet content that the humanitarian community should exclude from a CwC network?  Some types of content are an easy choice to limit due to the amount of bandwidth they consume:  YouTube and other streaming content, for example – especially on low-bandwidth links!  Other types of content are also an easy choice based on security: blocking untrusted Android app stores where malware is known to reside, or confirmed spam sources.  But what about blocking content based on other concerns?  Adult content, access to militant websites, or sites related to human trafficking… the possible list is endless.  NGNs will have to be able to block inappropriate content at a technical level, but this also requires the humanitarian technical community to determine a policy around content management.  Blocking content is easy in any modern network, but often the human policy decision of what to block and when to block it is going to be the tougher problem.
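To illustrate why the policy question is the hard part, here is a sketch of what the technical side can boil down to: a lookup table.  The categories and decisions shown are invented for the example; deciding what actually belongs in that table is the community’s job, not the network engineer’s.

```python
# Hypothetical content policy for a CwC network. The categories and the
# decisions here are purely illustrative -- the real decisions belong to
# the humanitarian community and the responsible agencies.
CONTENT_POLICY = {
    "streaming-video":        "throttle",   # a bandwidth decision, not a moral one
    "untrusted-app-stores":   "block",      # known malware distribution channels
    "confirmed-spam-sources": "block",
    "adult-content":          "policy-decision-required",
    "default":                "allow",
}

def decide(category: str) -> str:
    """Return the action for a content category, falling back to the default."""
    return CONTENT_POLICY.get(category, CONTENT_POLICY["default"])

for cat in ["streaming-video", "untrusted-app-stores", "news", "adult-content"]:
    print(f"{cat}: {decide(cat)}")
```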

Traffic Shaping and Quality of Service:  Sometimes there is a need to allow traffic, but ensure that it doesn’t take up the entirety of available bandwidth.  We often see this in disaster ICT soon after the emergency, when responders first show up on the ground with their laptops and smartphones:  as soon as these devices (which may have been offline for some time previously) get onto the network, they start pulling down operating system updates, anti-virus databases, fixes to Adobe Flash (aren’t there always?), and iPhones and Androids start syncing everything to the cloud.  (Android is particularly chatty, by the way!)  All of this traffic is legitimate: we want people to have security fixes! But all of this activity can displace essential mission traffic, creating a de facto denial of service against the humanitarian network.

In an NGN, we should have the ability to provide a high quality of service to delay-sensitive traffic, such as VoIP – SIP, Skype, WhatsApp and other forms of real-time voice and video traffic.  Additionally, we should be able to ensure that things like security updates and operating system downloads get a much lower level of priority to ensure that mission traffic is not displaced from the network.
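As a rough sketch of what such a policy might look like on paper: the traffic classes below are illustrative, and the DSCP values shown are the commonly used markings for expedited (voice) and scavenger (bulk) traffic.

```python
# Illustrative QoS classification for a humanitarian NGN. DSCP 46 (EF) is
# the conventional marking for real-time voice; DSCP 8 (CS1) is commonly
# used as a "scavenger" / lowest-priority class for bulk updates.
QOS_CLASSES = [
    {"name": "realtime-voice-video", "match": ["SIP", "Skype", "WhatsApp calls"],            "dscp": 46, "priority": "highest"},
    {"name": "interactive",          "match": ["web", "messaging", "email"],                 "dscp": 0,  "priority": "normal"},
    {"name": "bulk-updates",         "match": ["OS updates", "AV signatures", "cloud sync"], "dscp": 8,  "priority": "lowest"},
]

for cls in QOS_CLASSES:
    print(f"{cls['name']:22} dscp={cls['dscp']:2}  priority={cls['priority']:8} matches={cls['match']}")
```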

Rate Limiting:  Just like networks supporting a sporting event or other mass gathering, humanitarian NGNs will have to implement per-client rate limiting to ensure that the largest number of devices are able to get effective service from the network.  Enforcing a rate cap of 100-200 kbps per mobile client device is enough to ensure that the user gets a network that “feels” fast for most applications and supports voice and video calling over Skype or WhatsApp, yet prevents any one device from hogging the entire network pipe. Devices that truly require higher data rates can be segmented onto another VLAN or SSID that is itself deconflicted from lower-priority traffic.
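On most wireless controllers this is a single per-client or per-SSID setting, but conceptually it is just a token bucket: allow short bursts, enforce an average rate.  A minimal sketch of the idea, not any vendor’s implementation:

```python
import time

class TokenBucket:
    """Minimal per-client rate limiter: allow bursts, enforce an average rate."""
    def __init__(self, rate_kbps: float, burst_kbits: float):
        self.rate = rate_kbps          # sustained rate in kilobits per second
        self.capacity = burst_kbits    # maximum burst size in kilobits
        self.tokens = burst_kbits
        self.last = time.monotonic()

    def allow(self, packet_kbits: float) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_kbits <= self.tokens:
            self.tokens -= packet_kbits
            return True
        return False   # over the cap: drop or queue the packet

# Example: cap a hypothetical client at 150 kbps with a 300 kbit burst allowance.
client = TokenBucket(rate_kbps=150, burst_kbits=300)
print(client.allow(100))   # True -- within the burst
print(client.allow(100))   # True
print(client.allow(200))   # False -- bucket exhausted until it refills
```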

Network Management:  As stated earlier, many existing networks supporting humanitarian response are essentially unmanaged once they are set up and in service.  Logs are not monitored, and errors that may make the network less effective are not dealt with until human users start to complain.  In essence, humanitarian ICT teams have used their own users as the “tripwire” that something is wrong with the network.

In an NGN this should never be the case. Even for reasons of good customer service, if nothing else, the network should be instrumented and actively monitored for problems.  Minor problems must be identified and resolved before they cause major headaches for users. The IT team supporting the network should be the first to know if there is an outage, not the last.
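Even a small team can automate the basics.  Here is a minimal sketch of the idea; the addresses are hypothetical, and a real deployment would feed results into proper monitoring and alerting rather than printing them.

```python
import socket
import time

# Hypothetical checks for a field network: can we reach the local gateway,
# and can we reach the Internet? In practice this would run on a schedule
# and raise alerts instead of printing.
CHECKS = [
    ("local gateway reachable", ("192.168.1.1", 80)),
    ("internet reachable",      ("8.8.8.8", 53)),
]

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_checks():
    for name, (host, port) in CHECKS:
        ok = tcp_check(host, port)
        status = "OK" if ok else "FAIL -- investigate before users complain"
        print(f"{time.strftime('%H:%M:%S')}  {name}: {status}")

if __name__ == "__main__":
    run_checks()   # in the field this would be scheduled, e.g. every minute
```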

This Can Happen Today.  It Already Has.

The BT Emergency Response Team on Tanna Island, Vanuatu after Cyclone Pam showing that advanced networks are feasible in the most remote humanitarian emergencies. (BT photo)

 

These five essential capabilities must form the bedrock of humanitarian networks now and into the future.  On our team, we have already made this a part of how we operate.  Starting with Cyclone Pam in Vanuatu, through the response to the Nepal earthquake, and most recently our work in Europe on the Syrian refugee crisis, we have deployed networks with all five of these capabilities designed in from the very beginning.  All of this without busting the budgets or overtaxing the limited number of network engineers available to humanitarian aid organizations, while still being able to scale to an extremely large number of users.

The ambition and scope of ETC 2020 will require an evolution in how we provide Internet connectivity in the midst of disaster and humanitarian crisis.  The core capabilities I’ve described are common capabilities in solutions from many vendors at practical price points. With the right engineering, there is no reason why the CwC vision of providing connectivity to massive numbers of affected people shouldn’t be a reality.

Mini-Review: Microsoft Surface 3 + Windows 10 for Humanitarian Response

So I’ve been testing the Microsoft Surface 3 for a few weeks now as a potential deployment laptop for NGOs and humanitarian responders who may find themselves headed into emergencies, ICT4D or other similar scenarios.  The need for computing in remote areas is increasing, and the right solution for the field is sometimes different from what is needed at home or in the enterprise.  With the release of Windows 10 the other day, I’ve also gotten to take the new OS for a spin.

First off, people should use what works best for them:  If a chromebook is what does it for you, by all means, use it.  I’m mostly an Apple person, so my personal deployment gear has traditionally been a MacBook Pro, an iPad and my iPhone.  Operating System wars are lame, and I have no desire to refight them here.  However, if you want an idea of whether this particular solution might be useful, read on…

What do we need for a deployment laptop?  The ability to run all of the network management software (ssh, visio, etc.), general business productivity apps, low power draw (in case you need to run on solar or wind power), portability (every pound you carry into the field matters), and low cost (a $3000 Toughbook isn’t usually necessary in most situations, even in a disaster, and you’re often better off buying three $1000 laptops instead for the same spend.)

The Surface 3

So when the Microsoft Surface was first announced, I was really excited about the form factor – it wasn’t quite a laptop, and it wasn’t quite a tablet.  It was something in-between.  We even got a few first generation Surface Pros to work with.  Unfortunately, for what we’d need for a deployment laptop, they consumed too much power and had too little battery life.  I’ve kept my eye on the platform as it continued to evolve… the ARM-based Surface was killed and replaced by an x86-based Atom CPU system.  It’s this latest generation of Surface that I got from Microsoft as a loaner.  Yes, this one is a real PC (the ARM-based systems couldn’t run the majority of PC software because they had a different CPU architecture).

Why not the Surface Pro?  Because for a deployment laptop, the substantial difference in power consumption matters.  The Atom CPU in the Surface 3 won’t win any Photoshop rendering battles, but it has proven to be adequate for the kind of general business apps we might want to use.  My loaner from Microsoft came pre-loaded with the Pro version of Windows 8.1 (and later Win10) – so I could test performance with BitLocker and other security elements turned on.

Things worked as they should.  Also, the keyboard for these computers is now backlit, which is great for when you’re working in low-light situations.  Since it has standard USB, various accessories (like a router’s console cable, for example) worked fine.  Anyway, if you want to think of it as a “real computer” that happens to be the same size as an older iPad, that’s about right. Compared to other laptops, this one sips power (my new MacBook Pro 13 Retina also sips power, but the Surface is a few watts less).  The Microsoft Type keyboard is a must with this system (even though it’s sold separately) – and I find the keyboard enjoyable to use, unlike many other “thin and light” PC keyboards.

Windows 10

I don’t know anybody who *loves* Windows 8 – if you’re using Windows, it’s probably Win7.  People who use Windows 8 use it, but the UI hasn’t been great… it tried to be both a tablet OS and a PC OS and, in the end, didn’t get the hybrid thing quite right.  Windows 10 doesn’t require you to relearn anything if you’ve already been using Windows – a huge usability improvement.  The thing has been out for only a day or so, and I’m sure there are road bumps with bugs and such that will come up – there are some concerns with user privacy and the sharing of Wi-Fi keys, and I’m waiting for responses to some of those issues that are being debated as I write this – but Windows 10 is a big improvement.  I’m actually enjoying using the OS, and find everything pretty intuitive.

My biggest concern from a Windows OS standpoint in the field has been security – there are a number of security improvements in the OS that I think justify the upgrade compared to previous versions of Windows.  One of the things that is intriguing is the torrent-like peer-to-peer distribution of patches and updates (“Windows Update Delivery Optimization”), which promises to be really useful in disasters and humanitarian crises.  The idea is that a single computer might pull down patches, and then other computers would be able to pull those patches from the first computer, rather than going over the WAN individually.  This may be a huge bandwidth saver for disaster response and ICT4D when you’re on the thin sippy straw of a VSAT link or other low-bandwidth connectivity.

On the Surface 3 (and other hybrid / convertible systems), the system is smart enough to switch seamlessly between a PC desktop mode and a more touch-friendly tablet mode when the keyboard is detached.

It was hard to recommend the Surface hardware with Windows 8 – the hardware was great, but the OS and a very confusing user experience just were not doing it for me.  The combination of Windows 10 and the Surface 3 rectifies most of the major shortcomings in the earlier versions of both and makes a compelling solution for people who prefer (or are required) to use Windows in humanitarian response or ICT4D.  If you currently take three separate devices into the field, you might very well be able to take fewer devices with you – which means less weight, less power needed, and one less thing to break.

So is it right for you?  If you’re using Windows in the field, there’s no real argument to avoid Windows 10 in the way there were plenty of reasons to avoid Windows 8.  You might want to wait for some of the inevitable bugs and issues to get worked out first, but really, yeah, go for it.  And I think the Surface 3 is a great PC in a very tidy form factor that makes it great for HA/DR deployment scenarios.

The Case for Evidence-Based Disaster Technology Response

“The plural of ‘anecdote’ is not ‘data’.”

A disaster happens somewhere in the world. Disaster technologists and digital humanitarians mobilize. Maps are crowdsourced, satellite dishes and networks are deployed, UAVs are flown, apps are hacked in marathon sessions, social media mined. All these things happen incredibly rapidly because of the army of passionate individuals and organizations that fundamentally believe that the rapid flow of information helps to save lives and speed recovery to affected communities.

Eventually, as things move from response to recovery, most of these individuals and organizations will document what they did and lessons to be learned for the next time around. Pictures will be shared on social media and in press releases. The community of technology humanitarians will prepare for the next event… knowing there is always a next event.

But something fundamental is missing from our community: how do we actually know we made a difference in the outcome of the response?

Many of us (including myself!) have plenty of anecdotes and war stories about how something we did made a difference, but anecdotes are just that. Let’s face it… I make my living working in this space. Of course I want to believe that my work, my time away from my family, all those long hours in hardship conditions actually make a difference. Organizations want to tell the most positive story about their efforts, to ensure future funding, volunteers, and missions.

Loading a helicopter with tech in Vanuatu (Cyclone Pam, 2015) – let’s use evidence to know how to best use our capabilities and skill.  This photo is not necessarily evidence of an effective response, as much as I might hope it is. 🙂

It’s called “confirmation bias.” As humans, we naturally seek out information that supports our beliefs and positions, and tend to avoid information that would call those beliefs and positions into question – or discredit them entirely!  The photo above is an example… it’s certainly visually compelling, and I’d like to think that good things came from being out in that rain that day, but we need data to really measure effectiveness!

Our community of emergency techies is relatively small, and the whole multi-discipline sphere is relatively young. By way of analogy, this reminds me of where emergency medicine was in its first few decades. During the dawn of the Emergency Medical Services (EMS) field, most treatments were based on assumptions (“backboard all suspected C-spine patients”). Eventually, evidence-based medicine came into the field, with the result that we don’t backboard anywhere near as often as we used to, we know how incredibly important chest compressions are in CPR, we know that high-flow oxygen can actually harm a heart attack patient, and we know that therapeutic hypothermia for a cardiac arrest patient actually makes sense.

So, while anecdotes make for great memories, they aren’t terribly useful for the long-term evolution of our field. Just as in EMS and many other emergency disciplines, a collection of stories cannot be considered evidence of efficacy because of confirmation bias. One positive story about disaster technology response and one negative story about disaster technology response simply cancel each other out.

We need to move beyond anecdotes and move towards true evidence-based disaster technology response.

So what does that look like?

What technology interventions actually help the situation on the ground? How will we know this?

What technology interventions don’t help, and should be avoided? How will we know this?

While the field demands innovation and exploration, how do we ensure that we aren’t just enamored with our own technology at the expense of other meaningful activities that support disaster response?

We have to get beyond the feels. Many digital volunteers are moved by crisis to “do something.” But the goal of “doing something” should not be the social reinforcement that a volunteer or disaster worker gets on their Facebook page. It should not be to demonstrate some whiz-bang technology in crisis. After all, a fancy map that isn’t used by anybody to make a decision is merely a graphical representation of disaster trivia. A UAV that gathers video that isn’t used by anybody to make a decision is just a pretty YouTube video. In both cases, the intention and the tech are great, but the outcome is lousy.

If these other things happen along the way, so much the better.  But our first duty should be to focus on the outcome… verifiable, evidence-based outcomes.

I think there’s a strong role for academia in this effort.

We should look to the field of crisis informatics to help inform practitioners about these questions of efficacy. They can help us develop the metrics, engage peer review, and help move us to that next level that our increasing workload and set of expectations demands. We in turn can influence the research questions and work to connect the results of academia to the work being done by practitioners.

By driving towards evidence-based operations, we are helping to mature our work and minimize wasted effort and cost. But most importantly, we enshrine the beneficiary of our work at the absolute center of the digital humanitarian universe. The disaster victims and responders who need information to make smarter, better decisions for themselves deserve nothing less.

The move towards evidence-based response will take all of us … let me know your thoughts in the comments.

A Cybersecurity Wake Up Call for Emergency Managers

Since the 9/11 attacks, the United States government has been increasingly concerned about the implications of cybersecurity on a technologically dependent society. While cybersecurity has been a significant priority for policymakers and the national security organizations of the United States, the intersection of cybersecurity and traditional emergency management remains less well-known, with relatively few agencies considering the cyber implications of their emergency management roles. This lack of awareness and preparedness leaves the public safety community at risk. It is safe to say that those risks are being exploited as we speak.

Let’s start from here: Nearly every emergency response in the United States today is dependent on computers, networks and other technologies that are vulnerable to cyberattack. I’m not just talking about the spectacular “cyber Pearl Harbor” scenarios that politicians often mention (as I write this, the news is filled with articles about how the head of the US National Security Agency announced that China has the capacity to disable the power grid of the United States via a cyber attack). I’m talking about the ordinary kinds of emergencies that are responded to thousands of times across the country every day. The house fires, the car accidents, the medical and law enforcement calls that are the bread-and-butter of most public safety agencies are all dependent upon technology, from the PSAP that answers the 9-1-1 calls, to the CAD and WebEOC systems, to the individual laptops, smartphones, and tablets carried by the responders in the field. As more networked technology is adapted for public safety use (such as public safety LTE, or “FirstNet”), the potential footprint of vulnerability will continue to grow – which is why those risks must be mitigated to the extent possible.

At this point, an emergency manager might say “Hey, isn’t this really an IT problem? My agency has an IT department. I don’t know anything about how the Internet or hackers work.”

The answer is absolutely “No. It is your problem too!”

If the effects of cyberattacks were strictly limited to the electronic world, one might safely leave the problem in the hands of (a hopefully capable) technical staff. But cyberattacks in the right circumstances have the capability to affect the physical world – the systems that public safety and critical infrastructure alike depend on. And that, emergency manager, is where it becomes your problem.

Emergency managers must shake themselves of the notion that cyberattacks against their communities must look like something out of a Tom Clancy novel. Here are two recent examples of cyberattacks against public safety. Both of these examples are rather ordinary, and could happen on any similar incident anywhere in the country.

Example One: Carlton Complex Fire, Washington State.

The Carlton Complex fire earlier this year was the largest fire in Washington state history, burning an area roughly five times the size of the city of Seattle. Because of the remote location of the fire, communications and connectivity for first responders were ongoing challenges for incident managers (in fact, this was widely reported in the media at the time). In coordination with the FEMA TechCorps program, and at the request of the State of Washington ESF-2, our team responded to the south zone of the Carlton Complex fire, where we enabled mission critical communications for the Type I IMT managing that portion of the fire, as well as providing an open “morale” Wi-Fi network for approximately 750 firefighters and support staff.

While many teams have the ability to deploy Wi-Fi and other connectivity on the fireground, the security of the users and those hundreds of devices is typically not considered. In short, there’s no such thing as a Chief Security Officer (CSO) at a brush fire! When we deployed to the Carlton Complex fire, we brought along a number of intrusion detection, network management, and application-level technologies with us. In other words, we weren’t just providing a “dumb pipe” to the Internet, but one that assumed the likelihood of a cyber threat.

Early on, we started to detect incoming attacks against users on both the open and mission networks.  We do not believe that the first responders were under a targeted attack in this case; rather, our users were subject to the sorts of attacks that all users of the public Internet face on a constant basis – everything from port scans to compromised webpages that were taking advantage of vulnerabilities in users’ web browsers.

This next part is important: just because we could detect an attack or set a response posture, we could not unilaterally act to block traffic. We were in support of the IMT, and it would not have been appropriate for us to arbitrarily move to a more aggressive response posture. Luckily for us, the Communications Unit Leader (COML) and Communications Unit Technician (COMT) staff were aware of network security risks. When we went to them with data showing that the network and certain users were being attacked, we pointed out that we could move the network into a more protective posture (intrusion prevention instead of intrusion detection), and the change was approved.

According to our data collection, in the four days that we were active on the fire, we were able to detect and block 30+ high risk or sophisticated attacks against users on our network, as well as defeating any number of more minor risks. Keeping the network operational and protected enabled the incident managers and firefighters to keep focus on where it was needed: on responding to the fire itself.
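For teams that want to build the same kind of data-driven case for a posture change, the core of it is not exotic: tally what the sensors are reporting and put the numbers in front of the decision-maker.  A minimal sketch with made-up alert data:

```python
from collections import Counter

# Made-up IDS alert feed for illustration; real data would come from the
# intrusion detection system's logs or API.
alerts = [
    {"severity": "high",   "signature": "browser exploit kit landing page"},
    {"severity": "medium", "signature": "port scan"},
    {"severity": "high",   "signature": "known malware callback"},
    {"severity": "low",    "signature": "policy: peer-to-peer traffic"},
    {"severity": "high",   "signature": "browser exploit kit landing page"},
]

HIGH_SEVERITY_THRESHOLD = 3   # illustrative trigger for recommending IPS mode

def summarize(alerts):
    """Count alerts by severity and decide whether to recommend a posture change."""
    by_severity = Counter(a["severity"] for a in alerts)
    recommend_ips = by_severity.get("high", 0) >= HIGH_SEVERITY_THRESHOLD
    return by_severity, recommend_ips

by_severity, recommend_ips = summarize(alerts)
print("Alert counts:", dict(by_severity))
if recommend_ips:
    print("Recommend to the COML: move from detection (IDS) to prevention (IPS).")
else:
    print("Continue monitoring in detection mode.")
```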

Example 2: Ferguson, Missouri.

Recent unrest in Ferguson, Missouri has made news around the world, and particularly challenged emergency managers and other public safety agencies responding to the situation. The Missouri State Police deployed a mobile command vehicle to support law enforcement operations in Ferguson, but according to media reports, the command vehicle itself became the target of an unspecified cyberattack:

“However, Thurston said that ‘Big Blue’ also became a target during the protests as the MCCV experienced its first real cyber threat. Thurston said that people were attempting to try and spoof communications from the vehicle at several times during the protests.

Thurston warned attendees at the conference from the law enforcement community that they need to place an increased emphasis on securing their communications. ‘Your communications are targets greater than you ever thought,’ he said. ‘There are groups trying to intercept your communications.’”

The Missouri State Police has not released any additional details about this attack, and it’s not clear from the context whether they were talking about spoofed radio traffic, or spoofed data traffic. Regardless, the incident is an example where a public safety resource was specifically targeted because of its role or mission.

A wakeup call.

These two recent incidents should serve as a wake-up call to the emergency management and public safety community that cybersecurity must move beyond being just an IT responsibility to being part of good all-hazards planning. While there have been some large-scale cyberattacks (Stuxnet and Shamoon being two good examples), most cyberattacks against public safety are smaller in scale, and may go largely undetected unless the attack causes significant disruption. Emergency managers need to consider their own vulnerabilities, as well as how to respond to potential disruptions. Here are some ways to get started…

  1. Identify the information security team within your own agency or organization. Go get coffee with them. Find out how they support your mission and how they respond to threats that may target your emergency management infrastructure. The goal here is to engage with your IT and security organizations not just as the “make the printer work” people, but as partners who are committed to your mission success.
  2. Consider the security of field networks and resources. In my experience, hastily formed networks created to support specific emergencies are often not monitored for security issues, nor is the responsibility of incident response identified. This must change. Cybersecurity in field emergency response must actively be managed.
  3. Work with your partners to develop and test realistic cybersecurity scenarios against your responders, your EOC, or your critical dependencies.
  4. Consider who will own security policy and policy enforcement in situations where you have multiple agencies, and multiple devices showing up in mutual aid scenarios that all need to collaborate on the same networks and applications.

Cybersecurity isn’t just the job of the IT department or the private sector. Emergency Managers should work with their technical partners to identify cyber risks, mitigate them where possible, and plan and train for incident response. Disruptions to critical systems may complicate response or put responders or the public in danger. A failure of public safety to secure its own systems and plan for broader responses can leave people already affected by an emergency situation vulnerable to further victimization.

Haiyan: If #CommIsAid, We’ve Got a Scope Problem.

“…the effective, timely deployment of telecommunication resources and that rapid, efficient, accurate and truthful information flows are essential to reducing loss of life, human suffering and damage to property and the environment caused by disasters” – The Tampere Convention

As I write this, I’ve just returned from the Philippines, where I and many of my colleagues were deployed to assist humanitarian relief operations by providing various forms of communications.  Our team and many others from around the world were working in pretty harsh post-disaster environments, setting up the communications that are required for any effective, modern disaster relief operation.  The Haitian earthquake of 2010 was the first really good example of a “data driven” (rather than “radio driven”) communications response.  And the need for more Internet connectivity in the aftermath of a disaster has only gotten stronger since then.   After a major disaster, it’s not uncommon to see numerous VSAT satellite dishes spring up like mushrooms after a good rain.

And like I said, the work that these teams have completed is impressive.  Recently, the homepage of the UN’s Emergency Telecommunications Cluster had a story about how 2500+ humanitarian workers have used the emergency Wifi.  Most of the major disaster relief operations have some form of Internet connectivity – even if that’s something as simple as a BGAN.

One of the GATR sat dishes we deployed after Super Typhoon Haiyan.

And for all that, we have a huge problem.

In the 2013 UN OCHA report Humanitarianism in the Network Age, it is argued (correctly, I think) that information is a basic need for those affected by a crisis, not just for the crisis responders themselves:

information as a basic need requires a reassessment of what information is for. Instead of seeing it primarily as a tool for agencies to decide how to help people, it must be understood as a product, or service, to help affected communities determine their own priorities.

The CDAC Network has been promoting this notion through the Twitter hashtag #commisaid.

The vision of emergency communications is succinctly summed up in that one hashtag: communications is a vital resource for the survivors of catastrophe, right alongside security, shelter, hygiene, medical care and food.

If you accept this notion, the international technology community has a huge problem.

If you look at the United Nations Cluster System, something becomes clear very quickly: the vast majority of the clusters have a mandate to deliver goods and services ultimately to those who have been affected by the crisis. The Nutrition cluster, for example, is there to make sure people don’t starve: it delivers food at scale to thousands, perhaps millions of people, depending on the nature of the event. But the Emergency Telecommunications Cluster is different. Its mission is limited to “the humanitarian community” (which is UN-codespeak for the UN agencies such as WFP, UNICEF, etc., and certain BINGOs, or “big NGOs,” such as the ICRC):

To provide timely, predictable and effective Information Communications Technology services to support humanitarian community in carrying out their work efficiently, effectively and safely.

Excluded from this mandate is the idea of providing communications services to countless desperate people who would love to communicate with their families, with the resources they need to restore their lives, and with the outside world more generally to tell their story. We (the collective “we”) are failing to adapt to the new reality. If we truly believe that communications is a form of aid, we are utterly failing to deliver that aid! We are content to deploy a handful of satellite dishes and a few Wifi access points, and to provide connectivity to aid workers. But what about *everyone else* outside of the perimeter? What about the public? Who will speak for them? Who will restore to them the ability to speak for themselves?

This is a very tricky question.

Telecommunications services – Internet access, POTS telephony, mobile phone telephony – are all regulated, and they vary greatly in quality and penetration from country to country. The regulatory hurdles are significant, and let’s not even start talking about logistics! Humanitarian actors have traditionally been reluctant to engage with the private sector (which invariably owns and controls the pre-crisis telecommunications infrastructure). The lines get very blurry when you start mixing organizations that have a profit motive with organizations that have a humanitarian objective. These are big challenges, but they don’t overshadow the fundamental truth:

A wrecked mobile phone tower in Guiuan, Eastern Samar after Super Typhoon Haiyan. Is this a private sector problem? A humanitarian problem? Both!

The best way to deliver humanitarian communications to a community at scale is to help restore the pre-crisis communications infrastructure.

Throughout our work in Eastern Samar and Leyte after Super Typhoon Haiyan, we saw the loss of the mobile phone infrastructure firsthand. Millions of people were walking around with mobile phones that had no service. They already had end-user devices they were familiar with (even in the poorest or most rural parts of the world, seemingly everyone has a mobile, right?) – so in one sense, accomplishing the mission of delivering humanitarian communications is much easier: you don’t have to touch millions of individuals in order to help them. You “just” need to get their phones back online.

A Tale of Two Refugee Camps

Two years ago, our team at work participated in a project with NetHope, Inveneo, WFP and Microsoft to create DadaabNet, connecting the humanitarian aid agencies operating at the world’s largest refugee camp in Dadaab, Kenya. You can get a sense of the camp from these photos. Prior to the deployment of this network, the state of communications at the camp was extremely limited: VHF radios, a small number of satphones, and no mobile phone coverage. After the project concluded, the humanitarian aid agencies were clearly in a better state of communications, but the vast majority of the 500,000 people in that camp are still without communications.

Contrast this with a much newer camp, the Zaatari refugee camp in northern Jordan. When the camp first opened in 2012, it started taking in refugees from the civil war in nearby Syria at a rapid rate. But Zaatari is served by at least two GSM/3G mobile phone carriers! Not only are the humanitarian staff able to communicate with traditional handsets, but many of the refugees are able to use their mobiles as well. The infrastructure could be better (one could always have more coverage and more bandwidth, natch!), but the situation there is vastly different from what we found in Dadaab.

Service providers make all the difference.

A Call To Action

The humanitarian community, and the ETC in particular, needs to establish better working relationships between the humanitarian actors it represents and the organizations representing the carriers (such as the GSMA). Service providers must become stakeholders in humanitarian response – ISPs, traditional telephony, satellite providers, mobile phone carriers. Engagement protocols need to be developed… maybe that next UN flight would be better off carrying a phone switch rather than a thousand kilos of Wifi access points! It might help more people, with better service.

In addition to the carriers, equipment vendors need to become stakeholders in humanitarian response. This might be easier than one might think, since several of them (Ericsson and Cisco, for example) are already heavily involved in humanitarian response; it’s logical to start with those who are already part of the conversation. How do we ensure that the equipment needed to create communications at scale gets released, imported and transported to the locations where it can best be put to use?

There are challenges in all of this: competitive challenges (can you help one carrier without helping another?), the line between humanitarian action and disruption of the competitive marketplace, and regulations of all sorts. But we must begin to transform how we do technology response to consider the bigger picture.

If communications is truly a form of aid, we have no other choice.

Digital Volunteers and Disaster Stress

“You can’t patch a wounded soul with a Band-Aid.” – Michael Connelly

In 2005, I came back from a disaster deployment during Hurricane Katrina changed in a lot of ways. For six months after I left New Orleans and the Gulf Coast, I was inexplicably angry; there was an invisible, ever-present chip on my shoulder. I almost punched a man in Santa Cruz merely because I overheard him talking about the hurricane in a restaurant. My long-term relationship withered, and then died in the middle of grief and anger that I could hardly put words to. As a disaster responder for many years, I was prepared and knew the signs of disaster stress, and yet there I was flailing around. I couldn’t talk to anybody, because hardly anybody I knew in California was directly impacted by the storm. They’d all seen it on TV. I had been in the middle of an abandoned, flooded New Orleans. How could they possibly relate?

Still, with time, things got better, even though certain sights and sounds could (and sometimes still do) trigger very strong emotions indeed.

In 2010, I spent four months doing remote support for our response efforts in Haiti after the M7.0 earthquake there. Every waking hour I had from January through April was devoted, in some way or form, to Haiti. Even though I never left Northern California, I was intimately tied to the ground in Port-au-Prince. I was on the phone, on the Internet, configuring and shipping equipment and technology needed by the rescue and recovery efforts. My colleagues were actually on the ground there, and I was able to sleep in my own bed every night of that response. And yet there I was, feeling disaster stress, feeling overwhelmed, utterly tired, engaging in risky off-hours behavior that was quite unlike me.

These are two disasters I’ve been involved with. One had me on the ground; for the other, I was remote to the emergency. In both cases, I had various signs and symptoms of disaster stress. But my experience working on the Haiti relief effort reinforced the idea that even virtual disaster responders can experience disaster stress that isn’t virtual at all. [For the record, I’m going to avoid the term Post-Traumatic Stress Disorder in this article; it is a clinical diagnosis, and one I am not qualified to confer on myself or anyone else for that matter – remember, folks, you don’t self-diagnose!]

Society’s understanding of traumatic stress has improved over the years, helped by the focus on veterans returning from recent conflicts around the world, survivors of various forms of abuse, and survivors of natural and man-made catastrophes. The American Red Cross, for example, reports that one third of disaster volunteers report some signs of disaster stress, even if the individual’s personal volunteer experience was positive. Less well understood are the effects of these events on people who aren’t even there. Our hyperconnected world allows us to experience trauma regardless of time and distance. For many years after the September 11 attacks, American news channels would re-air their real-time coverage of that day on the anniversary of the attacks. One could choose to relive that day with all the horror “as it happened.” More recently, when the 2013 Boston Marathon bombing happened, it was nearly impossible to escape the CCTV footage of the explosions and the immediate aftermath of the mass casualty situation, shown over and over again. Recent studies have shown that this “saturation coverage” can increase disaster stress in those who otherwise have no immediate connection to the disaster:

“In the aftermath of the September 11, 2001, attacks, four studies demonstrated associations between viewing television coverage of the attacks and (self-reported) posttraumatic stress symptomatology. Ahern et al. found a 2.3 times greater odds of probable posttraumatic stress disorder in the group that watched television most.” source

I believe that most current “digital disaster volunteers” got their start in this nascent domain in or around the time of the 2010 Haiti earthquake. Many operate in ad-hoc communities and relatively small non-profits. Few came into this area with pre-existing training in crisis operations. Indeed, one of the great benefits is that technology has empowered literally anyone with an Internet connection to get involved. Anyone can get involved in good faith, but few have the support resources needed to identify disaster stress, and there may be few avenues for getting help. Since most digital volunteers operate by themselves in coffee shops, homes or other venues, the real possibility exists of these individuals slipping through the cracks, their inner trauma unacknowledged and silent. Contrast this with the crew of a fire engine, who may have three or more colleagues immediately available who “get it” and with whom one could feel more comfortable discussing what just happened. Aside from the negative impact on the individual and their quality of life, it doesn’t serve our purposes as people who want to encourage digital volunteering in disasters: when we silently lose previously enthusiastic volunteers, we fail to build the cadre of trained, experienced people we need for the next emergency, and capacity building becomes that much harder.

As a community, we need to come together and help create mechanisms that enable individuals to identify disaster-related stress in themselves and in others they come into contact with (such as their fellow volunteers!). We also need to identify actions that individuals can take to support themselves, their colleagues and others who may experience disaster-related stress. Lastly, we need to make it okay to get help. Many digital disaster volunteers may minimize their symptoms and feelings because, after all, they weren’t on the ground. By changing the conversation and saying that ALL disaster responders are potentially susceptible to disaster stress, we help reduce the stigma that many may feel around disaster stress and mental health more generally.

I’d invite readers to check out the training presentation I put together for our team at work a few years ago on the subject – other organizations are free to leverage that content if they find it useful.

Most importantly, whether we are in person or remote, we must never lose sight of the fact that while we choose to be technology responders, disasters are first and foremost human emergencies, not technological ones.  By remaining compassionate and understanding with ourselves and others we work with and respond to, we remain connected to our humanity in the midst of crisis and chaos.

Transparency, disasters, and the private sector.

“Truth never damages a cause that is just.” – Mohandas Gandhi

As astute readers of this blog may know, I work for a relatively large tech company ($VENDOR) with the day-to-day responsibility of preparing for, and helping, the public sector when a crisis happens, whether that’s a natural or man-made event. For free.

Think about that for a second. (I do, all the time.) That’s a pretty strange beast right there. A for-profit company (and anyone who has ever bought any of our stuff knows how for-profit we are, amirite?) that’s not just concerned with business resiliency (as all businesses should be), but actually goes into the middle of the disaster to leverage all that tech equipment and skill for the public good.

This is, by any research I’ve done, a relatively new beast – I couldn’t find any instances of it prior to 9/11 that weren’t motivated by a legal duty somewhere (the telcos, for example).

But getting to the point where we are with our program has required building and maintaining trust with our public-sector partners, and that requires a commitment to transparency; without it, I don’t think any private-sector effort in disaster will go far.

What the private sector wants

Large corporations involving themselves in disaster response and humanitarian relief are often trying to balance two competing motives.

First, their shareholders, employees and other stakeholders are looking to them (being the kind of businesses that care about their social responsibility) to DO SOMETHING.

Second, the public, and especially those in the affected communities, will tear you up if it even looks like you’re trying to make hay out of someone else’s tragedy. You can’t market in a disaster.

During Hurricane Sandy in 2012, one of the major search engine companies posted a message on Twitter that said “for every time you RT our <marketing message>, we will donate <amount of money> to the Red Cross,” and boy did they get flamed over that. You’re a business! Why don’t you just donate the cash to the Red Cross without trying to promote your search engine? This is a marketing message disguised as disaster relief.

To their credit, said search engine company apologized for their messaging and donated the cash to the Red Cross in its entirety.  

You can see how this is fraught with peril from a business perspective, right?  

What the public needs

The public (or public sector) often asks the private sector for three things: cash or in-kind donations, equipment, and skill. The greatest and rarest of these three is skill. But without trust, none of these things happens.

In the early days of our program, when we’d contact an EOC to offer ourselves up as a mutual aid resource, we would often get a conversation like this:

“Hi, this is Rakesh from $VENDOR and I want to…” (me)

“Hey, why are you calling me right now?  Don’t you know I’m working on a huge disaster?” (EOC staffer)

“Yeah but what I’m offering is…” (me)

“This is no time to be talking about selling me anything!” (EOC Staffer)

*click*

I don’t recommend this method. 


How do we get there?

If you’re in the private sector, the most important resource you have that affects your ability to assist in a crisis is trust.  Without it, all your motives are suspect.  Your reasons for acting are the most cynical ones.  You’re trying to make a buck.  Or greenwash your tarnished reputation … you get the idea.

Building trust requires transparency.  Transparency in your motives, your goals, and your limitations (there is, after all, a legitimate line between CSR and traditional business activity).

So that’s why we publish our activities in our public CSR reports every year, why we build extensive after-action reports that are shared within the company and with the public agencies we supported in a particular event, and why even our social media playbook is public.

We all know that the private sector has unique resources and talents that ought to be brought to bear during a disaster.  Building and maintaining that trust requires a commitment to transparency – it’s not just a way to operate, it’s the only way to operate.