The GDPR is a unique opportunity to get humanitarian data protection right.



If you work in data protection or privacy, you already know this:  the clock is ticking.

On 25 May 2018, the most significant data protection law of this young century takes effect in the European Union: the General Data Protection Regulation (GDPR), Regulation 2016/679.  Europe has long regarded privacy as a fundamental human right, and the EU Data Protection Directive (95/46/EC) has been held up as the gold standard in privacy protection for more than two decades.  For all of its benefits, the Directive was put forward in 1995, when the Internet was still nascent and had not yet assumed such a critical role in the lives of billions of people.  Many people connected to the Internet using SLIP or PPP on dial-up, and the dominant browser was Mosaic (soon to be displaced by Netscape Navigator).  Facebook, WhatsApp, Instagram, Netflix and Google were all years away.  Mobile devices were limited to something like the Newton or a laptop that could charitably be called a “luggable.” Cloud computing and machine learning at the scale we see today were still largely thought experiments.

The humanitarian community has not been immune to the radical changes in global society catalyzed by the Internet.  To scale up to meet the increasing need and complexity of humanitarian crises, many aid organizations have climbed aboard the “innovation” bandwagon.  Innovation, in this case, has often meant greater use of computing and data collection.

With humanitarian data security and privacy under greater scrutiny than ever before, the imminent arrival of the GDPR presents a significant challenge to NGOs and intergovernmental organizations that process data on European data subjects.  But it also presents a unique opportunity to do right by the most vulnerable people on the planet: people at risk from conflict, disaster or other humanitarian crisis.

The European Migration Crisis Was a Wake Up Call

Since the end of the Second World War and the rise of the modern humanitarian system, it has generally been true that aid agencies recruit and fundraise in the West, then deliver services in the Global South and other parts of the developing world. But the migration crisis affecting Europe in the last few years, driven primarily by the Syrian conflict and several other conflicts in North Africa, changed that dynamic.  For the first time in the age of modern computing, aid agencies were delivering services to a large population in Europe.

When I worked on the Syrian refugee crisis in 2015, we made the determination that the moment a refugee’s boat landed in the Greek islands or in Italy, their data had to be governed in accordance with the EU Data Protection Directive.  The people arriving in Europe were suddenly “EU data subjects,” entitled to certain rights and protections accordingly.  So we had, for the first time, a significant humanitarian crisis that was in scope of a strong set of data privacy requirements.  Our refugee networks at the time were designed with these findings in mind, and the fact that we built the largest known dedicated humanitarian network (more than 600,000 users) with this protection demonstrated that you could have scale, innovation, and privacy for an extremely vulnerable population.

The GDPR Protects Europeans, but Global Organizations Will Extend Its Protections

At first glance, it may not be readily apparent why the GDPR is relevant to humanitarian operations, the vast majority of which occur outside the scope of EU data protection law and the 28 EU member states.  But many global NGOs have GDPR exposure: even if their service delivery occurs in other parts of the world, far away from any relevant privacy laws, these organizations are often based in Europe, fundraise in Europe, and recruit staff from within Europe.  So they will have to come into compliance with GDPR requirements.

But with aid operations worth $17.9 billion USD (2012) that reached 73 million people around the world, these same organizations will have strong incentives to extend “privacy by default and by design” across geographies. It is often cheaper and more efficient to design one regime for security and privacy within an organization than to maintain a fractured landscape (and run the resultant risk of very significant fines: up to 4% of an organization’s global turnover in the most egregious cases).

Unlike a Fortune 100 global enterprise, where there may be a single CIO and a single IT organization with global standards, NGOs often have very fractured ICT infrastructure.  There may be some standards and unity in the “home office,” but field offices and field operations often run a hodgepodge of management and technical solutions, some ad hoc, and many existing outside of overall security and privacy governance.

Carrots and Sticks

There have been multiple calls of late for increased data governance and protection within the humanitarian community, especially as the risks of “irresponsible data” become more clear.  But most of these calls to action rest on the moral and ethical reasons that humanitarian organizations should adopt strong security and data protection postures.  The price to vulnerable people is just too high, they argue, and there is a clear humanitarian rationale for protecting those vulnerable people in the electronic space just as much as in the physical space.  And all of that is true: there is a clear humanitarian reason for taking data protection far more seriously than is often done.  That’s the carrot:  it’s the right thing to do, and it’s completely in line with the humanitarian principles established over the last hundred years or so.

The GDPR, on the other hand, compels data protection by granting significantly enhanced rights to data subjects and imposing those previously mentioned fines for violations.  (As an aside, it has become almost a cliché for any article about the GDPR to dwell on those fines, but they are big, and their coercive power should not be underestimated.)  Humanitarian organizations will soon ignore data protection at their peril.

A Culture of Data Protection

Humanitarian organizations have often overlooked security and data protection issues in pursuit of their core missions.  As I stated previously, the current grant-based funding model has created incentives to de-emphasize risk reduction.  Security and privacy issues are often considered “administrative overhead,” and NGOs have an incentive to minimize overhead and maximize the amount of each donated dollar that goes to programs and operations.  Donors, in turn, want to see their money going to hungry people, or shelter, or other humanitarian needs.  Both sides of the equation have had rational (if ultimately mistaken) reasons to de-emphasize data protection.

No more.  The GDPR, by its territorial scope and expansive scale, will require humanitarian organizations that may have been able to avoid tackling the thorny issues of data protection to finally confront them.  Is the GDPR the perfect solution to data protection?  Absolutely not.  But the key thing it does is require organizations to start building comprehensive security and privacy programs across all of their data processing activities.

And that’s a start…



Valuing Privacy in Humanitarian Response


Image: CC BY-SA 3.0 Nick Youngson

Humanitarian aid workers rely increasingly on data collection to accomplish their development and crisis response objectives.  The unending growth of mobile devices, the ubiquity of connectivity in even the most remote corners of the world, and the trend towards ‘digitization’ mean that aid agencies are handling an increasingly large number of datasets in order to deliver effective programs to beneficiaries and accountability to donors.  Given that in many cases we are collecting data on inherently vulnerable populations (refugees, disaster victims and so on), we must address the balance between data collection and privacy.

Security Isn’t Privacy

I’ve previously written about cybersecurity in humanitarian response and disaster relief,  so what’s new here?  Well, for starters – privacy is not the same as information security.  Infosec, with its traditional emphasis on confidentiality, integrity and availability, is mostly focused on preventing and responding to unauthorized access to ICT assets and datasets.  Privacy, on the other hand, deals with how an organization appropriately collects, protects and uses data specific to individual natural persons.

The two are interrelated (you cannot have good privacy without having good security) but differ in that the domain of privacy is focused around the concept of “personal information,” and since we are dealing with information about and pertaining to other human beings, there are special ethical, legal and policy concerns above and beyond just good information security practices.

Humanitarian actors must always be aware of the special responsibilities they have around privacy and data protection. They are often entrusted with sensitive data from especially vulnerable populations in crisis who may not have sufficient agency to make informed decisions about the use of their personal data.  Further, since this data is often collected in the context of conflict, crisis or disaster, its disclosure to unauthorized parties can have catastrophic consequences for beneficiary populations or for aid workers themselves, up to and including the compromise of their physical safety and security.

A Rights Based Approach

In the business world, concerns around privacy are largely driven by compliance.  Laws such as HIPAA, FCRA, and others (hey, the EU GDPR is right around the corner, don’t you know!) require organizations collecting personal information to handle and protect it in certain ways.  But in many circumstances, privacy laws and regulations may not directly apply to humanitarian work, chiefly because in many parts of the world where aid organizations deliver programs, privacy laws are either nonexistent or weak.  What is needed, rather, is a view of privacy grounded in existing humanitarian law, principles, and doctrine.

Image: The HHI Signal Code establishes a rights-based approach for humanitarian ICT protection.

The Harvard Humanitarian Initiative’s Signal Code takes a rights-based approach towards humanitarian security and privacy, stating that “all people have the right to agency over the collection, use and disclosure of their personally identifying information (PII).”  In fact, GovLab conducted a survey in 2016 of 17 different data protection regimes in the humanitarian space.  While there is some variation across the various codes of conduct and sets of recommendations, some basic principles should be followed by all humanitarian actors:

  1.  Privacy by Design – All data collection systems, whether sophisticated cloud-based systems or pen-and-paper systems, should be designed with privacy in mind from the ground up.  The Privacy Engineer’s Manifesto is a good introduction to Privacy by Design principles.
  2. Informed Consent – To the extent possible, data subjects (such as disaster victims or refugees fleeing conflict) should be given the opportunity to freely express informed consent to any data collection activities. The consent request should be presented in their native language and avoid jargon or technical terminology. Consent in the humanitarian space requires careful thought, as it can be argued that a person in the midst of a humanitarian crisis may not be able to give full consent: the very fact that they need humanitarian aid or protection may have a coercive effect on the data subject.  Simply put, many people in crisis who are looking for food, shelter or safety may well acquiesce to any requested data collection uncritically.
  3. Data collection minimization – Humanitarian organizations should minimize the data they collect to only that which is absolutely necessary.  Traditionally, many aid organizations have taken the opposite tack, collecting every possible data point about a person regardless of whether there’s a rationale behind that data collection.  The underlying assumption is that data is an asset, so of course more data is more better!
  4. Data Quality – Personal data collected should be relevant to the purpose for which it was collected, kept up-to-date and accurate.  Data subjects should have the ability to review, and if necessary, correct inaccurate data.
  5. Use Limitation – Personal data collected for one purpose should not be repurposed for other uses that the data subject didn’t consent to.  If necessary, updated consent should be obtained.
  6. Security – the personal data collected should be reasonably protected from unauthorized disclosure, modification, or destruction.  The data collected should not be used to harm the interests of the data subject.
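As a sketch of how the principles above might be enforced at the point of collection, consider the following hypothetical Python intake routine. It is illustrative only: the field names, the `IntakeRecord` structure, and the whitelist are assumptions, not any real agency's schema. It encodes data minimization (only whitelisted fields survive), use limitation (the purpose travels with the data), and informed consent (nothing is collected without it).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical whitelist: collect only fields with a stated programmatic need.
ALLOWED_FIELDS = {"family_size", "shelter_need", "preferred_language"}

@dataclass
class IntakeRecord:
    data: dict
    purpose: str                 # use limitation: why this data was collected
    consent_language: str        # informed consent: language it was given in
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def collect(raw: dict, purpose: str, consent_given: bool,
            consent_language: str) -> IntakeRecord:
    """Accept an intake form, enforcing minimization and consent."""
    if not consent_given:
        raise ValueError("no informed consent; nothing may be collected")
    # Data minimization: drop anything not on the whitelist.
    minimized = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    return IntakeRecord(minimized, purpose, consent_language)

record = collect(
    {"family_size": 4, "shelter_need": "tent", "religion": "..."},  # extra field
    purpose="shelter allocation",
    consent_given=True,
    consent_language="ar",
)
print(record.data)  # the non-whitelisted field is gone
```

The point of the sketch is that minimization and consent become properties of the system itself rather than of enumerator discipline, which is the essence of Privacy by Design.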

Irresponsible Data

We’ve seen examples of “irresponsible” data: PII of vulnerable Syrian refugees has been stolen, and more recently there is the very real potential that PII of Rohingya refugees could be used to further oppress and exploit a vulnerable population at risk of genocide. There are other examples, most of which are poorly documented and only whispered about in the hallways of conferences held in New York or Geneva.

Architects of humanitarian ICT systems need to consider the very real downsides of data collection, preferably before any data collection is undertaken.  Can data be bucketed, masked, encrypted, or simply not collected in the first place?  The key concept here is “intentionality”: any personal information collected and processed must have a reasonable justification, and must not be captured simply because it was easy to do so.  Any proposed collection of personal information should be subject to a Privacy Impact Assessment, and any risks surfaced should be addressed.
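To make the first two of those options concrete, here is a minimal sketch in Python. Bucketing coarsens a precise value (an exact age becomes an age band), and masking replaces a direct identifier with a keyed hash so records can still be linked without storing the name. The function names and the bucket width are hypothetical choices for illustration.

```python
import hashlib
import hmac

def bucket_age(age: int, width: int = 10) -> str:
    """Coarsen an exact age into a band, e.g. 34 -> '30-39'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def mask_identifier(name: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymization).
    The key must be stored separately from the dataset."""
    return hmac.new(key, name.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

key = b"example-secret-key"   # illustrative only; use a managed secret in practice
print(bucket_age(34))                      # -> '30-39'
print(mask_identifier("A. Example", key))  # stable pseudonym, not the name
```

Note that a keyed hash is deterministic, so the same person yields the same pseudonym across datasets; whether that linkability is a feature or a risk is exactly the kind of question a Privacy Impact Assessment should surface.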

Privacy Protects the Vulnerable – And Donors Can Help!

I am heartened to hear that more humanitarian aid organizations are focusing on privacy.  In many cases, these conversations are being driven by the imminent arrival of the GDPR in Europe, but even in places where GDPR doesn’t apply, the conversations are finally happening.  At the recent NetHope Global Summit in Vancouver, BC, privacy was brought to the forefront in ways rarely seen in the humanitarian sector.

It will take sustained advocacy to focus the sector on the very real risks that poor data management practices present to privacy.  But the donor community can help.  Donors should consider explicitly earmarking a portion of their grants to “risk reduction” activities around privacy and information security.  Since so much of the sector is grant funded, privacy won’t truly be a priority until the grant making process prioritizes this.

With humanitarians educated about the risks to privacy, technologists invested in privacy by design, and donors committed to funding privacy risk-reduction activities, the global humanitarian community can extend the value of “protection” from the physical sphere into the electronic one for the millions of people who depend on it.

Our goal should always be to keep people who are already at risk, and often already victimized, from being victimized all over again through fraud, electronic crime, or other threats to identity and agency.

On Emergencies, Wifi, Gender and Social Dynamics

Syrian refugees get online at a camp on Chios Island, Greece.  (NetHope photo)


“It’s no longer a luxury. This is serious. It’s really a social justice issue. It’s a 21st century civil rights issue.” – Cheptoo Kositany-Buckner on the Digital Divide

Once upon a time, there was no connectivity in disasters and humanitarian emergencies.  The Internet was not really regarded as an essential service in the midst of crisis.  Emergency workers communicated predominantly through push-to-talk radio.  Victims of disasters and emergencies might line up for blocks to use a payphone to call friends and family.


Fast forward to today.  As the CDAC Network and others have put it, “communication is aid.”  The ability to communicate and share information is seen as a vital humanitarian service on equal footing with food, shelter, medical care and other essential human needs.  The UN OCHA paper Humanitarianism in the Network Age, published in 2013, was a major milestone in winning recognition for this concept.  Today, in 2017, it is nearly impossible to attend any humanitarian technology conference and not hear this message as a core concern of the crisis technology community.

With this pivot, we as emergency technologists are no longer being asked just to connect the first responders, government workers, and NGO staff who comprise the professional response.  To be sure, I think this will always be part of our core mission, but the harder lift for us now is to connect the disaster- or crisis-affected population.  We aren’t talking about a few dozen or a few hundred aid workers anymore; we’re talking about true mass communication.  How big?  Our recent work in Europe has seen us provide connectivity to more than 600,000 unique devices as part of the Syrian refugee response.

OSI Layers 8 and 9

The responsibilities we have to face when connecting a mass population of vulnerable people are substantial, and I don’t think we (the entire community in the broadest sense) have really thought through what our ethical requirements are in this brave new world.  To be sure, I have spent several years advocating for security, data protection and privacy for affected populations – and I’ll probably write more than a few more blog articles on those topics in the months and years ahead.  But today I wanted to think about the next challenges on the radar.

When I was learning network engineering, one of the first things we had to learn was the OSI model; it’s part of every basic data communications class.  The seven layers describe different technical boundaries of a communication system, from the lowest, physical elements (layer 1) to the highest elements of an application (layer 7).  In learning the OSI model, you quickly discover the “unofficial” layers 8 and 9: money and people.  It’s the latter that I want to focus on now.

We have learned that connectivity is a social force in an emergency.  During Hurricane Sandy in the United States, people would migrate to seek wifi access and power to recharge their mobile phones.  In Europe, Syrian refugees wanted the Internet so badly that they’d literally stop a riot in the camp so our teams could work.  The ability to get online (or lack thereof) has social ramifications.  There is a human reality in play, here.

Except no network engineering class I’m aware of ever teaches that.  And I’ve taken plenty!  You can take all the CCIE bootcamps you want, learn the ins and outs of routing, switching, etc, but as network engineers, our view of the world is really driven by making the bits and bytes get to where they need to go.  Generally speaking we don’t care what those bits represent.

I would argue that when we are trying to connect a population in crisis, we need to care about those social dynamics, at least a little bit.  We need to care about what our networks mean for people on the ground.

The Gender Divide

What does this look like?  Take gender for example.

In the refugee camps in Europe, gender segregation is not uncommon: women and children in one part of the camp, men in another, and unaccompanied children or other especially vulnerable people in still another part where they can have greater security.  In many of these communities, there is a great disparity between men and women when it comes to smartphone access.  In one crisis we are looking at, for example, the ratio is eight men with phones to every one woman with a phone.

So as a network engineer in a refugee camp, if you decide to merely put connectivity into the most obvious places where you see people congregate, you will most likely connect the men disproportionately, because as one researcher recently told me, “Public spaces are male.”  It takes conscious thought and intention to make sure that everybody gets the opportunity to benefit from the connection and information that Internet access can bring.

At the same time, I’m not a sociologist or anthropologist.  I have exactly zero professional training on gender-in-tech issues.  I also think that there may be some ethical issues when a third party (us) designs a network to influence the social dynamics of a crisis affected population without the express consent and partnership of the agency who has primary authority of that facility.  The social and human reality on the ground is really the responsibility of the aid organizations running that facility, and the crisis affected population themselves.  Getting it wrong can be damaging and erode trust in the response community.

A special area for vulnerable people at a refugee intake facility. Chios island, Greece. How do we ensure the most vulnerable have access too?

That said, I think we can at least bring the conversation to the table.  These are the questions I’m starting to ask myself in the early stages of network design…

  1.  Who are the people that need to be connected?
  2. What social/cultural differences are there between people when it comes to access to endpoints?  Do some ethnic/gender groups tend to have more devices than others?
  3. What gender differences are there in the facility (or is it quite well integrated?)
  4. What languages predominate?  Consent and informational messages should be in languages the population understands.
  5. Are there special populations or especially vulnerable people who could greatly benefit from access who might be otherwise overlooked?

These are questions we should answer in partnership with the people who are running the refugee camp, or the evacuation center – the people who can best tell us about the human element and articulate priorities.

The digital divide is a real issue across many societies.  But in a crisis, the digital divide can determine who gets access to information and the ability to communicate.  In my opinion, if communication is aid, we owe it to disaster-affected populations to make sure that, as much as possible, everybody has the chance to realize the benefits of the information and communication the Internet brings, without bias or favor to any group.  Every individual life has worth and dignity, and as network engineers delivering humanitarian aid with our connectivity, our responsibility is to make sure that is reflected in our technical designs and implementations.


Basic Information Security for Digital Humanitarians

A key part of any modern disaster or crisis response is the volunteer technical community (VTC).  VTCs have emerged in the last few years to enable the collection, analysis, fusion and dissemination of maps, imagery, social media and other products.  These products are in turn used by responding humanitarian organizations and governments on the ground to better inform their emergency response operations.  VTCs are largely volunteer-driven, relying on the goodwill and energy of individuals collaborating and coordinating their crowdsourced efforts.

Because the work of these VTCs is inherently critical, attacks against individual volunteers or entire VTC communities can degrade or disrupt critical emergency response activities during a crisis.  Complicating the picture is the fact that most crowdsourced volunteers operate on an ad-hoc basis, often using their personal computers and technology, without any requirement or enforcement of good security practices.

Of course, it’s not reasonable to expect enterprise information security protections within a crowdsourced volunteer community – but neither should security be left entirely unaddressed.

To better mitigate these risks, I suggest VTCs adopt the following basic principles and consider certain information security controls:

Basic security principles for VTCs…

  1.  Assume you are a target.  Organizers and leaders of volunteer technical communities should begin with the underlying assumption that their organization and volunteers will be subject to attack exactly when it is most inconvenient for their mission.  With that assumption, they can start to consider policies and practices to mitigate that risk.
  2. “Do no harm.”  All VTCs have an ethical duty to “do no harm,” which requires consideration of the unintended consequences of their activities, especially the use or misuse of the information and data they gather and analyze.  Consider the misuse of the VTC’s technology in the context of other digital responders, emergency workers on the ground, and the victims of the crisis.  “Do no harm” also compels VTCs to appropriately secure and manage the information and technical resources they use.
  3. Security postures may be dynamic.  While an organization may adopt a basic security stance, certain types of crises may require additional security measures because of the types of possible threat actors (organized crime, government-sponsored attackers), the nature of the crisis (natural disaster vs. conflict situation), or the types of data to be handled (personally identifying information, healthcare records).  The leadership of the VTC must be able to re-evaluate basic security assumptions and adjust posture as needed.


Security practices for VTCs…

  1.  Vet your volunteers.  Have a process for on-boarding/credentialing volunteers into the community – especially during times of crisis when spontaneous volunteers are more likely to emerge.  This need not be a “background check” in the traditional sense, but even having two existing and trusted members vouch for a would-be new member may be sufficient.
  2. Know your data.  What sort of data are you working with?  Is some of that data particularly sensitive?  What about the products from the VTC?  Is it for the public, or does the product need to be kept confidentially for the use of specific organizations?  Consider the development of an information classification and handling policy.
  3. Patch your systems and applications.  All individual members of a VTC should have the responsibility for ensuring appropriate and current anti-virus, software patches, etc. are on their devices.  VTCs should consider establishing minimum technical criteria based on security for participation (e.g. volunteers who still run Windows XP in 2017 may be excluded from working in the community due to inherent security risks).  Consider requiring volunteers to demonstrate their current patch levels against established standards set by the VTC.
  4. Communicate and collaborate securely.  VTCs should organize and collaborate using tools and applications that are currently supported (not end-of-life), and that include security and audit (e.g. support applications that use SSL/TLS or other best-practice encryption and avoid legacy applications that send traffic unencrypted across the Internet such as plain email, telnet, FTP or Internet Relay Chat [IRC]).  Applications or tools that are homegrown (including hackathon apps), or not regularly updated, or otherwise unsupportable should be avoided.
  5. Consider the cloud.  Because of the highly distributed nature of VTCs, it may be more advantageous to centrally store and manipulate data in a trusted location in the cloud, instead of having individuals manipulate and store data on their personal devices.
  6. Enforce good credential and password practices.  Applications that support VTC operations should be configured to enforce strong password properties.  Individual volunteers should not re-use passwords used for VTC activities on other sites or applications (professional or personal).
  7. Have an incident response process.  All VTCs should establish at a bare minimum a basic incident response process that designates whom to alert in the case of a suspected security incident, and roles and responsibilities for dealing with that incident.
  8. Know how to manage and revoke access.  Access to applications, tools and data should follow the principle of least privilege: individuals should only be given the access necessary to perform their tasks, and administrative or privileged access should be conferred only on a trusted subset of individuals.  Administrators should know how to revoke access when individuals leave or become inactive in the VTC, and access should be periodically re-assessed to ensure that only current, trusted members of the community retain it.
  9. Use two-factor authentication on everything.  Wherever possible, members of VTCs should use two-factor authentication for VTC applications as well as for common social media and email services (e.g. Facebook, Gmail).  Remember that users may use their personal email or social media accounts while supporting the mission of the VTC, so two-factor authentication should be encouraged across the board.
  10. Educate your volunteers.  Volunteers coming into a VTC should be educated about their information security responsibilities and expectations.  All members of the VTC should know where to access any relevant security policies and procedures, as well as when and how to activate the incident response process.  Specific training on social engineering, phishing awareness and other common attack methods is freely available online and should be used.
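Practice 8 above lends itself to a simple periodic check. This hypothetical Python sketch flags members whose last activity exceeds a threshold so an administrator can review and revoke their access; the roster structure and the 90-day limit are illustrative assumptions, not any VTC's actual policy.

```python
from datetime import datetime, timedelta, timezone

INACTIVITY_LIMIT = timedelta(days=90)  # assumed policy threshold

def flag_for_revocation(members, now=None):
    """Return usernames whose last activity exceeds the inactivity limit.

    `members` is an iterable of (username, last_active_datetime) pairs.
    """
    now = now or datetime.now(timezone.utc)
    return sorted(
        user for user, last_active in members
        if now - last_active > INACTIVITY_LIMIT
    )

# Illustrative roster: one recently active member, one stale account.
now = datetime(2017, 11, 1, tzinfo=timezone.utc)
roster = [
    ("mapper_a", datetime(2017, 10, 20, tzinfo=timezone.utc)),  # active
    ("mapper_b", datetime(2017, 6, 1, tzinfo=timezone.utc)),    # stale
]
print(flag_for_revocation(roster, now))  # ['mapper_b']
```

Run against an export from whatever collaboration platform the VTC uses, a check like this turns the access review from a memory exercise into a routine.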


Remember that good security is never a one-time activity.  VTCs should work on instilling a “culture of security” in the work that they do, so that security controls and processes are just incorporated into the day-to-day work of the community.  Security management in VTCs should be seen as an ongoing, cyclical activity of identifying risks, mitigating hazards, responding to incidents, and then incorporating lessons learned into the organization so that the security posture of the VTC continues to evolve over time.

As the humanitarian and public safety communities grow increasingly comfortable with the use of VTCs, it will become increasingly important for those VTCs to maintain the trust that their supported agencies and the public have placed in them.  Breaches of the confidentiality, integrity or availability of VTC data and resources may have dire consequences, up to and including physical harm, for people on the ground who are already inherently vulnerable due to the overarching crisis or emergency they find themselves in.

VTCs may be limited in funding and highly distributed, both of which complicate the security challenge.  However, much can be done at low or no cost to reduce the attack surface that VTCs present.  The time for VTCs to incorporate good security practices is before the next disaster or emergency strikes; it will obviously be much harder to introduce new policies and procedures during a time of crisis.  Incorporating security into the day-to-day operations of the VTC further ensures that security doesn’t become an afterthought when the emergency eventually does strike.  If volunteers are trained to follow secure processes beforehand (and trust that they can do their work while still maintaining security), they are much less likely to abandon them when faced with a stressful emergency situation.

How should the crowdsourced community tackle information security?  Share your thoughts in the comments below!

Humanitarian Information Security and the Obligation to Protect

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”  – Article 12, Universal Declaration of Human Rights

Humanitarian organizations – NGOs, governments and organs of the United Nations system – are increasingly challenged to adopt new Information and Communications Technologies (ICT) to execute the core humanitarian mission. Traditional ways of gathering and disseminating data and information – paper forms, offline spreadsheets and other manual forms of collection – struggle to keep up with the tremendous workload the humanitarian community is under.  As such, many of these organizations are moving towards digitization – that is, “taking manual or offline business processes and converting them to online, networked, computer-supported processes.”

This transformation in the use of technology and information brings new risks.  As these organizations successfully move to new ways of doing business, they become dependent upon the integrity of the technology underpinning those systems.  It is therefore natural that the humanitarian community start to consider the information security risks inherent in their ICT dependencies.  This is no different from the security challenges any enterprise business has to face.

Beyond the inherent need for information security that exists in any “digitized” organization, I argue that the humanitarian community has a special Obligation to Protect which compels strong information security, data protection and privacy responses based on the core mission of humanitarian organizations and the Humanitarian Principles as established in United Nations General Assembly Resolutions 46/182 and 58/114.

Humanitarianism in the Age of Cyber-Warfare

Humanitarian responses to modern disasters, conflicts, and other crises are incredibly technology dependent.  From Ebola to Haiti to Syria, crisis responders and aid workers rely upon computers, tablets, smartphones, and all manner of technologies to get the job done.  With increasing use of emerging technologies such as cloud-based solutions, UAVs, and data analytics, humanitarians are better able to get the aid to where it is needed in more efficient, cost-effective ways.  Emerging applications also complicate the kinds of information that are being entrusted to the community: “Humanitarians are also collecting entirely new types of information, such as bank accounts and financial data for cash programming, and biometrics, such as fingerprints and iris scans, in Kenya, South Sudan, Malawi and elsewhere.”

These essential digital dependencies can also prove to be a tempting target for organized crime, hackers, or sophisticated combatants.  The increasingly large amount of sensitive data being collected by the humanitarian community often outstrips the ability of organizations to effectively identify and mitigate infosec, data protection and privacy risks.  In my unofficial survey of 30 major international humanitarian NGOs, only about four or five had dedicated information security headcount, and even in those organizations the ability to influence the security of data being used by country offices or out in the field was extremely limited.

High Vulnerability Meets Low Capacity

The humanitarian community therefore finds itself in the unenviable place of having a high degree of vulnerability to security threats, but a low capacity to address those threats. Unlike businesses or governments, where various security standards exist and can be audited against (such as ISO 27001:2013, PCI DSS or NIST 800-53), there are no existing standards for humanitarian information security. The three standards I just referred to are widely used, but they lack a grounding in the humanitarian principles that are essential to the organizations in our community.

The need for security won’t be driven by beneficiaries, however.  Imagine a hypothetical scenario of a refugee family fleeing a war or disaster and arriving at a camp run by an international aid organization.  At the entrance to the camp is an aid worker with a laptop, requiring all arrivals to register into a database.  No refugee will respond to this request by demanding assurances that the data will be encrypted or protected against theft.  In this transaction, the refugee lacks the agency to advocate for appropriate information security controls — if you want a place for your family to sleep and eat tonight, you will most likely hand over whatever information is being requested.

Contrast this to my rights as a consumer in the United States or Europe – if Amazon (for example) were to lose my credit card or other sensitive PII to a hacker, there are laws that define what responsibilities fall to Amazon, and what remedies are available to me.  None of this currently exists in the humanitarian space.

Protecting the Vulnerable is What We Do

Humanitarian organizations long ago defined and adopted the humanitarian principles as universal values:

  •  Humanity: Human suffering must be addressed wherever it is found. The purpose of humanitarian action is to protect life and health and ensure respect for human beings.
  • Neutrality: Humanitarian actors must not take sides in hostilities or engage in controversies of a political, racial, religious or ideological nature.
  • Impartiality: Humanitarian action must be carried out on the basis of need alone, giving priority to the most urgent cases of distress and making no distinctions on the basis of nationality, race, gender, religious belief, class or political opinions.
  • Independence: Humanitarian action must be autonomous from the political, economic, military or other objectives that any actor may hold with regard to areas where humanitarian action is being implemented.


The Obligation to Protect is consistent with these principles:  Organizations whose mission is to address human suffering and protect vulnerable people from further harm in the physical space – assuring basic physical security along with food, shelter, medical care and other essential human needs – have an equivalent duty to protect people from digital harm, whether it’s identity theft, financial fraud, or physical harm resulting from the loss of confidentiality, integrity or availability of humanitarian ICT systems.

First, Do No Harm

The precautionary principle of “do no harm” should be the touchstone of all humanitarian ICT efforts.  It requires technologists and humanitarians alike to consider the risks of all technology use and to work to minimize those risks to aid workers, donors and, most importantly, beneficiary populations.  In short, information security is essential to the humanitarian ICT mission because compromise puts the core mission and rationale of humanitarian action at risk.

Conclusion: Information Security is Inherent in the Mission

Humanitarian action is increasingly dependent upon ICT.  In the absence of legislation and standards within the community, humanitarian organizations must recognize the Obligation to Protect as it applies to information security, data protection and privacy as an essential part of the humanitarian mission. All humanitarian actors – whether they work for a humanitarian agency, volunteer as part of an online crowd, or come from the private sector – must be educated on the Obligation to Protect and how all parties must ensure appropriate and secure use of ICT and datasets.

Aid workers and beneficiary populations are often among the most vulnerable people on earth — they exist in crisis, with little or no ability to identify and minimize risk on their own.  The goal of good security should be to minimize the risk that they will be further victimized by electronic malfeasance.

What do you think is the best way to drive information security into the humanitarian community?  Let me know in the comments below!

Further reading:

“Applying Humanitarian Principles to Current Uses of Information Communication Technologies: Gaps in Doctrine and Challenges to Practice”, Harvard Humanitarian Initiative, July 2015

“Humanitarianism In The Age of Cyber-Warfare: Towards the Principled and Secure Use of Information in Humanitarian Emergencies”, UN OCHA, October 2014

The ETC 2020 vision requires smarter humanitarian networks.

The United Nations recently set out the ETC 2020 strategy, calling for a remarkable change in how humanitarian communications will happen in future disasters and other humanitarian crises.  The ETC 2020 strategy does something I’ve previously advocated for: it focuses on delivering communications not just to the humanitarian responder community, but directly to the population affected by a crisis.  (Disclosure note: I was one of the contributors to ETC 2020.)  In our modern world, even remote populations depend on connectivity via a number of different media to be informed and engaged in their own humanitarian response.  The term you’ll hear now within the UN humanitarian system is “Connecting with Communities,” or CwC.  CwC is a relatively broad term that refers to the focus on delivering communications to affected populations through all means: from community broadcast radio all the way to wireless networks that disaster victims can use directly.

As emergency communicators, the scale and scope of the kind of communications we will have to deliver must change to meet this new expanded role and mandate.  Simply put, the kind of networking that the ICT response community has typically deployed in the past will not easily scale to the much greater demands of supporting affected populations.  We aren’t just providing communications to a few dozen to a few hundred humanitarian responders in an emergency.  We’re now talking about providing connectivity to thousands to hundreds of thousands of affected individuals, while still retaining the ability to support humanitarian operations too.

The kinds of networks we will need to design and deploy will have to change.

The End of “Dumb Pipes”

Traditional humanitarian networks have relied upon what I am terming “dumb pipes” – relatively simple networks that could be quickly deployed and paired with a VSAT or other form of backhaul to provide service.  While such networks (sometimes built with consumer-grade gear) would work, they lacked the cybersecurity, network optimization and network management capabilities essential for communications at scale.

Once deployed, these networks are often left in place for an extended period of time throughout the incident, and provide an undifferentiated level of network access to a small number of humanitarian responders at a scene.  But since these networks are often unmanaged, the overall health of the network is usually left unexamined – a technical issue is only investigated when a user complains that she isn’t able to get to the Internet.  Malware-compromised hosts, or power users who know what BitTorrent is, are free to suck up a disproportionate amount of network bandwidth and displace legitimate mission traffic – because hey, aren’t all networks in disaster zones slow?  *insert eyeroll here*

Defining the Next Generation of Humanitarian Networking

Syrian refugees in Greece using a high-density next generation network designed for security, quality and manageability. (Cisco photo)

So given that the dumb pipe method doesn’t scale, doesn’t protect and doesn’t assure network quality, we have to start defining the capabilities of the next generation of humanitarian networking (hereafter NGN) that will ultimately realize the ETC 2020 vision.  What does that look like?

Advanced Cybersecurity: Legacy networks may (at best) have a layer 3/4 stateful firewall between the users of the network and the Internet.  But this is woefully inadequate for the threat environment that we face in 2016 and into the future.  Humanitarian organizations that have the duty to support and protect affected populations must realize that that very same duty extends to the electronic realm as well — especially if they are providing humanitarian connectivity to that same group of people!

The cyberwarfare elements of current conflicts in Ukraine and Syria are well documented – and the targets of these campaigns often include humanitarian aid workers and vulnerable civilians.  This intersection between information security and military conflict will only increase as the toolsets become more accessible and the attack surface of potential targets grows.  This is because of the inexorable growth of smartphones and other devices even among the most vulnerable populations on the planet.

Networks supporting humanitarian workers and CwC in an NGN must embed the kind of cryptography and zero-day intrusion prevention that identifies and mitigates advanced attacks against a range of threats: from traditional phishing attacks to sophisticated malware created by a nation-state or combatant. This protection must be in the network, because the ability to enforce policy on any of the devices joining the network will be minimal to nonexistent.

This kind of capability must exist on every network regardless of whether the emergency is a conflict situation or natural disaster.  After all, we saw nation-state malware attacks against humanitarians on the ground in Nepal after the 2015 earthquake!

Content Management:  Are there types of Internet content that the humanitarian community should exclude from a CwC network?  Some types of content are an easy choice to limit due to the amount of bandwidth they consume:  YouTube and other streaming content, for example – especially on low-bandwidth links!  Other types of content are also an easy choice based on security: blocking untrusted Android app stores where malware is known to reside, or confirmed spam sources.  But what about blocking content based on other concerns?  Adult content, access to militant websites, or sites related to human trafficking… the possible list is endless.  NGNs will have to be able to block inappropriate content at a technical level, but this also requires the humanitarian technical community to determine a policy around content management.  Blocking content is easy in any modern network, but often the human policy decision of what to block and when to block it is going to be the tougher problem.
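To make the technical side concrete, here is a minimal Python sketch of the category-based blocklist matching a content filter performs. The categories and domains are invented for illustration, and a real NGN would enforce this in a DNS resolver or web proxy rather than in application code:

```python
# Sketch of category-based domain blocking. The policy table below is
# entirely hypothetical -- the hard part is deciding what goes in it.
BLOCK_POLICY = {
    "bandwidth": {"youtube.com", "netflix.com"},            # streaming video
    "security": {"untrusted-apk-store.example", "spam-source.example"},
}

def is_blocked(domain, policy=BLOCK_POLICY):
    """Return (blocked, category), matching the domain and its parents."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "m.youtube.com", then "youtube.com", and so on.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        for category, domains in policy.items():
            if candidate in domains:
                return True, category
    return False, None

print(is_blocked("m.youtube.com"))  # (True, 'bandwidth')
print(is_blocked("who.int"))        # (False, None)
```

The point of the sketch is that the mechanism is trivial; the policy table is where the community debate described above actually lives.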

Traffic Shaping and Quality of Service:  Sometimes there is a need to allow traffic, but ensure that it doesn’t take up the entirety of available bandwidth.  We often see this in disaster ICT soon after the emergency, when responders first show up on the ground with their laptops and smartphones: as soon as these devices (which may have been offline for some time) get onto the network, they start pulling down operating system updates, anti-virus databases and fixes to Adobe Flash (aren’t there always?), and iPhones and Androids start syncing everything to the cloud.  (Android is particularly chatty, by the way!)  All of this traffic is legitimate – we want people to have security fixes! – but all of this activity can displace essential mission traffic, creating a de facto denial of service against the humanitarian network.

In an NGN, we should have the ability to provide a high quality of service to delay-sensitive traffic, such as VoIP – SIP, Skype, WhatsApp and other forms of realtime voice and video traffic.  Additionally, we should be able to ensure that things like security updates and operating system downloads get a much lower level of priority to ensure that mission traffic is not displaced from the network.
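As a rough illustration of what priority queueing does (with invented traffic classes; real equipment classifies packets, typically by DSCP marking, and enforces the queueing in the forwarding path), consider this Python sketch:

```python
import heapq

# Illustrative traffic classes: lower number = transmitted first.
PRIORITY = {"voip": 0, "mission": 1, "general": 2, "os-updates": 3}

def transmit_order(arrivals):
    """Given (traffic_class, description) tuples in arrival order,
    return the order a strict-priority scheduler would send them."""
    heap = [(PRIORITY[cls], seq, cls, desc)
            for seq, (cls, desc) in enumerate(arrivals)]
    heapq.heapify(heap)
    return [(cls, desc) for _, _, cls, desc in
            (heapq.heappop(heap) for _ in range(len(heap)))]

arrivals = [("os-updates", "windows-patch"), ("voip", "sip-audio-frame"),
            ("general", "web-browsing"), ("mission", "sitrep-upload")]
print(transmit_order(arrivals))
# The VoIP frame jumps the queue; the OS update goes out last.
```

Even though the OS update arrived first, the delay-sensitive voice frame is sent ahead of everything else, which is exactly the behavior we want on a congested field network.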

Rate Limiting:  Just like networks supporting a sporting event or other mass gathering, humanitarian NGNs will have to implement per-client rate limiting to ensure that the largest number of devices are able to get effective service from the network.  Enforcing a rate cap of 100-200 kbps per mobile client device is enough to ensure that the user gets a network that “feels” fast for most applications and supports voice and video calling over Skype or WhatsApp, yet prevents any one device from hogging the entire network pipe. Devices that truly require higher data rates can be segmented onto another VLAN or SSID that is itself deconflicted from lower-priority traffic.
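The mechanism typically used for such a cap is a token bucket, applied per client by the access point or controller. Here is a simplified Python sketch of the logic for a 200 kbps cap with an assumed 100 kbit burst allowance (real gear implements this in the datapath, not in software like this):

```python
# Sketch of the per-client token-bucket logic behind a rate cap.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # refill rate in bits/second
        self.capacity = burst_bits    # maximum burst size in bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = 0.0               # virtual clock, in seconds

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False                  # over the cap: drop or queue the packet

# A 200 kbps cap with a 100 kbit burst allowance:
bucket = TokenBucket(rate_bps=200_000, burst_bits=100_000)
print(bucket.allow(100_000, now=0.0))   # burst fits: allowed
print(bucket.allow(100_000, now=0.1))   # only 20 kbit refilled: refused
print(bucket.allow(100_000, now=0.5))   # refilled by t=0.5s: allowed
```

The burst allowance is what makes the capped network still “feel” fast: short transfers complete immediately, while sustained hogs are held to the configured rate.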

Network Management:  As stated earlier, many existing networks supporting humanitarian response are essentially unmanaged once they are set up and in service.  Logs are not monitored, and errors that may make the network less effective are not dealt with until human users start to complain.  In essence, humanitarian ICT teams have used their own users as the “tripwire” that something is wrong with the network.

In an NGN this should never be the case. Even for reasons of good customer service, if nothing else, the network should be instrumented and actively monitored for problems.  Minor problems must be identified and resolved before they cause major headaches for users. The IT team supporting the network should be the first to know if there is an outage, not the last.
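In practice, “instrumented and actively monitored” can start with simple threshold-based alerting on a few key metrics. A sketch, with entirely illustrative metric names and thresholds:

```python
# Hypothetical health thresholds for a field network -- the numbers are
# illustrative, not recommendations (VSAT latency is high even when healthy).
THRESHOLDS = {
    "wan_latency_ms":  {"warn": 800, "crit": 2000},
    "packet_loss_pct": {"warn": 2,   "crit": 10},
    "ap_clients":      {"warn": 80,  "crit": 120},  # per-AP association load
}

def evaluate(metrics):
    """Return (metric, severity) alerts for anything over its threshold."""
    alerts = []
    for name, value in metrics.items():
        limits = THRESHOLDS.get(name)
        if limits is None:
            continue  # ignore metrics we have no policy for
        if value >= limits["crit"]:
            alerts.append((name, "critical"))
        elif value >= limits["warn"]:
            alerts.append((name, "warning"))
    return alerts

sample = {"wan_latency_ms": 950, "packet_loss_pct": 12, "ap_clients": 40}
print(evaluate(sample))
# [('wan_latency_ms', 'warning'), ('packet_loss_pct', 'critical')]
```

Run something like this on a schedule against polled device statistics and route the alerts to the ICT team, and the team, not the users, becomes the tripwire.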

This Can Happen Today.  It Already Has.

The BT Emergency Response Team on Tanna Island, Vanuatu after Cyclone Pam showing that advanced networks are feasible in the most remote humanitarian emergencies. (BT photo)


These five essential capabilities must comprise the bedrock of humanitarian networks now and into the future.  On our team, we have already made this a part of how we operate.  Starting with Cyclone Pam in Vanuatu, continuing through the response to the Nepal earthquake, and most recently in our work in Europe on the Syrian refugee crisis, we have deployed networks with all five of these capabilities designed in from the very beginning.  All of this was done without busting the budgets or overtaxing the limited number of network engineers available to humanitarian aid organizations, while still being able to scale to an extremely large number of users.

The ambition and scope of ETC 2020 will require an evolution in how we provide Internet connectivity in the midst of disaster and humanitarian crisis.  The core capabilities I’ve described are common capabilities in solutions from many vendors at practical price points. With the right engineering, there is no reason why the CwC vision of providing connectivity to massive numbers of affected people shouldn’t be a reality.

Mini-Review: Microsoft Surface 3 + Windows 10 for Humanitarian Response

So I’ve been testing the Microsoft Surface 3 for a few weeks now as a potential deployment laptop for NGOs and humanitarian responders who may find themselves headed into emergencies, ICT4D or other similar scenarios.  The need for computing in remote areas is increasing, and a good solution for the field is sometimes different from what is needed at home or in the enterprise.  With the release of Windows 10 the other day, I’ve also gotten to take the new OS for a spin.

First off, people should use what works best for them:  If a Chromebook is what does it for you, by all means, use it.  I’m mostly an Apple person, so my personal deployment gear has traditionally been a MacBook Pro, an iPad and my iPhone.  Operating system wars are lame, and I have no desire to refight them here.  However, if you want an idea of whether this particular solution might be useful, read on…

What do we need for a deployment laptop?  The ability to run all of the network management software (SSH, Visio, etc.) and general business productivity apps; low power draw (in case you need to run on solar or wind power); portability (every pound you carry into the field matters); and low cost (a $3000 Toughbook isn’t necessary in most situations, even in a disaster – you’re often better off buying three $1000 laptops for the same spend).

The Surface 3

So when the Microsoft Surface was first announced, I was really excited about the form factor – it wasn’t quite a laptop, and it wasn’t quite a tablet.  It was something in-between.  We even got a few first-generation Surface Pros to work with.  Unfortunately, for what we’d need in a deployment laptop, they consumed too much power and had too little battery life.  I’ve kept my eye on the platform as it continued to evolve… the ARM-based Surface was killed and replaced by an x86-based Atom CPU system.  It’s this latest generation of Surface that I got from Microsoft as a loaner.  Yes, this one is a real PC (the ARM-based systems couldn’t run the majority of PC software because of their different CPU architecture).

Why not the Surface Pro?  Because for a deployment laptop, the substantial difference in power consumption matters.  The Atom CPU in the Surface 3 won’t win any Photoshop rendering battles, but it has proven to be adequate for the kind of general business apps we might want to use.  My loaner from Microsoft came pre-loaded with the Pro version of Windows 8.1 (and later Win10) – so I could test performance with BitLocker and other security elements turned on.

Things worked as they should.  The keyboard for these computers is now backlit, which is great when you’re working in low-light situations.  Since the Surface has standard USB, various accessories (like a router’s console cable, for example) worked fine.  If you want to think of it as a “real computer” that happens to be the same size as an older iPad, that’s about right. Compared to other laptops, this one sips power (my new MacBook Pro 13 Retina also sips power, but the Surface draws a few watts less).  The Microsoft Type Cover keyboard is a must with this system (even though it’s sold separately) – and I find the keyboard enjoyable to use, unlike many other “thin and light” PC keyboards.

Windows 10

I don’t know anybody who *loves* Windows 8 – if you’re using Windows, it’s probably Win7.  People use Windows 8, but the UI hasn’t been great… it tried to be both a tablet OS and a PC OS and, in the end, didn’t get the hybrid thing quite right.  Windows 10 doesn’t require you to relearn anything if you’ve already been using Windows – a huge usability improvement.  The OS has been out for only a day or so, and I’m sure there are roadbumps and bugs that will come up – there are some concerns with user privacy and the sharing of Wi-Fi keys, and I’m waiting for responses to some of those issues being debated as I write this – but Windows 10 is a big improvement.  I’m actually enjoying using the OS, and find everything pretty intuitive.

My biggest concern from a Windows standpoint in the field has been security – there are a number of security improvements in the OS that I think justify the upgrade from previous versions of Windows.  One intriguing feature is the torrent-like peer-to-peer distribution of patches and updates (“Windows Update Delivery Optimization”), which promises to be really useful in disasters and humanitarian crises.  The idea is that a single computer might pull down patches, and then other computers would pull those patches from the first computer rather than going over the WAN individually.  This may be a huge bandwidth saver for disaster response and ICT4D when you’re on the thin sippy straw of a VSAT link or other low-bandwidth connectivity.  On the Surface 3 (and other hybrid/convertible systems), the system is smart enough to switch seamlessly between a PC desktop mode and a more touch-friendly tablet mode when the keyboard is detached.
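The potential savings from peer-to-peer patch distribution are easy to estimate. A back-of-the-envelope Python sketch, where the office size, patch volume and link speed are made-up numbers purely for illustration:

```python
# Rough estimate of WAN savings from peer-to-peer patch distribution:
# with P2P, the patch set crosses the satellite link once; without it,
# every machine pulls its own copy.
def wan_megabytes(machines, patch_mb, p2p):
    return patch_mb if p2p else machines * patch_mb

def hours_over_link(megabytes, link_kbps):
    """Transfer time in hours over a link of the given speed."""
    return (megabytes * 8 * 1000) / link_kbps / 3600

machines, patch_mb, vsat_kbps = 25, 300.0, 512   # illustrative field office
without = wan_megabytes(machines, patch_mb, p2p=False)
with_p2p = wan_megabytes(machines, patch_mb, p2p=True)
print(f"No P2P: {without:.0f} MB over the VSAT "
      f"(~{hours_over_link(without, vsat_kbps):.1f} h of link time)")
print(f"P2P:    {with_p2p:.0f} MB over the VSAT "
      f"(~{hours_over_link(with_p2p, vsat_kbps):.1f} h of link time)")
```

Under these assumed numbers, 25 laptops pulling 300 MB of updates individually would consume the link for well over a day, while a single shared download takes about an hour – the difference between a usable network and a saturated one.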

It was hard to recommend the Surface hardware with Windows 8 – the hardware was great, but the OS and a very confusing user experience just was not doing it for me.  The combination of Windows 10 and the Surface 3 rectifies most of the major shortcomings in the earlier versions of both and makes a compelling solution for people who prefer (or are required) to use Windows in humanitarian response or ICT4D.  For some users who currently take three separate devices into the field, you might very well be able to take fewer devices with you – which means less weight, less power needed, and one less thing to break.

So is it right for you?  If you’re using Windows in the field, there’s no real argument to avoid Windows 10 the way there were plenty of reasons to avoid Windows 8.  You might want to wait for some of the inevitable bugs and issues to get worked out first, but really, yeah, go for it.  And I think the Surface 3 is a great PC in a very tidy form factor, which makes it great for HA/DR deployment scenarios.

The Case for Evidence-Based Disaster Technology Response

“The plural of ‘anecdote’ is not ‘data’.”

A disaster happens somewhere in the world. Disaster technologists and digital humanitarians mobilize. Maps are crowdsourced, satellite dishes and networks are deployed, UAVs are flown, apps are hacked in marathon sessions, social media mined. All these things happen incredibly rapidly because of the army of passionate individuals and organizations that fundamentally believe that the rapid flow of information helps to save lives and speed recovery to affected communities.

Eventually, as things move from response to recovery, most of these individuals and organizations will document what they did and lessons to be learned for the next time around. Pictures will be shared on social media and in press releases. The community of technology humanitarians will prepare for the next event… knowing there is always a next event.

But something fundamental is missing from our community: how do we actually know we made a difference in the outcome of the response?

Many of us (including myself!) have plenty of anecdotes and war stories about how something we did made a difference, but anecdotes are just that. Let’s face it… I make my living working in this space. Of course I want to believe that my work, my time away from my family, all those long hours in hardship conditions actually make a difference. Organizations want to tell the most positive story about their efforts, to ensure future funding, volunteers, and missions.

Loading a helicopter with tech in Vanuatu (Cyclone Pam, 2015) – let’s use evidence to know how to best use our capabilities and skill.  This photo is not necessarily evidence of an effective response, as much as I might hope it is. 🙂

It’s called “confirmation bias.” As humans, we naturally seek out information that supports our beliefs and positions, and tend to avoid information that would call those beliefs and positions into question – or discredit them entirely!  The photo above is an example… it’s certainly visually compelling, and I’d like to think that good things came from being out in that rain that day, but we need data to really measure effectiveness!

Our community of emergency techies is relatively small, and the whole multi-discipline sphere is relatively young. By way of analogy, this reminds me of where emergency medicine was in its first few decades. During the dawn of the Emergency Medical Services (EMS) field, most treatments were based on assumptions (“backboard all suspected C-spine patients”). Eventually, evidence-based medicine came into the field, with the result that we don’t backboard anywhere near as often as we used to, that we know how incredibly important chest compressions are in CPR, that high-flow oxygen can actually harm a heart attack patient, and that therapeutic hypothermia for a cardiac arrest patient actually makes sense.

So, while anecdotes make for great memories, they aren’t terribly useful for the long-term evolution of our field. As in EMS and many other emergency disciplines, a collection of stories cannot be considered evidence of efficacy because of confirmation bias. One positive story about disaster technology response and one negative story about disaster technology response simply cancel each other out.

We need to move beyond anecdotes and move towards true evidence-based disaster technology response.

So what does that look like?

What technology interventions actually help the situation on the ground? How will we know this?

What technology interventions don’t help, and should be avoided? How will we know this?

While the field demands innovation and exploration, how do we ensure that we aren’t just enamored with our own technology at the expense of other meaningful activities that support disaster response?

We have to get beyond the feels. Many digital volunteers are moved by crisis to “do something.” But the goal of “doing something” should not be the social reinforcement that a volunteer or disaster worker gets on their Facebook page. It should not be to demonstrate some whiz-bang technology in crisis. After all, a fancy map that isn’t used by anybody to make a decision is merely a graphical representation of disaster trivia. A UAV that gathers video that isn’t used by anybody to make a decision is just a pretty YouTube video. In both cases, the intention and the tech are great, but the outcome is lousy.

If it happens that these other things happen along the way, so much the better.  But our first duty should be to focus on the outcome… verifiable, evidence-based outcomes.

I think there’s a strong role for academia in this effort.

We should look to the area of crisis informatics to help inform practitioners about these questions of efficacy. Researchers can help us develop the metrics, engage in peer review, and help move us to the next level that our increasing workload and set of expectations demand. We in turn can influence the research questions and work to connect the results of academia to the work being done by practitioners.

By driving towards evidence-based operations, we are helping to mature our work and minimize wasted effort and cost. But most importantly, we enshrine the beneficiary of our work at the absolute center of the digital humanitarian universe. The disaster victims and responders who need information to make smarter, better decisions for themselves deserve nothing less.

The move towards evidence-based response will take all of us … let me know your thoughts in the comments.

A Cybersecurity Wake Up Call for Emergency Managers

Since the 9/11 attacks, the United States government has been increasingly concerned about the implications of cybersecurity on a technologically dependent society. While cybersecurity has been a significant priority for policymakers and the national security organizations of the United States, the intersection of cybersecurity and traditional emergency management remains less well-known, with relatively few agencies considering the cyber implications of their emergency management roles. This lack of awareness and preparedness leaves the public safety community at risk. It is safe to say that those risks are being exploited as we speak.

Let’s start from here: Nearly every emergency response in the United States today is dependent on computers, networks and other technologies that are vulnerable to cyberattack. I’m not just talking about the spectacular “cyber Pearl Harbor” scenarios that politicians often mention (as I write this, the news is filled with articles about how the head of the US National Security Agency announced that China has the capacity to disable the power grid of the United States via a cyber attack). I’m talking about the ordinary kinds of emergencies that are responded to thousands of times across the country every day. The house fires, the car accidents, the medical and law enforcement calls that are the bread-and-butter of most public safety agencies are all dependent upon technology, from the PSAP that answers the 9-1-1 calls, to the CAD and WebEOC systems, to the individual laptops, smartphones, and tablets carried by the responders in the field. As more networked technology is adapted for public safety use (such as public safety LTE, or “FirstNet”), the potential footprint of vulnerability will continue to grow – which is why those risks must be mitigated to the extent possible.

At this point, an emergency manager might say “Hey, isn’t this really an IT problem? My agency has an IT department. I don’t know anything about how the Internet or hackers work.”

The answer is absolutely “No. It is your problem too!”

If the effects of cyberattacks were strictly limited to the electronic world, one might safely leave the problem in the hands of (a hopefully capable) technical staff. But cyberattacks in the right circumstances have the capability to affect the physical world – the systems that public safety and critical infrastructure alike depend on. And that, emergency manager, is where it becomes your problem.

Emergency managers must rid themselves of the notion that cyberattacks against their communities must look like something out of a Tom Clancy novel. Here are two recent examples of cyberattacks against public safety. Both are rather ordinary, and could happen on any similar incident anywhere in the country.

Example One: Carlton Complex Fire, Washington State.

The Carlton Complex fire earlier this year was the largest fire in Washington state history, burning an area roughly five times the size of the city of Seattle. Because of the remote location of the fire, communications and connectivity for first responders were ongoing challenges for incident managers (in fact, this was widely reported in the media at the time). In coordination with the FEMA TechCorps program, and at the request of the State of Washington ESF-2, our team responded to the south zone of the Carlton Complex fire, where we enabled mission critical communications for the Type I IMT managing that portion of the fire, as well as providing an open “morale” Wi-Fi network for approximately 750 firefighters and support staff.

While many teams have the ability to deploy Wi-Fi and other connectivity on the fireground, the security of the users and those hundreds of devices is typically not considered. In short, there’s no such thing as a Chief Security Officer (CSO) at a brush fire! When we deployed to the Carlton Complex fire, we brought along a number of intrusion detection, network management, and application-level technologies with us. In other words, we weren’t just providing a “dumb pipe” to the Internet, but one that assumed the likelihood of a cyber threat.

We quickly began to detect incoming attacks against users on both the open and the mission networks. We do not believe the first responders were under a targeted attack in this case; rather, our users faced the same sorts of attacks that all users of the public Internet face on a constant basis – everything from port scans to compromised webpages exploiting vulnerabilities in users’ web browsers.
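As an illustration only (this is not the tooling we actually ran on the fire), the port-scan side of that kind of detection can be sketched in a few lines of Python: flag any source address that touches many distinct destination ports within a short time window. The record format, field names, and thresholds below are assumptions made for the sketch.

```python
from collections import defaultdict

# Hypothetical thresholds for the sketch: a real IDS tunes these carefully.
SCAN_PORT_THRESHOLD = 20   # distinct destination ports from one source...
WINDOW_SECONDS = 10        # ...seen within this many seconds

def find_port_scanners(events):
    """events: iterable of (timestamp, src_ip, dst_port) tuples, sorted
    by timestamp. Returns the set of source IPs that look like scanners."""
    history = defaultdict(list)   # src_ip -> [(timestamp, port), ...]
    suspects = set()
    for ts, src, port in events:
        # Keep only this source's recent activity, then add the new event.
        recent = [(t, p) for t, p in history[src] if ts - t <= WINDOW_SECONDS]
        recent.append((ts, port))
        history[src] = recent
        # Many distinct ports in a short window is the classic scan signature.
        if len({p for _, p in recent}) >= SCAN_PORT_THRESHOLD:
            suspects.add(src)
    return suspects
```

In intrusion-detection mode, output like this would only be logged and reported; an intrusion-prevention posture would additionally feed the suspect list into a firewall rule to drop the traffic.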

This next part is important: even though we could detect attacks and set a response posture, we could not unilaterally act to block traffic. We were in support of the IMT, and it would not have been appropriate for us to arbitrarily move to a more aggressive response posture. Luckily for us, the Communications Unit Leader (COML) and Communications Unit Technician (COMT) staff were aware of network security risks. When we went to them with data showing that the network and certain users were being attacked, we pointed out that we could move the network into a more protective posture (intrusion prevention instead of intrusion detection).

According to our data collection, in the four days that we were active on the fire, we detected and blocked more than 30 high-risk or sophisticated attacks against users on our network, as well as defeating a number of lesser threats. Keeping the network operational and protected enabled the incident managers and firefighters to keep their focus where it was needed: on responding to the fire itself.

Example 2: Ferguson, Missouri.

Recent unrest in Ferguson, Missouri has made news around the world, and particularly challenged emergency managers and other public safety agencies responding to the situation. The Missouri State Police deployed a mobile command vehicle to support law enforcement operations in Ferguson, but according to media reports, the command vehicle itself became the target of an unspecified cyberattack:

“However, Thurston said that ‘Big Blue’ also became a target during the protests as the MCCV experienced its first real cyber threat. Thurston said that people were attempting to spoof communications from the vehicle at several times during the protests.

Thurston warned attendees at the conference from the law enforcement community that they need to place an increased emphasis on securing their communications. ‘Your communications are targets greater than you ever thought,’ he said. ‘There are groups trying to intercept your communications.’”

The Missouri State Police has not released any additional details about this attack, and it’s not clear from the context whether they were talking about spoofed radio traffic, or spoofed data traffic. Regardless, the incident is an example where a public safety resource was specifically targeted because of its role or mission.

A wakeup call.

These two recent incidents should serve as a wake-up call to the emergency management and public safety community that cybersecurity must move beyond being just an IT responsibility to become part of good all-hazards planning. While there have been some large-scale cyberattacks (Stuxnet and Shamoon being two good examples), most cyberattacks against public safety are smaller scale, and may go largely undetected unless the attack causes significant disruption. Emergency managers need to consider their own vulnerabilities, as well as how to respond to potential disruptions. Here are some ways to get started:

  1. Identify the information security team within your own agency or organization. Go get coffee with them. Find out how they support your mission and how they respond to threats that may target your emergency management infrastructure. The goal here is to engage with your IT and security organizations, not just as the “make the printer work” people, but as partners who are committed to your mission success.
  2. Consider the security of field networks and resources. In my experience, hastily formed networks created to support specific emergencies are often not monitored for security issues, nor is the responsibility of incident response identified. This must change. Cybersecurity in field emergency response must actively be managed.
  3. Work with your partners to develop and test realistic cybersecurity scenarios against your responders, your EOC, or your critical dependencies.
  4. Consider who will own security policy and policy enforcement in situations where you have multiple agencies, and multiple devices showing up in mutual aid scenarios that all need to collaborate on the same networks and applications.

Cybersecurity isn’t just the job of the IT department or the private sector. Emergency Managers should work with their technical partners to identify cyber risks, mitigate them where possible, and plan and train for incident response. Disruptions to critical systems may complicate response or put responders or the public in danger. A failure of public safety to secure its own systems and plan for broader responses can leave people already affected by an emergency situation vulnerable to further victimization.

Haiyan: If #CommIsAid, We’ve Got a Scope Problem.

“…the effective, timely deployment of telecommunication resources and that rapid, efficient, accurate and truthful information flows are essential to reducing loss of life, human suffering and damage to property and the environment caused by disasters” – The Tampere Convention

As I write this, I’ve just returned from the Philippines, where I and many of my colleagues were deployed to assist humanitarian relief operations by providing various forms of communications. Our team and many others from around the world worked in pretty harsh post-disaster environments, setting up the communications required for any effective, modern disaster relief operation. The Haitian earthquake of 2010 was the first really good example of a “data driven” (rather than “radio driven”) communications response, and the need for Internet connectivity in the aftermath of a disaster has only grown since then. After a major disaster, it’s not uncommon to see numerous VSAT satellite dishes spring up like mushrooms after a good rain.

The work that these teams have completed is impressive. Recently, the homepage of the UN’s Emergency Telecommunications Cluster featured a story about how more than 2,500 humanitarian workers have used the emergency Wi-Fi. Most major disaster relief operations have some form of Internet connectivity – even if that’s something as simple as a BGAN.

One of the GATR sat dishes we deployed after Super Typhoon Haiyan.

And for all that, we have a huge problem.

In the 2013 UN OCHA report Humanitarianism in the Network Age, it is argued (correctly, I think) that information is a basic need for those affected by a crisis, not just for the crisis responders themselves:

information as a basic need requires a reassessment of what information is for. Instead of seeing it primarily as a tool for agencies to decide how to help people, it must be understood as a product, or service, to help affected communities determine their own priorities.

The CDAC Network has been promoting this notion through the Twitter hashtag #commisaid.

The vision of emergency communications is succinctly summed up in that one hashtag: communications is a vital resource for the survivors of catastrophe, right alongside security, shelter, hygiene, medical care and food.

If you accept this notion, the international technology community has a huge problem.

If you look at the United Nations Cluster System, something becomes clear very quickly: the vast majority of the clusters have a mandate to deliver goods and services ultimately to those who have been affected by the crisis. The Nutrition cluster is there to make sure people don’t starve, for example – it delivers food at scale to thousands, perhaps millions of people, depending on the nature of the event. But the Emergency Telecommunications Cluster is different. Its mission is limited to “the humanitarian community” (which is UN-codespeak for the UN agencies such as WFP, UNICEF, etc., and certain BINGOs, or “big NGOs,” such as the ICRC):

To provide timely, predictable and effective Information Communications Technology services to support the humanitarian community in carrying out their work efficiently, effectively and safely.

Excluded from this mandate is the idea of providing communications services to the countless desperate people who would love to communicate with their families, with the resources they need to restore their lives, and with the outside world more generally to tell their story.  We (the collective “we”) are failing to adapt to the new reality.  If we truly believe that communications is a form of aid, we are utterly failing to deliver that aid!  We are content to deploy a handful of satellite dishes and a few Wi-Fi access points, and to provide connectivity to aid workers.  But what about *everyone else* outside the perimeter?  What about the public?  Who will speak for them?  Who will restore to them the ability to speak for themselves?

This is a very tricky question.

Telecommunications services – Internet service, POTS telephony, mobile phone telephony – are all regulated, and vary greatly in quality and penetration from country to country.  The regulatory hurdles are significant, and let’s not even start talking about logistics!  Humanitarian actors have traditionally been reluctant to engage with the private sector (which invariably owns and controls the pre-crisis telecommunications infrastructure).  The lines get very blurry when you start introducing organizations with a profit motive alongside organizations with a humanitarian objective.  These are big challenges, but they don’t overshadow the fundamental truth:

A wrecked mobile phone tower in Guiuan, Eastern Samar after Super Typhoon Haiyan. Is this a private sector problem? A humanitarian problem? Both!

The best way to deliver humanitarian communications to a community at scale is to help restore the pre-crisis communications infrastructure.

During Super Typhoon Haiyan, we saw the loss of the mobile phone infrastructure throughout our work in Eastern Samar and Leyte.  Millions of people were walking around with mobile phones that had no service.  They already had end-user devices they were familiar with (even in the poorest or most rural parts of the world, seemingly everyone has a mobile, right?) – so in one sense, the mission of delivering humanitarian communications becomes much easier: you don’t have to touch millions of individuals in order to help them.  You “just” need to get their phones back online.

A Tale of Two Refugee Camps

Two years ago, our team at work participated in a project with NetHope, Inveneo, WFP and Microsoft to create DadaabNet, connecting the humanitarian aid agencies operating at the world’s largest refugee camp in Dadaab, Kenya.  You can get a sense of the camp from these photos.  Prior to the deployment of this network, the state of communications at the camp was extremely limited: VHF radios, a small number of satphones, and no mobile phone coverage.  After the project concluded, the humanitarian aid agencies were clearly in a better state of communications, but the vast majority of the 500,000 people in that camp are still without communications.

Contrast this with a much newer camp: the Zaatari refugee camp in northern Jordan.  When the camp opened in 2012, it began taking in refugees from the civil war in nearby Syria at a rapid rate.  But Zaatari is served by at least two GSM/3G mobile phone carriers!  Not only are the humanitarian staff able to communicate with traditional handsets, but many of the refugees are able to use their mobiles as well.  The infrastructure could be better (one could always have more coverage and more bandwidth, natch!), but the situation there is vastly different from what we found in Dadaab.

Service providers make all the difference.

A Call To Action

The humanitarian community, and the ETC in particular, needs to establish better working relationships between the humanitarian actors it represents and the organizations representing the carriers (such as the GSMA).  Service providers – ISPs, traditional telephony, satellite providers, mobile phone carriers – must become stakeholders in humanitarian response.  Engagement protocols need to be developed… maybe the next UN flight would be better off carrying a phone switch rather than a thousand kilos of Wi-Fi access points!  It might help more people with better service.

In addition to the carriers, equipment vendors need to become stakeholders in humanitarian response.  This might be easier than one might think, since several of them (Ericsson and Cisco among them) are already heavily involved in humanitarian response – it’s logical to start with those who are already part of the conversation.  How do we ensure that the equipment needed to create communications at scale gets released, imported and transported to the locations where it can best be put to use?

There are challenges in all of this: competitive challenges (can you help one carrier without helping another?), the tension between humanitarian action and disruption to the competitive marketplace, and regulations of all sorts.  But we must begin to transform how we do technology response to consider the bigger picture.

If communications is truly a form of aid, we have no other choice.