It often takes multiple crises to surface, and then fix, obvious structural problems. The combination of the SolarWinds/Sunburst and Microsoft Exchange cyber attacks might finally bring the digital security world to the edge of a long-needed paradigm shift: toward resilience and a cyber commons that all participants are obliged to care for. And the leading edge of that transformation is an old saw: better information sharing between private corporations and the government, and public-private partnerships to solve problems. Before you say -- wait, wait, that’s the same advice the sector has been giving for decades -- we know. But we also have reason to think that a more operationally relevant kind of information sharing is possible today, including actual detections and breach artifacts, and that the government is more willing to do its part: to share the load, and to selectively use standards and reporting obligations to encourage private participation.
In late February, Sen. Mark Warner (D-VA), the Chairman of the Senate Select Committee on Intelligence, wondered aloud why there wasn’t yet a requirement for companies to disclose information about suspected attacks. His question answered itself: for too long, both the private sector and the government have worried about definitions and liability and failed to answer these basic questions:
- Who should shoulder the burden of paying the victims of attacks?
- How can we measure the negative externalities and indirect costs imposed on victims from successive data breaches? How is harm to be measured?
- What types of attacks should trigger mandatory notification? When is the potential for a breach affecting availability or integrity -- not just data exfiltration -- sufficient to demand disclosure?
- To whom should notifications be made and how quickly?
- And in what form should the information be shared? What level of detail is sufficient? What level of detail is too invasive?
- How do we stop the “no evidence of compromise” whitewashing in current breach notifications, when many of the reporting firms lack the logs or forensics to make that a meaningful statement?
- How should operational disruptions or errors resulting in cyber events, especially in modern multi-party supply chains, be reported or characterized?
- How can private cyber insurance best support companies and individuals?
- Is there a role for a government-funded financial backstop for financial liabilities stemming from systemic cyber risks?
Reasonable answers are starting to fill themselves in. The creation of a Cyber Unified Coordination Group consisting of the Federal Bureau of Investigation, the Cybersecurity and Infrastructure Security Agency, the National Security Agency and the Office of the Director of National Intelligence prefigures the establishment of a permanent major cyber investigation task force, along the lines of the National Transportation Safety Board, which could serve as both the information clearinghouse and the first government investigator. But there would have to be a single point of contact -- one among a small group of highly trained, highly cleared, and well-regarded security professionals -- who can field confidential queries from companies about suspected breaches and then trigger a larger federal response or investigation if necessary. Like the NTSB, the investigation process would have clear rules designed to protect proprietary information where possible but bias its conclusions toward disclosure of anything that would help mitigate a breach. Also like the NTSB, government entities can reasonably investigate and produce post-mortem analysis, but should not be counted on to provide direct response functions beyond their existing statutory obligations during an incident, for a variety of practical, legal, and operational reasons. The image of government response teams parachuting into private entities chanting “we are here to help” is neither realistic nor welcome.
Companies still need financial safe harbor to adequately respond and recover -- and we think that a robust cyber insurance market, with strong national standards, trained adjudicators and transparent exchanges of exposure and loss data, is essential, especially if the government agrees to backstop breaches exceeding a certain magnitude, either individually or in aggregate. Liability for exploitable software defects is ultimately needed as well: the private sector will be more likely to invest in appropriate quality assurance, and to disclose or seek remedies for adverse product security practices, so long as there are real, measurable ways to determine who has acted in good faith and who has been negligent or sloppy.
Building trustworthy mechanisms in our digital world will help drive trust between companies, and across the public-private divide, with confidential or even potentially classified cyber threat data, too. Companies have begged for better and more granular information from the government for years -- and though a sliver of intelligence is now fairly routinely sanitized and shared through CISA and other mechanisms, it is often not shared quickly enough, or in formats actionable enough, to have practical operational impact. That gap is still filled by professional networks on phone calls and Signal text exchanges.
One way to move forward is to start with the principle that the government and the private sector have equally significant duties to warn; if they exercise those duties faithfully, we can get from a “do we really want this to get out” culture to a “just to be on the safe side” culture more quickly. It’s easier for large companies like Microsoft to stake a claim for mandatory notification, but even this giant recently issued emphatic “nothing to see here” statements that were gradually relaxed to admit significant penetration by bad actors during SolarWinds. For smaller companies with no room for error, the question of whether or what to disclose can mean the life or death of the enterprise. That’s why new standards for notification have to be intelligible, easily incorporated, and sufficiently revealing without imposing undue new costs or forcing proprietary information to be revealed. Good companies with good security programs can and will still get breached; we can’t just add to their burden. It is also an argument for more data transparency around the digital supply chain, cyber exposures, and historical breaches and financial losses: if the private sector can visualize how they fit into the bigger picture, or even how common and severe breaches tend to be, it will increase the social permission structure for them to share “just to be on the safe side” incidents. Blanket “no evidence” statements from organizations need to stop: organizations issuing emphatic denials need to show their work and substantiate the corpus of data used to rule out compromise or limit declarations of impact.
Along with the standards for notification, we would also suggest that the government require sufficient data capture and storage -- the equivalent of cyber black boxes -- in systemically important institutions; that is, a requirement that companies in certain roles, or providing certain kinds of critical services, keep sufficient logs and other telemetry to support post-hoc forensic analysis. Since many attacks involve multiple vectors, most of which are not caught at the same time, part of the duty to warn involves a duty to collect detailed information about your own systems, and to make it available to responders who can hunt for indicators of compromise and trace more of the attack chain to better reconstruct how the attackers got in.
As Mike Tanji, a former Pentagon cyber official put it, “hard metrics that can be measured, communicated, and evaluated” are central to just about every recommendation to do better. We agree. Sen. Warner and the SSCI can move forward quickly. The industry is ready. The stakes continue to build. And the political headwinds are finally calm.