Over the past two years, Professor Duncan Hollis has given lectures in eight different countries. This year, he has visited, or is scheduled to visit, Estonia, Australia, Germany, India, Mexico, Japan, and Switzerland. Today, however, he’s in Philadelphia, on North Broad Street, in the Temple Law School faculty lounge.
Hollis has not always been a frequent flier. He grew up in a Massachusetts suburb, roughly 20 minutes from Boston. He was 16 when the first international opportunity presented itself. “I still remember the moment,” Hollis says. His French teacher suggested he apply for a scholarship opportunity in Japan. Hollis earned the scholarship and spent the following summer in Japan. “I probably couldn’t have pointed out Japan from China on a map,” says Hollis. “I don’t even think I had been west of Springfield, Massachusetts at that point.”
The experience introduced Hollis to new cultures and ways of thinking. He attended Bowdoin College and then law school at Boston College, spending an additional semester and multiple summers in Japan. “I wanted to bridge the two cultures,” says Hollis. “Advocacy plays a huge role in that because you have to be sensitive to where other people are coming from and convince them to do something that’s out of their comfort zone, or maybe convince yourself that you’re OK doing something that’s not in your comfort zone.”
Hollis studied international law at Boston College while also pursuing a Master of Arts in Law and Diplomacy at the Fletcher School. Upon graduating from both programs, he embarked on a rewarding career in the private sector and then with the U.S. Department of State. He joined the Temple Law faculty in 2004, where he has become a widely published author and authority on international and foreign affairs law, including the law of treaties.
Lately, however, it has been a different kind of international problem that has interested Hollis. Cybersecurity burst onto the public scene in November 2014, when Sony Pictures Entertainment was hacked, bringing thousands of private emails into the public sphere. Hollis quips that while it took Angelina Jolie to help his family finally understand what he was working on, cybersecurity is a much larger, and more complex, problem.
“Technology touches everything,” he says, “from your refrigerator, to the clothes you wear, to the OnStar in your car, to all the cows that are being wired to the Internet to monitor their health and predict mad cow disease. It’s no longer just about your credit card data. Everything is going to be wired up, and as a result, everyone is going to have a stake in the security question.”
Hollis’s journey into cybersecurity began in 2007, seven years before the Sony hack, when Estonia, one of the most wired nations in the world, was struck by a wave of cyberattacks. Government and university websites, as well as hospital and mass media networks, were all affected.
In the aftermath of the Estonia hack, an article that Hollis had written on the international legal issues surrounding cyber operations rapidly gained attention. He was invited to lecture at Harvard Law School, where he found himself speaking with not only Harvard law students, but also members of the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Lab (CSAIL). Hollis laughs now when he recalls wrestling with the audience’s thoughtful, challenging questions. “Then we went to dinner that night with some of the CSAIL folks,” he recollects, “and I remember feeling like I was just hanging onto the edge of the conversation, like someone on a windowsill, hanging on by my fingertips.”
Since that presentation, Hollis has found himself, on multiple occasions, discussing cybersecurity with professionals steeped in the engineering and computer science arenas. Other meetings and conferences might involve hackers, coders, intelligence officials, and policy makers. “Everyone is coming to this space from their own silos,” says Hollis.
In an effort to meet in the middle, Hollis has relied on analogies while advocating for new solutions to cybersecurity problems. It’s a method that led Hollis to his ‘e-SOS Duty to Assist’ idea, his first widely accepted concept in the global cybersecurity dialogue.
The problem in cybersecurity, says Hollis, is that the law tries to regulate cyber threats by proscription, banning behavior and labeling as “bad actors” those who engage in it. Proscription thus relies on criminal laws for individuals and international laws for states. But how do you know which laws apply when you’re not sure if you’re being hacked by a teenager in Southern California or a cyber criminal organization in Romania?
“Cybersecurity is famous for its attribution problem,” says Hollis. “It’s a little bit of a cat and mouse game … with a good hacker, you might not even know you’ve been hacked. With a really good hacker, they can do something called a false flag and make it look like someone else was responsible.”
Hollis wondered if the law was trying to regulate the wrong piece of the puzzle. “So the other question,” says Hollis, “is if you can’t catch and regulate bad actors, can you regulate the victims?” Hollis analogized cyber threats to a more relatable domain: automobile safety. “How do we regulate automobile safety? One way is we punish drunk drivers because of the risks they pose. But we also make drivers take a test so if they’re going to be driving, we know they’ve met some minimum requirements. In addition, we might make you wear a seatbelt so if you do get hit you won’t be injured as much,” says Hollis.
Applied to cybersecurity, Hollis believes that when a victim is hacked in a serious way, they should be able to call for help, and responding countries and organizations should be required to afford whatever assistance they can offer. “On the high seas,” Hollis analogizes again, “if your vessel is going down, and you call for help, anybody who is in position to help has to do so. And the victim can even choose, ‘OK, I have four different people able to help me, I choose this option.’”
Hollis returns to the attack on Estonia. “So, when Estonia went to Russia and said we think these distributed denial of service attacks are coming through your networks, the Russian response at the time was, ‘You know how the Internet works. Somebody is probably spoofing us. Good luck with that.’ Whereas if you had a duty to assist, even if they weren’t responsible, they still would have to do something to help because they could have helped block the offending traffic.”
The idea has legs. Hollis has presented it in front of policy makers, cybersecurity professionals, and representatives from the U.S. State Department, Defense Department, and the National Security Agency. He has heard Estonia’s Foreign Minister formally call for other governments to adopt an e-SOS system. And the Group of Governmental Experts (GGE) at the United Nations included a version of Hollis’s ‘e-SOS Duty to Assist’ among the peacetime norms they called on states to apply in 2015.
Lately, Hollis has begun to consider how such an idea would take shape. Hollis and Tim Maurer, a colleague from the Carnegie Endowment for International Peace, used analogies again to consider a possible tangible vehicle for coordinating assistance: a Red Cross for cyberspace. If an organization or individual is hacked, says Hollis, the first instinct is often to stay quiet, so as not to show signs of weakness. And even if a victim wants help, there’s the fear that giving governments access may cause more problems, particularly if any assistance is shared with law enforcement or the intelligence community. “Why can’t we provide neutral, non-discriminatory, independent assistance to these victims?” asks Hollis.
The idea, in principle, is already in the works. Back in 1988, in response to the first widespread piece of malware, a group of Internet evangelists created the first Computer Emergency Response Team (CERT), tasked with ensuring the security of the Internet and the systems that relied on it. Other CERTs have since emerged across the world, helping secure networks and systems from internal threats, such as interoperability problems, and external threats from so-called “hacktivists,” organized criminal groups, and foreign nations.
While useful, CERTs remain relatively weak. Their independence is consistently under threat, they lack support from policy makers, and their work may be quietly biased toward the interests of their national governments. Hollis believes that, equipped with the principles that have made the Red Cross a trusted international symbol – neutrality, impartiality, and independence – CERTs could become the building blocks of a Red Cross-like movement. Such principles would create consistency in whom they help, when they help, how they help, and when they turn over their data.
With the Red Cross idea, Hollis is acting much more openly as an advocate. “This is more like playing in the policy space than just being a lawyer trying to describe to people what the state of the law is or what it requires or permits,” he says. His advocacy is paying off. The Dutch government is intrigued by the proposal, and has offered its support to Hollis and Maurer to develop a more academic treatment of their idea.
Part of what makes playing in the policy space so much fun for Hollis is the “Madisonian moment” he believes cyberspace faces. “Everyone thought the whole constitution of cyberspace, such as it is, was being worked out in the ’90s,” says Hollis. “But I think we’re now at a new constitutional moment or we just haven’t finished the original one.”
In this Madisonian moment, Hollis has concerns, and chief among them is the speed at which technology is changing. “People don’t remember that 10 years ago there was no iPhone,” says Hollis. Can society agree to norms, laws, and regulations at a pace that keeps up with technological improvements?
Despite his doubts, Hollis does not feel a sense of urgency. Rather, he views cybersecurity as the project of a lifetime; he predicts that he will be able to write about cyberspace for the next four decades. “It will not have gone away. It will only have gotten bigger.”
Hollis appreciates that he does not represent any individual client or industry in his current role as an academic. As such, he is able to advocate for what’s in the best long-term interest of the system itself. Concurrently, however, he recognizes that there isn’t an endgame to cyberspace. “I’m not a utopian,” says Hollis. “I don’t think we’re ever going to get to a world where there is no cyber crime or risk of horrible tragedies.”
Hollis does, however, believe we can make the system better. He hopes governments, academics, industries, and society at large can agree on one or two positive steps to improve global cybersecurity, which, if successful, might mark the beginning of an effort to slowly build trust and confidence. Change will be slow and incremental, Hollis admits, but eventually, he thinks that we can reach a point where we can look back on an unstable time in cyberspace and take satisfaction in a more secure system, as opposed to a system that is insecure, or worse, a system that has collapsed. “It’s a complicated series of problems…but the alternative is chaos. It’s the Wild West, where no one helps anyone, we have no trust, and it’s anarchy. I don’t think we want to live in that world.”