SY110: Law, Policy, and Ethics

Law, Policy, and Ethics

Learning Outcomes

After completing these activities you should be able to:



Policy vs. Law

Our government operates through the implementation of policies and laws, but many people aren't entirely sure what the difference between the two is. A policy outlines what a government hopes to achieve as well as the methods and principles it will use to achieve those aims. In other words, it's "the set of actions the government should take in order to reach its desired outcome". The DoD Cyber Objectives and Strategic Goals, which we'll learn about later in this lesson, are policies set in place by the Department of Defense. Laws, on the other hand, are "the constitutional, statutory, and regulatory authorities set for accomplishing the goals of a policy, as well as limiting or constraining what the goal can be or how the policy can achieve the goal". Laws are the standards, procedures, and principles that must be followed by those to whom they apply.

The United States has a complex and evolving landscape of cyber laws, encompassing federal and state regulations that often depend on the industry, type of data, and specific activities involved. These laws aim to protect data confidentiality, integrity, and availability, prevent cybercrime, and ensure accountability for security incidents.

Federal Cyber Laws and Regulations

CFAA Developments


The CFAA continues to be hotly debated even today. The language and scope of the law are constantly being challenged, as in Van Buren v. United States, decided in June 2021. That decision significantly narrowed the scope of the CFAA: it clarified that misusing information or accessing it for an "improper purpose," in violation of a policy or terms of service, does not violate the "exceeds authorized access" provision, as long as the user is authorized to access the specific information itself.

The CFAA also has jurisdictional limits. Even if hackers have been identified and indicted, it is very difficult to bring them to a U.S. court for a criminal trial if they are located in a country with which the United States has no extradition procedures. So when the Russian hackers behind the DNC breach were indicted, there was little expectation that they would ever appear in a U.S. courtroom for prosecution, because we do not expect Russia to extradite them.

State Laws and Regulations

Many U.S. states have enacted their own cybersecurity and privacy regulations, often offering greater consumer protections and stricter business requirements than federal laws. Some notable examples include:

Frameworks and Best Practices (Often Referenced by Laws)

Other Relevant Laws (Not Necessarily Cybersecurity Focused)

Would you like to know more?

Understanding where your information comes from is crucial to gaining a clear perspective on how it was written, any potential influence behind why it was written, and validating additional details that may have been ambiguous or left out. AllSides and Ground News are good places to start; both evaluate sources and provide countering perspectives to give a balanced view on emotionally charged topics as well as misinformation.



Agencies Responsible for Cyber Security

The U.S. Government has many agencies and authorities that provide for cybersecurity policies within their areas. The major federal agencies are distinguished by their defense, law enforcement, and judicial roles, with this graphic providing an illustration of cybersecurity interrelationships in government.

What can the government actually do?


From a diplomatic perspective, the U.S. Government is very careful about attributing cyber attacks to particular nation states. Aside from the diplomatic sensitivity associated with U.S. foreign policy and foreign relationships, it is often difficult to attribute an attack or series of attacks to a specific nation state with a high enough degree of certainty. In this article by The New York Times, the U.S. Government attributed attacks in the summer of 2021 specifically to sources in the Chinese government. Such attributions may become more common as cyber attacks increase and governments use public attribution as a form of soft-power accountability.



Strategic Cyber Policy

Modern military theory organizes warfare into three levels: strategic, operational, and tactical. At its broadest, strategic policy provides a combination of art and science for the use of national power and resources. Understanding where policy originates and what purpose it serves can be confusing, but it typically begins with the Executive Branch (White House), works its way to the Department of Defense (Secretary of Defense), then to the Joint Chiefs of Staff (JCS), and finally to each of the service Chiefs.

At the operational level, the strategic guidelines are refined into more detailed policy documents conveyed as manuals, instructions, directives, and publications. The DoD Cybersecurity Policy Chart captures each area of implementation, to include governance, organization, training and development, acquisitions, information sharing, sustainment, and resilience. It also includes the laws and authorities that allow for executing the responsibilities set forth in policy. As you look at the different categories and documents, consider the focus areas discussed in the Cybersecurity Fundamentals class and how the cybersecurity policies comprehensively consider people, processes, and technologies.

National Cyber Strategy

In 2023, the White House released a National Cybersecurity Strategy, organized around five pillars. The Office of the National Cyber Director (ONCD) coordinates the implementation of this strategy.

  1. Defend Critical Infrastructure
  2. Disrupt and Dismantle Threat Actors
  3. Shape Market Forces to Drive Security and Resilience
  4. Invest in a Resilient Future
  5. Forge International Partnerships to Pursue Shared Goals

DoD Cyber Objectives and Strategic Goals

Objectives

In 2023, the DoD released an unclassified summary of its revised Cyber Strategy, laying out four lines of effort for the DoD to pursue:

  1. Defend the Nation
  2. Prepare to Fight and Win the Nation's Wars
  3. Protect the Cyber Domain with Allies and Partners
  4. Build Enduring Advantages in Cyberspace

Defending Forward


The previous DoD Cyber Strategy, released in 2018, outlined a concept known as “defending forward”, in which the United States defends its own networks by operating on adversary networks, forcing the adversary to expend resources defending its own networks rather than attacking the United States in cyberspace. For more on this concept, check out this article on the concept of "defending forward."

Contemporary Ethical Challenges

Law and ethics are interrelated, influencing each other and inciting discussions as both evolve over time. While the Hacker Ethic has not significantly changed over the years, the use of technology and its implications impact every industry. The cyberspace domain is inescapably a human domain, and human habits, choices, and behaviors are constrained by both legal and ethical considerations. That is, legal and ethical considerations impact the way individuals use computers and therefore affect security (Workman 2009). Ethics and law are not the same: a law can be unethical (e.g., slavery in the US prior to the Thirteenth Amendment) and unethical behavior need not be illegal (e.g., a passerby in the street loudly insulting someone’s appearance just for fun). Policy can be partly defined as prudent action in light of these ethical and legal constraints.

A series of famous thought experiments known as the Trolley Problem, in which an onlooker must decide whether to let a runaway trolley kill five people or to intervene, diverting it onto a track where it will kill one, continues to drive ethical discussions to this day. Variations have developed over time: the observer rides in the trolley or sits in a control tower, the controls require action that changes the tracks, and the scenarios extend to runaway trains and school children. The core of the debate concerns action vs. inaction, responsibility vs. obligation, and utilitarian ("for the greater good") vs. deontological (inherent right vs. wrong) beliefs. In 2016, researchers at MIT updated the hypothetical trolley into an automated, self-driving vehicle, posing a similar thought experiment in which developers would build algorithms that could decide life-threatening or life-saving situations. This Moral Machine collected human judgments on scenarios that varied age, wealth, social status, and more. The project resulted in several academic publications and a TEDx presentation, and revived a decades-old ethical thought experiment as a modern-day cyberspace thought experiment.
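The Moral Machine's premise, that developers must encode an ethical rule into a decision algorithm, can be made concrete with a minimal sketch. The scenario, the casualty counts, and both decision rules below are illustrative assumptions for discussion, not part of any real autonomous-vehicle system.

```python
# A utilitarian rule minimizes total casualties; a deontological rule
# refuses to take an action that actively causes harm. Both are toy
# stand-ins for the kind of choice the Moral Machine asks about.

def utilitarian_choice(stay_casualties: int, swerve_casualties: int) -> str:
    """Pick whichever action results in fewer total casualties."""
    if swerve_casualties < stay_casualties:
        return "swerve"
    return "stay"

def deontological_choice(stay_casualties: int, swerve_casualties: int) -> str:
    """Never actively redirect harm onto someone: always stay the course."""
    return "stay"

# Classic trolley numbers: five on the current track, one on the side track.
print(utilitarian_choice(5, 1))    # -> swerve (kill one to save five)
print(deontological_choice(5, 1))  # -> stay (refuse to act)
```

The point of the sketch is that the two ethical frameworks disagree even on identical inputs, which is exactly the disagreement the Moral Machine set out to measure across human respondents.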

One of the most important questions that has arisen in the contemporary ethics of cybersecurity has to do with the role that informed consent plays in the ethical evaluation of a cyber operation. Hackers are labeled “ethical” or white hat when they are hired by their victims to discover security vulnerabilities and improve the victim’s cybersecurity. In this context, the term “cybersecurity” includes not just the maintenance of the Pillars of Information Assurance within computer networks, but also broader concerns such as physical access to buildings and computer hardware, and virtual (or remote) access via the internet. In the industry this practice is called penetration-testing. For example, a bank might hire a penetration-testing team to attempt to break into the bank’s network and gain access to sensitive data—perhaps even to notionally steal funds. Once the attempt is made, a report is given to the bank detailing whether and how the team was able to breach the various security barriers along with recommendations aiming to eliminate these vulnerabilities. Because the penetration-test occurs only after consent is given by the victim, the conduct of the breach itself—on site reconnaissance, physical trespass (e.g. lock picking), dumpster diving, malicious computer-to-computer interaction—is usually understood to be ethical (Hatfield 2019).

Types of Hackers
Types of Hackers. The colored-hat terminology comes from old black-and-white
Western movies, where the good guys and the bad guys were
distinguished by the color of the hats they wore.

Cybersecurity experts therefore distinguish between “white” and “black” hat hackers using two criteria:

  1. The purpose of the hacking activity
  2. Whether prior consent has been granted by the victim
Penetration-testers (also called “ethical hackers”) are said to wear the white hat because the purpose of their activity is ultimately to improve the security of the company from which they first receive approval before searching for vulnerabilities (Symantec 2018). For this reason Baha Abu-Shaqra and Rocci Luppicini (2016, 64) describe ethical hacking as a “risk-based, cost-effective information security risk assessment strategy.” By contrast, black hat hackers take pains to avoid alerting their victims to their activities, which are conducted for malicious purposes such as the theft of information or money. Sometimes, black hat hackers do make formal approaches to their victims, even pretending to help a victim’s security (as is often the case when a technique called “scareware” is deployed), but this consent cannot be considered informed because crucial information is intentionally withheld from victims in order for such ploys to be effective. For these reasons, black hat hacking is nearly universally considered unethical.

There are also grey hat hackers, who fail to gain the consent of their victims before launching their attacks, but once a vulnerability is discovered the information is brought to the attention of the victim, sometimes with an expectation that a reward will be given, or else the hacker may post the vulnerability online for the world to see (Symantec 2018). Grey hat hacking would still be considered “grey” without the threat of exposure or demand for a reward, since consent was never granted prior to the hacking activity.

hats_chart

Consent, therefore, is often understood as a critical component for a cyber operation’s ethical status. Without it, most suppose, even the lightest shade of grey hat hacking—where a noble purpose guided the activity, no threats were employed, and no reward was expected—would remain morally ambiguous. White hat hacking is often thought to be morally unambiguous precisely because it can be characterized by both (1) and (2) above (Hatfield 2019).

Hack the Pentagon


Bug bounty programs provide rules under which hackers can report vulnerabilities for payouts based on the severity of the exploits uncovered. Platforms such as HackerOne track many such programs, and major tech companies like Google, Apple, Microsoft, and Amazon offer programs of their own. The DoD may offer non-monetary rewards through programs like the Defense Cyber Crime Center's (DC3) Hack the Pentagon or monetary rewards with the Air Force Research Lab's (AFRL) Hack-a-Sat.
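The severity-to-payout mechanism common to these programs can be sketched in a few lines. The severity bands and dollar amounts below are invented for illustration; real programs on HackerOne and elsewhere publish their own scopes and reward tables.

```python
# Hypothetical bug-bounty reward table: higher-severity findings earn
# larger payouts. Amounts are illustrative, not from any real program.

PAYOUT_BY_SEVERITY = {
    "critical": 10_000,
    "high": 4_000,
    "medium": 1_000,
    "low": 250,
}

def payout(severity: str) -> int:
    """Return the reward for a validated report; 0 for an unrecognized rating."""
    return PAYOUT_BY_SEVERITY.get(severity.lower(), 0)

print(payout("Critical"))  # -> 10000
print(payout("low"))       # -> 250
```

Tying the reward to severity gives researchers an incentive to hunt for the most damaging flaws first, which is precisely the work the sponsoring organization most wants done.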

Nevertheless, the role of consent as a measure of ethical status can be questioned particularly in cases where acquiring consent would undermine the efficacy of a cyber operation conducted for legitimate ends. Consider the case of state-sponsored cyber operations where a state or coalition of states deploys cyber capabilities within the context of broader considerations, such as alternative means of persuasion and coercion, intelligence assessments, the prudent use of risk-assessment tools, and both domestic and international legal rules. In such a context, the victim will typically not provide consent and yet—depending upon the case in question—such cyber operations are not always deemed unethical. For example, although it has not been confirmed or denied by any government, press reporting indicates that the Stuxnet attack, which may have begun around 2007, was a joint venture by the United States and Israel seeking to sabotage Iran’s nuclear program by infecting Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLC) at Iran’s Natanz nuclear facility. Some analysts concluded that Stuxnet delayed Iran’s program by up to two years (Sanger 2012). Determining whether this operation was ethical may involve much more than simply considering whether the Iranians had given their consent. Rather, many people think such a case must be placed within a broader context of international relations, strategic policy, and considerations of just and unjust wars. The just war tradition, for example, requires that among other factors the use of force must be a last resort, be discriminatory (i.e. no harm to non-combatants), and be proportional to the threat at hand (Smit 2005, Rengger 2010). Whether Stuxnet qualified as a just action under these conditions continues to be debated.

questions1

Or consider the case arising in late 2018 when an unknown Russian-speaking grey hat persona going by the nickname “Alexey” started breaking into thousands of MikroTik routers and updating their software to help patch their known vulnerabilities. By October 2018, Alexey had patched over 100,000 routers, even leaving comments to their owners about the vulnerability and information about how to contact him if desired. The vulnerability allowed routers to fall prey to “botnet herders” who troll the internet amassing control over thousands of routers for use in DDoS attacks. Alexey told reporters that only 50 people reached out to say “thanks” and the rest were outraged (Cimpanu 2018). In this case, the purpose of Alexey’s hacks was to improve the security of the targeted routers, and there is no evidence to suggest Alexey had any other motive but to help. Furthermore, it is probably true that had he sought consent from each router’s owner he would likely have received little interest from wary strangers. The routers would likely have remained unpatched, thereby increasing the risk of DDoS attacks against innocent victims. Finally, even if consent had been granted, the process of attaining it would have greatly slowed down Alexey’s otherwise very helpful updates. Nevertheless, victim outrage in this case seems ethically justified to many.

questions2

Even in white hat hacking, where victim consent is attained, there are confounding ethical questions. Consider the ethics of white hat social engineering, when human manipulation occurs during penetration-testing. For example, a bank might hire a penetration-testing team that utilized impersonation techniques to trick employees into letting them into a secure vault (perhaps by posing as an audit team). Human manipulation raises a host of ethical questions pertaining to both the efficacy of the test and the meaning of informed consent. For a penetration-test to reflect an accurate assessment of a genuine security posture, victims cannot be told they are being tested. Such information changes their behavior and may vitiate the accuracy of test results and by extension any future security upgrades that may result. However, since meaningful consent is usually understood as a necessary condition for a hacking activity to be considered ethical, withholding that information puts a penetration-tester’s ethical status in jeopardy (Hatfield 2019).

Two recent cases illustrate the dangers inherent in human manipulation without consent. Jacintha Saldanha was a nurse at the same British hospital where Kate Middleton, the Duchess of Cambridge, was admitted. Ms. Saldanha received a phone call from an Australian radio DJ, Mel Greig, who manipulated her into providing detailed information about the Duchess of Cambridge’s medical condition. Saldanha later committed suicide after the breach went public, even leaving a note blaming the DJ for her death (Sawer 2014). This voice phishing (or vishing) attack involved lying, by means of malicious impersonation, to solicit personal and private information that could be used to benefit the interests of the DJ and her wider audience. In another example, in 2015 a 17-year-old autistic British boy named Joseph Edwards hanged himself after being manipulated by a ransomware email hoax. The scam claimed that indecent images had been found on his computer and demanded that unless a ransom were paid the images would be reported to the police (Telegraph 2015).

saldanha
Jacintha Saldanha

Such tragic cases illustrate how potent human-manipulation can be. Yet white hat social engineering seems to require that some amount of non-consented human manipulation be allowed if the security of banks and other institutions is to be improved. How can this occur ethically?

Some scholars have proposed a procedure of post-event informed consent, where data is collected without the victim’s knowledge but not included in the analysis until consent is given (Mack 2014, Pieters et al. 2016). However, this approach simply highlights the fact that there are two potential points of ethical failure, the first when data is being collected and the second when it is being analyzed and disseminated. Philosophically, post-event informed consent therefore reduces to an “ends justify the means” rationale, as if taking nude photos of some unknowing person were excused if that person later agreed to have them published. There are also practical difficulties to this idea. For example, if a significant number of victims fail to give consent, the validity of the entire penetration-test is largely undermined. In fact, it will be the most vulnerable employees, those most likely to have practices that result in security violations, who fail to give consent once informed that their behavior had been monitored. Given the fact that often only one phishing email is needed to compromise an entire organization, having only a small number of employees opt out of the analysis largely nullifies the penetration tester’s conclusions (Hatfield 2019).
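The practical objection above can be made concrete with a small sketch of post-event consent filtering. The employees, their responses, and the consent flags are invented for illustration; the point is that analyzing only the consenting subset can report a clean result while the organization was in fact compromised.

```python
# Simulated phishing-test results. Under post-event informed consent,
# only records from employees who consent afterwards may be analyzed.
results = [
    {"employee": "A", "clicked_phish": False, "consented": True},
    {"employee": "B", "clicked_phish": True,  "consented": False},  # opts out
    {"employee": "C", "clicked_phish": False, "consented": True},
]

# The report sees only consenting employees...
analyzable = [r for r in results if r["consented"]]
reported_compromises = sum(r["clicked_phish"] for r in analyzable)

# ...but the organization's true exposure includes everyone.
actual_compromises = sum(r["clicked_phish"] for r in results)

print(reported_compromises)  # -> 0 (the report shows no compromise)
print(actual_compromises)    # -> 1 (one phish click is enough to breach the org)
```

A single non-consenting employee who clicked the lure is exactly the data point the analysis loses, which is why even a small opt-out rate can nullify the tester's conclusions.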

edwards
Joseph Edwards (17) committed
suicide after being manipulated
in a cyber ransom scam

Another strategy is to inform employees during the hiring process that penetration-testing may occur and require them to agree to this before being hired. However, ethicists agree that any consent must be non-coerced, for coerced consent is not consent at all. Such coercion does not have to take explicit and direct forms, but rather may be implicit and indirect. For example, although illegal in the European Union, some companies in the US ask applicants to supply social media login credentials during the hiring process so that they can run a profile check for objectionable content prior to hiring. Many applicants undoubtedly see this as a breach of privacy, but compliance is often attained through the implicit threat that failure to provide credentials would invalidate an applicant’s chances of being hired (Drake 2016).

Additionally, individuals without the proper information to know what they are consenting to cannot be said to be providing meaningful consent. Thus, schemes that have employees sign a generic consent to penetration-testing upon being hired (or as a condition of employment) cannot be said to have provided meaningfully informed (and non-coerced) consent to any specific and potentially emotionally-damaging technique employed months or even years afterwards. Such schemes give the legal veneer of informed, uncoerced consent but are unable to provide meaningfully-informed consent in an ethical sense. Yet, as noted above, the more one assures oneself that meaningful consent has been attained, the less one can be sure of the validity of the penetration test. This amounts to a dichotomy forcing penetration-testing firms and their customers to choose between the good of their employees (i.e. the victims) and that of the broader firm (Hatfield 2019).


Supplemental Media:



MIT Moral Machine

From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given to machine intelligence in these roles can result in situations where it has to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices. https://www.moralmachine.net/

What moral decisions should driverless cars make?


Review Questions:

  1. What is the difference between laws and policies?
  2. How are laws and ethics interrelated?
  3. What are the primary Cybersecurity laws?
  4. What is the implication of Section 230 of the CDA as it pertains to the proliferation of misinformation published on social media sites?
  5. What agencies within the federal government provide authorities and responsibilities for cybersecurity?
  6. Around what five pillars is the National Cybersecurity Strategy organized?
  7. What are the DoD's lines of effort in cyberspace?
  8. What is the difference between white, gray, and black hat hackers?
  9. What ethical considerations are relevant within the cyberspace domain?


References

  1. C. Doyle, "Cyber-crime: An Overview of the Federal Computer Fraud and Abuse Statute and Related Federal Criminal Laws," Congressional Research Service, Oct. 2014.
  2. J. Band, "Expanded DMCA Exemptions Enhance Competition and Innovation." Disruptive Competition Project, Oct. 2018.
  3. Federal Trade Commission, "Data Security and Enforcement Web-pages."
  4. Foley & Lardner LLP, "State Data Breach Notifications Laws."
  5. C. Doyle, "The Posse Comitatus Act and Related Matters: The Use of the Military to Execute Civilian Law." Congressional Research Service, Jun 2000.
  6. National Institute of Standards and Technology, "Framework for Improving Critical Infrastructure Cybersecurity," April 2018.
  7. Department of Defense, "Summary 2023 Cyber Strategy of The Department of Defense," September 2023.