Our government operates through the implementation of policies and laws, but many people aren't entirely sure what the difference between the two is. A policy outlines what a government hopes to achieve as well as the methods and principles it will use to achieve it. In other words, it's "the set of actions the government should take in order to reach its desired outcome". The DoD Cyber Objectives and Strategic Goals, which we'll learn about later in this lesson, are policies set in place by the Department of Defense. Laws, on the other hand, are "the constitutional, statutory, and regulatory authorities set for accomplishing the goals of a policy, as well as limiting or constraining what the goal can be or how the policy can achieve the goal". Laws are the standards, procedures, and principles that must be followed by those to whom they apply.
The United States has a complex and evolving landscape of cyber laws, encompassing federal and state regulations that often depend on the industry, type of data, and specific activities involved. These laws aim to protect data confidentiality, integrity, and availability, prevent cybercrime, and ensure accountability for security incidents.
The CFAA also has jurisdictional limits. Even if hackers have been identified and indicted, it is very difficult to bring them to a U.S. court for a criminal trial if they are located in a country with which we do not have extradition procedures. So when the Russian hackers behind the DNC breach were indicted, there was little expectation that they would ever appear in a U.S. courtroom for prosecution, because we do not expect Russia to extradite them.
Many U.S. states have enacted their own cybersecurity and privacy regulations, often offering greater consumer protections and stricter business requirements than federal laws. Some notable examples include:
A chart with the rules of each state can be found here.
Understanding where your information comes from is crucial: it provides clear perspective on how a piece was written, reveals any potential influences on why it was written, and helps validate details that may have been ambiguous or left out. AllSides and GroundNews are good places to start; both evaluate sources and provide countering perspectives to give a balanced view of emotionally charged topics as well as misinformation.
The U.S. Government has many agencies and authorities that set cybersecurity policy within their respective areas. The major federal agencies are distinguished by their defense, law enforcement, and judicial roles, and this graphic illustrates the cybersecurity interrelationships in government.
Modern military theory organizes warfare into three levels: strategic, operational, and tactical. At its broadest, strategic policy combines art and science to guide the use of national power and resources. Tracing where a policy originates and why can be confusing, but it typically begins with the Executive Branch (the White House), moves to the Department of Defense (Secretary of Defense), then to the Joint Chiefs of Staff (JCS), and finally to each of the service Chiefs.
At the operational level, the strategic guidance is refined into more detailed policy documents conveyed as manuals, instructions, directives, and publications. The DoD Cybersecurity Policy Chart captures each area of implementation, including governance, organization, training and development, acquisitions, information sharing, sustainment, and resilience. It also includes the laws and authorities that permit executing the responsibilities set forth in policy. As you look at the different categories and documents, consider the focus areas discussed in the Cybersecurity Fundamentals class and how the cybersecurity policies comprehensively consider people, processes, and technologies.
In 2023, the White House released a National Cybersecurity Strategy, organized around five pillars. The Office of the National Cyber Director (ONCD) coordinates the implementation of this strategy.
In 2023, the DoD released an unclassified summary of its revised Cyber Strategy, laying out four lines of effort for the DoD to pursue:
Law and ethics are interrelated, influencing each other and prompting discussion as both evolve over time. While the Hacker Ethic has not changed significantly over the years, the use of technology and its implications now touch every industry. The cyberspace domain is inescapably a human domain, and human habits, choices, and behaviors are constrained by both legal and ethical considerations. That is, legal and ethical considerations shape the way individuals use computers and therefore affect security (Workman 2009). Ethics and law are not the same: a law can be unethical (e.g., slavery in the US prior to the Thirteenth Amendment), and unethical behavior need not be illegal (e.g., a passerby in the street loudly insulting someone's appearance just for fun). Policy can be partly defined as prudent action in light of these ethical and legal constraints.
The Trolley Problem, a series of famous thought experiments in which an onlooker must decide whether to let a runaway trolley kill five people or intervene, knowing the intervention will kill one person, continues to drive ethical discussion to this day. Variations have developed over time: the observer rides in the trolley or sits in a control tower, the controls require action that changes the tracks, and the scenarios swap in runaway trains and school children. The core premise is the ethical debate over action v. inaction, responsibility v. obligation, and utilitarian ("for the greater good") v. deontological (right v. wrong) beliefs. In 2016, students at MIT updated the hypothetical trolley into an automated, self-driving vehicle, posing a similar thought experiment in which developers would build algorithms that could decide life-threatening or life-saving situations. Their Moral Machine collected human judgments on a limited number of scenarios involving age, wealth, social status, and more. The project produced several academic publications and a TEDx presentation, and revived a decades-old ethical thought experiment as a modern-day cyberspace thought experiment.
One of the most important questions that has arisen in the contemporary ethics of cybersecurity has to do with the role that informed consent plays in the ethical evaluation of a cyber operation. Hackers are labeled “ethical” or white hat when they are hired by their victims to discover security vulnerabilities and improve the victim’s cybersecurity. In this context, the term “cybersecurity” includes not just the maintenance of the Pillars of Information Assurance within computer networks, but also broader concerns such as physical access to buildings and computer hardware, and virtual (or remote) access via the internet. In the industry this practice is called penetration-testing. For example, a bank might hire a penetration-testing team to attempt to break into the bank’s network and gain access to sensitive data—perhaps even to notionally steal funds. Once the attempt is made, a report is given to the bank detailing whether and how the team was able to breach the various security barriers along with recommendations aiming to eliminate these vulnerabilities. Because the penetration-test occurs only after consent is given by the victim, the conduct of the breach itself—on site reconnaissance, physical trespass (e.g. lock picking), dumpster diving, malicious computer-to-computer interaction—is usually understood to be ethical (Hatfield 2019).
Cybersecurity experts therefore distinguish between “white” and “black” hat hackers using two criteria:
Consent, therefore, is often understood as a critical component for a cyber operation’s ethical status. Without it, most suppose, even the lightest shade of grey hat hacking—where a noble purpose guided the activity, no threats were employed, and no reward was expected—would remain morally ambiguous. White hat hacking is often thought to be morally unambiguous precisely because it can be characterized by both (1) and (2) above (Hatfield 2019).
Nevertheless, the role of consent as a measure of ethical status can be questioned particularly in cases where acquiring consent would undermine the efficacy of a cyber operation conducted for legitimate ends. Consider the case of state-sponsored cyber operations where a state or coalition of states deploys cyber capabilities within the context of broader considerations, such as alternative means of persuasion and coercion, intelligence assessments, the prudent use of risk-assessment tools, and both domestic and international legal rules. In such a context, the victim will typically not provide consent and yet—depending upon the case in question—such cyber operations are not always deemed unethical. For example, although it has not been confirmed or denied by any government, press reporting indicates that the Stuxnet attack, which may have begun around 2007, was a joint venture by the United States and Israel seeking to sabotage Iran’s nuclear program by infecting Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLC) at Iran’s Natanz nuclear facility. Some analysts concluded that Stuxnet delayed Iran’s program by up to two years (Sanger 2012). Determining whether this operation was ethical may involve much more than simply considering whether the Iranians had given their consent. Rather, many people think such a case must be placed within a broader context of international relations, strategic policy, and considerations of just and unjust wars. The just war tradition, for example, requires that among other factors the use of force must be a last resort, be discriminatory (i.e. no harm to non-combatants), and be proportional to the threat at hand (Smit 2005, Rengger 2010). Whether Stuxnet qualified as a just action under these conditions continues to be debated.
Or consider the case arising in late 2018 when an unknown Russian-speaking grey hat persona going by the nickname “Alexey” started breaking into thousands of MikroTik routers and updating their software to help patch their known vulnerabilities. By October 2018, Alexey had patched over 100,000 routers, even leaving comments to their owners about the vulnerability and information about how to contact him if desired. The vulnerability allowed routers to fall prey to “botnet herders” who troll the internet amassing control over thousands of routers for use in DDoS attacks. Alexey told reporters that only 50 people reached out to say “thanks” and the rest were outraged (Cimpanu 2018). In this case, the purpose of Alexey’s hacks was to improve the security of the targeted routers, and there is no evidence to suggest Alexey had any other motive but to help. Furthermore, it is probably true that had he sought consent from each router’s owner he would likely have received little interest from wary strangers. The routers would likely have remained unpatched, thereby increasing the risk of DDoS attacks against innocent victims. Finally, even if consent had been granted, the process of attaining it would have greatly slowed down Alexey’s otherwise very helpful updates. Nevertheless, victim outrage in this case seems ethically justified to many.
Even in white hat hacking, where victim consent is attained, there are confounding ethical questions. Consider the ethics of white hat social engineering, when human manipulation occurs during penetration-testing. For example, a bank might hire a penetration-testing team that utilizes impersonation techniques to trick employees into letting them into a secure vault (perhaps by posing as an audit team). Human manipulation raises a host of ethical questions pertaining to both the efficacy of the test and the meaning of informed consent. For a penetration-test to reflect an accurate assessment of a genuine security posture, victims cannot be told they are being tested. Such information changes their behavior and may vitiate the accuracy of test results and, by extension, any future security upgrades that may result. However, since meaningful consent is usually understood as a necessary condition for a hacking activity to be considered ethical, testing employees without their consent puts a penetration-tester's ethical status in jeopardy (Hatfield 2019).
Two recent cases illustrate the dangers inherent in human manipulation without consent. Jacintha Saldanha was a nurse at the British hospital where Kate Middleton, the Duchess of Cambridge, was admitted. Ms. Saldanha received a phone call from an Australian radio DJ, Mel Greig, who manipulated her into providing detailed information about the Duchess of Cambridge's medical condition. Saldanha later committed suicide after the breach went public, leaving a note blaming the DJ for her death (Sawer 2014). This voice phishing (or vishing) attack involved lying, by means of malicious impersonation, to solicit personal and private information that could be used to benefit the interests of the DJ and her wider audience. In another example, in 2015 a 17-year-old autistic British boy named Joseph Edwards hanged himself after being manipulated by a ransomware email hoax. The scam claimed that indecent images had been found on his computer and demanded that unless a ransom were paid the images would be reported to the police (Telegraph 2015).
Such tragic cases illustrate how potent human-manipulation can be. Yet white hat social engineering seems to require that some amount of non-consented human manipulation be allowed if the security of banks and other institutions is to be improved. How can this occur ethically?
Some scholars have proposed a procedure of post-event informed consent, where data is collected without the victim’s knowledge but not included in the analysis until consent is given (Mack 2014, Pieters et al. 2016). However, this approach simply highlights the fact that there are two potential points of ethical failure, the first when data is being collected and the second when it is being analyzed and disseminated. Philosophically, post-event informed consent therefore reduces to an “ends justify the means” rationale, as if taking nude photos of some unknowing person was excused if that person agreed later to have them published. There are also practical difficulties to this idea. For example, if a significant number of victims fail to give consent the validity of the entire penetration-test is largely undermined. In fact, it will be the most vulnerable employees, those most likely to have practices that result in security violations, who fail to give consent once informed that their behavior had been monitored. Given the fact that often only one phishing email is needed to compromise an entire organization, having only a small number of employees opt-out of the analysis largely nullifies the penetration tester’s conclusions (Hatfield 2019).
Another strategy is to inform employees during the hiring process that penetration-testing may occur and they must agree to this before being hired. However, ethicists agree that any consent must be non-coerced; for coerced consent is not consent at all. Such coercion does not have to take explicit and direct forms, but rather may be implicit and indirect. For example, although illegal in the European Union, some companies in the US ask applicants to supply social media login credentials during the hiring process so that they can run a profile check for objectionable content prior to hiring. Many applicants undoubtedly see this as a breach of privacy, but compliance is often attained through the implicit threat that failure to provide credentials would invalidate an applicant's chances of being hired (Drake 2016). Additionally, individuals without the proper information to know what they are consenting to cannot be said to be providing meaningful consent. Thus, schemes that have employees sign a generic consent to penetration-testing upon being hired (or as a condition of employment) cannot be said to have provided meaningfully informed (and non-coerced) consent to any specific and potentially emotionally-damaging technique employed months or even years afterwards. Such schemes give the legal veneer of informed, uncoerced consent but are unable to provide meaningfully informed consent in an ethical sense. Yet, as noted above, the more one assures oneself that meaningful consent has been attained, the less one can be sure of the validity of the penetration test. This amounts to a dichotomy forcing penetration-testing firms and their customers to choose between the good of their employees (i.e. the victims) and that of the broader firm (Hatfield 2019).
From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace. The greater autonomy given to machine intelligence in these roles can result in situations where it must make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices. You can explore the scenarios yourself at https://www.moralmachine.net/.