From the early 1960s through the 1990s, the realization gradually spread that computer networks were vulnerable to information breaches, whether through spillage or offensive cyber operations. Sometimes this realization came from scientists deep within the national security infrastructure, such as National Security Agency (NSA) scientist Bernard Peters, who warned in 1967 that security cannot be guaranteed in any “Multi-Programmed” computer system with remote terminal access (Peters 1967). At other times it came from quite unexpected directions. For example, the 1983 Hollywood thriller WarGames, in which a technically proficient high-schooler nearly causes a US-Soviet nuclear exchange by hacking into a top-secret Pentagon computer, so impressed President Ronald Reagan, a former actor who loved films and received a private screening of the movie in the White House, that he asked his top security advisors whether US military networks were vulnerable to such intrusion. His question surprised them. Their answer, “unfortunately yes,” which came a week later, surprised Reagan even more, and in 1984 it led to National Security Decision Directive 145 (NSDD-145), the first national policy directive on telecommunications and computer security, which made the National Security Agency responsible for protecting government telecommunications systems (Warner 2012).
Unfortunately, throughout this history cyber operations have too often been understood as a purely technical activity: clever hackers attacking networks defended by equally smart technical defenders. Within this framework it seemed that technical superiority, such as faster processing, stronger encryption, or more advanced protocols, ultimately determined the security of networks. This point of view still captures the imagination of the general public. Yet this picture is wrong, for it overlooks the human element. Almost every prominent contemporary cyber operation has targeted human vulnerabilities as its central exploit, whether by exploiting an individual’s trust (2015 SWIFT banking hack), tricking people through impersonation (2015 Office of Personnel Management breach), preying on the human proclivity to fall for phishing scams (2016 DNC hack), capitalizing on an inability or unwillingness to patch software (2017 Petya, 2017 WannaCry), or taking advantage of the human choice to use outdated cryptographic tools (2018 Under Armour).
That human beings constitute the most potent attack vector comes as no surprise to cybersecurity researchers, who have come to see humans as the “weakest link” in any computer network (Greiner 2008). This lesson traces the history of human-computer interaction, from the earliest days of the Computer Age to how we look at and define cyberspace today.
In 1821, the Cambridge mathematician Charles Babbage (1791-1871) lamented that the only way to check whether an arithmetical result was correct was either to consult pre-calculated reference tables, which were often full of errors, or to have several human beings (called “computers” at the time) do the computation by hand and compare their results. Referencing the greatest invention of his day, the steam engine, a frustrated Babbage exclaimed, “I wish the sum could be calculated by steam!” He began his quest to design a calculating machine that would do arithmetic by purely mechanical means. The result was his “Difference Engine,” a decimal-based mechanical calculator that reliably computed basic arithmetical functions simply by setting up initial conditions and turning a hand crank. Eventually, Babbage’s research led him, in 1837, to conceive of the “Analytical Engine,” a greatly improved design that was never fully built during Babbage’s lifetime but was documented meticulously in his notebooks. The Analytical Engine was much more complex than the Difference Engine and used a punch-card system to run its programs. Ada Lovelace (1815-1852), a mathematician and daughter of Lord Byron (the English poet), assisted Babbage’s work and was the first to recognize that the Analytical Engine should not be understood simply as an arithmetical device but rather as a generalized symbol manipulator capable of algebraic and even alphabetical results. That is, Lovelace correctly recognized Babbage’s invention to be the first universal computing machine (Swade 2001). Because of her contributions, Lovelace is often credited as the first computer programmer.
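The “initial conditions and a hand crank” point can be made concrete: the Difference Engine tabulated functions using the method of finite differences, so each new value required only additions. Here is a brief, illustrative Python sketch of that idea (our own example for this lesson, not anything Babbage built or wrote):

```python
# Illustrative sketch only: the Difference Engine tabulated polynomials using
# the method of finite differences, so each new value needs only additions.
# The function name and inputs here are our own; Babbage's engine did this mechanically.

def difference_table(poly_values, steps):
    """Extend a table of polynomial values using repeated differences."""
    # Build the columns of differences from a few known starting values.
    diffs = [list(poly_values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    # For a polynomial, the deepest difference column is constant, so every
    # new table entry comes from additions alone -- the "turn the crank" step.
    for _ in range(steps):
        for level in range(len(diffs) - 2, -1, -1):
            diffs[level].append(diffs[level][-1] + diffs[level + 1][-1])
    return diffs[0]

# Example: x^2 + x + 1 for x = 0..3, extended four more steps.
print(difference_table([1, 3, 7, 13], 4))   # [1, 3, 7, 13, 21, 31, 43, 57]
```

Set up the starting values correctly (the initial conditions) and every further value falls out of simple addition, which is exactly the kind of work a geared mechanism, or an unskilled crank-turner, can do.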
Nearly a century later, in 1936, another Cambridge mathematician named Alan Turing (1912-1954) formally proved that a hypothetical machine (later referred to as a “Turing machine”) using a binary pair of symbols could perform any mathematical computation, provided the computation was representable by an algorithm (Turing 1936). The idea that a machine could perform such rule-based computations, previously accomplished only by humans, was revolutionary and laid the groundwork for modern digital computers. Turing later applied this insight in his contributions to the work of WWII-era codebreakers at Bletchley Park, which led to the cracking of the Enigma code used by the German military.
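To make the idea of rule-based computation concrete, consider a tiny simulator of such a machine. The following Python sketch (our own illustration, not Turing’s construction or anything from the cited sources) uses a small table of “current state plus symbol gives write, move, next state” rules to add one to a binary number:

```python
# A minimal Turing-machine-style sketch (illustrative only): a machine that
# adds 1 to a binary number written on the tape. The state names and rule
# format are our own invention for this example.

RULES = {
    # (state, symbol): (write, move, next_state)
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", "_"): ("_", -1, "carry"),
    ("carry",    "1"): ("0", -1, "carry"),
    ("carry",    "0"): ("1", -1, "halt"),
    ("carry",    "_"): ("1", -1, "halt"),
}

def run(tape_str: str) -> str:
    tape = dict(enumerate(tape_str))          # sparse tape; blank cells read "_"
    head, state = 0, "seek_end"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("1011"))   # 11 in binary -> prints 1100 (12)
print(run("111"))    # 7 -> prints 1000 (8)
```

Everything the machine “knows” lives in that rule table; the rest is mechanical bookkeeping, which is precisely why such a computation no longer needs a human to carry it out.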
Despite the fact that Babbage and Turing both succeeded in producing machines that could replace human cognitive activities, the human element remained essential to the design and implementation of their work. For Babbage, humans were required to conceive of the machines in the first place, manufacture their parts, write the programs, set up their initial conditions, provide the mechanical force for their operation, and interpret the results (Swade 2001). At each stage, errors or malicious actors could thwart the intended outcome. With physical access to Babbage’s device, a crafty enemy could introduce errant initial settings or even sabotage the physical machine itself, perhaps by grinding away a single cog-tooth, thereby undermining the reliability of its results. In the case of the Enigma codebreakers, despite the fact that Enigma settings allowed for over 1.5 x 10^19 possible setting permutations, Bletchley analysts were able to break the code because human implementation errors and human-induced cryptographic design weaknesses limited the number of actual permutations to a number solvable by the machine-based decryption techniques available to them (Thimbleby 2016).
By 1946, the University of Pennsylvania had built the Electronic Numerical Integrator and Computer (ENIAC), arguably the first general-purpose electronic computer. It covered 1,000 square feet and was 10 feet tall. Programs, once written, took several people to load by setting dials, cable connections, and switches. Around 50 vacuum tubes per day had to be replaced simply to keep ENIAC running (Grudin 2005). Through the 1950s, the Soviet Union also made critical advances in computer technology, particularly through their BESM-1, BESM-2, BESM-6, and M-20 computers, which were then the fastest and most powerful computers in Europe (IIS 2015). Such early computer projects employed people in three critical roles (Grudin 2005): management, programming, and operation.
Each of these roles was subject to vulnerabilities. Managers could mismanage, or be induced to do so by malevolent actors seeking to undermine their activities. Programmers could write insecure software that introduced vulnerabilities into the system. Operators could make mistakes, or be induced into making them, thereby affecting the implementation of the program. As technical sophistication and complexity grew, human-related vulnerabilities seemed to grow in tandem. From the earliest days, humans were essential to the functioning and security of computers.
Leading up to the 1950s, Rear Admiral Grace Hopper (1906-1992) was assigned to the Bureau of Ordnance Computation Project (BOCP) at Harvard University, where she worked on applying Mark I technology to solving US Navy equations and problems, work that led her to publish the first compiler for a programming language, known as Arithmetic Language version 0 (A-0). This paved the way from machine-dependent programming code to software that could be compiled and used on multiple machines. She was also involved in the creation of the Universal Automatic Computer (UNIVAC), the first commercial all-electronic digital computer produced in the United States and the brainchild of the creators of ENIAC. Credited with "debugging" the Harvard Mark II when a moth was removed from its relay contacts, she noted in her logbook on September 9, 1947 at 1545, "Relay #70 Panel F (moth) in relay. First actual case of a bug being found." It is for this reason that she is credited with popularizing the use of the word "bug" to describe a computer glitch.
By the mid-1950s, Hopper had led the release of Flow-Matic (B-0), the first programming language to use English-like commands. She later served as director of the Navy Programming Languages Group in the Navy’s Office of Information Systems Planning and developed validation software for the Common Business Oriented Language (COBOL) and its compiler as part of the Navy’s COBOL standardization program. COBOL is a standardized business computer language still widely used today in mainframe applications at banks and insurance companies. In her later years, she was a senior consultant for Digital Equipment Corporation (DEC), where she worked well into her 80s.
Her contributions to the Navy’s computing infrastructure made her an invaluable asset to the service. After retiring from the Navy Reserve in 1966, at age 60, with the rank of Commander, she was recalled and continued to serve until 1986, when she retired as a Rear Admiral. At the time of her retirement, at 79 years of age, she was the oldest commissioned officer in the United States Navy. Having received 40 honorary degrees, the National Medal of Technology, the Defense Distinguished Service Medal, and (posthumously) the Presidential Medal of Freedom, Hopper gained not only national but international recognition for her work with computers.
Throughout her entire career Hopper had to overcome many challenges, not only as a woman in the areas of Science, Technology, Engineering, and Mathematics (STEM) but also in the Navy. Inaugurated in 2020, Hopper Hall is the first building across the service academies to be named after a woman. The guided-missile destroyer USS Hopper (DDG-70) was commissioned in 1996, the second US Navy warship to be named for a woman from the Navy's own ranks. Nvidia, a company that designs and manufactures computer graphics processors, released the GH200 Grace™ Hopper™ Superchip in May 2023 for use in giant-scale AI and high-performance computing (HPC) applications (Nvidia, 2023).
The mid-20th century kicked off the information age. The World Wars drove the need for advanced methods of improving and speeding up mathematical computation, and the contributions of Babbage, Lovelace, Turing, and Hopper all set the stage for modern-day computing. Bulky electronics that filled entire rooms began to condense and fit onto desks. Data was stored on physical media and carried from room to room and building to building, a practice now known as "sneakernet." Complex machine coding transitioned to computer languages that could be compiled across systems. The confluence of interconnected computers (networking) and the operating system (OS), the software that connects a machine's hardware to the programs that use it and manages its resources, accelerated the spread of computing from academic institutions to personal computers in every home.
Early computers weren’t connected to one another, which meant that security concerns largely remained a localized problem. This changed in the late 1960s when the Advanced Research Projects Agency Network (ARPANET) was created. By December 1969, the ARPANET connected computers at the University of California, Los Angeles (UCLA), the University of California, Santa Barbara (UCSB), Stanford University, and the University of Utah. Six months later computers at the Massachusetts Institute of Technology (MIT), Harvard University, and a small Boston-based firm called Bolt, Beranek and Newman (BBN) were connected (IIS 2015). As computer networks grew, security concerns relating to lost information, intercepted communications, and the verification of identities began to emerge, albeit among only a small number of specialists (Warner 2012). Willis H. Ware, an early computer security pioneer, specifically emphasized the importance of humans to security, noting that “[T]here are human vulnerabilities throughout; individual acts can accidentally or deliberately jeopardize the protection of information in a system” (Ware 1967, 11).
If computers had not yet been connected to form a network, what had? In the 1960s and 1970s, the most sophisticated networked technology was the telephone system. Early phone hackers, calling themselves “phone phreaks,” used their growing technical knowledge of the way phone networks operated (their circuits, switches, relays, tonal complexities, and network diagrams) to hijack the telephone system for their own purposes, whether to avoid fees, connect to foreign conference calls, or gain access to areas of the network considered off-limits under normal telephonic protocols (Orth 1971; Rosenbaum 1971).
One of the early pioneers in phone phreaking, John Draper (aka “Cap’n Crunch”), discovered that a 2600 hertz tone would, when produced by a regular phone user, provide access to the Operator Mode used by phone operators to connect calls to anywhere in the world. Draper’s nickname “Cap’n Crunch” originated in his discovery that a small toy whistle offered in Cap’n Crunch cereal boxes at the time effectively produced exactly the 2600 Hz necessary to access this Operator Mode (Orth 1971, 28).
Another technique used by phone phreakers involved what came to be known as a “Blue Box,” an electronic telephone keypad device, constructed by phreakers themselves, with a speaker allowing tonal output. By understanding how telephone calls were routed over network trunk lines, phreakers could trick the network into bypassing normal toll collection, allowing them to route calls on their own (Rosenbaum 1971).
Phone phreakers didn’t confine themselves to technical hacks. Interviews with John Draper reveal that often he and his friend and fellow pioneer phreaker, Dennis Dan “Denny” Teresi, would use human manipulation techniques called “social engineering” to gain needed information from unsuspecting Bell Telephone employees. Draper described social engineering as “the ability of going in and talking to people on the inside of the phone company…making them believe you were working for the phone company" and acclaimed Teresi as its foremost expert of the day (Draper 2001). You will learn more about modern social engineering techniques later in the course.
Before phone phreaking, the term “social engineering” had only been applied to the activities of powerful policy planners—individuals in business or government attempting to cure what they identified as “social ills” through the use of their superior technical knowledge of public policy and economics. Phone phreakers inverted this power structure, thereby inaugurating what would later be called the “hacker mentality.” Here were relatively powerless individuals—often teenagers—usurping the designs of powerful phone companies. The other inversion that took place under this new application was from the allegedly benign purposes of the powerful policy planners to the nefarious purposes of the phreakers themselves. Phreakers reversed the social hierarchy that had stood alongside the concept of social engineering and, at the same time, put this tactic to their own disreputable—typically illegal—uses (Hatfield 2018).
Two early phone phreakers, Steve Jobs and Steve “Woz” Wozniak, later founded Apple Computer, which helped usher in the era of the personal computer in the 1980s (Lee 2001). Throughout that decade, the adoption of the Transmission Control Protocol / Internet Protocol (TCP/IP) model, the invention of the World Wide Web in 1989, the proliferation of personal computers, and the client-server model for network services effectively united the computer community with the phone phreakers, particularly since early computer networks communicated over telephone lines (Denning and Denning 2016, 5).
In 1984, the earliest hacker magazine, 2600: The Hacker Quarterly, which took its name from the 2600 hertz tone used by phone phreakers, began publishing anonymous articles on how to manipulate and repurpose, or “hack,” technologies such as telephones and the newly available personal computers. Rival magazines soon appeared, such as Phrack (a portmanteau of “phreak” and “hack”), founded by Craig Neidorf (aka "Knight Lightning") alongside Randy Tischler (aka "Taran King"). Phrack began publishing in November 1985 and continues today online (Hatfield 2018). In 1990, Neidorf, alongside Robert Riggs, was arrested and charged with possession and distribution of a stolen BellSouth document, which BellSouth claimed was worth $80K. Neidorf faced up to 31 years in prison, but his defense in United States v. Riggs was able to demonstrate that the information in the document could have been acquired for $13 (Denning 1991).
Early phone phreakers and computer hackers shared what scholars call the “hacker ethic,” a sensibility that developed over time and remains prevalent in the hacker underworld. Its principles include the conviction that information should be freely shared, a mistrust of centralized authority, and the belief that hackers should be judged by their skill rather than by formal credentials (Levy 1984; Steinmetz and Gerber 2015).
Rewinding to 1969, the same year ARPANET was being established, Ken Thompson, Dennis Ritchie, and others at Bell Labs began working on a DEC Programmed Data Processor (PDP-7), the same DEC mentioned earlier where Hopper would eventually work, and developed what would later become known as UNIX. The effort stemmed from the need to rewrite an operating system (OS) so Thompson could play the game Space Travel on a smaller machine with only 4K of memory. The result was a system a colleague dubbed UNICS (UNiplexed Information and Computing Service), a play on Multics, the ambitious time-sharing operating system project from which Bell Labs had withdrawn. Although it cannot be confirmed how the spelling eventually became UNIX, the early versions released in the early 1970s are not far removed from today's systems.
Tim Berners-Lee, a British scientist working at CERN, proposed the World Wide Web in 1989 and published and hosted the world's first website shortly thereafter. Within a few years, the World Wide Web (WWW) software was made available in the public domain with an open license, allowing the web to expand into what it is known as today.
In 1991, Linus Torvalds created the Linux OS out of a need to provide an alternative to Microsoft's Disk Operating System (MS-DOS). The name derives from a combination of his first name and UNIX; its mascot, a penguin named "Tux," is a recognizable symbol for the OS around the world. What set Linux apart was that Torvalds released its source code for free on the Internet, accessible to anyone anywhere in the world. His philosophy was that if the software was free, anyone with an interest in computer programming could modify it and improve the system over time. Licensed under the GNU General Public License (GPL), Linux was running on an estimated seven million computers by 1999.
Torvalds describes his work with statements like "Linux hackers do something because they find it to be very interesting" (Himanen 2001). For the hacker, "the computer itself is entertainment," meaning that the hacker programs because he finds programming intrinsically interesting, exciting, and joyous (Himanen 2001). Companies like Oracle, Intel, Netscape, Corel, and IBM financially supported Linux development and made their hardware and software Linux-compatible. The Apache web server, widely deployed on Linux, led many companies to switch to the operating system; it is also what is used at the Naval Academy.
Twenty-first-century societies rely on information to make informed decisions about every aspect of life, because information holds inherent value through the benefits it provides to its owners. However, individuals and groups authorized to use information must maintain possession of it and keep it out of reach of those who seek to use it for their own advantage. Securing information has been important throughout history, and the widespread use of the public Internet has created an environment in which information can be compromised from any geographic location.
Internet-related vulnerabilities can be traced back to the 1980s and have grown dramatically more complex and numerous in the decades since. The United States is a wealthy and powerful nation, and maintaining an uninterrupted and uncompromised flow of information is vital to our continued success. In May 2009, President Obama's 60-Day Cyberspace Policy Review was released; among a variety of recommendations, it included an action item to "expand and train the workforce, including ... cyber security expertise in the Federal government". In the fall semester of 2011, the Superintendent of the United States Naval Academy directed that an Introduction to Cyber Security course be added to the core curriculum for all Midshipmen, both to fulfill that action item and to prepare future officers for the cyberspace challenges they will encounter in the Navy and Marine Corps.
The term "Cyber" is relatively new, and professionals have not reached a consensus on what Cyber means. That being said, the term is so widely used that most people have a general idea of what it entails. For the purposes of this course we will use the following two definitions when thinking about Cyberspace:
The cyberspace model can be described in terms of three interrelated layers: (1) physical network, (2) logical network, and (3) cyber-persona.
As a midshipman, you have multiple physical devices (laptop, tablet, cell phone, Common Access Card) that may be secured in a wall locker in Bancroft Hall, forgotten at Nimitz Library, or stolen in downtown Annapolis. Accessing the GNBA wireless network requires physical proximity or a network cable to connect you to network resources. Accessing records in the Midshipmen Information Database System (MIDS) provides logical information that exists on servers, storage devices, or in the cloud. Logging into your email with your 'alpha' as your username establishes a cyber-persona that will be specifically associated with you over the next four years.
Over the past decade, an ever-growing number of high-profile incidents have caught the attention of governments, organizations, companies, and individuals around the world. Understanding these incidents and their implications will help Navy and Marine Corps Officers understand the challenges they may be confronted with in the fleet and how to avoid becoming the subject of a future Cyber incident. Below are some of the most significant and impactful Cyber incidents in recent years:
Computers store and transmit data in the form of 1's and 0's. Alone, this data doesn't have any value or significance, but when data sets are combined it becomes information. As you'll learn over the next few classes, data can represent different forms of information: the digits 10 can equal ten (read as decimal), two (read as binary), A (decimal ten written in hexadecimal), or 49 48 (the ASCII codes for the characters '1' and '0').
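To make these interpretations concrete, here is a short, illustrative Python sketch (our own example, not from the sources cited in this lesson) showing how the same two characters take on different values depending on the base or encoding assumed:

```python
# One string, many interpretations -- the value depends on the assumed encoding.
digits = "10"

print(int(digits, 10))            # 10       : read as decimal, "10" is ten
print(int(digits, 2))             # 2        : read as binary, "10" is two
print(format(10, "X"))            # 'A'      : the value ten written in hexadecimal
print([ord(c) for c in digits])   # [49, 48] : ASCII codes of the characters '1' and '0'
```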
Information may not be adequate in itself, either, and its meaning can change depending on how it's viewed. Take the five numeric digits 21402. What comes to mind? If that sequence of numbers appears in your bank account as a deposit of $21402 you might be excited; if you receive a tuition bill for $21402 the emotions will be very different. The $ symbol leading the numeric sequence takes the information and applies knowledge of a financial gain or burden based on its context. How about Annapolis, MD 21402? Now you've applied wisdom, knowing the same five-digit sequence is a zip code.
The DIKW hierarchy builds on itself, representing Data < Information < Knowledge < Wisdom. When considering cybersecurity, little bits of data can lead to the compromise of information based on knowledge of how computer systems work. The main purpose of cybersecurity is to protect data, and that will be our pursuit over the next 16 weeks as you work toward attaining some wisdom!
In all of the major cybersecurity incidents mentioned earlier, data compromise resulted in the violation of confidentiality, integrity, and availability: the CIA triad. The term was first found in a NASA Information Security Plan document in 1989 but may have been coined as early as 1986 (Saylor Academy, 2023). You may notice pillars outside of the classrooms that define each part of the CIA triad. Let's look more closely at definitions and examples involving the tenets of cybersecurity.
Confidentiality:
Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. If other midshipmen are able to access the Midshipmen Information Database System (MIDS) to view your Midshipmen Academic Performance Reviews (MAPRs) written by your professors, then the concept of confidentiality has been violated.
Integrity:
Guarding against improper information modification or destruction, including ensuring information non-repudiation and authenticity. Modifying your Order of Merit (OOM) in MIDS to something other than what the system should calculate for your graduating class violates the principle of data integrity. This includes accessing MIDS using someone else's credentials (authentication) or spoofing an email from someone authorized to make changes, such as a professor, to a system administrator (non-repudiation). Note that data integrity is different from the honor concept of integrity. (A short illustrative integrity-check sketch follows these definitions.)
Availability:
Ensuring timely and reliable access to and use of information. Not being able to log into MIDS to register for classes that are in high demand next semester violates the principle of availability. An early example is the 1988 Morris Worm, often considered the first Denial of Service (DoS) attack, which disrupted availability for roughly 10 percent of the systems connected to the Internet.
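As a concrete illustration of the integrity tenet, here is a short, hypothetical Python sketch (our own example; the record text and workflow are invented, and this is not an official Navy or MIDS tool) showing how a cryptographic hash can detect an unauthorized change to stored data:

```python
import hashlib

# Hypothetical example: detect tampering with a stored record by comparing
# cryptographic digests. Any change to the data produces a different hash.
record = "Midshipman 1/C Example: Order of Merit 123"
baseline = hashlib.sha256(record.encode()).hexdigest()   # saved when the record is created

tampered = "Midshipman 1/C Example: Order of Merit 1"
check = hashlib.sha256(tampered.encode()).hexdigest()

print(baseline == check)   # False -> the record no longer matches; integrity is in doubt
```

A matching digest does not prove who made a change (that is where authentication and non-repudiation come in), but a mismatch is a clear signal that integrity has been lost.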
Here is a visualization of the world's biggest data breaches and hacks: Information is Beautiful Blog
An overview of Verizon's annual report on data breach investigations: Verizon's Data Breach Investigations Report