After reviewing this material you should be able to:
So far in this course we have examined mostly technical matters, including those utilized by attackers to gain access to networks (e.g. code injection, cross-site scripting) and those utilized by defenders to protect their networks (e.g. digital cryptography, firewalls, authentication using passwords). By contrast, this lesson will examine non-technical human factors in cyber operations.
Successful cyber operations require the integration of both technical and human factors. Technical factors include the theory and design of computers and network architectures, as well as the practical employment and exploitation of programs, hardware, communication protocols, cryptographic techniques, firewalls, and so on. Human factors involve people: the choices people and groups make, and the behaviors they habituate, which influence the success of cyber operations. Human decisions can render a technically secure network insecure, and in fact do so quite regularly. Humans can be tricked, manipulated, and influenced to an attacker's advantage. Similarly, they can be trained to recognize human-focused attacks and mitigate that risk through proper countermeasures. Management policies, another form of human decision making, can limit the scope of human choices within an organization, thereby reducing or increasing an organization's vulnerability to attack. Human behavior is also influenced by ethical, legal, and normative constraints, which can be exploited by an attacker, buttressed by a defender, and discovered by inquisitive collectors.

For these reasons the cyber domain is an inescapably human domain. Humans develop new technology, manage information processes, create and code new applications, use and misuse these products, and act as attackers and defenders. There is no part of the cyber domain that is not affected by human choices and behavior.
As such, human factors are at least as relevant to the successful attack or defense of computer networks as technical factors. With the advent of powerful security tools (privacy-focused operating systems, end-to-end encryption, and anonymizing browsers), human factors have arguably become even more important to network security than technical factors. For this reason, some observers refer to human-focused cyber attacks as "the highest form of hacking" (Greiner 2008). Scholars and practitioners who limit their focus purely to technical factors have an incomplete understanding of the cyber domain, just as inadequate technical exposure leads to misunderstandings of the opposite kind.
While data thefts that result from high-profile cyber operations (e.g. the OPM hack, the Facebook hack) receive a disproportionate amount of media coverage, the truth is that simple human error accounts for the majority of privacy breaches. Error is "the failure to achieve the intended outcome in a planned sequence of mental or physical activities when failure is not due to chance" (Reason 1990). Human errors can be categorized either as slips or mistakes. A slip is the incorrect execution of a correct action (an execution failure): the plan was sound, but it was carried out wrongly. A mistake is the correct execution of an incorrect action (a planning failure): the plan itself was flawed, however faithfully it was carried out.
A study (Liginlal et al. 2009) of 1,046 privacy breach incidents found that 67% of the incidents could be attributed to human error, while 33% could be attributed to malicious acts. Of these errors, 74% were mistakes and 26% were slips.
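Since the study reports only percentages, the absolute incident counts can be reconstructed approximately (rounded to whole incidents); a quick sketch:

```python
# Approximate counts implied by the percentages in Liginlal et al. (2009).
# The paper reports percentages of 1,046 incidents; counts here are rounded.
total_incidents = 1046

error_incidents = round(total_incidents * 0.67)          # human error: ~701
malicious_incidents = total_incidents - error_incidents  # malicious acts: ~345

mistake_incidents = round(error_incidents * 0.74)        # planning failures: ~519
slip_incidents = error_incidents - mistake_incidents     # execution failures: ~182

print(error_incidents, malicious_incidents, mistake_incidents, slip_incidents)
```

Roughly 700 of the thousand-plus breaches, then, trace back to ordinary human error rather than malice.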
Since humans are involved in every facet of the cyber domain, preventing human error may be the most important strategy defenders can take to secure their networks. As a corollary, offensive cyber operators have found that discovering and exploiting human errors is one of the most successful attack strategies.
Research suggests that mistakes arise from incorrect or incomplete knowledge, a misuse of knowledge, the application of faulty heuristics (methods or processes), or information overload. Therefore, preventing mistakes is best accomplished through better education, information reduction, decision support, and by increasing supervisory controls. This can be accomplished by creating a past error database, reassessing operator performance regularly, studying operator habits during routine operations, and making individuals aware of risk-enhancing factors (Liginlal et al. 2009).
Research suggests that a loss of situational awareness is the main cause of slips. Situational awareness is “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley 1995). Better training, reducing interruptions and multitasking, and providing memory aids are common methods for reducing slips (Liginlal et al. 2009).
Empirical studies of human interaction with security tools have concluded that technology-based security solutions often fail because security features are presented in such a way that users cannot understand them. For example, a Pew Research study (2017) of highly-educated regular computer users found that:
Other studies focusing on user interactions with firewalls found that most users are unaware of the functionality of firewalls, or even of their existence. Most users did not have a useful mental model of what a firewall does and therefore could not even begin to configure one. When actively prompted by a security tool display (e.g. a firewall prompt), most users lacked the knowledge required to assess the consequences of allowing or blocking a connection. For this reason, most users chose "Allow" simply so they could have access. Users base their decisions on prior experience with prompts, but because most connections are not malicious, their inference that any given future prompt should be allowed is flawed. Eventually, many users falsely conclude that the firewall is an enemy and turn it off completely (Raja et al. 2010).
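The flawed "always Allow" inference is a base-rate effect: benign prompts vastly outnumber malicious ones, so blanket allowing is almost always "rewarded." A toy illustration (the 1-in-200 base rate is invented for the example):

```python
# Hypothetical base rate: suppose only 1 in 200 connection prompts is malicious.
p_malicious = 1 / 200

# A user who clicks "Allow" on every prompt is "correct" 99.5% of the time,
# so day-to-day experience keeps reinforcing the habit -- right up until
# the one prompt that matters.
always_allow_accuracy = 1 - p_malicious
print(f"{always_allow_accuracy:.1%}")  # 99.5%
```

This is why experience alone, without an understanding of what a prompt means, trains users toward the insecure choice.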
To help prevent these misunderstandings, scholars argue (Furnell et al. 2006) that security tools, such as firewall interfaces, should be:
Although it still has much to learn, the US military has made significant improvements to its cybersecurity posture over the last decade. Many of these improvements are thought to stem not from a great leap forward in technology, but from a great leap forward in personnel training. The level of training, from the average user up to the administrator or IT professional, has made a real difference in the US military's ability to deter and respond to cyber attacks (Winnefeld et al. 2015). The answer to providing cybersecurity lies not in new technology alone, but in the human operator.
Humans have been integral to the workings of computational devices since their invention, and the relationship between humans and computers has been studied since the 1950s (Grudin 2005). One of the most important discoveries researchers have made is that humans are quite unreliable when it comes to self-assessments on topics relating to cybersecurity, with most rating their own security awareness much higher than it actually is (Friedman et al. 2002). This overconfidence makes users vulnerable to offensive cyber attacks.
Research into users’ browsing habits provides a useful example of this unreliability. One study (Kline et al. 2011) of technologically-informed young people, mostly aged 17-22, found that:
Using eye-tracking technology, in which experimental participants wear glasses that track their gaze while they browse websites, scholars have shown that individuals' self-reported areas of interest on the screen do not match their gaze data. Subjects report, for example, studying the URL or taking notice of the protocol, while their gaze never actually fell on these items at all. Conversely, gaze data that dwelt on security-related features did not match subjects' self-reports claiming they took no notice of those features (Whalen and Inkpen 2005).
Interestingly, gaze data is a much more reliable indicator of security awareness than people’s self-reporting. For example, studies have shown that when gaze data showed an interest in browser-related security features, subjects were more inclined to conduct business transactions when connections were encrypted. When no security-interest was evident in gaze data, individuals showed no difference in willingness to conduct transactions using unencrypted vs. encrypted connections (Sobey et al. 2008).
Somewhat surprisingly, these results do not improve with subjects' security knowledge. Researchers have found (Arianezhad et al. 2013) that users whose gaze data showed security-related gaze points were better at identifying phishing websites than users whose gaze points were not security-related, but whether a user's gaze points were security-related was not determined by security knowledge. Rather, while browsing, a user's task context influences their attention to security features to a greater extent than any other factor (Arianezhad et al. 2013).
The old debate about nature vs. nurture also applies to human activity in the cyber domain. Is user vulnerability hard-wired or can behavioral factors such as education, training, and habit-formation mitigate our susceptibility to cyber attacks? The answer, unsurprisingly, is that both innate and behavioral factors play a role. Let’s take a look at these within the context of cyber operations.
Studies of personality traits, cultural background, sex, age, and other characteristics have shown that these categories affect the way technology is used, and therefore how vulnerable its users are to exploitation by cyber attackers. For example, extroverted cell phone users tend to be more prone to risky behavior than introverted cell phone users (Bianchi & Phillips 2005), and extroversion is associated with sensitive data misuse more generally (Butt & Phillips 2008). Many mobile phone users do not use (or are not aware of) security features, and this correlates with the age of the user. For example, 80% of young people aged 16-30 are aware of PINs, yet a 2010 study found that fewer than a third use them to lock their smartphones (Kurkovsky & Syta 2010). Other studies show that 75% of young people who use smartphone passwords never change them, and while 54% never share their password, 23% share it with one person, 17% with two to three people, and 7% with four or more (Barn et al. 2013).
In this same (UK-based) study, student participants from nonwhite ethnic backgrounds, particularly Black and Asian students, reported a higher awareness of mobile phone data security than white students. UK/EU students reported less mobile phone security awareness than overseas students, and older students reported significantly greater awareness of security threats on mobile phones than younger students. White and UK/EU students were more likely to report an extroverted personality, and were also less IT literate (Barn et al. 2013). Other studies on user susceptibility to phishing e-mails (much more about these below) also underscore the importance of innate factors by placing these alongside behavioral ones. Phishing e-mails are a critical vulnerability for defenders and an easy way into a network for attackers. One such study found that users who are experienced with computers, less impulsive, more psychologically open, extraverted, and frequently trained have a greater degree of success in identifying phishing scams (Pattinson et al. 2012).
These considerations show that there are factors which individuals do not choose, and cannot change in themselves, but which may significantly affect an individual's innate vulnerability to cyber attacks. Offensive attackers would do well to understand such factors when designing their exploits and identifying their victims. Similarly, defenders who ignore these psychological, cultural, and ethnic predictors will find their ability to mitigate targeted attacks at least partially vitiated by these innate factors.
Of course, many vulnerabilities result from behavioral habits that cannot be reduced simply to innate factors, and such habits have been studied by scholars seeking to identify and cultivate defensive behaviors. For instance, in an in-depth analysis of real-world operationally gathered cybersecurity data about human behavior, researchers (Ovelgönne et al. 2017) analyzed over 1.6 million end-hosts from January to August 2011 to identify human behaviors which increase the risk of malware attacks on a host. They found that the risk of malware infection can be directly linked not just to technical security features but also to human choices and behaviors such as:
By avoiding these risky behaviors, individuals can decrease their vulnerability to malware quite significantly even if they are members of an innately risk-prone sociological group.
One way in which organizations—such as USNA—seek to alter human behavior is through security-related policies. Policies help constrain human choices, and good policies tend to do so in a way that tips the balance in favor of computer network security without overburdening the human user. Users who feel overburdened will be prone to breaking policies meant to help them.
The psychology of password management provides a useful example of the complexities surrounding the creation of sound organizational policies. Researchers have found (Tam et al. 2010) that when users strongly associate immediate negative consequences with password breaches, they are more likely to choose a strong password. For this reason, online banking passwords tend to be much stronger than e-mail passwords, despite the fact that bank-related password reset options typically involve using e-mail accounts to authenticate the user. A bank whose password reset policy requires two-factor authentication (2FA), such as a secure one-time pin sent to the user's smartphone in addition to an e-mail, will greatly improve its customers' security while only slightly increasing the burden felt by the user. Similar tradeoffs occur with secure data deletion policies (Reardon et al. 2014), policies that govern teleworking (Godlove 2012), policies that determine which employees get access to an organization's network (Greitzer et al. 2012; Sokolowski et al. 2016), and so on.
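One common way such one-time pins are generated (by authenticator apps, as opposed to randomly generated SMS codes) is the standard HOTP/TOTP scheme of RFC 4226 and RFC 6238. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, interval: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // interval)

# RFC 4226 test vector: shared secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because both sides derive the pin from a shared secret and the current time window, a pin that is phished or shoulder-surfed expires within seconds, which is a large part of why 2FA is so much stronger than a static password alone.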
In cyber operations, the attempt to target and manipulate human vulnerabilities in order to gain access to or otherwise exploit computer networks is called “social engineering” or human hacking. Like a traditional engineer who bends and stretches metal to build a bridge, social engineers manipulate the human dimension of a computer network in order to achieve their goals. Scholars have offered theoretical models identifying the various stages of social engineering attacks (Mouton et al. 2016, Indrajit 2017) and the underlying principles that have governed the use of social engineering throughout history (Hatfield 2018). In this section we will consider a number of forms that social engineering takes and try to understand why humans remain vulnerable to such attacks.
A classic example of social engineering occurred during the 2016 US election, when Russian hackers stole the credentials of Hillary Clinton's campaign chairman, John Podesta, by sending him a phony e-mail claiming to be a Google message warning that his account had been compromised and his password needed to be changed. The "change password" button was actually a password harvester, which asked Mr. Podesta to enter his old and new passwords, thereby giving the Russians access to his Gmail account. They then logged into his real Gmail account and updated his real password to the new one, preventing him from realizing anything was amiss (AP 2017). Rather than a regular phishing e-mail sent indiscriminately to a large number of people, this is an example of "spearphishing" because it targeted a single high-value individual. Unlike phishing, spearphishing requires a considerable amount of reconnaissance work and is associated with a much higher success rate. Mass phishing e-mails, on the other hand, require little reconnaissance work and rely on the fact that someone is bound to be tricked by the scam.
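Defensive mail filters catch many lures of this kind with simple heuristics, for instance flagging links whose visible text names one host while the underlying target points at another. A crude sketch (the domain names below are illustrative):

```python
from urllib.parse import urlparse

def hostname_of(text):
    """Extract a hostname from either a bare domain or a full URL."""
    if "//" not in text:
        text = "//" + text  # let urlparse treat a bare domain as a network location
    return urlparse(text).hostname

def link_mismatch(display_text, href):
    """Crude phishing indicator: the link's visible text names one host,
    but the link actually points somewhere else."""
    shown, target = hostname_of(display_text), hostname_of(href)
    return bool(shown and target and shown != target)

# A link displayed as Google's account page but pointing at an attacker's server:
print(link_mismatch("accounts.google.com", "http://harvester.example/reset"))  # True
```

No heuristic of this kind is foolproof (attackers register look-alike hosts precisely to defeat it), but it illustrates why hovering over a link before clicking remains standard anti-phishing advice.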
Not all phishing is e-mail based. Websites, instant messengers, online social networks, blogs and forums, mobile apps, and VOIP have all been used as the communication medium in phishing attacks (Aleroud & Zhou 2017). Alarmingly, research suggests that the amount of training or security knowledge one possesses does not, by itself, correlate with a better ability to identify phishing scams (Pattinson et al. 2012).
In traditional social engineering, such as most phishing scams, the attacker contacts the victim. In reverse social engineering, it is the victim who contacts the attacker, either by accident or because they were manipulated into doing so. For example, an attacker might set up a website that looks very much like a normal site but whose address is actually a misspelling of the normal site's (example: www.amazom.com) and wait for unwary victims to misspell their way into the trap. The source code of the phony site might even have been copied from the original site so that the two look nearly identical. Unfortunately for the victim, the misspelled site has no security features and harvests usernames and passwords from its visitors. As an added bonus for the attacker, when users enter their credentials into the fake site, a JavaScript redirect (e.g. setting document.location) sends them to the real site immediately after they click the "submit" button. Most users will see their screens appear to "reload" the intended site and will continue on as if nothing had happened, especially if they have a cookie in their browser that logs them into the real site. This is called domain-squatting and is a potent reverse social engineering attack.

A variant of this attack occurs when a victim's DNS server is compromised (called DNS poisoning), allowing the attacker to replace the IP address for www.amazon.com with that of the fake site. This means the attack can take place without any misspelling on the part of the victim. Still another variant, called pharming, utilizes an onmouseover event to redirect from a legitimate site to an illegitimate copy (Abu-Nimeh & Nair 2008). A final variant (although the possibilities are endless), called a visitor tracking attack, involves the attacker attracting the victim's notice by visiting the victim's social media page many times and then allowing curiosity to run its course.
Many victims will wonder who it is that seems so interested in their profile and will find their way to the attacker’s malicious page (Irani et al. 2011).
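Defenders can anticipate domain-squatting by enumerating one-keystroke misspellings of their own domain and registering or monitoring them. A simple generator covering substitutions, omissions, and adjacent transpositions (real typosquatting tools cover many more classes of typo):

```python
import string

def typo_variants(domain):
    """Generate one-keystroke misspellings of a domain name."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        for c in string.ascii_lowercase:                    # substitution: amazon -> amazom
            if c != name[i]:
                variants.add(name[:i] + c + name[i + 1:] + "." + tld)
        variants.add(name[:i] + name[i + 1:] + "." + tld)   # omission: amazon -> amzon
    for i in range(len(name) - 1):                          # transposition: amazon -> amzaon
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    return variants

print("amazom.com" in typo_variants("amazon.com"))  # True
```

Even this naive enumeration yields hundreds of candidate domains for a six-letter name, which is why large brands buy up or watch their most plausible misspellings.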
Not all social engineering attacks involve human-to-human contact. Increasingly, social engineers have automated their attacks particularly in the domain of social networks, where automated scripts or “bots” are used in interpersonal communication. This type of attack is known as automated social engineering (ASE) and takes a number of forms.
For example, in social engineering parlance a Sybil attack is a social media attack in which many online personas are created using bots, which then, using automated and pre-programmed responses, flood a social media space and fill it with opinions and commentary desired by the attacker. This can be used in a variety of ways, some of which are merely disruptive and annoying, while others contribute directly to violating a Pillar of Information Assurance. For example, real human victims in a social media space who find that their opinions, political views, etc. are seemingly in the minority may begin to shift their views bit by bit toward the "group norm" which the attacker has manufactured (Jhaveri et al. 2014). This disruptive method was used by Russia during the 2016 US election to influence voters in key districts. In another example, an attacker may increase the likelihood that a victim clicks on a malicious link (a phishing attack) by surrounding the victim with social media bots that speak favorably of the link. The term "Sybil" comes from the famous case in the history of psychology in which a woman pseudonymously called by that name claimed to have many personalities living inside her, although this was later revealed to have been a hoax (NPR 2011).
Another type of ASE attack, called a honeybot or bot-in-the-middle attack, occurs when an attacker programs a bot to enter a social network and begin a discussion with at least two members, passing questions and replies back and forth between the two users while the users do not know they are interacting through a third party. Auto-replies and keyword lookups allow the bot to swap male and female pronouns, replace links with links of its own, and apply other simple transformations. The bot often lets the majority of traffic flow through it unaltered so as to remain largely undetected, particularly because natural language remains difficult for bots to mimic except in small samples (i.e. short responses). The goal is to gather information, insert a malicious link into the flow of the conversation, or even shift the subject of conversation in a direction desired by the attacker (Lauinger et al. 2010; Kaul & Sharma 2013).
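The text-rewriting tricks just described can be sketched as a toy message transform. This is an illustration of the technique only, not the actual bot from the literature; the pronoun table and link handling are deliberately simplistic:

```python
import re

# Toy bot-in-the-middle transform: swap gendered pronouns so each relayed
# message reads naturally to the other victim, and optionally replace any
# URL in the message with one chosen by the attacker.
PRONOUN_SWAP = {"he": "she", "she": "he", "him": "her", "his": "her"}
LINK = re.compile(r"https?://\S+")

def relay(message, injected_link=None):
    words = [PRONOUN_SWAP.get(word, word) for word in message.split()]
    text = " ".join(words)
    if injected_link:
        text = LINK.sub(injected_link, text)
    return text

print(relay("tell him to open http://docs.example/report", "http://evil.example/report"))
# tell her to open http://evil.example/report
```

Note how little language understanding the bot needs: keyword substitution on short messages is enough to keep the relay plausible, which is exactly why the real attacks let most traffic pass through unaltered.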
The Russian Internet Research Agency, located in St. Petersburg, is well-known to use these sorts of techniques, even creating fake organizations that fill social media with memes and other political opinions in an effort to inflame domestic political disagreements within the US and elsewhere (NYT 2015). This has led to recent efforts on the part of victims to monitor fake profiles on the web, such as the Alliance for Securing Democracy.
Many social engineering attacks involve little use of technology at all, yet these are some of the most effective cyberattacks against unwary victims. Social engineers often find success because they are less restricted in their ability to use social and psychological techniques than when attacker-to-victim communication is mediated by technology.
One type of in-person social engineering attack is impersonation, in which an attacker takes on or spoofs the identity of someone else in order to gain access to a computer system, a server room, or a secure vault, or to elicit information from the victim, implant malware, or otherwise compromise security. An attacker might use a simple smartphone application, such as Spoofcard, to spoof their caller ID so that a call appears to originate from inside an organization. The attacker could then call a member of the organization's staff, pretend to be part of the IT department, and ask the victim to allow them to remotely access (using rdesktop, ssh, etc.) the victim's machine in order to make a routine security update. Not all impersonation attacks are low-tech. For instance, targeted biometric impersonation involves locating an innocent person in the verification system with a similar biometric signature and then fraudulently assuming that identity to spoof a verification check (Bustard et al. 2014).
Tailgating involves the manipulation of a social custom to gain access to rooms, buildings, and vaults that one does not have permission to enter. The custom of holding the door for someone entering a room behind you is so ingrained in our habits that, assuming the timing is right and no obvious "red flags" are evident, most people will fall victim to its misuse. This is particularly true if the attacker is carrying items that keep their hands too full to open the door. The principle of reciprocity (if someone does you a favor, you feel obliged to return it) is particularly effective if, as is often the case, the secure room is inside a foyer or behind another set of doors. In this case, the attacker makes sure s/he opens the outer door for the victim, who then "returns the favor" by allowing the attacker through the secure inner doors (AlliedBarton 2018).
Another very effective in-person attack involves looking over the shoulder of a victim to see information on their computer screen or to view their keypad as they type. This can be done organically (using eyeballs) or through recording devices (using cameras). Such attacks are called shoulder surfing attacks and allow attackers to crack passwords and view sensitive data otherwise denied to them by the security features of a victim's system. For example, in October 2018, the artist Kanye West was recorded in the Oval Office entering his password, "000000", into his smartphone. The ease of shoulder surfing attacks has led researchers to propose various shoulder-surfing-resistant password schemes (Rao & Yalamanchili 2012).
A final in-person attack is generally referred to as dumpster diving. As the name suggests, it involves sifting through a victim's discarded trash for clues about the victim's security posture. Dumpster diving may or may not involve actual dumpsters, but when it does and those dumpsters are held under lock and chain, another in-person attack, lock picking, becomes necessary. Individuals and organizations routinely discard information valuable to their security, including such items as credit card offers, billing and banking statements, 401K statements, legal paperwork, and usernames and passwords (Fried 2018).
Humans seem innately vulnerable to social engineering attacks, and scholars have proposed a number of psychological models that explain why human beings continually fall prey to them. These models draw on studies in human psychology and human-computer interaction to categorize various cognitive and emotional tendencies most humans possess. Not every social engineering attack will exploit all of these tendencies, but the more of these principles a social engineer employs, the greater the chance of success. Similarly, defenders who find ways to mitigate these psychological tendencies will find themselves relatively less vulnerable to human hacking.
David Gragg (2003) identified a number of psychological principles that exhibit the power to influence or persuade people, leading them to become victims of social engineering attacks.
Social engineering attacks are effective because these psychological tendencies are present in all of us. Since humans are central to the operation of computer networks, some aspect of human hacking is typically present in every sophisticated computer network attack. Successful defensive policies and postures require a multi-layer strategy (Gragg 2003). Although scholars have identified a number of ways in which organizations can mitigate these risks, so long as human beings form a central part of computer networks they will remain potent attack vectors.
Abu-Nimeh, Saeed, and Saku Nair. 2008. “Bypassing Security Toolbars and Phishing Filters via DNS Poisoning.” IEEE Global Telecommunications Conference (2008), pp. 1-6.
Aleroud, Ahmed, and Lina Zhou. 2017. “Phishing environments, techniques, and countermeasures: A survey.” Computers & Security, Vol. 68, pp. 168-196.
https://www.researchgate.net/publication/315853086_Phishing_Environments_Techniques_and_Countermeasures_A_Survey
AlliedBarton, “Security Tailgating: Best Practices in Access Control” White Paper. Accessible at this URL: http://www.alliedbarton.com/Portals/0/SRC/WhitePapers/Security%20Tailgating%20-%20Best%20Practices%20in%20Access%20Control.pdf
Associated Press. 2017. “Inside Story: How Russians Hacked the Democrats’ Emails.” https://www.apnews.com/dea73efc01594839957c3c9a6c962b8a (accessed 20 October 2018).
Arianezhad, M., Camp, L.J., Kelley, T. and Stebila, D. 2013. “Comparative eye tracking of experts and novices in web single sign-on”, Proceedings of the third ACM Conference on Data and Application Security and Privacy – CODASPY ’13, ACM Press, New York, NY, p. 105.
Barn, Balbir S., Ravinder Barn, and Jo-Pei Tan. 2013. “Smart Phone Activity: Risk-Taking Behaviours and Perceptions on Data Security among Young People in England.” International Journal of Social and Organizational Dynamics in IT 3(4): 43-58.
Bianchi, A., and J. G. Phillips. 2005. "Psychological Predictors of Problem Mobile Phone Use." Cyberpsychology & Behavior 8(1): 39-51.
Bustard, John D., John N. Carter, Mark S. Nixon, and Abdenour Hadid. 2014. “Measuring and Mitigating Targeted Biometric Impersonation.” IET Biometrics 3(2): 55-61.
Butt, S., and J. G. Phillips. 2008. "Personality and Self-Reported Mobile Phone Use." Computers in Human Behavior 24(2): 346-360.
Endsley M. R. 1995. “Toward a theory of situation awareness in dynamic systems.” Human Factors 37: 32–64.
Fried, Robert B. 2018. “Dumpster Diving.” Social-Engineer.org. https://www.social-engineer.org/wiki/archives/DumpsterDiving/CrimeandClues_dumpster_diving.htm
Friedman, B., Hurley, D., Howe, D.C., Felten, E. and Nissenbaum, H. 2002. “Users’ conceptions of web security”, CHI ’02 Extended Abstracts on Human Factors in Computing Systems – CHI ‘02, ACM Press, New York, NY, p. 746.
Furnell, S.M., A. Jusoh, and D. Katsabas. 2006. “The Challenges of Understanding and Using Security: A Survey of End-Users.” Computers & Security 25(1): 27-35.
Godlove, Timothy. 2012. “Examination of the Factors that Influence Teleworkers’ Willingness to Comply with Information Security Guidelines.” Information Security Journal: A Global Perspective 21(4): 216-229.
Gragg, David. 2003. “A Multi-Level Defense Against Social Engineering.” SANS Institute InfoSec Reading Room, pp. 1-21.
Greiner, Lynn. 2008. “Hacking your network’s weakest link – you.” Network Magazine 12(1):9–12.
Greitzer, Frank L., Lars J. Kangas, Christine F. Noonan, Angela C. Dalton, Ryan E. Hohimer. 2012. “Identifying At-risk Employees: Modeling Psychosocial Precursors of Potential Insider Threats.” 45th Hawaii International Conference on System Sciences (2012), pp. 2,392-2,401.
Grudin, Jonathan. 2005. “Three Faces of Human-Computer Interaction.” IEEE Annals of the History of Computing 27(4): 46-62.
Hatfield, Joseph. 2018. “Social Engineering in Cybersecurity: The Evolution of a Concept.” Computers & Security, Vol. 73, pp. 102-113.
Indrajit, Richardus Eko. 2017. “Social Engineering Framework: Understanding the Deception Approach to the Human Element of Security.” International Journal of Computer Science Issues, 14(2): 8-16.
Irani D., Balduzzi M., Balzarotti D., Kirda E., Pu C. 2011. "Reverse Social Engineering Attacks in Online Social Networks." In: Holz T., Bos H. (eds) Detection of Intrusions and Malware, and Vulnerability Assessment. DIMVA 2011. Lecture Notes in Computer Science, vol. 6739. Springer, Berlin, Heidelberg. pp. 55-74.
Jhaveri, Hardik, Harshit Jhaveri, and Dhaval Sanghavi. 2014. "Sybil Attack and its Proposed Solution." International Journal of Computer Applications 105(3): 17-19.
Kaul, Priya, and Deepak Sharma. 2013. "Study of Automated Social Engineering, its Vulnerabilities, Threats and Suggested Countermeasures." International Journal of Computer Applications 67(7): 13-16.
Kelley, Timothy, and Bennett I. Bertenthal. 2016. “Attention and Past Behavior, Not Security Knowledge, Modulate Users’ Decisions to Login to Insecure Websites.” Information and Computer Security 24(2): 164-176.
Kline, Douglas M., Ling He, and Ulku Yaylacicegi. 2011. “User Perceptions of Security Technologies.” International Journal of Information Security and Privacy 5(2): 1-12.
Kurkovsky, S., and E. Syta. 2010. "Digital Natives and Mobile Phones: A Survey of Practices and Attitudes about Privacy and Security." Paper presented at the 2010 IEEE International Symposium on Technology and Society (ISTAS).
Lauinger, Tobias, Veikko Pankakoski, Davide Balzarotti, and Engin Kirda. 2010. “Honeybot, Your Man in the Middle for Automated Social Engineering.” Proceedings of USENIX Symposium on Networked Systems Design and Implementation, April 2010.
Liginlal, Divakaran, Inkook Sim, and Lara Khansa. 2009. “How Significant is Human Error as a Cause of Privacy Breaches? An Empirical Study and a Framework for Error Management.” Computers & Security 28(3-4): 215-228.
Mouton, Francois, Louise Leenen, and H. S. Venter. 2016. "Social Engineering Attack Examples, Templates, and Scenarios." Computers & Security, Vol. 59, pp. 186-209.
National Public Radio. 2011. “Real ‘Sybil’ Admits Multiple Personalities Were Fake.” https://www.npr.org/2011/10/20/141514464/real-sybil-admits-multiple-personalities-were-fake (accessed 20 October 2018).
New York Times. 2015. “The Agency.” https://www.nytimes.com/2015/06/07/magazine/the-agency.html (accessed 20 October 2018).
Ovelgönne, Michael, Tudor Dumitras, B. Aditya Prakash, V.S. Subrahmanian, and Benjamin Wang. 2017. “Understanding the Relationship between Human Behavior and Susceptibility to Cyber Attacks: A Data-Driven Approach.” ACM Transactions on Intelligent Systems and Technology 8(4): 1-25.
Pattinson, Malcolm, Cate Jerram, Kathryn Parsons, Agata McCormac, and Marcus Butavicius. 2012. “Why Do Some People Manage Phishing E-mails Better Than Others?” Information Management & Computer Security, 20(1): 18-28.
Raja, Fahimeh, Kirstie Hawkey, Pooya Jaferian, Konstantin Beznosov, and Kellogg S. Booth. 2010. “It’s Too Complicated, So I Turned It Off! Expectations, Perceptions, and Misconceptions of Personal Firewalls.” Proceedings of the 3rd ACM Workshop on Assurable and Usable Security Configuration (Oct. 2010): pp. 53-62.
Reason J. 1990. Human Error. New York, NY: Cambridge University Press.
Reardon, Joel, David Basin, and Srdjan Capkun. 2014. “On Secure Data Deletion.” IEEE Security & Privacy 12(3): 37-44.
Rao, Kameswara, and Sushma Yalamanchili. 2012. “Novel Shoulder-Surfing Resistant Authentication Schemes using Text-Graphical Passwords.” International Journal of Information and Network Security, 1(3): 163-170.
Sobey, J., Biddle, R., van Oorschot, P. and Patrick, A.S. 2008. “Exploring user reactions to new browser cues for extended validation certificates”, in Jajodia, S. and Lopez, J. (Eds), Proceeding 13th European Symposium on Research in Computer Security (ESORICS) 2008, Springer, pp. 411-427
Sokolowski, John A., Catherine M. Banks, and Thomas J. Dover. 2016. “An Agent-Based Approach to Modeling Insider Threat.” Computational and Mathematical Organization Theory 22(3): 273-287.
Stajano, Frank, and Paul Wilson. 2011. “Understanding Scam Victims: Seven Principles for Systems Security.” Communications of the ACM 54(3): 70-75.
Tam, L., M. Glassman, and M. Vandenwauver. 2010. “The Psychology of Password Management: A Tradeoff between Security and Convenience.” Behaviour & Information Technology 29(3): 233-244.
Whalen, T. and Inkpen, K.M. 2005. “Gathering evidence: use of visual security cues in web browsers”, Proceedings of Graphics Interface, Waterloo, ON, pp. 137-144.
Winnefeld Jr., James A. (Sandy), Christopher Kirchhoff, and David M. Upton. 2015. "Cybersecurity's Human Factor: Lessons from the Pentagon." Harvard Business Review (September 2015).