Learning Outcomes
After completing these activities you should be able to:
- Explain the general security principles.
- Describe cyber security defense best practices.
- Explain defense strategies and security tools used to mitigate various cyber attacks.
The Asymmetry of Cyber Security
The fundamental reality that makes securing an information system so difficult is that an attacker only has to find one vulnerability to exploit in order to start an attack, whereas a defender must guard against an attack on any and every service or user.
A large system, like a corporate network or USNA's IT infrastructure, consists of many hosts and users and provides many services, and any of these could provide a vector for attackers.
That doesn't mean we should give up, but it does mean that securing a system is a difficult task and an ongoing process, not a one-shot job.
General Security Principles and Practices
Mitigating Risks in the Cyber Domain — Principles and Best Practices
We know that risk related to an information system depends not only on the likelihood of system vulnerabilities being exploited by threats, but also on the impact of successful exploits.
Of course we try to reduce the vulnerabilities of our information systems.
However, trying to design a system with no vulnerabilities is unrealistic.
Instead it makes sense to acknowledge the risks and try to minimize the impact of any one successful exploit.
What follows are three principles that should be kept in mind when considering the design of an information system to help not only limit the number of successful exploits, but reduce the impact of any one successful exploit.
- Principle 1: Least Privilege.
"Give users and programs the privileges they need, and no more."
This common-sense idea has far-ranging consequences.
For example, if your network only needs to provide web and name-resolution services to users/hosts outside your network, then you should employ a firewall that blocks inbound connections to ports other than 80 and 53.
If a file or collection of files is only needed by a specific user or by a particular network service, then that file or those files should only be accessible to the user that needs them, or to the server processes that need them.
Operating system file permissions are a natural way to limit access like this.
But this principle should also guide decisions about what information goes on network drives (as opposed to a host's local drive), what goes on "shared" drives, and even who gets access to information in databases and web applications.
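As a small illustration of least privilege at the file level, the sketch below uses Python's `os.chmod` to restrict a file so that only its owner can read and write it (the equivalent of `chmod 600`). It assumes a POSIX system; the filename is made up for illustration.

```python
import os
import stat

# Hypothetical credentials file, created here just for the demonstration.
SECRET_FILE = "app-credentials.conf"

with open(SECRET_FILE, "w") as f:
    f.write("db_password=example\n")

# Least privilege: owner may read and write; group and others get nothing.
os.chmod(SECRET_FILE, stat.S_IRUSR | stat.S_IWUSR)

# Confirm the resulting permission bits.
mode = stat.S_IMODE(os.stat(SECRET_FILE).st_mode)
print(oct(mode))  # 0o600
```

The same idea scales up: every file, share, and database table should default to no access, with permissions granted only to the users and processes that actually need them.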
- Principle 2: Defense in Depth.
If you rely on a single wall that you try to make impregnable, then when that wall is breached, all is lost (see the Maginot Line).
If on the other hand, you have concentric rings of walls, breaching a single wall gains only a limited amount of access.
Castle builders understood this, so they designed their castles with moats and outer walls and inner walls and keeps; the same thing goes for information systems.
In fact, we've seen a bit of this already. As our diagrammatic view of the target host has shown, we have a firewall for the host's network, the host's individual firewall, policy regarding which server processes are running, and password authentication along with OS policy regarding access to resources.
There are some other common practices that provide additional defense in depth (see 'Best Practices' section, below).
Maginot Line Vulnerability
History gives us a great example of the consequences of imagining that you can defend yourself with a single, impenetrable barrier: the Maginot Line. If you're not familiar with the Maginot Line, check out the Wikipedia article.
- Principle 3: Vigilance.
Following the principles of Least Privilege and Defense in Depth is likely to slow attackers down.
They may successfully exploit a vulnerability and gain some access, but they need to find another vulnerability and another way to exploit it before they can overcome the next barrier.
If we are vigilant, we may recognize the intrusion and be able to kick the attackers off our system before they get to the asset they were really out to attack.
We have mentioned along the way in this course just a few of the many kinds of events that get logged by information systems. Administrators need to keep their eyes on these log files in order to recognize when they're under attack. Programs called intrusion detection systems can help this effort by automatically combing log files for unusual activity and alerting administrators when it's found.
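A toy version of that log-combing idea is sketched below: scan sshd-style log lines for failed logins and flag any source IP that accumulates an unusual number of failures. The log lines and the threshold are made up for illustration; real intrusion detection systems apply far richer rules to live log streams.

```python
import re
from collections import Counter

# Sample log lines (invented for this sketch).
LOG_LINES = [
    "sshd[101]: Failed password for root from 203.0.113.9 port 50222",
    "sshd[102]: Failed password for admin from 203.0.113.9 port 50223",
    "sshd[103]: Failed password for root from 203.0.113.9 port 50224",
    "sshd[104]: Accepted password for alice from 198.51.100.4 port 40100",
]
THRESHOLD = 3  # alert when one IP reaches this many failed logins

# Count failed-login attempts per source IP.
failures = Counter()
for line in LOG_LINES:
    m = re.search(r"Failed password for \S+ from (\S+)", line)
    if m:
        failures[m.group(1)] += 1

# Any IP at or above the threshold is flagged for the administrator.
suspects = [ip for ip, n in failures.items() if n >= THRESHOLD]
print(suspects)  # ['203.0.113.9']
```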
Zero-Days – a Need for Vigilance.
Let's take a quick look at the last three best practices and why vigilance and an operational approach are needed for cyber security, just as in other security domains.
A look through the CVE list will quickly show that there is often a period of time between when a vulnerability becomes known and when a patch for that vulnerability is available.
A zero-day is a vulnerability that attackers can exploit but for which no patch exists.
The crucial part of the definition is that no patch exists for a zero-day exploit.
Since no patch exists for a zero-day, keeping systems patched will not mitigate the risk of zero-day exploits.
All hope is not lost, though; this is where vigilance comes in.
There are steps that can be taken between the time a vulnerability becomes known and the time a patch is available.
While a software developer works to patch a zero-day vulnerability, other security practitioners work to provide ways to mitigate it, and system operators need to remain vigilant and watch for abnormalities.
Notifications of vulnerabilities are usually coupled with proof-of-concept exploits.
Proof-of-concept exploits are double-edged swords.
Defenders and operators can use them to develop ways to detect and block attempts to use the exploit, and to take other steps to mitigate the risk of the zero-day until a patch is available.
Additionally, past logs can be reviewed to search for past exploit attempts.
Attackers, meanwhile, can use proof-of-concept exploits to further weaponize the exploit and to deploy it operationally to infiltrate systems.
Relating Attacks to Specific Defenses
- Defending Against DDoS.
In the Cyber Attack Lab, you executed a DDoS attack against a server.
There are two principal methods for defending against a DDoS:
If you can identify the source IP addresses, you can try to block them locally, get your ISP to block them, and get them added to shared IP blacklists so that other network segments will block them.
However, because DDoS attacks can come from thousands of computers spanning thousands of different geographic locations, blocking strategies are difficult to implement.
For many network services, you can increase resiliency by having more servers in more locations.
Many "cloud" providers will let you host your applications on virtual servers.
The advantage is that you can purchase more servers on the fly, when you need them, and redirect traffic as needed to dynamically balance the load.
If one server location experiences a DDoS attack, you can migrate your servers and customer traffic to other locations to reduce the impact.
Your success against DDoS attacks is then determined largely by who has more resources, you or the attacker.
- Defending Against Remote Exploitation.
In the Cyber Attack Lab, you used Metasploit to attack the web server and workstation by exploiting a bug in the software that was running.
The software you exploited was not updated with the latest patches.
The best defense against this is to keep software up to date, or patched.
When software vulnerabilities become known (and often they are publicly announced), developers (the people/organizations responsible for the software) will usually look for a way to fix the problems and issue updates to the software, called patches.
We can also protect running services from remote attacks using firewalls, and the design of our network architecture.
- Keep software up to date.
- Disable unnecessary services.
- Network Barriers: employ an appropriately configured perimeter firewall and network architecture (possibly including NAT, a DMZ, etc.).
- Host Barrier: employ an appropriately configured host-based firewall and local network protections.
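"Disable unnecessary services" starts with knowing what is actually listening. The sketch below (the port list is illustrative, and a real audit would use a tool like `netstat` or `ss`) probes a handful of well-known ports on the local host to spot services that perhaps should not be running at all.

```python
import socket

# A few well-known ports to check; a real audit would cover far more.
COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 3389: "rdp"}

def open_ports(host: str = "127.0.0.1") -> list:
    """Return the ports from COMMON_PORTS that accept a TCP connection."""
    found = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

for port in open_ports():
    print(f"port {port} ({COMMON_PORTS[port]}) is listening")
```

Any listening service that isn't strictly needed is an attack surface: disable it, or at least block it at the host and perimeter firewalls.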
- Defending Against Password Guessing. During the attack lab, you were able to gain access to some target hosts using a username and password guessed from publicly available information. One approach to mitigating this is to limit what information is publicly available. Far more effective, though, is to have users choose strong passwords and to limit the exposure of those passwords, i.e., limit opportunities to steal them.
- Use strong passwords.
Passwords should be:
- Drawn from a large character set
- Unusual (not found on lists of commonly used passwords)
- Not be based on dictionary words or personal information
- Don't re-use passwords.
- Change default admin passwords, especially when configuring new equipment.
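The password guidance above can be turned into an automated policy check. The sketch below is a minimal version: it requires a minimum length, characters drawn from several character classes, and no common passwords as substrings. The word list and thresholds are stand-ins; real systems check against large breach-derived password lists.

```python
import string

# Tiny stand-in for a real common-password list.
COMMON_WORDS = {"password", "letmein", "qwerty", "123456"}

def is_strong(password: str, min_length: int = 12) -> bool:
    """Return True if the password meets this sketch's policy."""
    if len(password) < min_length:
        return False
    # "Drawn from a large character set": require at least 3 of 4 classes.
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    if sum(classes) < 3:
        return False
    # Reject passwords built around common words.
    lowered = password.lower()
    if any(word in lowered for word in COMMON_WORDS):
        return False
    return True

print(is_strong("password123"))         # False: too short, common word
print(is_strong("Tr4verse!Moss&Lamp"))  # True
```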
- Defend Against Privilege Escalation.
In the attack lab, you were able to escalate privileges on the target host in one of two ways.
- Password Cracking.
To defend against John the Ripper and other password-cracking tools, we should employ the following lessons from our Passwords/Hashing lab:
- Ensure the password file is only accessible to root/administrator processes
- Require users to choose strong passwords (enforce a strong password policy)
- Store only password hashes (not plaintext) and use salt
- Enforce account lockouts
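The "store only salted hashes" lesson can be sketched with Python's standard library. This example uses PBKDF2, a deliberately slow key-derivation function, with a random per-user salt; the iteration count and salt size here are illustrative, not a recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    """Return (salt, digest); generates a fresh random salt if none given."""
    salt = salt or os.urandom(16)  # per-user salt defeats precomputed tables
    # The high iteration count slows down offline cracking tools.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("guess1", salt, stored))                        # False
```

Because only `(salt, digest)` is stored, a tool like John the Ripper must guess-and-hash each candidate password per user, rather than reading passwords directly or using one precomputed table for everyone.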
- Local Exploits. In the lab, you executed a local exploit that hijacked a process running with a higher level of privileges. To defend against this type of attack, we should:
- Keep software up to date (patched)
- Minimize the system's use of programs that run with a high level of privilege, so that there are fewer candidates for processes to hijack
- Eliminate unnecessary software and services
- Implement software execution policies that enforce the principle of least privilege.
These policies are often employed by a server called a "domain controller", which pushes out the policies to all users and workstations inside its domain (e.g., the Academy domain at USNA).
These defense strategies may not preclude all possible local exploits, but they make the attacker's job difficult enough to be an effective deterrent, and they certainly give operators more time to recognize and respond to attacker actions.
Several organizations summarize their own best practices for cyber defense.
Here are some excellent examples: