The Asymmetry of Cybersecurity
The fundamental reality that makes securing an information
system so difficult is that an attacker only has to find one
vulnerability to exploit in order to start an attack, whereas a
defender must guard against an attack on any and every service
or user. A large system, like a corporate network or the
Academy's IT infrastructure, consists of many, many hosts and
users and provides many, many services, and any of these could
be a vector for attackers.
That doesn't mean we should give up, but it does mean that
securing a system is a difficult task and an ongoing process,
not a one-shot job.
Fix the Immediate Problems that Allowed Us to Attack Successfully
We will begin our discussion of network defense by looking at
how to defend against the kinds of attacks we discussed in the
network attack lesson. Recall that in that lesson we discussed a
scenario in which we, as attackers, were supposed to steal a file
off a host. The attack proceeded in three phases:
- Gaining access to the webserver host. In the attack
scenario, we discussed attacking the webserver by exploiting a
bug in the server software, for example with a buffer
overflow attack. The best defense against this is to keep
software and operating systems up-to-date, or "patched". When
software vulnerabilities become known (and often they are
publicly announced), vendors (the people/organizations
responsible for the software) will usually look for a way to
fix the problems and issue either newer versions of the
software or "patches" (updates of portions of a system).
Many users and administrators do a bad job of keeping their
systems up-to-date with the newest patches and versions of
software, and thus leave themselves vulnerable.
Microsoft regularly issues patches for its products on
"Patch Tuesday", which is the second Tuesday of each month.
Keeping software up-to-date is really important,
but it doesn't completely eliminate the threat of attacks,
like buffer overflow attacks, that exploit bugs in software.
Obviously, if an attacker finds a previously undiscovered
bug (vulnerability) in a piece of software, he can exploit
that until others notice the problem and patches for it come
out. An exploit that takes advantage of a previously unknown
vulnerability is called a zero-day exploit, and
this is something patching can't fix.
- Password guessing. The second step in the attack scenario
was to gain access to the target host by remote login (from
the webserver host) using a username and password guessed
from publicly available information. One approach to
mitigating this is to limit what information is publicly
available. However, this is not terribly effective, however
attractive it may be to monolithic institutions. Far more
effective is to have users use strong passwords and limit the
exposure of those passwords, i.e., limit opportunities to
steal them.
As discussed elsewhere, passwords should be long, should not
be words or be based on personal information like a pet
or family member's name, and should be drawn from a large
character set.
Above all, you shouldn't use these passwords:
25 Worst Internet Passwords.
Administrators can set requirements on
passwords to try to force users to choose good ones. For
example, the Academy requires us to use at least two each of
lower-case, upper-case, numeric, and punctuation characters.
Also, don't use the same password for multiple accounts.
As a side note: we have seen that unchanged default
passwords are a serious vulnerability, so making sure all of
those default passwords are changed is very important.
- Escalating privileges. The third step in our attack
scenario was to escalate privileges on the target host.
We discussed two ways this might be done. The first was
that, being on the host in question, we might be able to
steal the password file and use a password-cracking
program. The first defense against this is to make sure the
password file is only accessible to administrator processes.
The second is to, once again, make sure users choose strong
passwords and, of course, to use "salt" (see the sketch
following this list).
The second approach to escalating privileges we discussed
was to hijack a process running with a higher level of
privileges. This is essentially the same thing we
discussed doing to the webserver, except that, once we are
on the target host, we have more processes to attack,
because we are not limited to servers listening on ports.
Once again, keeping software patched is an important part
of defending against this kind of attack. Another is to
minimize the system's use of programs that run with a high
level of privilege, so that there are fewer candidates for
processes to hijack. This idea will come back again later.
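To make the "strong passwords plus salt" defense concrete, here is a
minimal Python sketch of how a system might enforce a password policy
and store only salted hashes. The specific policy, salt size, and hash
function are assumptions for illustration; real systems should use a
purpose-built password-hashing library.

    import hashlib
    import os
    import string

    def meets_policy(password):
        # Hypothetical policy: length >= 12 and at least two characters
        # from each class (an actual policy may differ).
        classes = [string.ascii_lowercase, string.ascii_uppercase,
                   string.digits, string.punctuation]
        return (len(password) >= 12 and
                all(sum(c in cls for c in password) >= 2 for cls in classes))

    def store(password):
        # Keep only a per-user random salt and the hash of salt+password;
        # the password itself is never written down.
        salt = os.urandom(16)
        return salt, hashlib.sha256(salt + password.encode()).hexdigest()

    def verify(attempt, salt, digest):
        # Re-hash the attempt with the stored salt and compare.
        return hashlib.sha256(salt + attempt.encode()).hexdigest() == digest

Because every user gets a different random salt, two users with the
same password end up with different entries in the password file, and
an attacker cannot use a single precomputed table of hashes to crack
everyone at once.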
History gives us a great example of the consequences of
imagining that you can defend yourself with a single,
impenetrable barrier: The Maginot Line.
If you're not familiar with it, check out
the Wikipedia entry.
Designing a system to reduce the impact of an exploit
We know that the risk related to an information system depends
not only on the system's exploitable vulnerabilities, but also
on the impact of successful exploits.
Of course we try to reduce the vulnerabilities of our
information systems.
However, trying to design a system with no vulnerabilities is
unrealistic. Instead, it makes sense to acknowledge that there
will be exploitable vulnerabilities and to design systems that
minimize the impact of any one successful exploit. What follows
are three principles that should be kept in mind when designing
an information system, to help not only limit the number of
successful exploits but also reduce the impact of any one
successful exploit.
Principle 1: Least Privilege
"Give users and
programs the privileges they need, and no more."
This common-sense idea has far-ranging consequences.
For example, if your network only
needs to provide
web and name-resolution services to users/hosts outside your
network, then you should employ a firewall that blocks inbound
connections to ports other than 80 and 53.
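As a toy illustration (this is not any real firewall's configuration
language), a least-privilege inbound policy amounts to a default-deny
rule with explicit exceptions:

    # Toy model of a default-deny firewall: permit inbound traffic only
    # for the services the network actually offers (web and DNS here)
    # and drop everything else.
    ALLOWED_INBOUND = {("tcp", 80), ("tcp", 53), ("udp", 53)}

    def permit(protocol, port):
        # Default deny: a connection passes only if explicitly allowed.
        return (protocol, port) in ALLOWED_INBOUND

    print(permit("tcp", 80))   # True  -- web service is offered
    print(permit("tcp", 22))   # False -- ssh is not offered, so block it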
If a file or collection of files is only needed by a specific
user or by a particular network service, then that file or those
files should only be accessible to the user that needs them, or
to the server processes that need them. Operating system file
permissions are a natural way to limit access like this. But
this principle should also guide decisions about what information
goes on network drives (as opposed to a host's local drive),
what goes on "shared" drives, and even who has access to
information in databases and web applications. Do you think the
mids system follows this principle?
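For instance, on a Unix-like system an administrator could apply least
privilege to a sensitive file like this (a sketch; the file path is
made up):

    import os
    import stat

    # Restrict a hypothetical sensitive file so that only its owning
    # user can read or write it; group members and others get nothing.
    path = "/srv/payroll/records.db"
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # mode 0600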
Principle 2: Defense in Depth
If you have a single wall protecting everything and you try to
make it impregnable, then if you fail and that wall is breached,
all is lost (see the Maginot Line).
If, on the other hand, you have concentric rings of walls,
breaching a single wall gains only a limited amount of access.
Castle builders understood this, so they designed their castles
with moats and outer walls and inner walls and keeps. The same
thing goes for information systems. In fact, we've seen a bit
of this already: as our diagrammatic view of the target host has
shown, we have a firewall for the host's network, the host's
individual firewall and policy regarding which server processes
are running, and password authentication along with OS policy
regarding access to resources. However, there are some other
common practices that provide even more defense in depth. Below
are a few that are relatable in terms of the scenario from our
network attack lesson.
Note: You'll find that
least privilege and
defense in depth are complementary:
following the principle of least privilege leads to designs with
depth, and designing with defense in depth in mind often brings
us closer to the ideal of least privilege.
Principle 3: Vigilance
Following the principles of Least Privilege and Defense in Depth
is likely to slow attackers down. They may
successfully exploit a vulnerability and gain some access, but
they need to find another vulnerability and another way to
exploit it before they can overcome the next barrier. And so on.
If we are vigilant, we may well recognize the intrusion
and be able to kick the attackers off our system before
they get to the asset they were really out to attack.
We have mentioned along the way in this course just a few of the
many kinds of events that get logged by information systems.
Administrators need to keep their eyes on these log files in
order to recognize when they're under attack.
Programs called
intrusion detection systems
can be used to help this effort by
automatically combing log files looking for unusual activity and
alerting administrators when it's found.
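As a simple example of this kind of automated vigilance, the sketch
below counts failed-login lines per source address and raises an alert
past a threshold. The log format and the threshold are made up for
illustration:

    from collections import Counter

    THRESHOLD = 20   # made-up alert threshold

    def scan(logfile):
        # Assumes a made-up log format in which each failed-login line
        # contains "Failed password" and ends with the source address.
        failures = Counter()
        with open(logfile) as f:
            for line in f:
                if "Failed password" in line:
                    failures[line.rstrip().split()[-1]] += 1
        for source, count in failures.items():
            if count > THRESHOLD:
                print(f"ALERT: {count} failed logins from {source}")

A real intrusion detection system does the same sort of thing on a much
larger scale, with many more patterns and statistical baselines.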
Best Practices for designing systems that reduce the
impact of an exploit
There are a number of "best practices" for designing systems
that adhere to our three principles:
least privilege, defense in depth, and vigilance.
- Removing unnecessary accounts and services
An important implication of the principle of least privilege is
that unnecessary accounts and services should be removed.
Every service and account represents a potential vulnerability.
Thus, it is important to remove accounts and disable services
that are not really necessary. For example, if a host is a
dedicated DNS server and nothing more, it doesn't make much
sense for ordinary users to have accounts on the host. If a
host is intended to be used for image scanning and other
multimedia work, it wouldn't make much sense to have a webserver
running on it.
- DMZ / Perimeter Network:
The attack scenario we looked at was predicated on attacking
the webserver host as a stepping stone to the target host.
This worked because the webserver was inside the target
host's network. An alternative is to keep servers that are
visible to the outside world, like the webserver in our
scenario, in a DMZ (Demilitarized Zone). That
means that there is a second firewall in the network that
sits between the webserver (and other public-facing servers)
and the internal network. The area between the outer and
inner firewalls is called the "DMZ".
The inner firewall limits access from the DMZ to the
internal network. For example, the inner firewall may not
allow ssh traffic (port 22) from the DMZ into the inner
network. This would've foiled the attack strategy in our
scenario. USNA employs this, in fact. You can, for example,
ssh from rona.cs.usna.edu to www.usna.edu, but you cannot
ssh from www.usna.edu to rona. If you have an account on
www.usna.edu (which the instructors do), you can try it.
Hosts inside the
DMZ usually do nothing but provide their service. This
allows administrators to remove many programs and files that
would normally be on a host, further limiting the options of
an attacker that gains access to that host.
- Sandboxing servers
If an attacker is able to hijack a server process with
something like a buffer overflow attack, what he gains is
the privileges and access that the server process itself
enjoys. To mitigate the damage caused by a successful
exploit of a server, a system should be designed so that the
privileges and access the server enjoys are precisely what
it needs to provide its service, and no more.
In our scenario, we attacked a webserver. If a system is
set up with a dummy user account, called perhaps web,
and the webserver runs as a process owned by that user, a
successful attacker would have no special privileges on the
system (see the privilege-dropping sketch after this list).
If, on the other hand, the webserver process is
owned by the administrator account, a successful attacker
has unlimited access on the host. Other steps can limit the
privileges and access of the server even further. OSes
allow processes to be sandboxed, which means that
the set of files and services accessible by and even visible
to the process is severely limited.
- Minimizing what executes with higher privileges
This best practice is an instance of "least privilege" but
is also an important part of implementing defense in depth.
Any process that runs as root/administrator or some other
user account with special access is a
potential avenue by which an attacker that has
gained unprivileged access can obtain privileged
access and make use of data or services that were previously
unavailable to him.
- Keep log files, and monitor them
The principle of vigilance states that we should be
monitoring our systems for signs of attack. One of the
primary tools we have for doing this is log files. Most
servers can be configured to keep more or less logging
information on their activities. Not only should this
feature be used, but system administrators have to look
through the log files for signs of suspicious activity.
As mentioned previously, there's software that can
automate this to a degree.
- Watch what leaves, not just what comes in
One thing to watch is outbound traffic: if there is a lot of
data going out of a host on your network, and that's not typical
behavior, you might be seeing an attacker sending himself the
data he's stolen. This is another example of the
principle of vigilance.
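Here is a sketch of the privilege-dropping idea from the sandboxing
item above: the classic Unix pattern in which a server uses root
privileges only long enough to bind a privileged port, then permanently
switches to an unprivileged dummy account. The uid/gid values for the
web account are assumptions for illustration:

    import os
    import socket

    WEB_UID = 1001   # hypothetical uid of the unprivileged "web" account
    WEB_GID = 1001   # hypothetical gid

    # Binding a port below 1024 requires root privileges...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("", 80))
    sock.listen(5)

    # ...but serving requests does not, so drop privileges permanently.
    os.setgid(WEB_GID)   # drop group privileges first,
    os.setuid(WEB_UID)   # then user privileges (the order matters)

    # From here on, the process -- and any attacker who hijacks it --
    # has only the "web" account's privileges.

Dropping the group before the user matters because once the process
gives up root, it no longer has permission to change its group.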
Defending against DDoS
A distributed denial of service attack (DDoS) is a difficult
thing to defend against, because there's no
obvious vulnerability to fix: with enough requests, you will
eventually swamp a server; that's all there is to it.
However, if you can identify what IP Addresses the attacks are
coming from, you can configure your firewall to drop traffic
from those IP Addresses and, thus, save your server from being
overloaded by those requests.
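The sketch below shows the idea; the addresses and the threshold are
made up for illustration:

    from collections import Counter

    # Stand-in for the source addresses observed in recent requests.
    REQUESTS = ["203.0.113.5", "203.0.113.5", "198.51.100.7",
                "203.0.113.5"]
    THRESHOLD = 2   # made-up cutoff for this example

    counts = Counter(REQUESTS)
    for ip, n in counts.items():
        if n > THRESHOLD:
            # In practice this would become a firewall rule.
            print(f"block inbound traffic from {ip}")

Of course, in a truly distributed attack the requests come from a great
many addresses at once, which is exactly what makes DDoS so hard to
stop this way.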
More Information
Defense in Depth
- Multiple layers of defense are placed throughout an information
system
- Addresses security vulnerabilities in personnel, technology, and
operations
- Protection mechanisms cover weaknesses / vulnerabilities of other
mechanisms in use
- Delay the success of an attack
- Provide time to detect and respond
Management and Monitoring
- Configuration Management
- Know what's on your network
- Similar systems share similar configurations
- Changes in baseline require approval
- Track all changes made to systems
- Use standardized protocols and centralized systems for management
- SNMP, Software and Anti-virus Management Servers
- Use centralized logging
- Logs from multiple sources collected in a single location
- Allows for centralized monitoring and response
Network Access Control
- Systems that control network access based on defined policies
- Permit, deny, or limit access based on
- User identity
- Required OS / application updates applied
- Anti-malware installed / up-to-date
- Malware detected on system
- System meets security policies
- Can deny access if an authorized system violates policies
- Usually provide a mechanism for remediation
- Can rely on agent software installed on system
Firewalls
- Device or software application designed to permit or deny network
traffic based upon a set of rules
- Protects networks from unauthorized access
- Permits legitimate communication to pass
- Logs traffic that violates rules
- Many routers contain firewall components
- Many firewalls can perform basic routing
Intrusion Detection Systems (IDS)
- Device or software application that monitors network and/or system
activities for malicious activities or policy violations
- Notifies when violations detected
- Two detection techniques
- Signature Based
- Compare traffic to preconfigured / predetermined attack patterns (signatures)
- Alert on match
- Statistical Anomaly
- Determine normal network activity
- Alert on anomalous traffic
- Must establish baseline
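As a toy illustration of signature-based detection, the sketch below
compares each payload against a list of known-bad byte patterns and
alerts on a match. The signatures are made up for the example:

    # Toy signature-based IDS check.
    SIGNATURES = [b"/etc/passwd", b"\x90\x90\x90\x90"]   # made-up signatures

    def inspect(payload):
        for sig in SIGNATURES:
            if sig in payload:
                print(f"ALERT: signature {sig!r} matched")

    inspect(b"GET /../../etc/passwd HTTP/1.0")   # triggers an alert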
Intrusion Prevention Systems (IPS)
- IDS system that attempts to block/stop activity in addition to
reporting
- Must be positioned in-line with network traffic
- IPS actions
- Send an alarm
- Drop the malicious packets
- Reset the connection
- Block traffic from the offender
Demilitarized Zones (DMZ)
- Physical or logical sub-network that exposes external services to
an untrusted network
- External services more vulnerable to attack
- Segregate external services from internal networks
- Often referred to as a perimeter network
- DMZ hosts are often bastion hosts
- Designed and configured to withstand attacks
- Generally host a single application
- Limit implied trusts
- Different usernames/passwords from internal servers
- Separate or no domain membership
- Can be special purpose device
Proxy Servers
- Server that acts as an intermediary for requests from clients
seeking resources from a server
- Client connects to proxy and requests some service
- Proxy connects to relevant server and requests service
- Proxy forwards response to client
- Purpose
- Keep machines behind it anonymous
- Speed up access to resources (caching)
- Block undesired sites
- Log / audit usage
- Scan content for malware before delivery
- Scan outbound content (data leak protection)
Virtual Private Networks (VPN)
- Mechanism to provide remote networks or individual users secure
access to an organization's network
- Host / remote network "appear" physically connected to
organization's network
- Often encrypted
- Mechanisms used
- IPsec
- SSL/TLS tunneling
- Dial-up protocols (PPTP, L2TP, SSTP)
- SSH tunneling
- More secure than opening access through a firewall