Past experience has shown that worm outbreaks are not targeted at anyone in particular but rather aim for as wide an impact as possible. This implies that the individual victim is not chosen deliberately but hit at random. Methods of worm defense will therefore differ from those against classical network intrusion, which is usually directed specifically against its victim.
In order to assess the threat of worms to systems, it can be helpful to speculate about the motivations of worm writers.
Looking at past worm incidents, we can assume that worm writers take particular interest in the technical aspects of their creation, namely the quality of the replication strategy. The subversion and subsequent control of infected machines is often a welcome side effect rather than the primary goal, and the same holds for the damage caused.
Although worms have caused huge financial losses, the damage they inflicted has mostly been temporary (e.g. by congesting networks and taking down machines) or purely abstract (e.g. loss of credibility, embarrassment). Destructive payloads have been very rare, even though the worms usually possessed sufficient privileges to eradicate important data.
Unlike viruses, which can be readily created and modified using simple
GUI-driven virus construction kits, worms require considerable technical
expertise to find and exploit a security flaw. This is unlikely to change, and will hopefully become increasingly difficult, as software authors and administrators become more security-conscious.
This might explain why worms have generally been less hostile than viruses: worm writers tend to seek respect for their skill and originality, while virus writing has degenerated into the tedious variation of well-known patterns that all e-mail users have come to hate, where the only room left for "creativity" is the nastiness of the payload.
When assessing the risk of worm outbreaks, one should not be fooled by the absence of openly destructive worms so far. If a worm gains administrator access, consider your data destroyed, stolen, or manipulated (whichever is worst), and any and all user accounts on that machine compromised.
The same holds true for the abuse of system resources. If your systems are controlled by someone else, their resources can be used for minor pranks, but also for illegal activities that will appear to originate from you. Worse yet, false evidence or illegal content can be planted and used to discredit you.
Obviously these are worst-case assumptions. Given that worms strike more or less randomly, the most likely attacks are those that pay no regard to the identity of the victim: the abuse of resources. But once a system has been owned, anything is possible, especially since many backdoors planted by worms are open to everybody or easy to take over. Free riders might actually pose the greatest threat here, but the worm author might also review the subverted machines one by one, looking for promising targets to abuse manually.
With a privileged back door, any attacker is able to do the worst conceivable
things to a machine (and possibly others that trust it), and it is only by
circumstance (namely the huge mass of subverted systems) that the chance of this
happening is relatively low.
Therefore, any risk assessment should be based on worst-case scenarios. Assuming goodwill or lack of interest on the part of the attacker is clearly a bad idea.
It is important to note that correctly implemented worms show exponential growth before the onset of saturation. They spread so fast that any human-mediated countermeasure comes too late. It is therefore crucial to take preventive action and to ensure that any reaction to an actual infection is triggered automatically.
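As a rough illustration (this simple epidemic model is standard in the worm literature rather than part of the argument above), the fraction $a(t)$ of vulnerable hosts infected by a random-scanning worm can be sketched as

\[
  \frac{da}{dt} = K\,a\,(1-a)
  \qquad\Longrightarrow\qquad
  a(t) = \frac{e^{K(t-T)}}{1 + e^{K(t-T)}},
\]

where $K$ is the initial compromise rate and $T$ marks the onset of the outbreak. For small $a$ this reduces to $da/dt \approx K a$, i.e. exponential growth, until the shrinking pool of uninfected vulnerable hosts saturates the spread.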
To defend against random crackers, it is usually sufficient to be just a little
more secure than your neighbor, as such attacks usually follow the path of
least resistance. Worms, however, sweep through even the remotest corners of the network, so if you are vulnerable and reachable, you will almost certainly be infected.
On the other hand, worm writers tend to prefer easily
exploitable flaws, and naturally they choose to exploit widespread software
rather than niche products. Sites that maintain a high standard of security, choose their software carefully, and do not simply adopt whatever everyone else is using run a significantly lower risk of infection.
Virtually all worms seen to date have exploited weaknesses that were widely known at the time of the outbreak and for which patches were readily available. Thus, the prime task of system administrators and home users is to keep their machines patched. This is especially important since worm outbreaks affect everybody, not just unpatched machines. Often, professionally maintained sites that are not vulnerable themselves are effectively DoS'ed by the backscatter traffic resulting from massive worm outbreaks on home users' machines, a situation that is considerably worsened by the general availability of broadband connections. Few people realize that with increased bandwidth comes increased responsibility.
Patching must therefore be regarded as an essential social duty, since the asset to be protected is a commons, and prevention is only effective if everyone participates.
Trivial as it may sound, system operators must know precisely what they are running and how it is configured, and must disable any unneeded services.
Code Red has shown what rogue services turned on by default (without their
owners realizing) can do to the net.
Most of the blame for this and similar incidents lies with the software vendors,
whose default policy for any and all features has been "on" and "free for
all". Fortunately, they are now beginning to adopt the "secure by default" paradigm.
It is a natural consequence of the increased user-friendliness of systems that
administrative expertise is degrading. Since security flaws are usually
non-obvious, they are unlikely to be recognized by part-time admins using
wizard-driven graphical interfaces.
Again, the blame lies with the vendors and their marketing apparatus, which
creates a false sense of simplicity and control.
The key components of any computing environment (and, regardless of what disgruntled admins may say, its prime reason for existence) are the users. Unfortunately, they are also its single most dangerous security weakness.
For security policies to be effective, it is vital that users understand them, lest they work around them. To that end, they must be given at least a rough idea of what worms are capable of and how they spread. More importantly, we must work against the abstract fear of viruses and worms that many users have (which is one of the reasons why hoaxes still work) and convey a realistic understanding of the problem.
The most convenient defense against worms would be to firewall them out.
This is harder than it sounds. Unlike port scans, which can be detected since a
suspicious number of connection attempts to unusual ports originate from the
same host, worm scans come from arbitrary addresses, and will likely be masked
by legitimate traffic.
Once the worm's signature is known, scans and attacks can be efficiently
blocked, but even a stateful firewall cannot stop random inbound scans without
a signature, or without knowing the addresses of all infected hosts.
This is where real-time blacklists come in. Random scanning worms can be detected relatively early with the help of network telescopes (cf. [Moore2002-2]): huge blocks of unassigned IP addresses that collect a portion of the random scans and log their originating addresses. Imagine a whole /8 network redirected to a single machine that analyzes all incoming packets. It will see 1/256 of the entire worm traffic, but little else, since nobody makes legitimate connections to unassigned addresses. The data gathered by such telescopes could be distributed in real time to all routers, which could quickly blacklist the offending addresses and prevent further spread.
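The following is only a rough sketch of the detection step, under the assumption that the telescope's observations are available as (timestamp, source IP, destination port) tuples; all names and thresholds are illustrative, not part of any deployed system:

from collections import defaultdict

# Hypothetical sketch: any source that probes the unassigned (telescope)
# address space more than a handful of times is treated as a scanner.
SCAN_THRESHOLD = 5  # arbitrary cut-off for this illustration

def suspicious_sources(observations):
    """observations: iterable of (timestamp, src_ip, dst_port) seen on dark space."""
    hits = defaultdict(int)
    for _timestamp, src_ip, _dst_port in observations:
        hits[src_ip] += 1
    return {ip for ip, count in hits.items() if count >= SCAN_THRESHOLD}

if __name__ == "__main__":
    # Simulated telescope traffic: one host probing repeatedly, plus a stray packet.
    observed = [(t, "198.51.100.23", 80) for t in range(10)]
    observed.append((3, "203.0.113.7", 443))
    print(suspicious_sources(observed))  # {'198.51.100.23'}

The resulting set is what would have to be distributed, authenticated and in real time, to the participating routers.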
So far, this method has not been implemented. It calls for a lot of hairy cryptography and authentication, and it is not yet clear how to prevent spoofing of the telescope. (What if someone sends out scans with a forged source IP?)
On the other hand, it is comparably easy to contain infected machines on the
local net. Once the firewall realizes a host is trying to connect to many
different addresses in a short time span, it can assume it has become
infected, blackhole its traffic and alert an administrator.
No further data is necessary, and the chance of false alerts is relatively slim.
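A minimal sketch of such a detector, assuming the firewall can observe outbound connections as (timestamp, source, destination) tuples; the window size and threshold are illustrative values:

from collections import defaultdict, deque

WINDOW_SECONDS = 10      # look-back window (illustrative)
MAX_DISTINCT_DESTS = 50  # more distinct peers than this looks like scanning

class OutboundScanDetector:
    """Flags internal hosts that contact unusually many distinct addresses."""

    def __init__(self):
        self.history = defaultdict(deque)  # src -> deque of (time, dst)

    def observe(self, timestamp, src, dst):
        hist = self.history[src]
        hist.append((timestamp, dst))
        # Forget connections that have fallen out of the time window.
        while hist and hist[0][0] < timestamp - WINDOW_SECONDS:
            hist.popleft()
        distinct = {d for _, d in hist}
        return len(distinct) > MAX_DISTINCT_DESTS  # True -> blackhole and alert

detector = OutboundScanDetector()
for i in range(60):
    if detector.observe(i * 0.1, "10.0.0.5", "192.0.2.%d" % i):
        print("10.0.0.5 exceeds the scan threshold; contain it and alert the admin")
        break

On a real firewall the same logic would of course run against the connection-tracking state rather than a synthetic loop.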
We have seen that worms have the potential to render the internet useless, and theoretical designs point to even scarier incidents in the future. While we can try to prevent intrusion into our own systems, there is no defense against the network congestion caused by worm scanning and backscatter traffic.
The problems with inbound filtering and the relative ease of outbound filtering emphasize the need to treat network security as a commons: if everybody takes care that their systems are carefully maintained and their internal problems do not propagate to others, security improves for everyone. As with real-world commons such as the environment, solitary measures are not effective; only concerted action and cooperation can protect them. On the downside, if one network member acts carelessly, all others are affected.
Since broadband connections are now available to home users on a large scale, it is those home users who bear much of the responsibility for keeping the internet running. It is therefore crucial to educate them about the decentralized, cooperative nature of the net, so that they can act accordingly.