> Apache Security: Chapter 1. Apache Security Principles



1 Apache Security Principles

This book contains 12 chapters. Of those, 11 cover the technical issues of securing Apache and web applications. Looking at the number of pages alone, it may seem that the technical issues represent the most important part of security. But wars are seldom won on tactics alone, and technical issues are just tactics. To win, you need a good overall strategy, and that is the purpose of this chapter. It has the following goals:

The Web Application Architecture Blueprints section offers several different views (user, network, and Apache) of the same problem, with a goal of increasing understanding of the underlying issues.

Security can be defined in various ways. One school of thought defines it as reaching the three goals known as the CIA triad:

  • Confidentiality (information is accessible only to those authorized to see it)

  • Integrity (information cannot be created or changed without authorization)

  • Availability (information and services are available when they are needed)

Another goal, accountability, defined as being able to hold users accountable (by maintaining their identity and recording their actions), is sometimes added to the list as a fourth element.

The other main school of thought views security as a continuous process, consisting of phases. Though different people may name and describe the phases in different ways, here is an example of common phases:

Both lines of thought are correct: one views the static aspects of security and the other views the dynamics. In this chapter, I look at security as a process; the rest of the book covers its static aspects.

Another way of looking at security is as a state of mind. Keeping systems secure is an ongoing battle in which you need to be alert and vigilant at all times, and remain one step ahead of adversaries. But you also need to come to terms with the fact that being 100 percent secure is impossible. Sometimes, we cannot control circumstances, though we do the best we can. Sometimes we slip. Or we may encounter a smarter adversary. I have found that being humble increases security. If you think you are invincible, chances are you won’t be alert to lurking dangers. But if you are aware of your own limitations, you are likely to work hard to overcome them and ensure all angles are covered.

Knowing that absolute security is impossible, we must accept occasional failure as a certainty and design and build defensible systems. Richard Bejtlich (http://taosecurity.blogspot.com) coined this term (in a slightly different form: defensible networks). Richard’s focus is on networks, but the same principles apply here. Defensible systems are those that can give you a chance in a fight in spite of temporary losses. They can be defended. Defensible systems are built by following the essential security principles presented in the following section.

In this section, I present principles every security professional should know. These principles have evolved over time and are part of the information security body of knowledge. If you make a habit of reading the information security literature, you will find the same security principles recommended at various places, but usually not all in one place. Some resources cover them in detail, such as the excellent book Secrets & Lies: Digital Security in a Networked World by Bruce Schneier (Wiley). Here are the essential security principles:

Compartmentalize

Compartmentalization is a concept well understood by submarine builders and by the captain of the Starship Enterprise. On a submarine, a leak that is not contained to the compartment in which it originated will eventually fill the whole submarine with water and lead to the death of the entire crew. That’s why submarines have systems in place to isolate one compartment from another. The same concept benefits computer security. Compartmentalization is all about damage control: design the whole to consist of smaller, connected parts, so that a breach of one part does not bring down the rest. This principle goes well together with the next one.

Utilize the principle of least privilege

Each part of the system (a program or a user) should be given the privileges it needs to perform its normal duties and nothing more. That way, if one part of the system is compromised, the damage will be limited.
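The principle can be sketched in code as an explicit allow-list of capabilities, where each part is granted only the actions it needs and everything else is denied by default (the component and action names below are hypothetical, for illustration only):

```python
# Least privilege as an explicit allow-list: each component is granted
# only the actions it needs; anything not listed is denied by default.
# (Component and action names are hypothetical, for illustration.)

MINIMAL_PRIVILEGES = {
    "web_server": {"read_content", "append_access_log"},
    "log_analyzer": {"read_logs"},
}

def authorize(component: str, action: str) -> bool:
    """Permit an action only if it is in the component's minimal set."""
    return action in MINIMAL_PRIVILEGES.get(component, set())
```

With a setup like this, a compromised log analyzer could still read logs, but it could not touch the served content, limiting the damage.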

Perform defense in depth

Defense in depth is about having multiple independent layers of security. If there is only one security layer, the compromise of that layer compromises the entire system. Multiple layers are preferable. For example, if you have a firewall in place, an independent intrusion detection system can serve to control its operation. Having two firewalls to defend the same entry point, each from a different vendor, increases security further.
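As a minimal sketch of the idea (the individual checks shown are invented for illustration), defense in depth means a request must pass every layer independently, so bypassing one check is not enough to get through:

```python
# Defense in depth: a request must pass every independent layer.
# The checks are deliberately simplistic, for illustration only.

def network_layer_ok(request: str) -> bool:
    # Layer 1: a coarse size limit, as a firewall might enforce.
    return len(request) < 1024

def application_layer_ok(request: str) -> bool:
    # Layer 2: independent content validation inside the application.
    return request.isprintable() and ".." not in request

def accept_request(request: str) -> bool:
    """Accept only if all layers agree; one bypassed layer is not enough."""
    return network_layer_ok(request) and application_layer_ok(request)
```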

Do not volunteer information

Attackers commonly work in the dark and perform reconnaissance to uncover as much information about the target as possible. We should not help them. Keep information private whenever you can. But remember that obscurity is not much of a security tool on its own: unless the system is otherwise secure, hiding information will not help much.
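In Apache, for example, you can reduce the amount of information volunteered in response headers and error pages with two directives (shown here only as a minimal illustration of the principle):

```apache
# Reveal only "Apache" in the Server response header,
# instead of the full version and module list.
ServerTokens Prod

# Do not append server version details to
# server-generated error pages and listings.
ServerSignature Off
```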

Fail safely

Make sure that whenever a system component fails, it fails in such a way as to change into a more secure state. Using an obvious example, if the login procedure cannot complete because of some internal problem, the software should reject all login requests until the internal problem is resolved.
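The login example can be sketched as follows (the function names are hypothetical); the key point is that an internal error leads to denial, never to access:

```python
# Fail safely: an internal error during authentication must result
# in the request being rejected, never in access being granted.

def login(username: str, password: str, check_credentials) -> bool:
    """Return True only on an explicit positive answer from the backend."""
    try:
        return check_credentials(username, password) is True
    except Exception:
        # The backend failed; fall into the more secure state.
        return False
```

A "fail open" variant, which granted access whenever the credential check could not complete, would turn every internal outage into a security hole.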

Secure the weakest link

The whole system is as secure as its weakest link. Take the time to understand all system parts and focus your efforts on the weak parts.

Practice simplicity

Humans do not cope with complexity well. A study has found that we can hold only around seven concepts in our heads at any one time. Anything more complex than that will be hard to understand. A simple system is easy to configure, verify, and use. (This was demonstrated in a recent paper, “A Quantitative Study of Firewall Configuration Errors” by Avishai Wool: http://www.eng.tau.ac.il/~yash/computer2004.pdf.)

Threat modeling is a fancy name for rational and methodical thinking about what you have, who is out there to get you, and how. Armed with that knowledge, you decide what you want to do about the threats. It is genuinely useful and fun to do, provided you do not overdo it. It is a loose methodology that revolves around the following questions:

The best time to start is at the very beginning, using threat modeling as part of system design. But since the methodology is attack-oriented, it is never too late to start. It is especially useful for security assessment or as part of penetration testing (an exercise in which an attempt is made to break into the system as a real attacker would). One of my favorite uses for threat modeling is system administrator training. After designing several threat models, you will begin to see the recurring patterns. Keeping the previous threat models is, therefore, an excellent way to document the evolution of the system and preserve that little bit of history. At the same time, existing models can be used as starting points in new threat modeling efforts to save time.

Table 1-1 gives a list of reasons someone may attack you. This list (and the one that follows it) is deliberately condensed. Compiling a complete list of all the possibilities would result in a multipage document that, though of significant value, would be of little practical use to you. I prefer to keep it short, simple, and manageable.

Table 1-2 gives a list of typical attacks on web systems and some ways to handle them.

Table 1-2. Typical attacks on web systems

Attack type

Description

Mitigation

Denial of service

Any of the network, web-server, or application-based attacks that result in denial of service, a condition in which a system is overloaded and can no longer respond normally.

Prepare for attacks (as discussed in Chapter 5). Inspect the application to remove application-based attack points.

Exploitation of configuration errors

These errors are our own fault. Surprisingly, they happen more often than you might think.

Create a secure initial installation (as described in Chapters 2 through 4). Plan changes, and assess the impact of changes before you make them. Implement independent assessment of the configuration on a regular basis.

Exploitation of Apache vulnerabilities

Unpatched or unknown problems in the Apache web server.

Patch promptly.

Exploitation of application vulnerabilities

Unpatched or unknown problems in deployed web applications.

Assess web application security before each application is deployed. (See Chapter 10 and Chapter 11.)

Attacks through other services

This is a “catch-all” category for all other unmitigated problems on the same network as the web server. For example, a vulnerable MySQL database server running on the same machine and open to the public.

Do not expose unneeded services, and compartmentalize, as discussed in Chapter 9.

In addition to the mitigation techniques listed in Table 1-2, certain mitigation procedures should always be practiced:

  • Implement monitoring and consider implementing intrusion detection so you know when you are attacked.

  • Have procedures for disaster recovery in place and make sure they work so you can recover from the worst possible turn of events.

  • Perform regular backups and store them off-site so you have the data you need for your disaster recovery procedures.

To continue your study of threat modeling, I recommend the following resources:

One problem I frequently had in the past was deciding which of the possible protection methods to use when initially planning an installation. How do you decide which method is justifiable and which is not? In an ideal world, security would have a price tag attached, and you could compare the price tags of protection methods. The solution I eventually settled on was to use a system-hardening matrix.

First, I made a list of all possible protection methods and ranked each in terms of complexity. I separated all systems into four categories:

Then I made a decision as to which protection method was justifiable for which system category. Such a system-hardening matrix should be used as a list of minimum methods used to protect a system, or otherwise contribute to its security. Should circumstances require increased security in a certain area, use additional methods. An example of a system-hardening matrix is provided in Table 1-3. A single matrix cannot be used for all organizations. I recommend you customize the example matrix to suit your needs.

System classification comes in handy when the time comes to decide when to patch a system after a problem is discovered. I usually decide on the following plan:

A simple patching plan, such as in the previous section, assumes you will have sufficient resources to deal with problems, and you will deal with them quickly. This only works for problems that are easy and fast to fix. But what happens if there aren’t sufficient resources to patch everything within the required timeline? Some application-level and, especially, architectural vulnerabilities may require a serious resource investment. At this point, you will need to make a decision as to which problems to fix now and which to fix later. To do this, you will need to assign perceived risk to each individual problem, and fix the biggest problem first.

To calculate risk in practice means to make an educated guess, usually supported by a simple mathematical calculation. For example, you could assign numeric values to the following three factors for every problem discovered:

Combined, these three factors would provide a quantitative measure of the risk. The result may not mean much on its own, but it serves well for comparison with the risks of other problems.

If you need a measure to decide whether to fix a problem or to determine how much to invest in protective measures, you may calculate annualized loss expectancies (ALE). In this approach, you need to estimate the asset value and the frequency of a problem (compromise) occurring within one year. Multiplied, these two factors yield the yearly cost of the problem to the organization. The cost is then used to determine whether to perform any actions to mitigate the problem or to live with it instead.
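Both calculations can be sketched as follows (the factor names and numbers are invented for illustration): the comparative risk score multiplies the per-problem factors, and the ALE multiplies the cost of a single compromise by its expected yearly frequency:

```python
# Comparative risk score: multiply the per-problem factors so results
# can be ranked against each other. (Factor names are illustrative;
# the absolute value means little on its own.)
def risk_score(exploitability: int, damage: int, exposure: int) -> int:
    return exploitability * damage * exposure

# Annualized loss expectancy: the cost of one compromise multiplied
# by the number of compromises expected per year.
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    return single_loss_expectancy * annual_rate
```

For example, a problem costing $10,000 per compromise and expected to occur once every two years yields an ALE of $5,000; spending substantially more than that per year to prevent it would be hard to justify.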

I will now present several different ways of looking at a typical web application architecture. The whole is too complex to depict in a single illustration, which is why we need the power of abstraction to cope with the complexity. Broken into three different views, the problem becomes easier to manage. The three views presented are the following:

  • The user view

  • The network view

  • The Apache view

Each view comes with its own set of problems, which need to be addressed one at a time until all problems are resolved. The three views together practically map out the contents of this book. Where appropriate, I will point you to sections where further discussion takes place.

The first view, presented in Figure 1-1, is deceptively simple. Its only purpose is to demonstrate how a typical installation has many types of users. When designing the figure, I chose a typical business installation with the following user classes:

  • The public (customers or potential customers)

  • Partners

  • Staff

  • Developers

  • Administrators

  • Management

Members of any of these classes are potential adversaries for one reason or another. To secure an installation you must analyze the access requirements of each class individually and implement access restrictions so members of each class have access only to those parts of the system they need. Restrictions are implemented through the combination of design decisions, firewall restrictions, and application-based access controls.

Technical issues are generally relatively easy to solve provided you have sufficient resources (time, money, or both). People issues, on the other hand, have been a constant source of security-related problems for which there is no clear solution. For the most part, users are not actively involved in the security process and, therefore, do not understand the importance and consequences of their actions. Every serious plan must include sections dedicated to user involvement and user education.

Network design and network security are areas where, traditionally, most of the security effort lies. Consequently, the network view is well understood and supported in the literature. With the exception of reverse proxies and web application firewalls, most techniques employed at this level lie outside the scope of this book, but you will find plenty of recommendations for additional reading throughout. The relevant issues for us are covered in Chapter 9, with references to other materials (books, and documents available online) that offer more detailed coverage. Chapter 12 describes a network-level technique relevant to Apache security, that of web intrusion detection.

The network view is illustrated in Figure 1-2. Common network-level components include:

  • Network devices (e.g., servers, routers)

  • Clients (e.g., browsers)

  • Services (e.g., web servers, FTP servers)

  • Network firewalls

  • Intrusion detection systems

  • Web application firewalls

The Apache view is the most interesting way of looking at a system and the most complicated. It includes all the components you know are there but rarely think of in that way, and rarely all at the same time:

The Apache view is illustrated in Figure 1-3. Making a distinction between applications running within the same process as Apache (e.g., mod_php) and those running outside, as a separate process (e.g., PHP executed as a CGI script), is important for overall security. It is especially important in situations where server resources are shared with other parties that cannot be trusted completely. Several such deployment scenarios are discussed in Chapter 6.

The components shown in the illustration above are situated close together. They can interact, and the interaction is what makes web application security complex. I have not even included a myriad of possible external components that make life more difficult. Each type of external system (a database, an LDAP server, a web service) uses a different “language” and allows for different ways of attack. Between every two components lies a boundary. Every boundary is an opportunity for something to be misconfigured or not configured securely enough. Web application security is discussed in Chapter 10 and Chapter 11.

Though there is a lot to do to maintain security throughout the life of a system, the overall security posture is established before installation takes place. The basic decisions made at this time are the foundations for everything that follows. What remains after that can be seen as a routine, but still something that needs to be executed without a fatal flaw.

The rest of this book covers how to protect Apache and related components.