The remainder of this book describes methods for preventing people from compromising the Apache installation. In this chapter, I will discuss how to retain control and achieve reasonable security in spite of giving your potential adversaries access to the server. Rarely will you be able to keep the server to yourself. Even in the case of having your own private server, there will always be at least one friend who is in need of a web site. In most cases, you will share servers with fellow administrators, developers, and other users.
You can share server resources in many different ways:
Among a limited number of selected users (e.g., developers)
Among a large number of users (e.g., students)
Massive shared hosting, or sharing among a very large number of users
Though each of these cases has unique requirements, the problems and aims are always the same:
You cannot always trust other people.
You must protect system resources from users.
You must protect users from each other.
As the number of users increases, keeping the server secure becomes more difficult. There are three factors that are a cause for worry: error, malice, and incompetence. Anyone, including you and me, can make a mistake. The only approach that makes sense is to assume we will and to design our systems to fail gracefully.
Many problems can arise when resources are shared among a group of users:
File permission problems
Dynamic-content problems
Resource-sharing problems on the server
Domain name-sharing problems (which affect cookies and authentication)
Information leaks on execution boundaries
When a server is shared among many users, it is common for each user to have a separate account. Users typically work with files directly on the system (through a shell of some kind) or manipulate files using the FTP protocol. Having all users use just one web server causes the first and most obvious issue: problems with file permissions.
Users expect and require privacy for their files. Therefore, file permissions are
used to protect files from being accessed by other users. Since Apache is
effectively just another user (I assume httpd
in this book),
allowances must be made for Apache to access the files that are to be published on
the Web. This is a common requirement. Other daemons (Samba and FTPD come to mind)
fulfill the same requirements. These daemons initially run as
root
and switch to the required user once the user
authenticates. From that moment on, the permission problems do not exist since the
process that is accessing the files is the owner of the files.
When it comes to Apache, however, two facts complicate things. For one, running
Apache as root
is heavily frowned upon and normally not
possible. To run Apache as root
, you must compile from the
source, specifying a special compile-time option. Without this, the main Apache
process cannot change its identity into another user account. The second problem
comes from HTTP being a stateless protocol. When someone connects to an FTP server,
he stays connected for the length of the session. This makes it easy for the FTP
daemon to keep one dedicated process running during that time and avoid file
permission problems. But with any web server, one process accessing files belonging
to user X
now may be accessing the files belonging to user
Y
the next second.
Like any other user, Apache needs read access for files in order to serve them and execute rights to execute scripts. For folders, the minimum privilege required is execute, though read access is needed if you want directory listings to work. One way to achieve this is to give the required access rights to the world, as shown in the following example:
# chmod 701 /home/ivanr
# find /home/ivanr/public_html -type f | xargs chmod 644
# find /home/ivanr/public_html -type d | xargs chmod 755
But this is not very secure. Sure, Apache would get the required access, but so
would anyone else with a shell on the server. Then there is another problem. Users’
public web folders are located inside their home folders. To get into the public web
folder, limited access must be allowed to the home folder as well. Provided only the
execute privilege is given, no one can list the contents of the home folder, but if
they can guess the name of a private file, they will be able to access it in most
cases. In a way, this is like having a hole in the middle of your living room and
having to think about not falling through every day. A safer approach is to use
group membership. In the following example, it is assumed Apache is running as user
httpd
and group httpd
, as described in
Chapter 2:
# chgrp httpd /home/ivanr
# chmod 710 /home/ivanr
# chown -R ivanr:httpd /home/ivanr/public_html
# find /home/ivanr/public_html -type f | xargs chmod 640
# find /home/ivanr/public_html -type d | xargs chmod 2750
This permission scheme allows Apache to have the required access but is much safer
than the previous approach since only httpd
has access. Forget
about that hole in your living room now. The above also ensures any new folders and
files created under the user’s public web folder will belong to the
httpd
group.
Some people believe the public web folder should not be underneath users’ home
folders. If you are one of them, nothing stops you from creating a separate folder
hierarchy (for example /www/users
) exclusively for user public
web folders. A symbolic link will make the setup transparent for most
users:
# ln -s /www/users/ivanr/public_html /home/ivanr/public_html
One problem you will encounter with this is that suEXEC (described later in this chapter) will stop working for user directories. This is because it only supports public directories that are beneath users’ home directories. You will have to customize it and make it work again or have to look into using some of the other execution wrappers available.
The permission problem usually does not exist in shared hosting situations where FTP is exclusively used to manipulate files. FTP servers can be configured to assign the appropriate group ownership and access rights.
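For example, ProFTPD (assumed here; other FTP daemons have equivalent settings) can assign group ownership and a safe umask to everything uploaded into the public web folders, matching the permission scheme shown earlier:

<Directory /home/*/public_html>
    # Uploads become group-owned by the web server group and are not
    # writable by group or others (files 640, directories 750).
    GroupOwner httpd
    Umask 027 027
</Directory>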
On some systems, the default setting for umask
is 002,
which is too relaxed and results in creating group-writable files. This
translates to Apache being able to write to files in the public web folder.
Using umask
022 is much safer. The correct
umask
must be configured separately for the web server
(possibly in the apachectl
script), the FTP server (in its
configuration file) and for shell access. (On my system, the default
umask
for shell access is configured in
/etc/bashrc
.)
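For example, adding the following line near the top of the apachectl script (the path is assumed; adjust it to your installation) ensures every process Apache starts inherits a safe umask:

# In /usr/local/apache/bin/apachectl, before httpd is invoked:
umask 022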
If your users have a way of changing file ownership and permissions (through
FTP, shell access, or some kind of web-based file manager), consider installing
automatic scripts to periodically check for permission problems and correct
them. Manual inspection is better, but automatic correction may be your only
option if you have many users. If you do opt for automatic correction, be sure
to leave a way for advanced users to opt out. A good way to do this is to have
automated scripts look for a file with a special name (e.g.,
.disable-permission-fixing
) and not make changes if
that file exists.
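The following is a minimal sketch of such a script, using the paths, the httpd group, and the permission scheme from the earlier examples; the opt-out filename is the one suggested above:

#!/bin/sh
# Reset ownership and permissions in every public web folder, unless
# the user has opted out with a .disable-permission-fixing file.
for dir in /home/*/public_html; do
    [ -e "$dir/.disable-permission-fixing" ] && continue
    user=`echo "$dir" | cut -d/ -f3`
    chown -R "$user:httpd" "$dir"
    find "$dir" -type f | xargs chmod 640
    find "$dir" -type d | xargs chmod 2750
done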
To achieve maximum security you can resort to creating virtual filesystems for
users, and then use the chroot(2)
function to isolate them
there. Your FTP daemon is probably configured to do this, so you are half-way
there anyway. With virtual filesystems deployed, each user will be confined
within his own space, which will appear to him as the complete filesystem. The
process of using chroot(2)
to isolate virtual filesystems is
simpler than it may appear. The approach is the same as in Chapter 2, where I showed how to isolate the
Apache server. You have to watch for the following:
Maintaining many virtual filesystems can be difficult. You can save a lot of time by creating a single template filesystem and using a script to update all the instances.
Virtual filesystems may grow in size, and creating copies of the same files for all users results in a lot of wasted space. To save space, you can create hard links from the template filesystem to virtual filesystems. Again, this is something a script should do for you (a sketch of such a script appears after this list). Working with hard links can be tricky because many backup programs do not understand them. (GNU tar works fine.) Also, if you want to update a file in the template, you will have to either delete it in all virtual filesystems and re-create the hard links, or keep the original file in place and just truncate it and write the new contents into it.
Ensure the CGI scripts are properly jailed prior to execution. If your
preferred wrapper is suEXEC, you will have to patch it (since suEXEC
does not normally have chroot(2)
support).
Apache will be the only program running across virtual filesystems.
The virtual system approach will work only if your users cannot use
symbolic links or their .htaccess
files (e.g.,
using mod_rewrite
) to access files outside their
own little territories.
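Here is a minimal sketch of the hard-linking script mentioned above. The /jail/template and /jail/users paths are hypothetical, and hard links only work if the template and the jails live on the same filesystem:

#!/bin/sh
# Recreate the template directory tree inside each jail, then hard-link
# every template file into it (-f replaces links that already exist).
TEMPLATE=/jail/template
for jail in /jail/users/*; do
    (cd "$TEMPLATE" && find . -type d) | (cd "$jail" && xargs mkdir -p)
    (cd "$TEMPLATE" && find . -type f) | while read f; do
        ln -f "$TEMPLATE/$f" "$jail/$f"
    done
done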
If all users had were static files, the file permission problem I just described would be something we could live with. Static files are easy to handle. Apache only needs to locate a file on disk, optionally perform access control, and send the file verbatim to the HTTP client. But the same root cause (one Apache running for different users) creates an even bigger problem when it comes to dynamic content.
Dynamic content is created on the fly, by executing scripts (or programs) on the server. Users write scripts and execute them as the Apache user. This gives the users all the privileges the Apache user account has. As pointed out in the previous section, Apache must be able to read users’ files to serve them, and this is not very dangerous for static content. But with dynamic content, suddenly, any user can read any other users’ web files. You may argue this is not a serious problem. Web files are supposed to be shared, right? Not quite. What if someone implemented access controls on the server level? And what if someone reads the credentials used to access a separate database account?
Other things can go wrong, too. One httpd
process can control
other httpd
processes running on the same server. It can send
them signals and, at the very least, kill them. (That is a potential for denial of
service.) Using a process known as
ptrace
, originally designed for interactive
debugging, one process can attach to another, pause it, read its data, and change
how it operates, practically hijacking it. (See “Runtime Process Infection” at
http://www.phrack.org/phrack/59/p59-0x08.txt
to learn more about how this is done.) Also, there may be shared memory segments
with permissions that allow access.
Of course, the mere fact that some untrusted user can upload and execute a binary
on the server is very dangerous. The more users there are, the more dangerous this
becomes. Users could exploit a vulnerability in a suid
binary
if it is available to them, or they could exploit a vulnerability in the kernel. Or,
they could create and run a server of their own, using an unprivileged high
port.
No comprehensive solution exists for this problem at this time. All we have is a series of partial solutions, each with its own unique advantages and disadvantages. Depending on your circumstances, you may find some of these partial solutions adequate.
All approaches to solving the single web server user problem share a serious drawback: since the scripts run as the user who owns the content, they gain write privileges wherever that user has write privileges, and it is no longer easy to control where scripts can write.
I have provided a summary of possible solutions in Table 6-1. Subsequent sections provide further details.
Table 6-1. Overview of secure dynamic-content solutions
| Solution | Advantages | Disadvantages |
|---|---|---|
| Execution wrappers: suEXEC, CGIWrap, SBOX | Secure; scripts run under the accounts of their owners | Works only for dynamic (CGI) content; reduced performance |
| FastCGI protocol | Fast and secure; scripts run under their own accounts | Works only for dynamic content; applications must support the FastCGI protocol |
| Per-request change of Apache identity (mod_become, mod_diffprivs, mod_suid, mod_suid2) | Works for static and dynamic content alike | Apache must run as root; Keep-Alive must be disabled and each child limited to a single request, which reduces performance |
| Perchild MPM and Metux MPM | The only comprehensive approach to the identity problem | Neither module is stable at the time of this writing |
| Running multiple Apache instances | Simple, fast, secure, and easy to implement | Not suitable for mass hosting; requires additional IP addresses (or a reverse proxy) and more memory |
Increased security through execution wrappers is a hybrid security model. Apache runs as a single user when working with static content, switching to another user to execute dynamic requests. This approach solves the worst part of the problem and makes users’ scripts run under their respective accounts. It does not attempt to solve the problem with filesystem privileges, which is the smaller part of the whole problem.
One serious drawback to this solution is the reduced performance, especially
compared to the performance of Apache modules. First, Apache must start a new
process for every dynamic request it handles. Second, since Apache normally runs
as httpd
and only root
can change user
identities, Apache needs help from a specialized suid
binary. Apache, therefore, starts the suid
binary first,
telling it to run the user’s script, resulting in two processes executed for
every dynamic HTTP request.
There are three well-known suid
execution
wrappers:
suEXEC (part of the Apache distribution)
CGIWrap (http://cgiwrap.unixtools.org
)
SBOX (http://stein.cshl.org/software/sbox/
)
I strongly favor the suEXEC approach since it comes with Apache and integrates
well with it. (suEXEC is described later in this chapter.) The other two
products offer chroot(2)
support but that can also be
achieved with a patch to suEXEC. The other two products are somewhat more
flexible (and thus work where suEXEC would not) since suEXEC comes with a series
of built-in, nonconfigurable restrictions.
FastCGI (http://www.fastcgi.com
) is a
language-independent protocol that basically serves as an extension to CGI and
allows a request to be sent to a separate process for processing. This process
can be on the same machine or on a separate server altogether. It is a stable
and mature technology. The interesting thing about the protocol is that once a
process that handles requests is created, it can remain persistent to handle
subsequent requests. This removes the biggest problem we have with the execution
wrapper approach. With FastCGI, you can achieve processing speeds practically
identical to those of built-in Apache modules.
On the Apache side, FastCGI is implemented with the
mod_fastcgi
module. The increased performance
does not mean reduced security. In fact, mod_fastcgi
can be
configured to use an execution wrapper (e.g., suEXEC) to start scripts, allowing
scripts to run under their own user accounts.
Thus, FastCGI can be viewed as an improvement upon the execution wrapper approach. It has the same disadvantage of only working for dynamic resources but the benefit of achieving greater speeds. The flexibility is somewhat reduced, though, because FastCGI must be supported by the application. Though many technologies support it (C, Java, Perl, Python, PHP, etc.), some changes to scripts may be required. (FastCGI is described later in this chapter.)
In previous sections, I mentioned Apache running as a
non-root
user as a barrier to switching user
identities. One way to solve the problem is with execution wrappers. The other
way is to run Apache as root
. How bad could this be? As I
mentioned, other daemons are doing the same. It comes down to whether you are
prepared to accept the additional risk of running a public service as
root
. You may be already doing something like
that when you are accepting mail via SMTP. But other daemons are carefully
developed applications that do not execute code that cannot be fully trusted, as
is the case with Apache and with other users’ scripts. In my opinion, there is
nothing fundamentally wrong with running Apache as root,
provided you are absolutely certain about what you are doing and you make sure
you are not providing your users with additional privileges that can be
abused.
On many Unix systems the special root
privileges are
fixed and cannot be removed. Some systems, on the other hand, support a new
security model where privileges can be assigned independently and at will.
Consequently, this model makes it possible to have a root
process that is stripped of its “super powers.” Or the opposite, have a
non-root
process that has selected privileges required
for its operation. If your system supports such features, you do not have to run
Apache as root
to allow it to change its identity.
If you decide to try it, recompile Apache with -DBIG_SECURITY_HOLE (a build sketch follows the list below), and choose from several third-party suid modules:
mod_become
(http://www.snert.com/Software/mod_become/
)
mod_diffprivs
(http://sourceforge.net/projects/moddiffprivs/
)
mod_suid
(http://www.jdimedia.nl/igmar/mod_suid/
)
mod_suid2
(http://bluecoara.net/servers/apache/mod_suid2_en.phtml
)
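One common way to pass the flag is through the CFLAGS environment variable at configure time; this is a sketch rather than a recommendation, and the installation prefix is only an example:

$ CFLAGS=-DBIG_SECURITY_HOLE ./configure --prefix=/usr/local/apache
$ make
# make install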
Running as root
allows Apache to change its identity to
that of another user, but that is only one part of the problem. Once one Apache
process changes from running as root
to running as (for
example) ivanr,
there is no way to go back to being
root
. Also, because of the stateless nature of the HTTP
protocol, there is nothing else for that process to do but die. As a
consequence, the HTTP Keep-Alive functionality must be turned off and each child
must be configured to serve only one request and then shut down
(MaxRequestsPerChild 1
). This will affect performance but
less than when using execution wrappers.
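In configuration terms, that means something like the following in httpd.conf:

# Serve one request per connection and per child, so each child can
# safely assume a single user identity and then exit.
KeepAlive Off
MaxRequestsPerChild 1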
Would it be smarter to keep that Apache process running as
ivanr
around for later when the next request to run a
script as ivanr
arrives? It would be, and that is what the
two projects I describe in the next section are doing.
The Apache 2 branch was intended to have the advanced running-as-actual-user
capabilities from day one. This was the job of the
mod_perchild
module. The idea was simple: instead of
switching the whole of Apache to run as root
, have one
simple process running as root
and give it the job of
creating other non-root
processes as required. When a
request for the user ivanr came in,
Apache would look to see if any processes were running as
ivanr
. If not, a new process would be created. If so,
the request would be forwarded to the existing process. It sounds simple but
mod_perchild
never achieved stability.
There is an ongoing effort to replace mod_perchild
with
equivalent functionality. It is called Metux MPM (http://www.metux.de/mpm/
), and there is some talk about the
possibility of Metux MPM going into the official Apache code tree, but at the
time of this writing it isn’t stable either.
The approach used by Perchild MPM and Metux MPM is the only comprehensive solution for the identity problem. I have no doubt a stable and secure solution will be achieved at some point in the future, at which time this long discussion about user identity problems will become a thing of the past.
One solution to the web server identity problem is to run multiple instances of the Apache web server, each running under its own user account. It is simple, fast, secure, and easy to implement. It is a solution I would choose in most cases. Naturally, there are some problems you will need to overcome.
It is not suitable for mass hosting, where the number of domains per server is in the hundreds or thousands. Having a thousand independent processes to configure and maintain is much more difficult than just one. Also, since a couple of processes must be permanently running for each hosting account, memory requirements are likely to be prohibitive.
Having accepted that this solution is only feasible for more intimate environments (e.g., running internal web applications securely), you must consider possible increased consumption of IP addresses. To have several Apache web servers all run on port 80 (where they are expected to run), you must give them each a separate IP address. I don’t think this is a big deal for a few web applications. After all, if you do want to run the applications securely, you will need to have SSL certificates issued for them, and each separate SSL web site requires a separate IP address anyway.
Even without having the separate IP addresses it is still possible to have the
Apache web server run on other ports but tunnel access to them exclusively
through a master Apache instance running as a reverse proxy on port 80. There
may be some performance impact there but likely not much, especially with steady
increases of mod_proxy
stability and performance.
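A minimal sketch of the idea, assuming a per-user Apache instance listening on a high port (127.0.0.1:8001 is made up) behind the master instance on port 80:

# In the configuration of the master (reverse proxy) instance:
<VirtualHost *:80>
    ServerName ivanr.example.com
    # Forward everything to the instance running as user ivanr.
    ProxyPass        / http://127.0.0.1:8001/
    ProxyPassReverse / http://127.0.0.1:8001/
</VirtualHost>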
Other advantages of running separate Apache instances are discussed in Chapter 9.
Continuing on the subject of having httpd
execute the scripts
for all users, the question of shared server resources arises. If
httpd
is doing all the work, then there is no way to
differentiate one user’s script from another’s. If that’s impossible, we cannot
control who is using what and for how long. You have two choices here: one is to
leave a single httpd
user in place and let all users use the
server resources as they please. This will work only until someone starts abusing
the system, so success basically depends on your luck.
A better solution is to have users’ scripts executed under their own user accounts. If you do this, you will be able to take advantage of the traditional Unix controls for access and resource consumption.
When several parties share a domain name, certain problems cannot be prevented, but you should at least be aware that they exist. These are problems with the namespace: If someone controls a fraction of a domain name, he can control it all.
According to the HTTP specification, in Basic authentication (described in Chapter 7), a domain name and a realm name form a single protection space. When the domain name is shared, nothing prevents another party from claiming a realm name that already exists. If that happens, the browser, believing it is dealing with the protection space it has already authenticated against, will send that party the cached set of credentials. The username and the password are practically sent in plaintext in Basic authentication (see Chapter 7). An exploit could function along the following lines:
A malicious script is installed to claim the same realm name as the one that already exists on the same server and to record all usernames and passwords seen. To lower the chances of being detected, the script redirects the user back to the original realm.
Users may stumble onto the malicious script by mistake; to increase the chances of users visiting the script, the attacker can try to influence their actions by putting links (pointing to the malicious script) into the original application. (For example, in the case of a public forum, anyone can register and post messages.) If the application is a web mail application, the attacker can simply send users email messages with links. It is also possible, though perhaps slightly more involved, to attempt to exploit a cross site-scripting flaw in the application to achieve the same result and send users to the malicious script.
Unlike other situations where SSL resolves most Basic authentication vulnerabilities, encrypting traffic would not help here.
When Digest authentication is used, the protection space is explicitly attached to the URL, and that difference makes Digest authentication invulnerable to this problem. The attacker’s approach would not work anyway since, when Digest authentication is used, the credentials are never sent in plaintext.
Each cookie belongs to a namespace, which is defined by the cookie domain name
and path. (Read RFC 2965, “HTTP State Management Mechanism,” at http://www.ietf.org/rfc/rfc2965.txt
, for more
details.) Even if the domain name is the same for the target and the attacker,
if a proper path is assigned to the cookie by the target, no collisions can take
place. Actually, no exploitable collisions can take place. The adversary can
still inject a cookie into the application, but that is only a more complicated
way of doing something that is possible anyway. The gain in the type of attack
discussed here comes from being able to receive someone else’s cookie.
However, most application pages are written for execution on a single domain
name, so programmers do not pay much attention to the value of the cookie path;
it usually has a /
value, which means it will be sent with
any requests anywhere on the domain name. If those who deploy applications do
not pay attention either, a potential for compromise will occur.
For example, in PHP, the session-handling module is configured to send session
cookies with path set to /
by default. This means that if a
user is redirected to some other part of the same domain name, his session ID
will be collected from the cookie, and the session can be hijacked. To prevent
session cookie leaks, the PHP configuration variable
session.cookie_path
should be set to the correct prefix
for each application or user sharing the domain name.
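If PHP runs as a module, one place to set this is the web server configuration; the /~ivanr/ prefix below is only an illustration and should match wherever the user's application actually lives:

<Directory /home/ivanr/public_html>
    # Session cookies created here are sent back only for /~ivanr/ URLs.
    php_value session.cookie_path /~ivanr/
</Directory>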
On Unix, when a web server needs to execute an external binary, it does not do
that directly. The exec()
system call, used to execute binaries,
works by replacing the current process with a new process (created from a binary).
So, the web server must first execute fork()
to clone itself and
then make the exec()
call from the child instance. The parent
instance keeps on working. As you would expect, cloning creates two identical copies
of the initial process. This means that both processes have the same environment,
permissions, and open file descriptors. All these extra privileges must be cleaned
up before the control is given to some untrusted binary running as another user.
(You need to be aware of the issue of file descriptor leaks but you do not need to
be concerned with the cleanup process itself.) If cleaning is not thorough enough, a
rogue CGI script can take control over resources held by the parent process.
If this seems too vague, examine the following vulnerabilities:
When a file descriptor is leaked, the child process can do anything it wants with it. If a descriptor points to a log file, for example, the child can write to it and fake log entries. If a descriptor is a listening socket, the child can hijack the server.
Information leaks of this kind can be detected using the helper tool
env_audit
(http://www.web-insights.net/env_audit/
). The tool is distributed
with extensive documentation, research, and recommendations for programmers. To test
Apache and
mod_cgi
, drop the binary into the
cgi-bin
folder and invoke it as a CGI script using a
browser. The output will show the process information, environment details, resource
limits, and a list of open descriptors. The mod_cgi
output
shows only three file descriptors (one for stdin
,
stdout
, and stderr
), which is how it
should be:
Open file descriptor: 0
User ID of File Owner: httpd
Group ID of File Owner: httpd
Descriptor is stdin.
No controlling terminal
File type: fifo, inode - 1825, device - 5
The descriptor is: pipe:[1825]
File descriptor mode is: read only
----
Open file descriptor: 1
User ID of File Owner: httpd
Group ID of File Owner: httpd
Descriptor is stdout.
No controlling terminal
File type: fifo, inode - 1826, device - 5
The descriptor is: pipe:[1826]
File descriptor mode is: write only
----
Open file descriptor: 2
User ID of File Owner: httpd
Group ID of File Owner: httpd
Descriptor is stderr.
No controlling terminal
File type: fifo, inode - 1827, device - 5
The descriptor is: pipe:[1827]
File descriptor mode is: write only
As a comparison, examine the output from executing a binary from
mod_php
. First, create a simple file (e.g., calling
it env_test.php
) containing the following to invoke the audit
script (adjust the location of the binary if necessary):
<? system("/usr/local/apache/cgi-bin/env_audit"); echo("Done."); ?>
Since the audit script does not know it was invoked through the web server, the
results will be stored in the file /tmp/env_audit0000.log
. In
my output, there were five descriptors in addition to the three expected (and shown
in the mod_cgi
output above). The following are fragments of
the output I received. (Descriptor numbers may be different in your case.)
Here is the part of the output that shows an open descriptor 3, representing the socket listening on (privileged) port 80:
Open file descriptor: 3
User ID of File Owner: root
Group ID of File Owner: root
WARNING - Descriptor is leaked from parent.
File type: socket
Address Family: AF_INET
Local address: 0.0.0.0
Local Port: 80, http
NOTICE - connected to a privileged port
WARNING - Appears to be a listening descriptor - WAHOO!
Peer address: UNKNOWN
File descriptor mode is: read and write
In the further output, descriptors 4 and 5 were pipes used for communication with the CGI script, and descriptor 8 represented one open connection from the server to a client. But descriptors 6 and 7 are of particular interest because they represent the error log and the access log, respectively:
Open file descriptor: 6
User ID of File Owner: root
Group ID of File Owner: root
WARNING - Descriptor is leaked from parent.
File type: regular file, inode - 426313, device - 2050
The descriptor is: /usr/local/apache/logs/error_log
File's actual permissions: 644
File descriptor mode is: write only, append
----
Open file descriptor: 7
User ID of File Owner: root
Group ID of File Owner: root
WARNING - Descriptor is leaked from parent.
File type: regular file, inode - 426314, device - 2050
The descriptor is: /usr/local/apache/logs/access_log
File's actual permissions: 644
File descriptor mode is: write only, append
Exploiting the leakages is easy. For example, compile and run the following program (from the PHP script) instead of the audit utility. (You may need to change the descriptor number from 6 to the value you got for the error log in your audit report.)
#include <string.h>
#include <unistd.h>

/* Descriptor 6 was the error log in the audit output; adjust to match your report. */
#define ERROR_LOG_FD 6

int main(void) {
    char *msg = "What am I doing here?\n";
    write(ERROR_LOG_FD, msg, strlen(msg));
    return 0;
}
As expected, the message will appear in the web server error log! This means
anyone who can execute binaries from PHP can fake messages in the access log and the
error log. They could use this ability to plant false evidence against someone else
into the access log, for example. Because of the nature of the error log (it is
often used as stderr
for scripts), you cannot trust it
completely, but the ability to write to the access log is really dangerous. Choosing
not to use PHP as a module, but to execute it through suEXEC instead (as discussed
later in this chapter) avoids this problem.
Apache configuration data is typically located in one or more files in the
conf/
folder of the distribution, where only the
root
user has access. Sometimes, it is necessary or convenient
to distribute configuration data, and there are two reasons to do so:
Distributed configuration files can be edited by users other than the
root
user.
Configuration directives in distributed configuration files are resolved on every request, which means that any changes take effect immediately without having to have Apache restarted.
If you trust your developers and want to give them more control over Apache or if
you do not trust a junior system administrator enough to give her control over the
whole machine, you can choose to give such users full control only over Apache
configuration and operation. Use Sudo (http://www.courtesan.com/sudo/
) to configure your system to allow
non-root
users to run some commands as
root
.
Apache distributes configuration data by allowing specially-named files,
.htaccess
by default, to be placed together with the
content. The name of the file can be changed using the AccessFileName
directive, but I do not recommend this. While serving a request for a file somewhere,
Apache also looks to see if there are .htaccess
files anywhere on
the path. For example, if the full path to the file is
/var/www/htdocs/index.html
, Apache will look for the following
(in order):
/.htaccess
/var/.htaccess
/var/www/.htaccess
/var/www/htdocs/.htaccess
For each .htaccess
file found, Apache merges it with the existing
configuration data. All .htaccess
files found are processed, and it
continues to process the request. There is a performance penalty associated with Apache
looking for access files everywhere. Therefore, it is a good practice to tell Apache you
make no use of this feature in most directories (see below) and to enable it only where
necessary.
The syntax of access file content is the same as that in
httpd.conf
. However, Apache understands the difference between
the two, and understands that some access files will be maintained by people who are not
to be fully trusted. This is why administrators are given a choice as to whether to
enable access files and, if such files are enabled, which of the Apache features to
allow in them.
Another way to distribute Apache configuration is to include other files from the
main httpd.conf
file using the Include
directive. This is terribly insecure! You have no control over what is written in
the included file, so whoever holds write access to that file holds control over
Apache.
Access file usage is controlled with the AllowOverride
directive. I discussed this directive in
Chapter 2, where I recommended a
None
setting by default:
<Directory />
    AllowOverride None
</Directory>
This setting tells Apache not to look for .htaccess
files and
gives maximum performance and maximum security. To give someone maximum control over a
configuration in a particular folder, you can use:
<Directory /home/ivanr/public_html/>
    AllowOverride All
</Directory>
Configuration errors in access files will not be detected when Apache starts. Instead,
they will result in the server responding with status code 500
(Internal Server Error) and placing a log message in the error log.
Situations when you will give maximum control over a configuration are rare. More
often than not you will want to give users limited privileges. In the following example,
user ivanr
is only allowed to use access control configuration
directives:
<Directory /home/ivanr/public_html/>
    AllowOverride AuthConfig Limit
</Directory>
You must understand what you are giving your users. In addition to
None
and All
, there are five groups of
AllowOverride
options (AuthConfig
,
FileInfo
, Indexes
, Limit
,
and Options
). Giving away control for each of these five groups gives
away some of the overall Apache security. Usage of AllowOverride
Options
is an obvious danger, giving users the power to enable Apache to
follow symbolic links (potentially exposing any file on the server) and to place
executable content wherever they please. Some AllowOverride
and
Options
directive options (also discussed in Chapter 2), used with other Apache modules, can also
lead to unforeseen possibilities:
If FollowSymLinks
(an Options
directive option) is allowed, a user can
create a symbolic link to any other file on the server (e.g.,
/etc/passwd
). Using
SymLinksIfOwnerMatch
is better.
The mod_rewrite
module can be used to achieve the
same effect as a symbolic link. Interestingly, that is why
mod_rewrite
requires FollowSymLinks
to work in the .htaccess
context.
If PHP is running as the web server user, the PHP auto_prepend_file option can be used to make it fetch any file on the server.
If AllowOverride FileInfo
is specified, users can execute a
file through any module (and filter in Apache 2) available. For example, if you
have the server configured to execute PHP through suEXEC, users can reroute
requests through a running PHP module instead.
More dangerously, AllowOverride
FileInfo
allows the use of the
SetHandler
directive, and that can be exploited
to map the output of special-purpose modules (such as
mod_status
or mod_info
) into
users’ web spaces.
It is possible to use mod_security
(described in Chapter 12) to prevent users who can assign
handlers from using certain sensitive handlers. The following two rules will detect an
attempt to use the special handlers and will only allow the request if it is sent to a
particular domain name:
SecFilterSelective HANDLER ^(server-status|server-info)$ chain
SecFilterSelective SERVER_NAME !^www\.apachesecurity\.net$ deny,log,status:404
Securing dynamic requests is a problem facing most Apache administrators. In this section, I discuss how to enable CGI and PHP scripts and make them run securely and with acceptable performance.
Because of the inherent danger executable files introduce, execution should always be disabled by default (as discussed in Chapter 2). Enable execution in a controlled manner and only where necessary. Execution can be enabled using one of four main methods:
Using the ScriptAlias
directive
Explicitly by configuration
Through server-side includes
By assigning a handler, type, or filter
Using ScriptAlias
is a quick and dirty approach to enabling
script execution:
ScriptAlias /cgi-script/ /home/ivanr/cgi-bin/
Though it works fine, this approach can be dangerous. This directive creates a virtual web folder and enables CGI script execution in it but leaves the configuration of the actual folder unchanged. If there is another way to reach the same folder (maybe it’s located under the web server tree), visitors will be able to download script source code. Enabling execution explicitly by configuration will avoid this problem and help you understand how Apache works:
<Directory /home/ivanr/public_html/cgi-bin>
    Options +ExecCGI
    SetHandler cgi-script
</Directory>
Execution of server-side includes (SSIs) is controlled via the
Options
directive. When the Options
+Includes
syntax is used, it allows the
exec
element, which in turn allows operating system
command execution from SSI files, as in:
<!--#exec cmd="ls" -->
To disable command execution but still keep SSI working, use
Options
+IncludesNOEXEC
.
For CGI script execution to take place, two conditions must be fulfilled.
Apache must know execution is what is wanted (for example through setting a
handler via SetHandler cgi-script
), and script execution must
be enabled as a special security measure. This is similar to how an additional
permission is required to enable SSIs. Special permissions are usually not
needed for other (non-CGI) types of executable content. Whether they are is left
for the modules’ authors to decide, so it may vary. For example, to enable PHP,
it is enough to have the PHP module installed and to assign a handler to PHP
files in some way, such as via one of the following two different
approaches:
# Execute PHP when filenames end in .php
AddHandler application/x-httpd-php .php

# All files in this location are assumed to be PHP scripts.
<Location /scripts/>
    SetHandler application/x-httpd-php
</Location>
In Apache 2, yet another way to execute content is through the use of output filters. Output filters are designed to transform output, and script execution can be seen as just another type of transformation. Server-side includes on Apache 2 are enabled using output filters:
AddOutputFilter INCLUDES .shtml
Some older versions of the PHP engine used output filters to execute PHP on Apache 2, so you may encounter them in configurations on older installations.
There are three Apache directives that help establish control over CGI scripts. Used in the main server configuration area, they will limit access to resources from the main web server user. This is useful to prevent the web server from overtaking the machine (through a CGI-based DoS attack) but only if you are not using suEXEC. With suEXEC in place, different resource limits can be applied to each user account used for CGI script execution. Such usage is demonstrated in the virtual hosts example later in this chapter. Here are the directives that specify resource limits:
RLimitCPU (the CPU time a process may consume, in seconds)
RLimitMEM (the memory a process may allocate, in bytes)
RLimitNPROC (the number of processes that may run simultaneously)
Each directive accepts two parameters, for soft and hard limits, respectively.
Processes can choose to extend the soft limit up to the value configured for the
hard limit. It is recommended that you specify both values. Limits can be configured
in server configuration and virtual hosts in Apache 1 and also in directory contexts
and .htaccess
files in Apache 2. An example of the use of these
directives is shown in the next section.
Having discussed how execution wrappers work and why they are useful, I will now give more attention to practical aspects of using the suEXEC mechanism to increase security. Below you can see an example of configuring Apache with the suEXEC mechanism enabled. I have used all possible configuration options, though this is unnecessary if the default values are acceptable:
$ ./configure \
> --enable-suexec \
> --with-suexec-bin=/usr/local/apache/bin/suexec \
> --with-suexec-caller=httpd \
> --with-suexec-userdir=public_html \
> --with-suexec-docroot=/home \
> --with-suexec-uidmin=100 \
> --with-suexec-gidmin=100 \
> --with-suexec-logfile=/var/www/logs/suexec_log \
> --with-suexec-safepath=/usr/local/bin:/usr/bin:/bin \
> --with-suexec-umask=022
Compile and install as usual. Due to high security expectations, suEXEC is known to be rigid. Sometimes you will find yourself compiling Apache several times until you configure the suEXEC mechanism correctly. To verify suEXEC works, look into the error log after starting Apache. You should see suEXEC report:
[notice] suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)
If you do not see the message, that probably means Apache did not find the
suexec
binary (the --with-suexec-bin
option is not configured correctly). If you need to check the parameters used to
compile suEXEC, invoke it with the -V
option, as in the following
(this works only if done as root
or as the user who is supposed
to run suEXEC):
# /usr/local/apache/bin/suexec -V
-D AP_DOC_ROOT="/home"
-D AP_GID_MIN=100
-D AP_HTTPD_USER="httpd"
-D AP_LOG_EXEC="/var/www/logs/suexec_log"
-D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
-D AP_SUEXEC_UMASK=022
-D AP_UID_MIN=100
-D AP_USERDIR_SUFFIX="public_html"
Once compiled correctly, suEXEC usage is pretty straightforward. The following is
a minimal example of using suEXEC in a virtual host configuration. (The syntax is
correct for Apache 2. To do the same for Apache 1, you need to replace
SuexecUserGroup ivanr ivanr with User ivanr and Group ivanr.) This example also demonstrates the use of CGI script
limit configuration:
<VirtualHost *>
    ServerName ivanr.example.com
    DocumentRoot /home/ivanr/public_html
    # Execute all scripts as user ivanr, group ivanr
    SuexecUserGroup ivanr ivanr
    # Maximum 1 CPU second to be used by a process
    RLimitCPU 1 1
    # Maximum of 25 processes at any one time
    RLimitNPROC 25 25
    # Allow 10 MB to be used per-process
    RLimitMEM 10000000 10000000
    <Directory /home/ivanr/public_html/cgi-bin>
        Options +ExecCGI
        SetHandler cgi-script
    </Directory>
</VirtualHost>
A CGI script with the following content comes in handy to verify everything is configured correctly:
#!/bin/sh
echo "Content-Type: text/html"
echo
echo "Hello world from user <b>`whoami`</b>!"
Placed in the cgi-bin/
folder of the above virtual host, the
script should display a welcome message from user ivanr
(or
whatever user you specified). If you wish, you can experiment with the CGI resource
limits now, changing them to very low values until all CGI scripts stop
working.
Because of its thorough checks, suEXEC makes it difficult to execute binaries using the SSI mechanism: command line parameters are not allowed, and the script must reside in the same directory as the SSI script. What this means is that users must have copies of all binaries they intend to use. (Previously, they could use any binary that was on the system path.)
Unless you have used suEXEC before, the above script is not likely to work on your
first attempt. Instead, one of many suEXEC security checks is likely to fail,
causing suEXEC to refuse execution. For example, you probably did not know that the
script and the folder in which the script resides must be owned by the same user and
group as specified in the Apache configuration. There are many checks like this and
each of them contributes to security slightly. Whenever you get an “Internal Server
Error” instead of script output, look into the suexec_log file
file
to determine what is wrong. The full list of suEXEC checks can be found on the
reference page http://httpd.apache.org/docs-2.0/suexec.html
. Instead of
replicating the list here I have decided to do something more useful. Table 6-2 contains a list of suEXEC
error messages with explanations. Some error messages are clear, but many times I
have had to examine the source code to understand what was happening. The messages
are ordered in the way they appear in the code so you can use the position of the
error message to tell how close you are to getting suEXEC working.
Table 6-2. suEXEC error messages
| Error message | Description |
|---|---|
| User mismatch (%s instead of %s) | The suEXEC binary can be invoked only by the user specified at compile time with the --with-suexec-caller option (httpd in this chapter's examples). |
| Invalid command (%s) | The command begins with /, begins with ../, or contains /../; absolute paths and paths pointing outside the current directory are not allowed. |
| Invalid target user name: (%s) | The target username is invalid (not known to the system). |
| Invalid target user id: (%s) | The target uid is invalid (not known to the system). |
| Invalid target group name: (%s) | The target group name is invalid (not known to the system). |
| Cannot run as forbidden uid (%d/%s) | An attempt was made to execute a binary as a user whose uid is lower than the minimum specified at compile time (--with-suexec-uidmin). |
| Cannot run as forbidden gid (%d/%s) | An attempt was made to execute a binary as a group whose gid is lower than the minimum specified at compile time (--with-suexec-gidmin). |
| Failed to setgid (%ld: %s) | Change to the target group failed. |
| Failed to setuid (%ld: %s) | Change to the target user failed. |
| Cannot get current working directory | suEXEC cannot retrieve the current working directory. This would possibly indicate insufficient permissions for the target user. |
| Cannot get docroot information (%s) | suEXEC cannot get access to the document root. For nonuser requests, the document root is specified at compile time using the --with-suexec-docroot option. For user requests (of the ~username type), the document root is the public subfolder (--with-suexec-userdir) of the user's home directory. |
| Command not in docroot (%s) | The target file is not within the allowed document root directory. See the previous message description for a definition. |
| Cannot stat directory: (%s) | suEXEC cannot get information about the current working directory. |
| Directory is writable by others: (%s) | Directory in which the target binary resides is group or world writable. |
| Cannot stat program: (%s) | This probably means the file is not found. |
| File is writable by others: (%s/%s) | The target file is group or world writable. |
| File is either setuid or setgid: (%s/%s) | The target file is marked setuid or setgid, which is not allowed. |
| Target uid/gid (%ld/%ld) mismatch with directory (%ld/%ld) or program (%ld/%ld) | The file and the directory in which the file resides must be owned by the target user and target group. |
| File has no execute permission: (%s/%s) | The target file is not marked as executable. |
| AP_SUEXEC_UMASK of %03o allows write permission to group and/or other | This message is only a warning. The selected umask (set at compile time with --with-suexec-umask) is too relaxed, allowing newly created files to be writable by group and/or others. |
| (%d)%s: exec failed (%s) | Execution failed. |
You can use suEXEC outside virtual hosts with the help of the
mod_userdir
module. This is useful in cases where the
system is not (or at least not primarily) a virtual hosting system, but users
want to obtain their home pages using the ~username
syntax.
The following is a complete configuration example. You will note suEXEC is not
explicitly configured here. If it is configured and compiled into the web
server, as shown previously, it will work automatically:
UserDir public_html
UserDir disabled root

<Directory /home/*/public_html>
    # Give users some control in their .htaccess files.
    AllowOverride AuthConfig Limit Indexes
    # Conditional symbolic links and SSIs without execution.
    Options SymLinksIfOwnerMatch IncludesNoExec
    # Allow GET and POST.
    <Limit GET POST>
        Order Allow,Deny
        Allow from all
    </Limit>
    # Deny everything other than GET and POST.
    <LimitExcept GET POST>
        Order Deny,Allow
        Deny from all
    </LimitExcept>
</Directory>

# Allow per-user CGI-BIN folder.
<Directory /home/*/public_html/cgi-bin/>
    Options +ExecCGI
    SetHandler cgi-script
</Directory>
Ensure the configuration of the UserDir
directive
(public_html
in the previous example) matches the
configuration given to suEXEC at compile time with the
--with-suexec-userdir
configuration option.
Do not set the UserDir
directive to
./
to expose users’ home folders directly. This will
also expose home folders of other system users, some of which may contain
sensitive data.
A frequent requirement is to give your (nonvirtual host) users access to PHP,
but this is something suEXEC will not support by default. Fortunately, it can be
achieved with some mod_rewrite
magic. All users must have a
copy of the PHP binary in their cgi-bin/
folder. This is an
excellent solution because they can also have a copy of the
php.ini
file and thus configure PHP any way they want.
Use mod_rewrite
in the following way:
# Apply the transformation to PHP files only.
RewriteCond %{REQUEST_URI} \.php$
# Transform the URI into something mod_userdir can handle.
RewriteRule ^/~(\w+)/(.*)$ /~$1/cgi-bin/php/~$1/$2 [NS,L,PT,E=REDIRECT_STATUS:302]
The trick is to transform the URI into something
mod_userdir
can handle. By setting the
PT
(passthrough) option in the rule, we are telling
mod_rewrite
to forward the URI to other modules (we
want mod_userdir
to see it); this would not take place
otherwise. You must set the REDIRECT_STATUS
environment
variable to 302 so the PHP binary knows it is safe to execute the script. (Read
the discussion about PHP CGI security in Chapter
3.)
There are two ways to implement a mass virtual hosting system. One is to use
the classic approach and configure each host using the
<VirtualHost>
directive. This is a very
clean way to support virtual hosting, and suEXEC works as you would expect, but
Apache was not designed to work efficiently when the number of virtual hosts
becomes large. Once the number of virtual hosts reaches thousands, the loss of
performance becomes noticeable. Using modern servers, you can deploy a maximum
of 1,000-2,000 virtual hosts per machine. Having significantly more virtual
hosts on a machine is possible, but only if a different approach is used. The
alternative approach requires all hosts to be treated as part of a single
virtual host and to use some method to determine the path on disk based on the
contents of the Host
request header. This is what
mod_vhost_alias
(http://httpd.apache.org/docs-2.0/mod/mod_vhost_alias.html
)
does.
If you use
mod_vhost_alias,
suEXEC will stop working and
you will have a problem with security once again. The other execution wrappers
are more flexible when it comes to configuration, and one option is to
investigate using them as a replacement for suEXEC.
But there is a way of deploying mass virtual hosting with suEXEC enabled, and
it comes with some help from mod_rewrite
. The solution provided below is a
mixture of the mass virtual hosting with mod_rewrite
approach documented in Apache documentation (http://httpd.apache.org/docs-2.0/vhosts/mass.html
) and the
trick I used above to make suEXEC work with PHP for user home pages. This
solution is only meant to serve as a demonstration of a possibility; you are
advised to verify it works correctly for what you want to achieve. I say this
because I personally prefer the traditional approach to virtual hosting which is
much cleaner, and the possibility of misconfiguration is much smaller. Use the
following configuration data in place of the two
mod_rewrite
directives in the previous example:
# Extract the value of SERVER_NAME from the
# Host request header.
UseCanonicalName Off

# Since there has to be only one access log for
# all virtual hosts its format must be modified
# to support per virtual host splitting.
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
CustomLog /var/www/logs/access_log vcommon

RewriteEngine On
RewriteMap LOWERCASE int:tolower
RewriteMap VHOST txt:/usr/local/apache/conf/vhost.map

# Translate the hostname to username using the
# map file, and store the username into the REQUSER
# environment variable for use later.
RewriteCond ${LOWERCASE:%{SERVER_NAME}} ^(.+)$
RewriteCond ${VHOST:%1|HTTPD} ^(.+)$
RewriteRule ^/(.*)$ /$1 [NS,E=REQUSER:%1]

# Change the URI to a ~username syntax and finish
# the request if it is not a PHP file.
RewriteCond %{ENV:REQUSER} !^HTTPD$
RewriteCond %{REQUEST_URI} !\.php$
RewriteRule ^/(.*)$ /~%{ENV:REQUSER}/$1 [NS,L,PT]

# Change the URI to a ~username syntax and finish
# the request if it is a PHP file.
RewriteCond %{ENV:REQUSER} !^HTTPD$
RewriteCond %{REQUEST_URI} \.php$
RewriteRule ^/(.*)$ /~%{ENV:REQUSER}/cgi-bin/php/~%{ENV:REQUSER}/$1 \
    [NS,L,PT,E=REDIRECT_STATUS:302]

# The remaining directives make PHP work when content
# is genuinely accessed through the ~username syntax.
RewriteCond %{ENV:REQUSER} ^HTTPD$
RewriteCond %{REQUEST_URI} \.php$
RewriteRule ^/~(\w+)/(.*)$ /~$1/cgi-bin/php/~$1/$2 [NS,L,PT,E=REDIRECT_STATUS:302]
You will need to create a simple mod_rewrite
map file,
/usr/local/apache/conf/vhost.map
, to map virtual hosts
to usernames:
jelena.example.com jelena
ivanr.example.com ivanr
There can be any number of virtual hosts mapping to the same username. If
virtual hosts have www
prefixes, you may want to add them
to the map files twice, once with the prefix and once without.
If mod_fastcgi
(http://www.fastcgi.com
) is added to Apache, it can be used to make scripts persistent, provided the scripts support persistent operation. I like FastCGI because it is easy to implement yet very powerful. Here, I demonstrate how you can make PHP persistent. PHP comes with FastCGI support (though your php binary may need to be rebuilt to enable it, as shown later), so on the Apache side you only need to install mod_fastcgi. The example
. The example
is not PHP specific so it can work for any other binary that supports
FastCGI.
To add mod_fastcgi
to Apache 1, type the following while you
are in the mod_fastcgi
source folder:
$ apxs -o mod_fastcgi.so -c *.c
# apxs -i -a -n fastcgi mod_fastcgi.so
To add mod_fastcgi
to Apache 2, type the following while you
are in the mod_fastcgi
source folder:
$ cp Makefile.AP2 Makefile
$ make top_dir=/usr/local/apache
# make top_dir=/usr/local/apache install
When you start Apache the next time, one more process will be running: the FastCGI process manager, which is responsible for managing the persistent scripts, and the communication between them and Apache.
Here is what you need to add to Apache configuration to make it work:
# Load the mod_fastcgi module.
LoadModule fastcgi_module modules/mod_fastcgi.so

# Tell it to use the suexec wrapper to start other processes.
FastCgiWrapper /usr/local/apache/bin/suexec

# This configuration will recycle persistent processes once every
# 300 seconds, and make sure no processes run unless there is
# a need for them to run.
FastCgiConfig -singleThreshold 100 -minProcesses 0 -killInterval 300
I prefer to leave the existing cgi-bin/
folders alone so
non-FastCGI scripts continue to work. (As previously mentioned, scripts must be
altered to support FastCGI.) This is why I create a new folder,
fastcgi-bin/
. A copy of the php
binary
(the FastCGI version) needs to be placed there. It makes sense to remove this binary
from the cgi-bin/
folder to avoid the potential for confusion.
A FastCGI-aware php
binary is compiled as a normal CGI version
but with the addition of the --enable-fastcgi
switch on the
configure line. It is worth checking for FastCGI support now because it makes
troubleshooting easier later. If you are unsure whether the version you have
supports FastCGI, invoke it with the -v
switch. The supported
interfaces will be displayed in the brackets after the version number.
$ ./php -v
PHP 5.0.2 (cgi-fcgi) (built: Nov 19 2004 11:09:11)
Copyright (c) 1997-2004 The PHP Group
Zend Engine v2.0.2, Copyright (c) 1998-2004 Zend Technologies.
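If the brackets do not mention cgi-fcgi, the binary needs to be rebuilt. A sketch of the configure step follows; apart from --enable-fastcgi, the options shown are illustrative and should mirror whatever your existing PHP build uses:

$ ./configure --enable-fastcgi --enable-force-cgi-redirect --prefix=/usr/local/php
$ make
# make install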
This is what an suEXEC-enabled and FastCGI-enabled virtual host configuration looks like:
<VirtualHost *>
    ServerName ivanr.example.com
    DocumentRoot /home/ivanr/public_html

    # Execute all scripts as user ivanr, group ivanr
    SuexecUserGroup ivanr ivanr

    AddHandler application/x-httpd-php .php
    Action application/x-httpd-php /fastcgi-bin/php

    <Directory /home/ivanr/public_html/cgi-bin>
        Options +ExecCGI
        SetHandler cgi-script
    </Directory>

    <Directory /home/ivanr/public_html/fastcgi-bin>
        Options +ExecCGI
        SetHandler fastcgi-script
    </Directory>
</VirtualHost>
Use this PHP file to verify the configuration works:
<? echo "Hello world!<br>"; passthru("whoami"); ?>
The first request should be slower to execute than all subsequent requests. After
that first request has finished, you should see a php
process
still running as the user (ivanr
in my case). To ensure FastCGI
is keeping the process persistent, you can tail the access and suEXEC log files. For
every persistent request, there will be one entry in the access log and no entries
in the suEXEC log. If you see the request in each of these files, something is wrong
and you need to go back and figure out what that is.
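For example, with the log locations used earlier in this chapter (adjust them to your own configuration):

$ tail -f /var/www/logs/access_log /var/www/logs/suexec_log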
If you configure FastCGI to run as demonstrated here, it will be fully dynamic.
The FastCGI process manager will create new processes on demand and shut them down
later so that they don’t waste memory. Because of this, you can enable FastCGI for a
large number of users and achieve security and adequate dynamic
request performance. (The mod_rewrite
trick to get PHP to run
through suEXEC works for FastCGI as well.)
Running PHP as a module in an untrusted environment is not recommended. Having said that, PHP comes with many security-related configuration options that can be used to make even module-based operation decently secure. What follows is a list of actions you should take if you want to run PHP as a module (in addition to the actions required for secure installation as described in Chapter 3):
Use the open_basedir
configuration option with a
different setting for every user, to limit the files PHP scripts can
reach.
Deploy PHP in safe mode. (Be prepared to wrestle with the safe-mode-related problems, which will be reported by your users on a regular basis.) In safe mode, users can execute only the binaries that you put into a special folder. Be very careful what you put there, if anything. A process created by executing a binary from PHP can access the filesystem without any restrictions.
Use the disable_functions configuration option to disable dangerous functions, including the PHP-Apache integration functions. (See Chapter 3 for more information.)
Never allow PHP dynamic loadable modules to be used by your users (set the enable_dl configuration directive to Off). A php.ini sketch consolidating these settings follows this list.
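Here is the sketch; the paths and the function list are purely illustrative, and safe mode applies to PHP 4 and PHP 5 only:

; Per-user restrictions for module-based PHP (illustrative values).
; Confine scripts to the user's own tree and a private temporary folder.
open_basedir = /home/ivanr/public_html:/home/ivanr/tmp
; Safe mode, with a (preferably empty) folder for permitted binaries.
safe_mode = On
safe_mode_exec_dir = /home/ivanr/bin
; Disable dangerous functions and the PHP-Apache integration functions.
disable_functions = exec, shell_exec, system, passthru, popen, proc_open, apache_child_terminate
; No dynamic loading of PHP extensions.
enable_dl = Off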
The above list introduces so many restrictions that it makes PHP significantly less useful. Though full-featured PHP programs can be deployed under these conditions, users are not used to deploying PHP programs in such environments. This will lead to broken PHP programs and problems your support staff will have to resolve.
The trick to handling large numbers of users is to establish a clear, well-defined policy at the beginning and stick to it. It is essential to have the policy distributed to all users. Few of them will read it, but there isn’t anything else you can do about it except be polite when they complain. With all the work we have done so far to secure dynamic request execution, some holes do remain. System accounts (virtual or not) can and will be used to attack your system or the neighboring accounts. A well-known approach to breaking into shared hosting web sites is through insecure configuration, working from another shared hosting account with the same provider.
Many web sites use PHP-based content management programs but are hosted on servers where PHP is configured to store session information in a single folder for all virtual accounts. Under such circumstances, it is probably trivial to hijack the program from a neighboring hosting account. If file permissions are not configured correctly and dynamic requests are executed as a single user, attackers can use PHP scripts to read other users’ files and retrieve their data.
Though very few hosting providers give shells to their customers, few are aware
that a shell is just a tool to make use of the access privileges customers already
have. They do not need a shell to upload a web script to simulate a shell (such
scripts are known as web shells
), or even to upload a daemon
and run it on the provider’s server.
If you have not used a web shell before, you will be surprised how full-featured some of them are. For examples, see the following:
CGITelnet.pl (http://www.rohitab.com/cgiscripts/cgitelnet.html
)
PhpShell (http://phpshell.sourceforge.net/
)
PerlWebShell (http://yola.in-berlin.de/perlwebshell/
)
You cannot stop users from running web shells, but by having proper filesystem
configuration or virtual filesystems, you can make them a nonissue. Still, you may
want to have cron scripts that look through customers’ cgi-bin/
folders searching for well-known web shells. Another possibility is to implement
intrusion detection and monitor Apache output to detect traces of web shells in
action.
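A minimal sketch of such a cron job follows; the signatures are simply the names of the shells listed above and are purely illustrative, so maintain your own list:

#!/bin/sh
# Search users' cgi-bin/ folders for strings identifying known web shells
# and mail anything suspicious to the administrator.
SUSPECTS=`find /home/*/public_html/cgi-bin -type f 2>/dev/null | xargs grep -l -i -E "CGITelnet|PhpShell|PerlWebShell" 2>/dev/null`
if [ -n "$SUSPECTS" ]; then
    echo "$SUSPECTS" | mail -s "Possible web shells found" root
fi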
When users are allowed to upload and execute their own binaries (and many are), that makes them potentially very dangerous. If the binaries are being executed safely (with an execution wrapper), the only danger comes from having a vulnerability in the operating system. This is where regular patching helps. As part of your operational procedures, be prepared to disable executable content upload, if a kernel vulnerability is discovered, until you have it patched.
Some people use their execution privileges to start daemons. (Or attackers exploit
other people’s execution privileges to do that.) For example, it is quite easy to
upload and run something like Tiny Shell (http://www.cr0.net:8040/code/network/
) on a high port on the
machine. There are two things you can do about this:
Monitor the execution of all user processes to detect the ones running for a long time. Such processes can be killed and reported. (However, ensure you do not kill the FastCGI processes.) A sketch of such a monitoring script appears after this list.
Configure the firewall around the machine to only allow unsolicited traffic to a few required ports (80 and 443 in most cases) into the server, and not to allow any unrelated traffic out of the server. This will prevent the binaries run on the server from communicating with the attacker waiting outside. Deployment of outbound traffic filtering can have a negative impact on what your customers can do. With the rise in popularity of web services, many web sites use services provided by other sites anywhere on the Internet. Blocking unrelated outgoing traffic will break such web sites. If you are really paranoid (and must allow unrelated outgoing traffic), consider allowing only HTTP traffic and routing it through a reverse proxy where you can inspect and control the payload.
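As an illustration of the first point, the following sketch reports (rather than kills) long-running user processes. The usernames, the one-hour threshold, and the exclusion of the persistent php FastCGI processes are all assumptions to adapt:

#!/bin/sh
# Report processes of hosting users (ivanr and jelena are examples) that
# have run for over an hour (etime with two colons means hours or more),
# skipping the persistent FastCGI php processes.
LONG=`ps -e -o user,pid,etime,comm | awk '($1=="ivanr" || $1=="jelena") && $4!="php" && $3 ~ /:.*:/'`
if [ -n "$LONG" ]; then
    echo "$LONG" | mail -s "Long-running user processes" root
fi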