LPI 303: Security

In this document you will find information on the different objectives of the LPIC 303 exam. Before using this document, check on the LPI site whether the objectives are still the same. This document is provided as a study aid and is in no way a guarantee for passing the exam. Try to gain some practical knowledge and really understand how things work; that should help.

Topic 320: Cryptography

320.1 OpenSSL (weight: TBD)

Candidates should know how to configure and use OpenSSL. This includes creating your own Certificate Authority and issuing SSL certificates for various applications.

Key Knowledge Areas

  • security and other social issues
  • example2

The following is a partial list of the used files, terms and utilities:

  • security
  • /etc/security
  • security manpage(1)

OpenSSL

How to be your own Certificate Authority

1. Install OpenSSL and make sure it is available in your path.

$ openssl version

This command should display the version and release date of OpenSSL.

OpenSSL 0.9.5 28 Feb 2000


2. Some systems may require the creation of a random number file. Cryptographic software needs a source of unpredictable data to work correctly. Many open source operating systems provide a “random device”; systems like AIX do not. The command to create such a file is:

$ openssl rand -out .rnd 512


3. Edit the /etc/ssl/openssl.cnf file and search for _default. Edit each of these default settings to fit your needs. Also search for “req_distinguished_name”; in that section you find the default answers for some of the openssl questions. If you need to create multiple certificates with the same details, it is helpful to change these default answers.
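
For example, the defaults in the req_distinguished_name section might end up looking like this (the values shown are illustrative, matching the CSR example further below, not the settings shipped with OpenSSL):

[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_default             = NL
stateOrProvinceName             = State or Province Name (full name)
stateOrProvinceName_default     = Zuid-Holland
localityName                    = Locality Name (eg, city)
localityName_default            = Delft
organizationName                = Organization Name (eg, company)
organizationName_default        = HostingComp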

4. Create a RSA private key for your CA (will be Triple-DES encrypted and PEM formatted):

$ cd /var/ssl        # or: cd /usr/lib/ssl (on Ubuntu)
$ misc/CA.pl -newca


If this doesn't work try:

$ openssl req -new -x509 -keyout demoCA/private/cakey.pem -out demoCA/cacert.pem -days 3650


This command creates two files. The first is the private CA key in demoCA/private/cakey.pem. The second, demoCA/cacert.pem, is the public CA certificate. As part of this process you are asked several questions. Answer them as you see fit. The password protects access to your private key, so make it a good one; with this key anyone can sign other certificates as you. The “Common Name” answer should reflect the fact that you are a CA. A name like MyCompany Certificate Authority would be good.
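
You can inspect the new CA certificate, for instance to verify its subject and validity period:

$ openssl x509 -noout -subject -dates -in demoCA/cacert.pem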


Creating a server certificate

1. Now you need to create a key to use as your server key.

$ misc/CA.pl -newreq

or

$ openssl genrsa -des3 -out server.key 1024

Generating RSA private key, 1024 bit long modulus
.........................................................++++++
........++++++
e is 65537 (0x10001)
Enter PEM pass phrase:
Verifying password - Enter PEM pass phrase: 


2. Generate a CSR (Certificate Signing Request)
Once the private key has been generated, a Certificate Signing Request can be created.
During the generation of the CSR, you will be prompted for several pieces of information. These are the X.509 attributes of the certificate. One of the prompts will be for “Common Name (e.g., YOUR name)”. It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. If the website to be protected will be https://www.domain.com, then enter www.domain.com at this prompt. The command to generate the CSR is as follows:

$ openssl req -new -key server.key -out server.csr

Country Name (2 letter code) [GB]: NL
State or Province Name (full name) [Berkshire]: Zuid-Holland
Locality Name (eg, city) [Newbury]: Delft
Organization Name (eg, company) [My Company Ltd]: HostingComp
Organizational Unit Name (eg, section) []: Information Technology
Common Name (eg, your name or your server's hostname) []: www.domain.com
Email Address []: admin@domain.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []: 


3. Remove Passphrase from Key
One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each time the web server is started. Obviously this is not necessarily convenient as someone will not always be around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. With that being said, use the following command to remove the pass-phrase from the key:

$ cp server.key server.key.org
$ openssl rsa -in server.key.org -out server.key

The newly created server.key file no longer has a passphrase in it.
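
Because the key is now unencrypted, tighten its permissions immediately so that only root can read it:

# chown root:root server.key
# chmod 400 server.key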

4. Now you have three options:

  • Let an official CA sign the CSR.
  • Self-sign the CSR
  • Sign the CSR using your own CA.


4.1 Let an official CA sign the CSR. You have to send this Certificate Signing Request (CSR) to a Certificate Authority (CA) for signing, and the result is a real certificate which can be used for Apache. You can have the CSR signed by a commercial CA like Verisign or Thawte: you usually have to paste the CSR into a web form, pay for the signing and await the signed certificate, which you can then store in a server.crt file.

4.2 Self-sign the CSR

$ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt


4.3 Sign the CSR using your own CA. Now you can use this CA to sign server CSRs in order to create real SSL certificates for use inside an Apache webserver (assuming you already have a server.csr at hand):

$ /var/ssl/misc/CA.pl -sign

or

$ openssl ca -policy policy_anything -out server.crt -infiles server.csr

This signs the server CSR and results in a server.crt file.

5. You can see the details of the received Certificate via the command:

$ openssl x509 -noout -text -in server.crt


6. Now you have two files: server.key and server.crt. These can be used as follows inside your Apache httpd.conf file:

SSLCertificateFile    /path/to/this/server.crt
SSLCertificateKeyFile /path/to/this/server.key

The server.csr file is no longer needed.


Convert the signed certificate to PKCS#12 format so it can be imported into a browser.

openssl pkcs12 -export -in newcert.pem -inkey newreq.pem -name "www.domain.com" -certfile demoCA/cacert.pem -out cert.p12


Source1
Source2

Changes against source:
- corrected some typos.
- deleted irrelevant sections.
- added some clarifications.
- moved some text between sections.
- split topics, mixed the info from the sources.
- layout.

security

root@richard:/etc/security# ls -alh
total 52K
drwxr-xr-x   2 root root 4.0K 2008-11-07 18:06 .
drwxr-xr-x 149 root root  12K 2008-11-08 17:13 ..
-rw-r--r--   1 root root 4.6K 2008-10-16 06:36 access.conf
-rw-r--r--   1 root root 3.4K 2008-10-16 06:36 group.conf
-rw-r--r--   1 root root 1.9K 2008-10-16 06:36 limits.conf
-rw-r--r--   1 root root 1.5K 2008-10-16 06:36 namespace.conf
-rwxr-xr-x   1 root root 1003 2008-10-16 06:36 namespace.init
-rw-r--r--   1 root root 3.0K 2007-10-01 20:49 pam_env.conf
-rw-r--r--   1 root root  419 2008-10-16 06:36 sepermit.conf
-rw-r--r--   1 root root 2.2K 2007-10-01 20:49 time.conf

Config files:

  • access.conf - login access control table (pam_access).
  • group.conf - configuration for the pam_group module.
  • namespace.conf - configuration of polyinstantiated directories (pam_namespace).
  • pam_env.conf - environment variables set by pam_env.
  • time.conf - configuration for the pam_time module.


root@richard:/etc/security# cat limits.conf 
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4

# End of file

320.2 Advanced GPG (weight: TBD)

Candidates should know how to use GPG. This includes key generation, signing and publishing to keyservers. Managing multiple private keys and IDs is also included.

Key Knowledge Areas

  • security part 2
  • mall security

The following is a partial list of the used files, terms and utilities:

  • more files
  • more commands
  • more concepts

GPG key generation

gpg --gen-key

A new key pair is created (key pair: secret and public key). The first question is which algorithm should be used. You can easily (and maybe you should, since it is so widely used) use DSA/ElGamal; it is not patented.

The next question is the key length. This is very user dependent: you have to choose between security and computing time. The longer the key, the lower the risk that an intercepted message can be cracked, but calculation time also increases with key length. If computing time is an issue, you should still consider that you will want to use the key for some time; arithmetic performance increases quickly as new processors get faster and faster, so keep this in mind. The minimal key length GnuPG demands is 768 bits, though some people say you should use a key size of 2048 bits (which is also the maximum with GnuPG at this moment). For DSA, 1024 bits is a standard size. When security is a top priority and performance is less of an issue, you ought to pick the largest key size available.

The system then asks you to enter a name, a comment and an e-mail address; the user ID of the key is constructed from these entries. You can change these settings later. (A non-interactive alternative for scripted setups is sketched after the list below.)

Finally you have to enter a password (actually, passphrase would be more appropriate, since blanks are allowed). This password is needed to use the functionality belonging to your secret key. A good passphrase contains the following elements:

  • it is long,
  • it has special (non alphanumeric) characters,
  • it is something special (not a name),
  • it is very hard to guess (so NOT names, birth dates, phone numbers, number of a credit card/checking account, names and number of children, …)
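
For scripted setups, GnuPG can also generate keys unattended from a parameter file; a minimal sketch (all names, addresses and the passphrase are placeholders; see the “Unattended key generation” section of the GnuPG documentation, doc/DETAILS, for the full parameter list):

$ cat > keyparams <<EOF
Key-Type: DSA
Key-Length: 1024
Subkey-Type: ELG-E
Subkey-Length: 2048
Name-Real: Test User
Name-Email: test@example.org
Expire-Date: 0
Passphrase: use-a-real-passphrase-here
%commit
EOF
$ gpg --batch --gen-key keyparams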

GPG key signing

As mentioned before in the introduction there is one major Achilles' heel in the system. This is the authenticity of public keys. If you have a wrong public key you can say bye bye to the value of your encryption. To overcome such risks there is a possibility of signing keys. In that case you place your signature over the key, so that you are absolutely positive that this key is valid. This leads to the situation where the signature acknowledges that the user ID mentioned in the key is actually the owner of that key. With that reassurance you can start encrypting.
Using the gpg --edit-key UID command for the key that needs to be signed, you can sign it with the sign command.
You should only sign a key as being authentic when you are ABSOLUTELY SURE that the key is really authentic: that is, when you are positive you got the key from the owner yourself (for instance at a key signing party), or when you got the key through other means and checked it (for instance by phone) using the fingerprint mechanism. You should never sign a key based on any assumption.
Based on the available signatures and “ownertrusts” GnuPG determines the validity of keys. Ownertrust is a value that the owner of a key uses to determine the level of trust for a certain key. The values are:

  • 1 = Don't know
  • 2 = I do NOT trust
  • 3 = I trust marginally
  • 4 = I trust fully

If you do not trust a signature you can say so and thus disregard the signature. Trust information is not stored in the same file as the keys, but in a separate file.
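
A typical signing session with the edit-key shell looks like this (the UID is a placeholder; fpr, sign, trust and save are built-in subcommands):

$ gpg --edit-key friend@example.org
gpg> fpr        (verify the fingerprint out-of-band first)
gpg> sign
gpg> trust      (set the ownertrust level, values as listed above)
gpg> save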

Signing and checking signatures

To sign data with your own key, use the command:

gpg -s (or --sign) [Data]

By doing this, compression also takes place, which means the result is not legible. If you want a legible result you can use:

gpg --clearsign [Data]

this will make sure that the result is clearly legible; furthermore it does the same (signing the data).
With

gpg -b (or --detach-sign) [Data]

you can write the signature to a separate file. It is highly recommended to use this option especially when signing binary files (like archives, for instance). Also the --armor option can be extremely useful here.
Quite often you find that data is encrypted and signed as well. The full instruction looks like:

gpg [-u Sender] [-r Recipient] [--armor] --sign --encrypt [Data]

The functionality of the options -u (--local-user) and -r (--recipient) is as described before.
When encrypted data has been signed as well, the signature is checked when the data is decrypted. You can check the signature of signed data by using the command:

gpg [--verify] [Data]

This will only work (of course) when you own the public key of the sender.

GPG publishing to keyservers

Now it's time to send your key to a keyserver. Type this to send your key to the keys.indymedia.org server, for example:

gpg --keyserver keys.indymedia.org --send-key <your keyid>

When you pulled up your fingerprint you also got your key ID: it is listed after 1024D/ and is also the last 8 hex digits of your fingerprint. Once you've completed this step, your key is out there for others to start using. You can receive friends' keys, or get an updated copy of your own key that a friend has signed, with the following command:

gpg --keyserver keys.indymedia.org --recv-key <keyid>

Managing multiple private keys and IDs

Exporting keys
The command for exporting a key for a user is:

gpg --export [UID]

If no UID has been submitted, all present keys will be exported. By default the output goes to stdout, but with the -o option it is written to a file. It may be advisable to use the option -a to write the key to a 7-bit ASCII file instead of a binary file.
By exporting public keys you can broaden your horizon. Others can start contacting you securely. This can be done by publishing it on your homepage, by finger, through a key server like http://www.pca.dfn.de/dfnpca/pgpkserv/ or any other method you can think of.
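
For example, to write an ASCII-armored copy of a single (hypothetical) key to a file:

gpg -a -o alice.asc --export alice@example.org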

Importing keys
When you received someone's public key (or several public keys) you have to add them to your key database in order to be able to use them. To import into the database the command looks like this:

gpg --import [Filename]

If the filename is omitted, the data will be read from stdin.

Revoke a key
For several reasons you may want to revoke an existing key. For instance: the secret key has been stolen or became available to the wrong people, the UID has been changed, the key is not large enough anymore, etc. In all these cases the command to revoke the key is:

gpg --gen-revoke UID

This creates a revocation certificate. To be able to do this you need your secret key, otherwise anyone could revoke your key. This has one disadvantage: if you no longer know the passphrase, the key has become useless, but you cannot revoke it! To overcome this problem it is wise to create a revocation certificate when you create a key pair, and if you do so, keep it safe! This can be on disk, paper, etc. Make sure this certificate does not fall into the wrong hands, because otherwise someone else can issue the revocation certificate for your key and make it useless.
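
For example, to store a revocation certificate for a (hypothetical) key in a file right after creating the key pair:

gpg --output revoke.asc --gen-revoke alice@example.org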

Key administration
With the GnuPG system comes a file that acts as some kind of database. In this file all keys, together with the information that comes with them, are stored (everything except the ownertrust values; for more information on those, see Key signing). With

gpg --list-keys

all present keys will be displayed. To see the signatures as well type:

gpg --list-sigs

(see Key signing for further information). To see the fingerprints type:

gpg --fingerprint

You want to see “fingerprints” to ensure that somebody is really the person they claim to be (for example during a telephone call). This command will result in a list of relatively small numbers.
To list the secret keys you type:

gpg --list-secret-keys

Note that listing fingerprints and signatures of private keys has no use whatsoever.
In order to delete a public key you type:

gpg --delete-key UID

For deleting a secret key you type:

gpg --delete-secret-key UID

There is one more important command that is relevant for working with keys.

gpg --edit-key UID

Using this you can edit (among other things) the expiration date, show the fingerprint and sign your key. Obviously you need your passphrase for this; when you have entered it you will see a command line.

Source1
Source2 - Sending GPG key to keyserver

320.3 Encrypted Filesystems (weight: TBD)

Candidates should be able to setup and configure encrypted filesystems.

Key Knowledge Areas

  • LUKS, cryptsetup-luks
  • dm-crypt and awareness CBC, ESSIV, LRW and XTS modes
  • cryptmount

The following is a partial list of the used files, terms and utilities:

  • more files
  • more commands
  • more concepts

LUKS, cryptsetup-luks

LUKS (Linux Unified Key Setup) provides a standard on-disk format for encrypted partitions to facilitate cross-distribution compatibility, to allow for multiple users/passwords and effective password revocation, and to provide additional security against low-entropy attacks. To use LUKS, you must use a LUKS-enabled version of cryptsetup.

Create the Container and Loopback Mount it
First we need to create the container file, and loopback mount it.

root@host:~$  dd if=/dev/urandom of=testfile bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.77221 seconds, 5.9 MB/s
root@host:~$ losetup /dev/loop/0 testfile
root@host:~$ 

Note: Skip this step for encrypted partitions.

luksFormat
Before we can open an encrypted partition, we need to initialize it.

root@host:~$ cryptsetup luksFormat /dev/loop/0

WARNING!
========
This will overwrite data on /dev/loop/0 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.
root@host:~$ 

Note: For encrypted partitions, replace the loopback device with the device name of the partition.

luksOpen
Now that the partition is formatted, we can create a Device-Mapper mapping for it.

root@host:~$ cryptsetup luksOpen /dev/loop/0 testfs
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
root@host:~$ 


Formatting the Filesystem
The first time we create the Device-Mapper mapping, we need to format the new virtual device with a new filesystem.

root@host:~$ mkfs.ext2 /dev/mapper/testfs


Mounting the Virtual Device
Now, we can mount the new virtual device just like any other device.

root@host:~$ mount /dev/mapper/testfs /mnt/test/
root@host:~$ 


Mounting an Existing Encrypted Container File or Partition

root@host:~$ losetup /dev/loop/0 testfile
root@host:~$ cryptsetup luksOpen /dev/loop/0 testfs
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
root@host:~$ mount /dev/mapper/testfs /mnt/test/
root@host:~$ 

Note: Skip the losetup setup for encrypted partitions.

Unmounting and Closing an Encrypted Container File or Partition

root@host:~$ umount /mnt/test
root@host:~$ cryptsetup luksClose /dev/mapper/testfs
root@host:~$ losetup -d /dev/loop/0
root@host:~$ 

Note: Skip the losetup setup for encrypted partitions.

Handling Multiple Users and Passwords

The LUKS header allows you to assign 8 different passwords that can access the encrypted partition or container. This is useful for environments where, for example, the CEO and CTO can each have their own password for the device and the administrator(s) can have another. This makes it easy to change a password in case of employee turnover while keeping the data accessible.

Adding passwords to new slots

root@host:~$ cryptsetup luksAddKey /dev/loop/0
Enter any LUKS passphrase:
Verify passphrase:
key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.
root@host:~$ 

Deleting key slots

root@host:~$ cryptsetup luksDelKey /dev/loop/0 1
Command successful.
root@host:~$ 


Displaying LUKS Header Information

root@host:~$ cryptsetup luksDump /dev/loop/0
LUKS header information for /dev/loop/0

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 1032
MK bits:        128
MK digest:      a9 3c c2 33 0b 33 db ff d2 b9 dc 6c 01 d6 90 48 1d c1 2e bb
MK salt:        98 46 a3 28 64 35 f1 55 f0 2b 8e af f5 71 16 64
                3c 30 1f 6c b1 4b 43 fd 23 49 28 a6 b0 e4 e2 14
MK iterations:  10
UUID:           089559af-41af-4dfe-b736-9d9d48d3bf53

Key Slot 0: ENABLED
        Iterations:             254659
        Salt:                   02 da 9c c3 c7 39 a5 62 72 81 37 0f eb aa 30 47
                                01 1b a8 53 93 23 83 71 20 03 1b 6c 90 84 a5 6e
        Key material offset:    8
        AF stripes:             4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
root@host:~$ 


Source

dm-crypt and awareness CBC, ESSIV, LRW and XTS modes

CBC
Despite its deficiencies, CBC (Cipher Block Chaining) is still the most commonly used mode for disk encryption. Since no auxiliary information is stored for the IV of each sector, the IV is derived from the sector number, its content, and some static information. Several such methods have been proposed and used.

ESSIV
Encrypted Salt-Sector Initialization Vector (ESSIV) is a method for generating initialization vectors for block encryption to use in disk encryption.
The usual methods for generating IVs are predictable sequences of numbers based on, for example, a time stamp or the sector number, and permit certain attacks such as a watermarking attack.
ESSIV prevents such attacks by generating IVs from a combination of the sector number with a hash of the key. It is the combination with the key, in the form of a hash, that makes the IV unpredictable.

LRW
In order to prevent more elaborate attacks, different modes of operation were introduced: tweakable narrow-block encryption (LRW and XEX) and wide-block encryption (CMC and EME).
Whereas the purpose of a usual block cipher E_K is to mimic a random permutation for any secret key K, the purpose of tweakable encryption E_K^T is to mimic a random permutation for any secret key K and any known tweak T.

XTS
XTS is XEX-based Tweaked CodeBook mode (TCB) with CipherText Stealing (CTS). Although XEX-TCB-CTS should be abbreviated XTC, the “C” was replaced with “S” (for “stealing”) to avoid confusion with the abbreviation for ecstasy. Ciphertext stealing provides support for sectors whose size is not divisible by the block size, for example 520-byte sectors and 16-byte blocks. XTS-AES was standardized on 2007-12-19 as IEEE P1619, Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices.

The XTS proof yields strong security guarantees as long as the same key is not used to encrypt much more than 1 terabyte of data. Up to this point, no attack can succeed with probability better than approximately one in eight quadrillion. However, this security guarantee deteriorates as more data is encrypted with the same key: with a petabyte the attack success probability increases to at most eight in a trillion, and with an exabyte it rises to at most eight in a million.

This means that using XTS with one key for more than a few hundred terabytes of data opens up the possibility of attacks (and this is not mitigated by using a larger AES key size, so using a 256-bit key doesn't change it).

The decision on the maximum amount of data to be encrypted with a single key using XTS should consider the above together with the practical implication of the attack (which is the ability of the adversary to modify the plaintext of a specific block, where the position of this block may not be under the adversary's control).
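
With cryptsetup the cipher mode is chosen at format time. As a sketch, to format with AES in XTS mode instead of the cbc-essiv default shown in the luksDump output above (device name as used in the earlier examples; XTS needs a double-length key, and aes-xts-plain64 requires a reasonably recent kernel):

cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/loop/0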

Source

cryptmount

In order to create a new encrypted filing system managed by cryptmount, you can use the supplied 'cryptmount-setup' program, which can be used by the superuser to interactively configure a basic setup.
Alternatively, suppose that we wish to set up a new encrypted filing system that will have a target name of “opaque”. If we have a free disk partition available, say /dev/hdb63, then we can use this directly to store the encrypted filing system. Alternatively, if we want to store the encrypted filing system within an ordinary file, we need to create space using a recipe such as:

dd if=/dev/zero of=/home/opaque.fs bs=1M count=512

and then replace all occurrences of '/dev/hdb63' in the following with '/home/opaque.fs'. (/dev/urandom can be used in place of /dev/zero, debatably for extra security, but it is rather slower.)
First, we need to add an entry in /etc/cryptmount/cmtab, which describes the encryption that will be used to protect the filesystem itself and the access key, as follows:

opaque {
    dev=/dev/hdb63 dir=/home/crypt
    fstype=ext2 fsoptions=defaults cipher=twofish
    keyfile=/etc/cryptmount/opaque.key
    keyformat=builtin
}

Here, we will be using the “twofish” algorithm to encrypt the filing system itself, with the built-in key-manager being used to protect the decryption key (to be stored in /etc/cryptmount/opaque.key).
In order to generate a secret decryption key (in /etc/cryptmount/opaque.key) that will be used to encrypt the filing system itself, we can execute, as root:

cryptmount --generate-key 32 opaque


This will generate a 32-byte (256-bit) key, which is known to be supported by the Twofish cipher algorithm, and store it in encrypted form after asking the system administrator for a password.
If we now execute, as root:

cryptmount --prepare opaque

we will then be asked for the password that we used when setting up /etc/cryptmount/opaque.key, which will enable cryptmount to setup a device-mapper target (/dev/mapper/opaque). (If you receive an error message of the form device-mapper ioctl cmd 9 failed: Invalid argument, this may mean that you have chosen a key-size that isn’t supported by your chosen cipher algorithm. You can get some information about suitable key-sizes by checking the output from “more /proc/crypto”, and looking at the “min keysize” and “max keysize” fields.)
We can now use standard tools to create the actual filing system on /dev/mapper/opaque:

mke2fs /dev/mapper/opaque

(It may be advisable, after the filesystem is first mounted, to check that the permissions of the top-level directory created by mke2fs are appropriate for your needs.)
After executing

cryptmount --release opaque
mkdir /home/crypt

the encrypted filing system is ready for use. Ordinary users can mount it by typing

cryptmount -m opaque

or

cryptmount opaque

and unmount it using

cryptmount -u opaque

cryptmount keeps a record of which user mounted each filesystem in order to provide a locking mechanism to ensure that only the same user (or root) can unmount it.

PASSWORD CHANGING
After a filesystem has been in use for a while, one may want to change the access password. For an example target called “opaque”, this can be performed by executing:

cryptmount --change-password opaque

After successfully supplying the old password, one can then choose a new password which will be used to re-encrypt the access key for the filesystem. (The filesystem itself is not altered or re-encrypted.)

Source: Man-page CRYPTMOUNT(8)

Topic 321: Access Control

321.1 Host Based Access Control (weight: TBD)

Candidates should be familiar with basic host based access control such as nsswitch configuration, PAM and password cracking.

Key Knowledge Areas

  • nsswitch
  • PAM
  • password cracking

The following is a partial list of the used files, terms and utilities:

  • security

nsswitch

Name Service Switch (NSS)
/etc/nsswitch.conf: defines the order in which to contact different name services.

passwd:         compat
group:          compat
shadow:         compat

hosts:          dns [!UNAVAIL=return] files
networks:       nis [NOTFOUND=return] files
ethers:         nis [NOTFOUND=return] files
protocols:      nis [NOTFOUND=return] files
rpc:            nis [NOTFOUND=return] files
services:       nis [NOTFOUND=return] files

The general form is ‘[’ ( ‘!’? STATUS ‘=’ ACTION )+ ‘]’ where
STATUS ⇒ success | notfound | unavail | tryagain
ACTION ⇒ return | continue
The case of the keywords is insignificant. The STATUS values are the results of a call to a lookup function of a specific service. They mean:

  • success - No error occurred and the wanted entry is returned. The default action for this is ‘return’.
  • notfound - The lookup process works ok but the needed value was not found. The default action is ‘continue’.
  • unavail - The service is permanently unavailable. This can either mean the needed file is not available, or, for DNS, the server is not available or does not allow queries. The default action is ‘continue’.
  • tryagain - The service is temporarily unavailable. This could mean a file is locked or a server currently cannot accept more connections. The default action is ‘continue’.
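
Applied to the hosts line in the example above, dns [!UNAVAIL=return] files means the lookup stops after the DNS step for every status except unavail, so /etc/hosts is only consulted when the DNS service itself is unreachable:

hosts: dns [!UNAVAIL=return] files
# success  -> return (default action)
# notfound -> return (matched by !UNAVAIL)
# tryagain -> return (matched by !UNAVAIL)
# unavail  -> continue on to "files"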


Source: Man-page NSSWITCH.CONF(5)

PAM

PAM - Pluggable Authentication Modules for Linux. As an example, this is the PAM configuration file for the login service (in a file named login):

#%PAM-1.0
auth     required   /lib/security/pam_securetty.so
auth     required   /lib/security/pam_nologin.so
auth     sufficient /lib/security/pam_ldap.so
auth     required   /lib/security/pam_unix_auth.so use_first_pass
account  sufficient /lib/security/pam_ldap.so
account  required   /lib/security/pam_unix_acct.so
password required   /lib/security/pam_cracklib.so
password sufficient /lib/security/pam_ldap.so
password required   /lib/security/pam_unix_passwd.so use_first_pass md5 shadow
session  required   /lib/security/pam_unix_session.so

password cracking

John can work in the following modes:
[a] Wordlist : John will simply use a file with a list of words that will be checked against the passwords. See RULES for the format of wordlist files.
[b] Single crack : In this mode, john will try to crack the password using the login/GECOS information as passwords.
[c] Incremental : This is the most powerful mode. John will try any character combination to resolve the password. Details about these modes can be found in the MODES file in john’s documentation, including how to define your own cracking methods.
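
With recent versions of John you can also force a particular mode instead of relying on the default order; as a sketch, using the password file created in the next step and a hypothetical wordlist file:

$ john --single /tmp/crack.password.db
$ john --wordlist=password.lst --rules /tmp/crack.password.db
$ john --incremental /tmp/crack.password.db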

Using John the Ripper to check for weak passwords / crack passwords
First use the unshadow command to combine the /etc/passwd and /etc/shadow files so John can use them. You need this because, if you only used your shadow file, the GECOS information wouldn't be used by the “single crack” mode, and you also wouldn't be able to use the -shells option. On a normal system you'll need to run unshadow as root to be able to read the shadow file, so log in as root or use the good old sudo/su commands.
Type the following command:

# /usr/bin/unshadow /etc/passwd /etc/shadow > /tmp/crack.password.db

To use John, you just need to supply it the password file created with the unshadow command, along with the desired options. If no mode is specified, john will try “single” first, then “wordlist” and finally “incremental” password cracking methods.

$ john /tmp/crack.password.db

Output:

$ john  /tmp/crack.password.db
Loaded 1 password (FreeBSD MD5 [32/32])

This procedure can take a long time. To see the cracked passwords, enter:

$ john -show /tmp/crack.password.db

test:123456:1002:1002:test,,,:/home/test:/bin/bash
didi:abc123:1003:1003::/home/didi:/usr/bin/rssh

2 passwords cracked, 1 left

The above output clearly indicates that user test has password 123456 and didi has password abc123.

Source

321.2 Extended Attributes and ACLs (weight: TBD)

Candidates are required to understand and know how to use Extended Attributes and Access Control Lists.

Key Knowledge Areas

  • ACLs/EAs
  • getfattr/setfattr, getfacl/setfacl

The following is a partial list of the used files, terms and utilities:

  • security

Extended Attributes

In Linux, the ext2, ext3, ext4, JFS, ReiserFS and XFS filesystems support extended attributes (abbreviated xattr) when support for them is enabled in the kernel configuration. Any regular file may have a list of extended attributes. Each attribute is denoted by a name and the associated data. The name must be a null-terminated string and must be prefixed by a namespace identifier and a dot character. Currently, four namespaces exist: user, trusted, security and system. The user namespace has no restrictions with regard to naming or contents. The system namespace is primarily used by the kernel for access control lists. The security namespace is used by SELinux, for example.
Extended attributes are not widely used in user-space programs in Linux, although they are supported in the 2.6 and later versions of the kernel. Beagle does use extended attributes, and freedesktop.org publishes recommendations for their use.

Source

getfattr
For each file, getfattr displays the file name, and the set of extended attribute names (and optionally values) which are associated with that file.

OPTIONS

-n name, --name=name
    Dump the value of the named extended attribute. 
-d, --dump
    Dump the values of all extended attributes associated with pathname. 
-e en, --encoding=en
    Encode values after retrieving them. Valid values of en are "text", "hex", and "base64". Values encoded as text strings are enclosed in 
    double quotes ("), while strings encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively. 
-h, --no-dereference
    Do not follow symlinks. If pathname is a symbolic link, the symbolic link itself is examined, rather than the file the link refers to. 
-m pattern, --match=pattern
    Only include attributes with names matching the regular expression pattern. The default value for pattern is "^user\\.", which 
    includes all the attributes in the user namespace. Refer to attr(5) for a more detailed discussion on namespaces. 
--absolute-names
    Do not strip leading slash characters ('/'). The default behaviour is to strip leading slash characters. 
--only-values
    Dump out the extended attribute value(s) only. 
-R, --recursive
    List the attributes of all files and directories recursively. 
-L, --logical
    Logical walk, follow symbolic links. The default behaviour is to follow symbolic link arguments, and to skip symbolic links 
    encountered in subdirectories. 
-P, --physical
    Physical walk, skip all symbolic links. This also skips symbolic link arguments. 

The output format of getfattr -d is as follows:

1:  # file: somedir/
2:  user.name0="value0"
3:  user.name1="value1"
4:  user.name2="value2"
5:  ...

Line 1 identifies the file name for which the following lines are being reported. The remaining lines (lines 2 to 4 above) show the name and value pairs associated with the specified file.

Source

setfattr
The setfattr command associates a new value with an extended attribute name for each specified file.

OPTIONS

-n name, --name=name
    Specifies the name of the extended attribute to set. 
-v value, --value=value
    Specifies the new value for the extended attribute. 
-x name, --remove=name
    Remove the named extended attribute entirely. 
-h, --no-dereference
    Do not follow symlinks. If pathname is a symbolic link, it is not followed, but is instead itself the inode being modified. 
--restore=file
    Restores extended attributes from file. The file must be in the format generated by the getfattr command with the --dump option. 
    If a dash (-) is given as the file name, setfattr reads from standard input. 

Example:

$ setfattr -n user.testing -v "this is a test" test-1.txt
$ getfattr -n user.testing test-1.txt

# file: test-1.txt
user.testing="this is a test"

Source

Access Control Lists

getfacl
getfacl - get file access control lists. For each file, getfacl displays the file name, owner, group, and the Access Control List (ACL). If a directory has a default ACL, getfacl also displays the default ACL. Non-directories cannot have default ACLs.
If getfacl is used on a file system that does not support ACLs, getfacl displays the access permissions defined by the traditional file mode permission bits.
The output format of getfacl is as follows:

$ getfacl somedir

1:  # file: somedir/
2:  # owner: lisa
3:  # group: staff
4:  user::rwx
5:  user:joe:rwx               #effective:r-x
6:  group::rwx                 #effective:r-x
7:  group:cool:r-x
8:  mask:r-x
9:  other:r-x
10:  default:user::rwx
11:  default:user:joe:rwx       #effective:r-x
12:  default:group::r-x
13:  default:mask:r-x
14:  default:other:---

Lines 4, 6 and 9 correspond to the user, group and other fields of the file mode permission bits. These three are called the base ACL entries. Lines 5 and 7 are named user and named group entries. Line 8 is the effective rights mask. This entry limits the effective rights granted to all groups and to named users. (The file owner and others permissions are not affected by the effective rights mask; all other entries are.) Lines 10–14 display the default ACL associated with this directory. Directories may have a default ACL. Regular files never have a default ACL.

Source: Man-page getfacl

setfacl
setfacl - set file access control lists

   OPTIONS
       -b, --remove-all
           Remove all extended ACL entries. The base ACL entries of the owner,
           group and others are retained.

       -k, --remove-default
           Remove  the  Default ACL. If no Default ACL exists, no warnings are
           issued.

       -n, --no-mask
           Do not recalculate the effective rights mask. The default  behavior
           of  setfacl  is  to  recalculate  the ACL mask entry, unless a mask
           entry was explicitly given.  The mask entry is set to the union  of
           all  permissions  of the owning group, and all named user and group
           entries. (These are  exactly  the  entries  affected  by  the  mask
           entry).

       --mask
           Do recalculate the effective rights mask, even if an ACL mask entry
           was explicitly given. (See the -n option.)

       -d, --default
           All operations apply to the Default ACL. Regular ACL entries in the
           input  set are promoted to Default ACL entries. Default ACL entries
           in the input set are discarded. (A warning is issued if  that  hap-
           pens).

       --restore=file
           Restore a permission backup created by ‘getfacl -R’ or similar. All
           permissions of a complete directory subtree are restored using this
           mechanism.  If the input contains owner comments or group comments,
           and setfacl is run by root, the owner and owning group of all files
           are  restored  as  well.  This  option  cannot  be mixed with other
           options except ‘--test’.
 ACL ENTRIES
       The setfacl utility recognizes the following ACL entry formats  (blanks
       inserted for clarity):


       [d[efault]:] [u[ser]:]uid [:perms]
              Permissions  of  a  named user. Permissions of the file owner if
              uid is empty.

       [d[efault]:] g[roup]:gid [:perms]
              Permissions of a named group. Permissions of the owning group if
              gid is empty.

       [d[efault]:] m[ask][:] [:perms]
              Effective rights mask

       [d[efault]:] o[ther][:] [:perms]
              Permissions of others.
EXAMPLES

       Granting an additional user read access
              setfacl -m u:lisa:r file

       Revoking  write  access  from all groups and all named users (using the
       effective rights mask)
              setfacl -m m::rx file

       Removing a named group entry from a file’s ACL
              setfacl -x g:staff file

       Copying the ACL of one file to another
              getfacl file1 | setfacl --set-file=- file2

       Copying the access ACL into the Default ACL
              getfacl --access dir | setfacl -d -M- dir

Source: Man-page setfacl

321.3 SELinux (weight: TBD)

Candidates should have a thorough knowledge of SELinux.

Key Knowledge Areas

  • SELinux configuration
  • TE, RBAC, MAC and DAC concepts and use

The following is a partial list of the used files, terms and utilities:

  • security

TE

TE (Type Enforcement) is the primary access control mechanism in SELinux: every process runs in a domain and every object (file, socket, etc.) carries a type label, and the policy rules define which domains may perform which operations on which types. Most of the rules in an SELinux policy are TE rules.

RBAC

Role-based access control (RBAC) is an access policy determined by the system, not the owner. RBAC is used in commercial applications and also in military systems, where multi-level security requirements may also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC controls read and write permissions based on a user's clearance level and additional labels. RBAC controls collections of permissions that may include complex operations such as an e-commerce transaction, or may be as simple as read or write. A role in RBAC can be viewed as a set of permissions.

Three primary rules are defined for RBAC:

1. Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a role.

2. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.

3. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which they are authorized.

Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles.

Most IT vendors offer RBAC in one or more products.

Source

MAC

Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used in multilevel systems that process highly sensitive data, such as classified government and military information. A multilevel system is a single computer system that handles multiple classification levels between subjects and objects.

  • Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than the requested object.
  • Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of MAC-based systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.

Two methods are commonly used for applying mandatory access control:

  • Rule-based access controls: This type of control further defines specific conditions for access to a requested object. All MAC-based systems implement a simple form of rule-based access control to determine whether access should be granted or denied by matching:
  1. An object's sensitivity label
  2. A subject's sensitivity label
  • Lattice-based access controls: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.


Source

DAC

Discretionary access control (DAC) is an access policy determined by the owner of an object. The owner decides who is allowed to access the object and what privileges they have.

Two important concepts in DAC are

  • File and data ownership: Every object in the system has an owner. In most DAC systems, each object's initial owner is the subject that caused it to be created. The access policy for an object is determined by its owner.
  • Access rights and permissions: These are the controls that an owner can assign to other subjects for specific resources.

Access controls may be discretionary in ACL-based or capability-based access control systems. (In capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a similar degree of control over its access policy.)

Source

SELinux configuration

getenforce
Display the current SELinux mode:

$ getenforce 
Disabled

setenforce
Modify the mode SELinux is running in, if it is enabled. SELinux has two modes:

  • Enforcing - enforce policy
  • Permissive - warn only
$ setenforce Enforcing

To enable or disable SELinux you need to modify /etc/selinux/config and reboot the system.

getsebool
Example: display all booleans used for squid

$ getsebool -a | grep squid
allow_httpd_squid_script_anon_write --> off
squid_connect_any --> off
squid_disable_trans --> off

semanage
Example: display all file contexts for squid

$ semanage fcontext -l | grep squid
/etc/squid(/.*)?                                   all files          system_u:object_r:squid_conf_t:s0 
/var/log/squid(/.*)?                               all files          system_u:object_r:squid_log_t:s0 
/var/spool/squid(/.*)?                             all files          system_u:object_r:squid_cache_t:s0 
/usr/share/squid(/.*)?                             all files          system_u:object_r:squid_conf_t:s0 
/var/cache/squid(/.*)?                             all files          system_u:object_r:squid_cache_t:s0 
/usr/sbin/squid                                    regular file       system_u:object_r:squid_exec_t:s0 
/var/run/squid\.pid                                regular file       system_u:object_r:squid_var_run_t:s0 
/usr/lib/squid/cachemgr\.cgi                       regular file       system_u:object_r:httpd_squid_script_exec_t:s0 
/usr/lib64/squid/cachemgr\.cgi                     regular file       system_u:object_r:httpd_squid_script_exec_t:s0 
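
After adding or changing a context with semanage fcontext, apply it to the files on disk with restorecon, for example:

$ restorecon -Rv /var/spool/squid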

setsebool
Example: allow anonymous FTP write access (use -P to make the change persistent across reboots)

$ setsebool allow_ftp_anon_write=on

321.4 Other Mandatory Access Control systems (weight: TBD)

Candidates should be familiar with other Mandatory Access Control systems for Linux. This includes major features of these systems but not configuration and use.

Key Knowledge Areas

  • SMACK
  • AppArmor

The following is a partial list of the used files, terms and utilities:

  • security

SMACK

Simplified Mandatory Access Control Kernel for Linux. The Simplified Mandatory Access Control Kernel (Smack) provides a complete Linux kernel based mechanism for protecting processes and data from inappropriate manipulation. Smack uses process, file, and network labels combined with an easy to understand and manipulate way of identifying the kinds of accesses that should be allowed.
Smack consists of three components:

  • A kernel component that is implemented as a Linux Security Modules module. It requires netlabel and works best with file systems that support extended attributes.
  • A startup script that ensures that some device files have the correct Smack attributes and loads the Smack configuration if any is defined.
  • A set of patches to the GNU Core Utilities package to make it aware of Smack extended file attributes, plus a set of similar initial patches to Busybox. It is important to note that Smack works perfectly well without any user-space support.


Source1
Source2

AppArmor

AppArmor (“Application Armor”) is security software for Linux, released under the GNU General Public License. From 2005 through September 2007, AppArmor was maintained by Novell. AppArmor allows the system administrator to associate with each program a security profile which restricts the capabilities of that program. It supplements the traditional Unix discretionary access control (DAC) model by providing mandatory access control (MAC).

In addition to manually specifying profiles, AppArmor includes a learning mode, in which violations of the profile are logged, but not prevented. This log can then be turned into a profile, based on the program's typical behavior.
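
On most distributions the learning mode is driven with the tools from the apparmor-utils package; a sketch of the usual workflow (program and profile names are placeholders):

# aa-genprof /usr/sbin/someprogram                   (exercise the program, then build a profile from the logs)
# aa-complain /etc/apparmor.d/usr.sbin.someprogram   (learning mode: violations are logged, not blocked)
# aa-enforce /etc/apparmor.d/usr.sbin.someprogram    (switch the profile to enforcing mode)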

AppArmor is implemented using the Linux Security Modules kernel interface.

AppArmor was created in part as an alternative to SELinux, which critics claim is difficult for administrators to set up and maintain. Unlike SELinux, which is based on applying labels to files, AppArmor works with file paths. Proponents of AppArmor claim that it is less complex and easier for the average user to learn than SELinux. They also claim that AppArmor requires fewer modifications to work with existing systems: for example, SELinux requires a filesystem that supports “security labels”, and thus cannot provide access control for files mounted via NFS. AppArmor is file-system agnostic.

Source

Topic 322: Application Security

322.1 BIND/DNS (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of BIND DNS services.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Configuration

First of all, check security mailing lists and web sites for new versions of BIND. In particular, versions prior to 8.2.3 are vulnerable to known attacks.
Hide your version number from foreign queries; it could be used to craft a special attack against you. Since BIND 8.2 you may use in named.conf:

options {
   version "None of your business";
};

You can also restrict queries. Globally:

options {
   allow-query { address-match-list; };
};

Or per zone (which takes precedence over the global ACL):

zone "test.com" {
   type slave;
   file "db.test";
   allow-query { 192.168.0.0/24; };
};

Even more important: make sure only real slave DNS servers can transfer your zones from your master. Use the keyword allow-transfer, either globally (in an options statement, where it applies to all zones) or per zone. On the slaves, disable zone transfers:

allow-transfer { none; };
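
On the master, allow transfers only from the slaves' addresses, either globally or per zone (the address below is an example):

options {
   allow-transfer { 192.168.0.2; };
};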

Don't run BIND as root! Since 8.1.2 there are options to change the user (-u) and group (-g) under which BIND runs. Use a non-privileged user (i.e. create a new one, without shell access). Make sure your zone files have the correct permissions (named.conf is read while BIND still has root's permissions, so don't change this file's permissions).
Also, run BIND in a chroot jail. Since 8.1.2 there is the option -t to specify the directory for the nameserver to chroot() to. Make sure all the files needed by BIND (e.g. log files) are under the chroot jail. If you plan to use ndc with a chrooted BIND, don't forget to pass the new pathname of the UNIX socket to ndc!
Here's a short guide on how to set up a chrooted bind9 environment on Debian. As the configuration of bind8 is very similar, the same procedure applies to bind8 for creating a chrooted environment.

  • Stop the currently running bind.
/etc/init.d/bind9 stop
  • In order to chroot bind in a jail, we need to specify what environment in /etc/default/bind9:
OPTIONS="-u bind -t /var/lib/named"
  • We still want logging in our /var/log/syslog, so we change /etc/default/syslogd that it opens an extra socket to which the chrooted bind can log through into /var/log/syslog.
SYSLOGD="-a /var/lib/named/dev/log"
  • Run a couple of mkdir's for the environment
mkdir /var/lib/named
mkdir -p /var/lib/named/var/run/bind/run
mkdir /var/lib/named/etc
mkdir /var/lib/named/dev
mkdir /var/lib/named/var/cache 
  • Move over our existing config
mv /etc/bind /var/lib/named/etc/bind
  • Link it
ln -s /var/lib/named/etc/bind /etc/bind
  • Change ownership in the chrooted var and etc
chown -R bind:bind /var/lib/named/var/* 
chown -R bind:bind /var/lib/named/etc/bind
  • Create some devices & set permissions
mknod /var/lib/named/dev/null c 1 3
mknod /var/lib/named/dev/random c 1 8
chmod 666 /var/lib/named/dev/random /var/lib/named/dev/null
  • Restart syslogd & start bind
/etc/init.d/sysklogd restart
/etc/init.d/bind9 start

If bind does not start and there are error messages in the syslog, keep in mind that these messages were created from inside the chroot, so a permission problem with /var/run/bind/run/named.pid really means a problem with /var/lib/named/var/run/bind/run/named.pid.

Source

Threats

  • Cache poisoning
  • Denial of service
  • Flaws in the software

322.2 Mail Services (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of Postfix mail services. Awareness of security issues in Sendmail is also required but not configuration.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Postfix

Postfix is a replacement for Sendmail that has several security advantages over it. Postfix consists of several small programs that each perform their own small task, and almost all of these programs run in a chroot jail. The following parameters in /etc/postfix/main.cf should be set to ensure that Postfix accepts only local emails for delivery:

  mydestination = $myhostname, localhost.$mydomain, localhost
  inet_interfaces = localhost

The parameter mydestination lists all the domains to receive emails for. The parameter inet_interfaces specifies the network interfaces to listen on.
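
You can check the values Postfix actually uses with postconf:

# postconf mydestination inet_interfaces
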
Once you've configured Postfix, restart the mail system with the following command:

# /etc/init.d/postfix restart

To verify whether Postfix is still listening for incoming network request, you can run one of the following commands from another node:

# nmap -sT -p 25 <remote_node>
# telnet <remote_node> 25

Don't run these commands on the local host since Postfix is supposed to accept connections from the local node.

Threats

  • Open Relay (unauthorized use of service)
  • Denial of service
  • Flaws in the software

322.3 Apache/HTTP/HTTPS (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of Apache web services.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Configuration

Hide the Apache Version number, and other sensitive information.
By default many Apache installations tell the world what version of Apache you're running, what operating system/version you're running, and even what Apache Modules are installed on the server. Attackers can use this information to their advantage when performing an attack. It also sends the message that you have left most defaults alone.

There are two directives that you need to add, or edit in your httpd.conf file:

ServerSignature Off
ServerTokens Prod

The ServerSignature appears at the bottom of pages generated by Apache, such as 404 pages and directory listings. The ServerTokens directive is used to determine what Apache will put in the Server HTTP response header. By setting it to Prod it sets the HTTP response header as follows:

Server: Apache

If you're super paranoid you could change this to something other than “Apache” by editing the source code, or by using mod_security.
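
To check the effect, request only the response headers and look at the Server line; a quick sketch, assuming curl is installed and Apache runs on localhost:

$ curl -sI http://localhost/ | grep Server
Server: Apache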

Make sure apache is running under its own user account and group
Several apache installations run it as the user nobody. Suppose both Apache and your mail server were running as nobody: an attack through Apache could then allow the mail server to be compromised as well, and vice versa.

User apache
Group apache

Ensure that files outside the web root are not served
We don't want apache to be able to access any files outside of its web root. So assuming all your web sites are placed under one directory (we will call this /web), you would set it up as follows:

<Directory />
  Order Deny,Allow
  Deny from all
  Options None
  AllowOverride None
</Directory>
<Directory /web>
  Order Allow,Deny
  Allow from all
</Directory>

Note that because we set Options None and AllowOverride None this will turn off all options and overrides for the server. You now have to add them explicitly for each directory that requires an Option or Override.
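
For example, to explicitly re-enable directory listings for a single directory that needs them (a sketch; /web/docs is a hypothetical subdirectory):

<Directory /web/docs>
  Options Indexes
</Directory>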

Turn off directory browsing
You can do this with an Options directive inside a Directory tag. Set Options to either None or -Indexes

Options -Indexes

Turn off server side includes
This is also done with the Options directive inside a Directory tag. Set Options to either None or -Includes

Options -Includes

Turn off CGI execution
If you're not using CGI turn it off with the Options directive inside a Directory tag. Set Options to either None or -ExecCGI

Options -ExecCGI

Don't allow apache to follow symbolic links
This can again can be done using the Options directive inside a Directory tag. Set Options to either None or -FollowSymLinks

Options -FollowSymLinks

Turning off multiple Options
If you want to turn off all Options simply use:

Options None

If you only want to turn off some separate each option with a space in your Options directive:

Options -ExecCGI -FollowSymLinks -Indexes

Turn off support for .htaccess files
This is done in a Directory tag but with the AllowOverride directive. Set it to None.

AllowOverride None

If you require Overrides ensure that they cannot be downloaded, and/or change the name to something other than .htaccess. For example we could change it to .httpdoverride, and block all files that start with .ht from being downloaded as follows:

AccessFileName .httpdoverride
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</Files>

Disable any unnecessary modules
Apache typically comes with several modules installed. Go through the apache module documentation and learn what each module you have enabled actually does. Many times you will find that you don't need a given module enabled. Look for lines in your httpd.conf that contain LoadModule. To disable a module you can typically just add a # at the beginning of the line. To list the loaded modules run:

grep LoadModule httpd.conf

Here are some modules that are typically enabled but often not needed: mod_imap, mod_include, mod_info, mod_userdir, mod_status, mod_cgi, mod_autoindex.
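
Disabling a module is then just a matter of commenting out its LoadModule line; for example (the module path varies by distribution):

#LoadModule info_module modules/mod_info.so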

Make sure only root has read access to apache's config and binaries
This can be done assuming your apache installation is located at /usr/local/apache as follows:

chown -R root:root /usr/local/apache
chmod -R o-rwx /usr/local/apache

Lower the Timeout value
By default the Timeout directive is set to 300 seconds. You can decrease it to help mitigate the potential effects of a denial of service attack.

Timeout 45

Limiting large requests
Apache has several directives that allow you to limit the size of a request, this can also be useful for mitigating the effects of a denial of service attack. A good place to start is the LimitRequestBody directive. This directive is set to unlimited by default. If you are allowing file uploads of no larger than 1MB, you could set this setting to something like:

LimitRequestBody 1048576

If you're not allowing file uploads you can set it even smaller. Some other directives to look at are LimitRequestFields, LimitRequestFieldSize and LimitRequestLine. These directives are set to reasonable defaults for most servers, but you may want to tweak them to best fit your needs. See the documentation for more info.

Limiting the size of an XML Body
If you're running mod_dav (typically used with subversion) then you may want to limit the max size of an XML request body. The LimitXMLRequestBody directive is only available on Apache 2, and its default value is 1 million bytes (approx 1mb). Many tutorials will have you set this value to 0 which means files of any size may be uploaded, which may be necessary if you're using WebDAV to upload large files, but if you're simply using it for source control, you can probably get away with setting an upper bound, such as 10mb:

LimitXMLRequestBody 10485760

Limiting Concurrency
Apache has several configuration settings that can be used to adjust handling of concurrent requests. The MaxClients directive sets the maximum number of child processes that will be created to serve requests. Its value may be too high if your server doesn't have enough memory to handle a large number of concurrent requests. Other directives such as MaxSpareServers, MaxRequestsPerChild, and on Apache2 ThreadsPerChild, ServerLimit, and MaxSpareThreads are important to adjust to match your operating system, and hardware.
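
As an illustrative sketch only (sensible values depend entirely on your memory and traffic), a prefork MPM section might look like this:

<IfModule prefork.c>
    ServerLimit          150
    MaxClients           150
    MaxSpareServers       10
    MaxRequestsPerChild 5000
</IfModule>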

Restricting Access by IP
If you have a resource that should only be accessed by a certain network or IP address, you can enforce this in your apache configuration. For instance, to restrict access to your intranet to allow only the 176.16 network:

Order Deny,Allow
Deny from all
Allow from 176.16.0.0/16

Or by IP:

Order Deny,Allow
Deny from all
Allow from 127.0.0.1

Adjusting KeepAlive settings
According to the Apache documentation, using HTTP KeepAlives can improve client performance by as much as 50%, so be careful before changing these settings; you would be trading performance for a slight denial of service mitigation. KeepAlives are turned on by default and you should leave them on, but you may consider changing MaxKeepAliveRequests, which defaults to 100, and KeepAliveTimeout, which defaults to 15. Analyze your log files to determine the appropriate values.
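
The relevant directives in httpd.conf look like this (the defaults are shown; lower them only after analyzing your logs):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15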

Run Apache in a Chroot environment
chroot allows you to run a program in its own isolated jail. This prevents a break-in on one service from being able to affect anything else on the server. It can be fairly tricky to set this up using chroot due to library dependencies.

Source

322.4 FTP (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of Pure-FTPd and vsftpd FTP services.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Pure-FTPd

Disable anonymous access
Start pure-ftpd with the following option

-E or --noanonymous

Jail users
We don't want the user to see /. Add the following:

-A or --chrooteveryone
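
Combining both, a hardened standalone startup could look like this (a sketch; the binary path may differ per distribution, and -B daemonizes the server):

/usr/sbin/pure-ftpd --noanonymous --chrooteveryone -B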

vsftpd

Disable anonymous access
Change these lines in vsftpd.conf:

anonymous_enable=NO
local_enable=YES

Jail users
We don't want the user to see /. We need to add a couple lines:

chroot_list_enable=YES
chroot_local_user=YES
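
Note that with chroot_local_user=YES, the chroot list turns into a list of exceptions: users named in it are not chrooted. A sketch of the relevant vsftpd.conf lines, assuming the default list file location:

chroot_local_user=YES
chroot_list_enable=YES
# users listed in this file are NOT chrooted
chroot_list_file=/etc/vsftpd.chroot_list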

FTP Greeting Banner
Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system.
To change the greeting banner for vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:

ftpd_banner=<insert_greeting_here>

Anonymous Upload
If you want to allow anonymous users to upload, it is recommended you create a write-only directory within /var/ftp/pub/. To do this type:

mkdir /var/ftp/pub/upload

Next change the permissions so that anonymous users cannot see what is within the directory by typing:

chmod 730 /var/ftp/pub/upload

A long format listing of the directory should look like this:

drwx-wx---    2 root     ftp          4096 Feb 13 20:05 upload

Additionally, under vsftpd, add the following line to /etc/vsftpd/vsftpd.conf:

anon_upload_enable=YES

User Accounts
Because FTP passes unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts. To disable user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:

local_enable=NO

Restricting User Accounts
To disable specific user accounts in vsftpd, add the username to /etc/vsftpd.ftpusers
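
The file simply lists one username per line; a short sketch:

# /etc/vsftpd.ftpusers - users denied FTP access
root
daemon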

Some info used from this source

322.5 OpenSSH (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of OpenSSH SSH services.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Configuration

It's prudent to disable direct root logins at the SSH level as well.

PermitRootLogin no

Also ensure that privilege separation is enabled, where the daemon is split into two parts: a small part of the code runs as root and the rest of the code runs in a chroot jail environment. Note that on older RHEL systems this feature can break some functionality, for example see Preventing Accidental Denial of Service.

UsePrivilegeSeparation yes

Since SSH protocol version 1 is not as secure, you may want to limit the protocol to version 2 only:

Protocol 2

You may also want to prevent SSH from setting up TCP port and X11 forwarding if you don't need it:

AllowTcpForwarding no
X11Forwarding no

Ensure the StrictModes directive is enabled, which checks file permissions and ownership of some important files in the user's home directory like ~/.ssh, ~/.ssh/authorized_keys etc. If any checks fail, the user won't be able to log in.

StrictModes yes

Ensure that all host-based authentications are disabled. These methods should be avoided as primary authentication.

IgnoreRhosts yes
HostbasedAuthentication no
RhostsRSAAuthentication no

Disable sftp if it's not needed:

#Subsystem      sftp    /usr/lib/misc/sftp-server

After changing any directives make sure to restart the sshd daemon:

/etc/init.d/sshd restart
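
Before restarting you can let sshd check the configuration file for syntax errors with its test mode, which prints nothing and exits zero when the file is valid:

sshd -t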

Source

322.6 NFSv4 (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of NFSv4 NFS services. Earlier versions of NFS are not required knowledge.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Configuration

limit access
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host bob.example.com with read and write permissions.

/tmp/nfs/     bob.example.com(rw)
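
After editing /etc/exports, re-export the file systems and verify the result; exportfs is part of the standard NFS utilities:

# exportfs -ra    # re-export everything listed in /etc/exports
# exportfs -v     # show the active exports and their options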

Access can also be restricted by protecting the portmap service; this can be done using libwrap and iptables.

iptables -A INPUT -p udp -s ! 192.168.0.0/24 --dport 111 -j DROP
#
# hosts.allow   This file describes the names of the hosts which are
#               allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#

portmap : 127. : ALLOW
portmap : ALL : DENY

Do Not Use the no_root_squash Option
By default, NFS maps files created by the root user to the unprivileged user nfsnobody (root squashing). This prevents the uploading of setuid programs owned by root.
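
Root squashing is on by default, but it does no harm to request it explicitly in /etc/exports; a sketch extending the earlier example:

/tmp/nfs/     bob.example.com(rw,root_squash)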

Configure /etc/idmapd.conf
The id mapper daemon is required on both client and server. It maps NFSv4 username@domain user strings back and forth into numeric UIDs and GIDs when necessary. The client and server must have matching domains in this configuration file:

[General]
 
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = vanemery.com
 
[Mapping]
 
Nobody-User = nfsnobody
Nobody-Group = nfsnobody


Based on information from source

322.7 Syslog (weight: TBD)

Candidates should have experience and knowledge of security issues in use and configuration of syslog services.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Syslog over network

By default the syslog daemon does not listen on the network. You can enable this by specifying the -r flag at startup. Be aware that syslog messages are plain text and there is no security mechanism implemented in the syslog daemon. You can limit access to the syslog daemon using iptables. Example:

iptables -A INPUT -p udp -s ! 192.168.0.0/24 --dport 514 -j DROP
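
A sketch of both sides, assuming a Debian-style sysklogd setup (loghost.example.com is a placeholder): the -r flag goes into the daemon's defaults file on the log host, and each client forwards messages with an @host rule.

# on the log host: /etc/default/syslogd
SYSLOGD="-r"

# on each client: /etc/syslog.conf - forward everything to the log host
*.*     @loghost.example.com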

Topic 323: Operations Security

323.1 Host Configuration Management (weight: TBD)

Candidates should be familiar with the use of RCS and Puppet for host configuration management.

Key Knowledge Areas

  • RCS, ci, co, rcsdiff
  • puppet

The following is a partial list of the used files, terms and utilities:

  • security

RCS, ci, co

The Revision Control System (RCS) manages multiple revisions of files. RCS automates the storing, retrieval, logging, identification, and merging of revisions. RCS is useful for text that is revised frequently, including source code, programs, documentation, graphics, papers, and form letters.
You can use RCS to version control your files. RCS is able to keep a history of previous revisions, and it provides a log for people to note why they made their change. Let us work through an example! A common administrative file that can benefit from RCS is /etc/sudoers. This file is not changed automatically and should only be altered by the Unix administrators. Let us start with a basic sudoers file:

%unixadmins     ALL = (ALL) ALL

We want to add another line for our Oracle administrators, but we want to save the original copy of this file. We first use RCS’s ci (check in) program to create the initial revision of the file:

i_am_root# cd /etc
i_am_root# ci -l sudoers
sudoers,v  <--  sudoers
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> sudo uses this file to determine permissions
>> .
initial revision: 1.1
done

This creates the revision history file, sudoers,v, in the /etc directory right with the sudoers file. If a directory named RCS exists, then the revision history file will be located there. You can use a symbolic link named RCS to store your revision history files anywhere you want.
Creating your revision history file leaves the sudoers file in a “checked out” state. This is necessary because creating the revision history file without also checking out the file will cause the original file to be deleted (oops). Therefore, you may begin editing the file now, or you might want to just check it in so someone else can edit it, too. The command to check files in, ci, can be used with the -u flag instead of -l to unlock the file instead of locking it:

i_am_root# ci -u sudoers
sudoers,v  <--  sudoers
file is unchanged; reverting to previous revision 1.1
done

Let us check the file back out for editing now. We use RCS’s co (check out) command to perform the task, and co takes -l as an option when locking is requested:

i_am_root# co -l sudoers
sudoers,v  -->  sudoers
revision 1.1 (locked)
done

Now edit the file to add the permissions for the Oracle DBAs.

%unixadmins     ALL = (ALL) ALL
%oracledbas     ALL = (oracle) ALL

We can use RCS’s rcsdiff command to see what we have changed:

i_am_root# rcsdiff sudoers
===================================================================
RCS file: sudoers,v
retrieving revision 1.1
diff -r1.1 sudoers
1a2
> %oracledbas     ALL = (oracle) ALL

It does not seem so useful for so small a change, but if you are making a lot of changes to a file then it is more useful. You can also use rcsdiff to view differences between revisions, which is extremely useful if you are trying to figure out when a particular line in the file was introduced (e.g. rcsdiff -r1.1 -r1.3 sudoers). If the standard diff format output is undesired, then try -c or -u (the latter not available on all systems) to change up the output format.
We are not quite done, but we are almost there. We have to check in our change.

i_am_root# ci -u sudoers
sudoers,v  <--  sudoers
new revision: 1.2; previous revision: 1.1
enter log message, terminated with single '.' or end of file:
>> added permissions for Oracle DBAs
>> .
done

Source

Changes against source:
- Removed some I's.
- Removed some irrelevant text.

rcsdiff

We can see how the version in the current directory differs from the one most recently checked in to RCS using rcsdiff, and then check in the change:

bash$ rcsdiff sample.txt
===================================================================
RCS file: sample.txt,v
retrieving revision 1.1
diff -r1.1 sample.txt
4c4
< I am getting bored with this.
---
> We are all am getting bored with this.
bash$

Now we can compare the version we have checked out with a specific older one in the repository:

bash$ rcsdiff -r1.1 sample.txt
===================================================================
RCS file: sample.txt,v
retrieving revision 1.1
diff -r1.1 sample.txt
4c4
< I am getting bored with this.
---
> We are all am getting bored with this.
bash$

We can even look at the differences between two versions:

bash$ rcsdiff -r1.1 -r1.3 sample.txt
===================================================================
RCS file: sample.txt,v
retrieving revision 1.1
retrieving revision 1.3
diff -r1.1 -r1.3
4c4
< I am getting bored with this.
---
> We are all getting bored with this.
bash$

Source - used only the rcsdiff examples.

puppet

Puppet is a system for automating system administration tasks.

Configuration and usage example

Server Preparation
The server (puppetmasterd) requires a manifest to be in place before it's able to run. Let's write a manifest that tells puppet to create a file ”/tmp/testfile” on the client.

puppet:# vim /etc/puppet/manifests/site.pp

# Create "/tmp/testfile" if it doesn't exist.
class test_class {
    file { "/tmp/testfile":
       ensure => present,
       mode   => 644,
       owner  => root,
       group  => root
    }
}

# tell puppet on which client to run the class
node pclient {
    include test_class
}

Now start the puppet server.

puppet:# /etc/init.d/puppetmaster start

Client Preparation
Clients by default will connect to a server on your network with a hostname of “puppet.” If your server's hostname isn't “puppet” a directive needs to be inserted into the puppetd configuration file “puppetd.conf.” Even though we don't need to in this case, we'll do so for demonstration purposes. Open ”/etc/puppet/puppetd.conf” with your favorite text editor and add “server = puppet.example.com” to the existing file as the example below indicates.

pclient:# vim /etc/puppet/puppetd.conf

[puppetd]
server = puppet.example.com

# Make sure all log messages are sent to the right directory
# This directory must be writable by the puppet user
logdir=/var/log/puppet
vardir=/var/lib/puppet
rundir=/var/run

Sign Keys
In order for the two systems to communicate securely we need to create signed SSL certificates. You should be logged into both the server and client machines for this next step. On the client side, run:

pclient:# puppetd --server puppet.example.com --waitforcert 60 --test

You should see the following message.

err: No certificate; running with reduced functionality.
info: Creating a new certificate request for pclient.example.com
info: Requesting certificate
warning: peer certificate won't be verified in this SSL session
notice: Did not receive certificate

Next, on the server side, run the following command to verify the client is waiting for the cert to be signed.

puppet:# puppetca --list

pclient.example.com

Then sign the certificate.

puppet:# puppetca --sign pclient.example.com

Signed pclient.example.com

If everything went OK you should see this message on pclient.

info: Requesting certificate
warning: peer certificate won't be verified in this SSL session
notice: Ignoring --listen on onetime run
info: Caching configuration at /etc/puppet/localconfig.yaml
notice: Starting configuration run
notice: //pclient/test_class/File[/tmp/testfile]/ensure: created
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished configuration run in 0.11 seconds

Test
Check and make sure the file was created.

pclient:# ls -l /tmp/testfile

-rw-r--r-- 1 root root 0 2007-02-18 18:28 /tmp/testfile

For a test, let's edit the manifest on the server and direct Puppet to modify the file mode. Change the line “mode => 644,” to “mode => 600,”:

puppet:# vim /etc/puppet/manifests/site.pp

# Create "/tmp/testfile" if it doesn't exist.
class test_class {
    file { "/tmp/testfile":
       ensure => present,
       mode   => 600,
       owner  => root,
       group  => root
    }
}

# tell puppet on which client to run the class
node pclient {
    include test_class
}

On the client run puppetd in verbose mode (-v) and only once (-o).

pclient:# puppetd -v -o

You should see the following message, which states that /tmp/testfile changed from mode 644 to 600.

notice: Ignoring --listen on onetime run
info: Config is up to date
notice: Starting configuration run
notice: //pclient/test_class/File[/tmp/testfile]/mode: mode changed '644' to '600'
notice: Finished configuration run in 0.26 seconds

To verify the work was completed properly:

pclient:# ls -l /tmp/testfile

-rw------- 1 root root 0 2007-02-18 18:28 /tmp/testfile 

Source

Topic 324: Network Security

324.1 Intrusion Detection (weight: TBD)

Candidates should be familiar with the use and configuration of intrusion detection software.

Key Knowledge Areas

  • snort
  • tripwire

The following is a partial list of the used files, terms and utilities:

  • security

snort

Configure Snort
We need to modify the snort.conf file to suit our needs. Open /etc/snort/snort.conf with your favorite text editor (nano, vi, vim, etc.).

# vi /etc/snort/snort.conf

Make the following changes:

  • Change “var HOME_NET any” to “var HOME_NET 192.168.1.0/24” (your home network may differ from 192.168.1.0)
  • Change “var EXTERNAL_NET any” to “var EXTERNAL_NET !$HOME_NET” (this states that everything except HOME_NET is external)
  • Change “var RULE_PATH ../rules” to “var RULE_PATH /etc/snort/rules”

Save and quit.
Change permissions on the conf file to keep things secure (thanks rojo):

# chmod 600 /etc/snort/snort.conf

Time to test Snort
In the terminal type:

# snort -c /etc/snort/snort.conf

If everything went well you should see an ASCII pig.
To end the test hit ctrl + c.

Updating rules
modify /etc/oinkmaster.conf so that:

url = http://www.snort.org/pub-bin/oinkmaster.cgi/<your registered key>/snortrules-snapshot-CURRENT.tar.gz

Then:

groupadd snort
useradd -g snort snort -s /bin/false
chmod 640 /etc/oinkmaster.conf
chown root:snort /etc/oinkmaster.conf
nano -w /usr/local/bin/oinkdaily

In /usr/local/bin/oinkdaily, include the following, uncommenting the appropriate line:

#!/bin/bash

## if you have "mail" installed, uncomment this to have oinkmaster mail you reports:
# /usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules 2>&1 | mail -s "oinkmaster" your@email.address

## otherwise, use this one:
# /usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules >/dev/null 2>&1

Finally:

chmod 700 /usr/local/bin/oinkdaily
chown -R snort:snort /usr/local/bin/oinkdaily /etc/snort/rules
crontab -u snort -e

In user snort's crontab, to launch the update on the 30th minute of the 5th hour of every day, add the following:

30 5 * * *     /usr/local/bin/oinkdaily

But you should randomize those times (for instance, 2:28 or 4:37 or 6:04) to reduce the impact on snort.org's servers.

Source

tripwire

Tripwire is a security tool that checks the integrity of normal system binaries and reports any changes to syslog or by email. Tripwire is a good tool for ensuring that your binaries have not been replaced by Trojan horse programs. Trojan horses are malicious programs inadvertently installed because they carry the same filenames as distributed (expected) programs, and they can wreak havoc on a breached system.

After installation, run the twinstall.sh script (found under /etc/tripwire) as root like so:

    $ sudo /etc/tripwire/twinstall.sh

    ----------------------------------------------
    The Tripwire site and local passphrases are used to
    sign a variety of files, such as the configuration,
    policy, and database files.

    Passphrases should be at least 8 characters in length
    and contain both letters and numbers.

    See the Tripwire manual for more information.
    ----------------------------------------------

    Creating key files...

    (When selecting a passphrase, keep in mind that good passphrases typically
    have upper and lower case letters, digits and punctuation marks, and are
    at least 8 characters in length.)

    Enter the site keyfile passphrase:

You then need to enter a password of at least eight characters (perhaps best is a string of random madness, such as 5fXkc4ln) twice. The script generates keys for your site (host) and then asks you to enter a password (twice) for local use. You are then asked to enter the site password once more. After following the prompts, the (rather extensive) default configuration and policy files (tw.cfg and tw.pol) are encrypted. You should then back up and delete the original plain-text files installed by Ubuntu.
To then initialize Tripwire, use its --init option like so:

    $ sudo tripwire --init

    Please enter your local passphrase:
    Parsing policy file: /etc/tripwire/tw.pol
    Generating the database...
    *** Processing Unix File System ***
    ....
    Wrote database file: /var/lib/tripwire/shuttle2.twd
    The database was successfully generated.

Note that not all the output is shown here. After Tripwire has created its database (which is a snapshot of your file system), it uses this baseline along with the encrypted configuration and policy settings under the /etc/tripwire directory to monitor the status of your system. You should then start Tripwire in its integrity checking mode, using a desired option. (See the Tripwire manual page for details.) For example, you can have Tripwire check your system and then generate a report at the command line, like so:

    # tripwire -m c

The report is displayed on screen in this example; the output could also be redirected to a file. In addition, a report is saved as /var/lib/tripwire/report/hostname-YYYYMMDD-HHMMSS.twr (in other words, using your host's name, the year, the month, the day, the hour, the minute, and the seconds). This report can be read using the twprint utility, like so:

    # twprint --print-report -r \
    /var/lib/tripwire/report/shuttle2-20020919-181049.twr | less

Other options, such as emailing the report, are supported by Tripwire, which should be run as a scheduled task by your system's scheduling table, /etc/crontab, on off-hours. (It can be resource intensive on less powerful computers.) The Tripwire software package also includes a twadmin utility you can use to fine-tune or change settings or policies or to perform other administrative duties.
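
A sketch of such an entry in /etc/crontab (system crontab format with a user field; the time and binary path are illustrative):

# run an integrity check every night at 04:30
30 4 * * * root /usr/sbin/tripwire -m c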

After updating (config) files

tripwire --update --twrfile /var/lib/tripwire/report/<a_previous_integrity_report>.twr



Source

324.2 Network Security Scanning (weight: TBD)

Candidates should be familiar with the use and configuration of network security scanning tools.

Key Knowledge Areas

  • nessus
  • nmap
  • wireshark

The following is a partial list of the used files, terms and utilities:

  • security

nessus

In computer security, Nessus is a proprietary, comprehensive vulnerability scanner. It is free of charge for personal use in a non-enterprise environment. Its goal is to detect potential vulnerabilities on the tested systems. For example:

  • Vulnerabilities that allow a remote cracker to control or access sensitive data on a system.
  • Misconfiguration (e.g. open mail relay, missing patches, etc).
  • Default passwords, a few common passwords, and blank/absent passwords on some system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary attack.
  • Denials of service against the TCP/IP stack by using mangled packets

On UNIX (including Mac OS X), it consists of nessusd, the Nessus daemon, which does the scanning, and nessus, the client, which controls scans and presents the vulnerability results to the user. For Windows, Nessus 3 installs as an executable and has a self-contained scanning, reporting and management system.

Operation
In typical operation, Nessus begins by doing a port scan with one of its four internal portscanners (or it can optionally use Amap or Nmap) to determine which ports are open on the target and then tries various exploits on the open ports. The vulnerability tests, available as subscriptions, are written in NASL (Nessus Attack Scripting Language), a scripting language optimized for custom network interaction.

Tenable Network Security produces several dozen new vulnerability checks (called plugins) each week, usually on a daily basis. These checks are available for free to the general public seven days after they are initially published. Nessus users who require support and the latest vulnerability checks should contact Tenable Network Security for a Direct Feed subscription which is not free. Commercial customers are also allowed to access vulnerability checks without the seven-day delay.

Optionally, the results of the scan can be reported in various formats, such as plain text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for reference against future vulnerability scans. On UNIX, scanning can be automated through the use of a command-line client. There exist many different commercial, free and open source tools for both UNIX and Windows to manage individual or distributed Nessus scanners.

If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus's vulnerability tests may try to cause vulnerable services or operating systems to crash. This lets a user test the resistance of a device before putting it in production.

Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it can use Windows credentials to examine patch levels on computers running the Windows operating system, and can perform password auditing using dictionary and brute force methods. Nessus 3 can also audit systems to make sure they have been configured per a specific policy, such as the NSA's guide for hardening Windows servers.

Source

nmap

Nmap (“Network Mapper”) is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts.
Example

# nmap -A -T4 scanme.nmap.org

Starting Nmap ( http://nmap.org )
Interesting ports on scanme.nmap.org (64.13.134.52):
Not shown: 994 filtered ports
PORT    STATE  SERVICE VERSION
22/tcp  open   ssh     OpenSSH 4.3 (protocol 2.0)
25/tcp  closed smtp
53/tcp  open   domain  ISC BIND 9.3.4
70/tcp  closed gopher
80/tcp  open   http    Apache httpd 2.2.2 ((Fedora))
|_ HTML title: Go ahead and ScanMe!
113/tcp closed auth
Device type: general purpose
Running: Linux 2.6.X
OS details: Linux 2.6.20-1 (Fedora Core 5)

TRACEROUTE (using port 80/tcp)
HOP RTT   ADDRESS
[Cut first seven hops for brevity]
8   10.59 so-4-2-0.mpr3.pao1.us.above.net (64.125.28.142)
9   11.00 metro0.sv.svcolo.com (208.185.168.173)
10  9.93  scanme.nmap.org (64.13.134.52)

Nmap done: 1 IP address (1 host up) scanned in 17.00 seconds
Nmap 4.76 ( http://nmap.org )
Usage: nmap [Scan Type(s)] [Options] {target specification}
TARGET SPECIFICATION:
  Can pass hostnames, IP addresses, networks, etc.
  Ex: scanme.nmap.org, microsoft.com/24, 192.168.0.1; 10.0.0-255.1-254
  -iL <inputfilename>: Input from list of hosts/networks
  -iR <num hosts>: Choose random targets
  --exclude <host1[,host2][,host3],...>: Exclude hosts/networks
  --excludefile <exclude_file>: Exclude list from file
HOST DISCOVERY:
  -sL: List Scan - simply list targets to scan
  -sP: Ping Scan - go no further than determining if host is online
  -PN: Treat all hosts as online -- skip host discovery
  -PS/PA/PU [portlist]: TCP SYN/ACK or UDP discovery to given ports
  -PE/PP/PM: ICMP echo, timestamp, and netmask request discovery probes
  -PO [protocol list]: IP Protocol Ping
  -n/-R: Never do DNS resolution/Always resolve [default: sometimes]
  --dns-servers <serv1[,serv2],...>: Specify custom DNS servers
  --system-dns: Use OS's DNS resolver
SCAN TECHNIQUES:
  -sS/sT/sA/sW/sM: TCP SYN/Connect()/ACK/Window/Maimon scans
  -sU: UDP Scan
  -sN/sF/sX: TCP Null, FIN, and Xmas scans
  --scanflags <flags>: Customize TCP scan flags
  -sI <zombie host[:probeport]>: Idle scan
  -sO: IP protocol scan
  -b <FTP relay host>: FTP bounce scan
  --traceroute: Trace hop path to each host
  --reason: Display the reason a port is in a particular state
PORT SPECIFICATION AND SCAN ORDER:
  -p <port ranges>: Only scan specified ports
    Ex: -p22; -p1-65535; -p U:53,111,137,T:21-25,80,139,8080
  -F: Fast mode - Scan fewer ports than the default scan
  -r: Scan ports consecutively - don't randomize
  --top-ports <number>: Scan <number> most common ports
  --port-ratio <ratio>: Scan ports more common than <ratio>
SERVICE/VERSION DETECTION:
  -sV: Probe open ports to determine service/version info
  --version-intensity <level>: Set from 0 (light) to 9 (try all probes)
  --version-light: Limit to most likely probes (intensity 2)
  --version-all: Try every single probe (intensity 9)
  --version-trace: Show detailed version scan activity (for debugging)
SCRIPT SCAN:
  -sC: equivalent to --script=default
  --script=<Lua scripts>: <Lua scripts> is a comma separated list of 
           directories, script-files or script-categories
  --script-args=<n1=v1,[n2=v2,...]>: provide arguments to scripts
  --script-trace: Show all data sent and received
  --script-updatedb: Update the script database.
OS DETECTION:
  -O: Enable OS detection
  --osscan-limit: Limit OS detection to promising targets
  --osscan-guess: Guess OS more aggressively
TIMING AND PERFORMANCE:
  Options which take <time> are in milliseconds, unless you append 's'
  (seconds), 'm' (minutes), or 'h' (hours) to the value (e.g. 30m).
  -T[0-5]: Set timing template (higher is faster)
  --min-hostgroup/max-hostgroup <size>: Parallel host scan group sizes
  --min-parallelism/max-parallelism <time>: Probe parallelization
  --min-rtt-timeout/max-rtt-timeout/initial-rtt-timeout <time>: Specifies
      probe round trip time.
  --max-retries <tries>: Caps number of port scan probe retransmissions.
  --host-timeout <time>: Give up on target after this long
  --scan-delay/--max-scan-delay <time>: Adjust delay between probes
  --min-rate <number>: Send packets no slower than <number> per second
  --max-rate <number>: Send packets no faster than <number> per second
FIREWALL/IDS EVASION AND SPOOFING:
  -f; --mtu <val>: fragment packets (optionally w/given MTU)
  -D <decoy1,decoy2[,ME],...>: Cloak a scan with decoys
  -S <IP_Address>: Spoof source address
  -e <iface>: Use specified interface
  -g/--source-port <portnum>: Use given port number
  --data-length <num>: Append random data to sent packets
  --ip-options <options>: Send packets with specified ip options
  --ttl <val>: Set IP time-to-live field
  --spoof-mac <mac address/prefix/vendor name>: Spoof your MAC address
  --badsum: Send packets with a bogus TCP/UDP checksum
OUTPUT:
  -oN/-oX/-oS/-oG <file>: Output scan in normal, XML, s|<rIpt kIddi3,
     and Grepable format, respectively, to the given filename.
  -oA <basename>: Output in the three major formats at once
  -v: Increase verbosity level (use twice or more for greater effect)
  -d[level]: Set or increase debugging level (Up to 9 is meaningful)
  --open: Only show open (or possibly open) ports
  --packet-trace: Show all packets sent and received
  --iflist: Print host interfaces and routes (for debugging)
  --log-errors: Log errors/warnings to the normal-format output file
  --append-output: Append to rather than clobber specified output files
  --resume <filename>: Resume an aborted scan
  --stylesheet <path/URL>: XSL stylesheet to transform XML output to HTML
  --webxml: Reference stylesheet from Nmap.Org for more portable XML
  --no-stylesheet: Prevent associating of XSL stylesheet w/XML output
MISC:
  -6: Enable IPv6 scanning
  -A: Enables OS detection and Version detection, Script scanning and Traceroute
  --datadir <dirname>: Specify custom Nmap data file location
  --send-eth/--send-ip: Send using raw ethernet frames or IP packets
  --privileged: Assume that the user is fully privileged
  --unprivileged: Assume the user lacks raw socket privileges
  -V: Print version number
  -h: Print this help summary page.
EXAMPLES:
  nmap -v -A scanme.nmap.org
  nmap -v -sP 192.168.0.0/16 10.0.0.0/8
  nmap -v -iR 10000 -PN -p 80
SEE THE MAN PAGE FOR MANY MORE OPTIONS, DESCRIPTIONS, AND EXAMPLES

Source

wireshark

Wireshark is a free packet sniffer computer application. It is used for network troubleshooting, analysis, software and communications protocol development, and education. In June 2006 the project was renamed from Ethereal due to trademark issues.
Wireshark is software that “understands” the structure of different networking protocols. Thus, it is able to display the encapsulation and the individual fields of packets, along with their meanings, as specified by different networking protocols. Wireshark uses pcap to capture packets, so it can only capture packets on the networks supported by pcap.

  • Data can be captured “from the wire” from a live network connection or read from a file that records the already-captured packets.
  • Live data can be read from a number of types of network, including Ethernet, IEEE 802.11, PPP, and loopback.
  • Captured network data can be browsed via a GUI, or via the terminal (command line) version of the utility, tshark.
  • Captured files can be programmatically edited or converted via command-line switches to the “editcap” program.
  • Display filters can also be used to selectively highlight and color packet summary information.
  • Data display can be refined using a display filter.
  • Hundreds of protocols can be dissected.

Wireshark's native network trace file format is the libpcap format supported by libpcap and WinPcap, so it can read capture files from applications such as tcpdump and CA NetMaster that use that format. It can also read captures from other network analyzers, such as snoop, Network General's Sniffer, and Microsoft Network Monitor.
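
As mentioned above, tshark is the terminal version of Wireshark. A small capture sketch (the interface name and packet count are illustrative):

# capture 100 packets from eth0 into a libpcap file, then read the file back
tshark -i eth0 -c 100 -w capture.pcap
tshark -r capture.pcap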

Source

Command-line options

Wireshark 1.0.3
Interactively dump and analyze network traffic.
See http://www.wireshark.org for more information.

Copyright 1998-2008 Gerald Combs <gerald@wireshark.org> and contributors.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Usage: wireshark [options] ... [ <infile> ]

Capture interface:
  -i <interface>           name or idx of interface (def: first non-loopback)
  -f <capture filter>      packet filter in libpcap filter syntax
  -s <snaplen>             packet snapshot length (def: 65535)
  -p                       don't capture in promiscuous mode
  -k                       start capturing immediately (def: do nothing)
  -Q                       quit Wireshark after capturing
  -S                       update packet display when new packets are captured
  -l                       turn on automatic scrolling while -S is in use
  -y <link type>           link layer type (def: first appropriate)
  -D                       print list of interfaces and exit
  -L                       print list of link-layer types of iface and exit

Capture stop conditions:
  -c <packet count>        stop after n packets (def: infinite)
  -a <autostop cond.> ...  duration:NUM - stop after NUM seconds
                           filesize:NUM - stop this file after NUM KB
                              files:NUM - stop after NUM files
Capture output:
  -b <ringbuffer opt.> ... duration:NUM - switch to next file after NUM secs
                           filesize:NUM - switch to next file after NUM KB
                              files:NUM - ringbuffer: replace after NUM files
Input file:
  -r <infile>              set the filename to read from (no pipes or stdin!)

Processing:
  -R <read filter>         packet filter in Wireshark display filter syntax
  -n                       disable all name resolutions (def: all enabled)
  -N <name resolve flags>  enable specific name resolution(s): "mntC"

User interface:
  -C <config profile>      start with specified configuration profile
  -g <packet number>       go to specified packet number after "-r"
  -m <font>                set the font name used for most text
  -t ad|a|r|d|dd|e         output format of time stamps (def: r: rel. to first)
  -X <key>:<value>         eXtension options, see man page for details
  -z <statistics>          show various statistics, see man page for details

Output:
  -w <outfile|->           set the output filename (or '-' for stdout)

Miscellaneous:
  -h                       display this help and exit
  -v                       display version info and exit
  -P <key>:<path>          persconf:path - personal configuration files
                           persdata:path - personal data files
  -o <name>:<value> ...    override preference or recent setting
  --display=DISPLAY        X display to use

324.3 Network Monitoring (weight: TBD)

Candidates should be familiar with the use and configuration of network monitoring tools.

Key Knowledge Areas

  • nagios
  • ntop

The following is a partial list of the used files, terms and utilities:

  • security

nagios

Nagios is a powerful, modular network monitoring system that can be used to monitor many network services like smtp, http and dns on remote hosts. It also has support for snmp to allow you to check things like processor loads on routers and servers.
First we need to define people that will be notified, and define how they should be notified. In the example below, I define two users, joe and paul. Joe is the network guru and cares about routers and switches. Paul is the systems guy, and he cares about servers. Both will be notified via email and by pager. Note that if you are going to monitor your email server, you will want to use another notification method besides email. If your email server is down, you can't send anybody an email to notify them! :) In that case you will want to use a pager server to send a text message to a phone or pager, or set up a second nagios monitor that uses a different mail server to send email.
Edit /etc/nagios/contacts.cfg and add the following users:

define contact{
    contact_name                    joe
    alias                           Joe Blow
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    w,u,c,r
    host_notification_options       d,u,r
    service_notification_commands   notify-by-email,notify-by-pager
    host_notification_commands      host-notify-by-email,host-notify-by-epager
    email                           joe@yourdomain.com
    pager                           5555555@pager.yourdomain.com
    }

define contact{
    contact_name                    paul
    alias                           Paul Shiznit
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    w,u,c,r
    host_notification_options       d,u,r
    service_notification_commands   notify-by-email,notify-by-epager
    host_notification_commands      host-notify-by-email,host-notify-by-epager
    email                           paul@yourdomain.com
    pager                           5556666@pager.yourdomain.com
    }

Now add the users to groups. In /etc/nagios/contactgroups.cfg add the following:

define contactgroup{
    contactgroup_name   router_admin
    alias               Network Administrators
    members             joe
}

define contactgroup{
    contactgroup_name   server_admin
    alias               Systems Administrators
    members             paul
}

You can add multiple members to a contact group by listing comma separated users.
Now to define some hosts to monitor. For my example, I define two machines, a mail server and a router.
Edit /etc/nagios/hosts.cfg and add:

define host{
    use                     generic-host
    host_name               gw1.yourdomain.com
    alias                   Gateway Router
    address                 10.0.0.1
    check_command           check-host-alive
    max_check_attempts      20
    notification_interval   240
    notification_period     24x7
    notification_options    d,u,r
    }

define host{
    use                     generic-host
    host_name               mail.yourdomain.com
    alias                   Mail Server
    address                 10.0.0.100
    check_command           check-host-alive
    max_check_attempts      20
    notification_interval   240
    notification_period     24x7
    notification_options    d,u,r
    }

Now we add the hosts to groups. I define groups called 'routers' and 'servers' and add the router and mail server respectively.
Edit /etc/nagios/hostgroups.cfg

define hostgroup{
    hostgroup_name  routers
    alias           Routers
    contact_groups  router_admin
    members         gw1.yourdomain.com
    }

define hostgroup{
    hostgroup_name  servers
    alias           Servers
    contact_groups  server_admin
    members         mail.yourdomain.com
    }

Again, for multiple members, just use a comma separated list of hosts.
Next define services to monitor on each of the hosts. Nagios has many built-in plugins for monitoring. On a debian sarge system, they are stored in /usr/lib/nagios/plugins. Here we want to monitor the smtp service on the mail server, and do ping checks on the router.
Edit /etc/nagios/services.cfg

define service{
    use                     generic-service 
    host_name               mail.yourdomain.com
    service_description     SMTP
    is_volatile             0
    check_period            24x7
    max_check_attempts      3
    normal_check_interval   5
    retry_check_interval    1
    contact_groups          server_admin
    notification_interval   240
    notification_period     24x7
    notification_options    w,u,c,r
    check_command           check_smtp
    }

define service{
    use                     generic-service 
    host_name               gw1.yourdomain.com
    service_description     PING
    is_volatile             0
    check_period            24x7
    max_check_attempts      3
    normal_check_interval   5
    retry_check_interval    1
    contact_groups          router_admin
    notification_interval   240
    notification_period     24x7
    notification_options    w,u,c,r
    check_command           check_ping!100.0,20%!500.0,60%
    }

And that's it. To test your configurations, you can run

$ nagios -v /etc/nagios/nagios.cfg

If all is well we can restart nagios and move on to the apache side to get a visual view of the monitor.

$ /etc/init.d/nagios restart

Assuming you have a working apache install, you can add the apache.conf file included in the nagios package to set up the nagios cgi administration interface. The web interface is not required to run nagios, but it is definitely worth setting it up. The simplest way to get it up and running is to copy the supplied conf file over to our apache installation. On my system, I'm running apache2. Systems running apache 1.3.xx will have slightly different setups.

cp /etc/nagios/apache.conf /etc/apache2/sites-enabled/nagios

Of course you may want to set it up as a virtual server, but I leave that as an exercise for the reader. Now you will want to set up an allowed user to view the cgi interface. By default, nagios grants full administrative access to the nagiosadmin user. Nagios uses apache htpasswd style authentication, so here we add the user nagiosadmin with password mypassword to the default nagios htpasswd file.

htpasswd2 -nb nagiosadmin mypassword >> /etc/nagios/htpasswd.users

You should now be able to restart apache and log on to http://your.nagios.server/nagios
Nagios is a very powerful tool for monitoring networks. I've only touched on the basics here, but it should be enough to get you up and running. Hopefully, once you do, you'll start experimenting with all the cool features and plugins that are available. The documentation included in the cgi interface is very detailed and helpful.

Source

ntop

ntop is a network traffic probe that shows the network usage, similar to what the popular top Unix command does. ntop is based on libpcap and has been written in a portable way in order to run on virtually every Unix platform.

How Does ntop Work?
ntop users can use a web browser to navigate through ntop's traffic information (ntop itself acts as a web server) and get a dump of the network status. In this mode, ntop can be seen as a simple RMON-like agent with an embedded web interface. It offers:

  • a web interface
  • limited configuration and administration via the web interface
  • reduced CPU and memory usage (they vary according to network size and traffic).


Using Ntop
This is a very simple procedure. Run this command in the bash shell:

# ntop -P /etc/ntop -W4242 -d

What does this mean? The -P option reads the configuration files in the ”/etc/ntop” directory. The -W option sets the port on which we want to access ntop through our web browser; if you don't specify this option the default port is 3000. Finally, the -d option runs ntop in daemon mode, which means ntop keeps running in the background for as long as the system is up.
Once started in web mode, ntop enables its web server and allows us to view and use its statistics through any web browser by using the web address http://host:portnumber/.
The example on our test machine:

# http://192.168.0.6:4242/


Source

324.4 netfilter/iptables (weight: TBD)

Candidates should be familiar with the use and configuration of iptables.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

iptables

Basic Commands
Typing

$ sudo iptables -L

lists your current rules in iptables. If you have just set up your server, you will have no rules, and you should see

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination


Basic Iptables Options
Here are explanations for some of the iptables options you will see in this tutorial. Don't worry about understanding everything here now, but remember to come back and look at this list as you encounter new options later on.

-A - Append this rule to a rule chain. Valid chains for what we're doing are INPUT, FORWARD and OUTPUT, but we mostly deal with INPUT in this tutorial, which affects only incoming traffic.

-L - List the current filter rules.

-m state - Allow filter rules to match based on connection state. Permits the use of the --state option.

--state - Define the list of states for the rule to match on. Valid states are:

  • NEW - The connection has not yet been seen.
  • RELATED - The connection is new, but is related to another connection already permitted.
  • ESTABLISHED - The connection is already established.
  • INVALID - The traffic couldn't be identified for some reason.

-m limit - Require the rule to match only a limited number of times. Allows the use of the --limit option. Useful for limiting logging rules.

  • --limit - The maximum matching rate, given as a number followed by “/second”, “/minute”, “/hour”, or “/day” depending on how often you want the rule to match. If this option is not used and -m limit is used, the default is “3/hour”.

-p - The connection protocol used.

--dport - The destination port(s) required for this rule. A single port may be given, or a range may be given as start:end, which will match all ports from start to end, inclusive.

-j - Jump to the specified target. By default, iptables allows four targets:

  • ACCEPT - Accept the packet and stop processing rules in this chain.
  • REJECT - Reject the packet and notify the sender that we did so, and stop processing rules in this chain.
  • DROP - Silently ignore the packet, and stop processing rules in this chain.
  • LOG - Log the packet, and continue processing more rules in this chain. Allows the use of the --log-prefix and --log-level options.

--log-prefix - When logging, put this text before the log message. Use double quotes around the text to use.

--log-level - Log using the specified syslog level. 7 is a good choice unless you specifically need something else.

-i - Only match if the packet is coming in on the specified interface.

-I - Inserts a rule. Takes two options, the chain to insert the rule into, and the rule number it should be.

  • -I INPUT 5 would insert the rule into the INPUT chain and make it the 5th rule in the list.

-v - Display more information in the output. Useful if you have rules that look similar when listed without -v.

-s, --source - address[/mask] source specification

-d, --destination - address[/mask] destination specification

-o, --out-interface - name of the output network interface ([+] for wildcard)


Allowing Established Sessions
We can allow established sessions to receive traffic:

$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT


Allowing Incoming Traffic on Specific Ports
You could start by blocking traffic, but you might be working over SSH, where you would need to allow SSH before blocking everything else.
To allow incoming traffic on the default SSH port (22), you could tell iptables to allow all TCP traffic on that port to come in.

$ sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT

Referring back to the list above, you can see that this tells iptables:

  • append this rule to the input chain (-A INPUT) so we look at incoming traffic
  • check to see if it is TCP (-p tcp).
  • if so, check to see if the input goes to the SSH port (--dport ssh).
  • if so, accept the input (-j ACCEPT).

Let's check the rules (only the first few lines are shown; you will see more):

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh

Now, let's allow all incoming web traffic

$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

Checking our rules, we have

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www

We have specifically allowed tcp traffic to the ssh and web ports, but as we have not blocked anything, all traffic can still come in.

Blocking Traffic
Once a decision is made to accept a packet, no more rules affect it. As our rules allowing ssh and web traffic come first, as long as our rule to block all traffic comes after them, we can still accept the traffic we want. All we need to do is put the rule to block all traffic at the end.

$ sudo iptables -A INPUT -j DROP
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www
DROP       all  --  anywhere             anywhere

Because we didn't specify an interface or a protocol, any traffic for any port on any interface is blocked, except for the web, ssh, and established-session traffic we allowed above.

Editing iptables
The only problem with our setup so far is that even the loopback interface is blocked. We could have written the drop rule for just eth0 by specifying -i eth0, but we could also add a rule for the loopback. If we append this rule, it will come too late - after all the traffic has been dropped. We need to insert this rule before that. Since the loopback carries a lot of traffic, we'll insert it as the first rule so it's processed first.

$ sudo iptables -I INPUT 1 -i lo -j ACCEPT
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:www
DROP       all  --  anywhere             anywhere

The first and last lines look nearly the same, so we will list iptables in greater detail.

$ sudo iptables -L -v

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  lo     any     anywhere             anywhere
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere            state RELATED,ESTABLISHED
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere            tcp dpt:ssh
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere            tcp dpt:www
    0     0 DROP       all  --  any    any     anywhere             anywhere

You can now see a lot more information. The loopback rule we just inserted is actually very important, since many programs use the loopback interface to communicate with each other. If you don't allow them to talk, you could break those programs!

Logging
In the above examples none of the traffic is logged. If you would like to log dropped packets to syslog, the quickest way is to insert a LOG rule just before the final DROP rule (position 5 in our list), rate-limited so the log doesn't fill up:

$ sudo iptables -I INPUT 5 -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
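
To confirm that the LOG rule landed in the right position, you can list the chain with rule numbers:

$ sudo iptables -L INPUT --line-numbers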


Saving iptables
If you were to reboot your machine right now, your iptables configuration would disappear. Rather than typing the rules in again after each reboot, you can save the configuration and have it restored automatically. To do this, use iptables-save and iptables-restore.
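
A minimal sketch (the file path /etc/iptables.rules is an arbitrary choice for this example):

$ sudo sh -c "iptables-save > /etc/iptables.rules"
$ sudo sh -c "iptables-restore < /etc/iptables.rules"

On Debian/Ubuntu systems the restore command can also be run automatically at boot, for example from a pre-up line in /etc/network/interfaces.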


324.5 OpenVPN (weight: TBD)

Candidates should be familiar with the use of OpenVPN.

Key Knowledge Areas

  • security and other social issues

The following is a partial list of the used files, terms and utilities:

  • security

Configuration

OpenVPN is a full-featured open source SSL VPN solution that accommodates a wide range of configurations, including remote access, site-to-site VPNs, Wi-Fi security, and enterprise-scale remote access solutions with load balancing, failover, and fine-grained access-controls. Starting with the fundamental premise that complexity is the enemy of security, OpenVPN offers a cost-effective, lightweight alternative to other VPN technologies that is well-targeted for the SME and enterprise markets.

Simple Example
This example demonstrates a bare-bones point-to-point OpenVPN configuration. A VPN tunnel will be created with a server endpoint of 10.8.0.1 and a client endpoint of 10.8.0.2. Encrypted communication between client and server will occur over UDP port 1194, the default OpenVPN port.

Generate a static key:

    openvpn --genkey --secret static.key

Copy the static key to both client and server over a pre-existing secure channel.

Server configuration file

    dev tun
    ifconfig 10.8.0.1 10.8.0.2
    secret static.key

Client configuration file

    remote myremote.mydomain
    dev tun
    ifconfig 10.8.0.2 10.8.0.1
    secret static.key

Firewall configuration
Make sure that:

  • UDP port 1194 is open on the server, and
  • the virtual TUN interface used by OpenVPN is not blocked on either the client or server (on Linux, the TUN interface will probably be called tun0 while on Windows it will probably be called something like Local Area Connection n unless you rename it in the Network Connections control panel).

Bear in mind that 90% of all connection problems encountered by new OpenVPN users are firewall-related.
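
As a sketch, and tying this back to the iptables section above, the server-side firewall rules might look like this (assuming the default port 1194 and a TUN interface named tun0):

$ sudo iptables -A INPUT -p udp --dport 1194 -j ACCEPT
$ sudo iptables -A INPUT -i tun0 -j ACCEPT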

Testing the VPN
Run OpenVPN using the respective configuration files on both server and client, changing myremote.mydomain in the client configuration to the domain name or public IP address of the server.
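
For example, assuming the configuration files are saved as server.conf and client.conf (the names are arbitrary):

    openvpn --config server.conf   # on the server
    openvpn --config client.conf   # on the client
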
To verify that the VPN is running, you should be able to ping 10.8.0.2 from the server and 10.8.0.1 from the client.

Expanding on the Simple Example

Use compression on the VPN link
Add the following line to both client and server configuration files:

    comp-lzo

Make the link more resistant to connection failures

Deal with:

  • keeping a connection alive through a NAT router/firewall, and
  • following the DNS name of the server if it changes its IP address.

Add the following to both client and server configuration files:

    keepalive 10 60
    ping-timer-rem
    persist-tun
    persist-key
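
With these settings, keepalive 10 60 is shorthand for pinging the peer every 10 seconds and restarting the tunnel if no ping is received for 60 seconds; persist-tun and persist-key keep the TUN device open and the keys in memory across those restarts.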

Run OpenVPN as a daemon (Linux/BSD/Solaris/MacOSX only)
Run OpenVPN as a daemon and drop privileges to user/group nobody.
Add to configuration file (client and/or server):

    user nobody
    group nobody
    daemon
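
Note that once privileges are dropped, OpenVPN can no longer re-read protected key files on a soft restart, which is one reason the persist-key option shown earlier is usually combined with privilege dropping.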

Allow client to reach entire server subnet
Suppose the OpenVPN server is on a subnet 192.168.4.0/24. Add the following to client configuration:

    route 192.168.4.0 255.255.255.0

Then on the server side, add a route to the server's LAN gateway that routes 10.8.0.2 to the OpenVPN server machine (only necessary if the OpenVPN server machine is not also the gateway for the server-side LAN). Also, don't forget to enable IP Forwarding on the OpenVPN server machine.
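
On a Linux server, IP forwarding can be enabled like this (a sketch; add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to make the setting persistent across reboots):

    sysctl -w net.ipv4.ip_forward=1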


Acknowledgments

Most of the information in this document was collected from different sites on the internet and was copied, modified or unmodified. Some text was written by me and my colleagues. The copyright of the text in this document remains with its original owners and is in no way claimed by me. If you wrote some of the text we copied, I would like to thank you for your excellent work.

Nothing in this document should be published for commercial purposes without the permission of the original copyright owners.

For questions about this document, or if you want to help keep it up-to-date, you can contact me at webmaster@universe-network.net.

 