Category Archives: Linux

ddrescue: Save and Recover Data From Crashed Disks

If you are using Linux and need to recover data after physical or logical damage, there are many tools available for the job. To avoid confusion, I will discuss only one of the data recovery tools available for Linux: GNU ddrescue.

GNU ddrescue is a program that copies data from one file or block device (hard disk, CD/DVD-ROM, etc.) to another. It is a data recovery tool: it helps you save data from a crashed partition. It tries to read every sector and, on failure, moves on to the next sectors, where tools like dd would simply fail. If the copying process is interrupted by the user, it can be resumed later at any position, and it can also copy backwards.

This program is useful for rescuing data in case of I/O errors, because it does not necessarily abort or truncate the output. This is why you should use this program rather than the dd command. I have recovered a lot of data from many disks (CD, hard disk, software RAID) over the years using GNU ddrescue on Linux, and I highly recommend this tool to Linux sysadmins.

Install ddrescue on a Debian/Ubuntu Linux

Type the following apt-get command to install ddrescue:
# apt-get install gddrescue
Sample outputs:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  gddrescue
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 49.6 kB of archives.
After this operation, 152 kB of additional disk space will be used.
Get:1 http://mirrors.service.networklayer.com/ubuntu/ precise/universe gddrescue amd64 1.14-1 [49.6 kB]
Fetched 49.6 kB in 0s (1,952 kB/s)
Selecting previously unselected package gddrescue.
(Reading database ... 114969 files and directories currently installed.)
Unpacking gddrescue (from .../gddrescue_1.14-1_amd64.deb) ...
Processing triggers for install-info ...
Processing triggers for man-db ...
Setting up gddrescue (1.14-1) ...

Install ddrescue on a RHEL/Fedora/CentOS Linux

First, enable the EPEL repo on RHEL/CentOS/Fedora Linux. Then type the following yum command:
# yum install ddrescue
Sample outputs:

Loaded plugins: product-id, rhnplugin, security, subscription-manager,
              : versionlock
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ddrescue.x86_64 0:1.16-1.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
================================================================================
 Package            Arch             Version               Repository      Size
================================================================================
Installing:
 ddrescue           x86_64           1.16-1.el6            epel            81 k
 
Transaction Summary
================================================================================
Install       1 Package(s)
 
Total download size: 81 k
Installed size: 189 k
Is this ok [y/N]: y
Downloading Packages:
ddrescue-1.16-1.el6.x86_64.rpm                           |  81 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : ddrescue-1.16-1.el6.x86_64                                   1/1
  Verifying  : ddrescue-1.16-1.el6.x86_64                                   1/1
 
Installed:
  ddrescue.x86_64 0:1.16-1.el6
 
Complete!

You can also download ddrescue directly from the official GNU project website and compile it yourself on Linux or Unix-like systems.

A note about using ddrescue safely

  1. You need to use a logfile to resume a rescue.
  2. Never ever run ddrescue on a read/write mounted partition.
  3. Do not try to repair a file system on a drive with I/O errors.
  4. Be careful with the destination partition/device: any data stored there will be overwritten.

How do I use the ddrescue command?

In this example, rescue /dev/sda to /dev/sdb:

     ## No need to partition /dev/sdb beforehand, but if the partition table on /dev/sda ##
     ## is damaged, you will need to recreate it somehow on /dev/sdb. ##
     ddrescue -f -n /dev/sda /dev/sdb logfile
     ddrescue -d -f -r3 /dev/sda /dev/sdb logfile
     ## get a list of partitions on /dev/sdb ##
     fdisk /dev/sdb
 
     ## check for errors ##
     fsck -v -f /dev/sdb1
     fsck -v -f /dev/sdb2
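If you do not have a spare disk of the same size, you can rescue into an image file on any healthy filesystem with enough free space. This is a sketch; the /mnt/backup image and logfile paths are just example names:

```shell
## Rescue the /dev/sda2 partition into an image file on a healthy disk ##
## (the /mnt/backup paths are example names -- use any location with space) ##
ddrescue -f -n /dev/sda2 /mnt/backup/sda2.img /mnt/backup/sda2.logfile
ddrescue -d -f -r3 /dev/sda2 /mnt/backup/sda2.img /mnt/backup/sda2.logfile
## Check the rescued filesystem, then mount the image read-only via a loop device ##
e2fsck -f -v /mnt/backup/sda2.img
mount -o loop,ro /mnt/backup/sda2.img /mnt/rescue
```

Working from an image keeps the original drive untouched after the rescue pass, and you can retry filesystem repairs on a copy of the image as often as needed.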

Understanding ddrescue command options

  • -f : Force overwrite of the output device or partition.
  • -n : Do not try to split or retry failed blocks (skips the slow fine-grained pass).
  • -d : Use direct disc access for the input file.
  • -r3 : Give up on a bad sector after three (3) retries (use -r -1 for infinite retries).
  • -b2048 : Sector size of the input device (the default is 512 bytes).

Example: Rescue a partition in /dev/sda2 to /dev/sdb2 in Linux

 ## You need to create the sdb2 partition with fdisk first; it should be of the appropriate type and size ##
     ddrescue -f -n /dev/sda2 /dev/sdb2 logfile
     ddrescue -d -f -r3 /dev/sda2 /dev/sdb2 logfile
     e2fsck -v -f /dev/sdb2
     mount -o ro /dev/sdb2 /mnt
## Read rescued files from /mnt ##
     cd /mnt
     ls -l
## Copy files using rsync ## 
     rsync -avr . vivek@server1.cyberciti.biz:/data/rescued/wks01

Example: Rescue/recover a DVD-ROM in /dev/dvdrom on Linux

The syntax is:

     ddrescue -n -b2048 /dev/dvdrom dvd-image logfile
     ddrescue -d -b2048 /dev/dvdrom dvd-image logfile

Please note that if there are no errors (errsize is zero), dvd-image now contains a complete image of the DVD-ROM and you can write it to a blank DVD on a Linux based system:
# growisofs -Z /dev/dvdrom=/path/to/dvd-image

Example: Resume failed rescue

In this example, while rescuing the whole drive /dev/sda to /dev/sdb, /dev/sda freezes up at a troubled sector (sector # 7575757542):

 ## /dev/sda freezes here ##
 ddrescue -f /dev/sda /dev/sdb logfile
 ## So power-cycle /dev/sda or reboot the server ##
 reboot
 ## Restart copy at a safe distance from the troubled sector # 7575757542 ##
 ddrescue -f -i 7575757542 /dev/sda /dev/sdb logfile
 ## Copy backwards down to the troubled sector # 7575757542 ##
 ddrescue -f -R /dev/sda /dev/sdb logfile

A note about dd_rescue command and syntax

On Debian/Ubuntu and a few other distros you may end up installing another utility called dd_rescue. dd_rescue is a different program that also copies data from one file or block device to another; it too is a tool to help you save data from a crashed partition.

Examples: dd_rescue

To make an exact copy of /dev/sda (damaged) to /dev/sdb (make sure /dev/sdb is empty), you need to type the following command:
# dd_rescue /dev/sda /dev/sdb
Naturally, the next step is to run fsck on the /dev/sdb partition to recover/save data. Remember: do not touch the originally damaged /dev/sda. If this procedure fails, you can send your disk to a professional data recovery service. For example, if /home (user data) is on /dev/sda2, you need to run the command on /dev/sdb2:
# fsck /dev/sdb2

Once fsck has run, mount /dev/sdb2 somewhere and see if you can access the data:
# mount /dev/sdb2 /mnt/data
Finally, take a backup using tar or any other command of your choice. The dd_rescue command supports tons of options; read its man page for more information:
# man dd_rescue
OR see the GNU ddrescue man page:
# man ddrescue

Redis or Memcached for caching

Memcached or Redis? It’s a question that nearly always arises in any discussion about squeezing more performance out of a modern, database-driven Web application. When performance needs to be improved, caching is often the first step taken, and Memcached or Redis are typically the first places to turn.

These renowned cache engines share a number of similarities, but they also have important differences. Redis, the newer and more versatile of the two, is almost always the superior choice.

The similarities

Let’s start with the similarities. Both Memcached and Redis serve as in-memory, key-value data stores, although Redis is more accurately described as a data structure store. Both Memcached and Redis belong to the NoSQL family of data management solutions, and both are based on a key-value data model. They both keep all data in RAM, which of course makes them supremely useful as a caching layer. In terms of performance, the two data stores are also remarkably similar, exhibiting almost identical characteristics (and metrics) with respect to throughput and latency.

Both Memcached and Redis are mature and hugely popular open source projects. Memcached was originally developed by Brad Fitzpatrick in 2003 for the LiveJournal website. Since then, Memcached has been rewritten in C (the original implementation was in Perl) and put in the public domain, where it has become a cornerstone of modern Web applications. Current development of Memcached is focused on stability and optimizations rather than adding new features.

Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo remains the lead developer of the project today. Redis is sometimes described as “Memcached on steroids,” which is hardly surprising considering that parts of Redis were built in response to lessons learned from using Memcached. Redis has more features than Memcached and is, thus, more powerful and flexible.

Used by many companies and in countless mission-critical production environments, both Memcached and Redis are supported by client libraries in every conceivable programming language, and both are included in a multitude of packages for developers. In fact, it's a rare Web stack that does not include built-in support for one or the other.

Why are Memcached and Redis so popular? Not only are they extremely effective, they’re also relatively simple. Getting started with either Memcached or Redis is considered easy work for a developer. It takes only a few minutes to set up and get them working with an application. Thus, a small investment of time and effort can have an immediate, dramatic impact on performance — usually by orders of magnitude. A simple solution with a huge benefit; that’s as close to magic as you can get.

When to use Memcached

Because Redis is newer and has more features than Memcached, Redis is almost always the better choice. However, Memcached could be preferable when caching relatively small and static data, such as HTML code fragments. Memcached’s internal memory management, while not as sophisticated as that of Redis, is more efficient in the simplest use cases because it consumes comparatively less memory resources for metadata. Strings (the only data type supported by Memcached) are ideal for storing data that’s only read, because strings require no further processing.

That said, Memcached’s memory management efficiency diminishes quickly when data size is dynamic, at which point Memcached’s memory can become fragmented. Also, large data sets often involve serialized data, which always requires more space to store. While Memcached is effectively limited to storing data in its serialized form, the data structures in Redis can store any aspect of the data natively, thus reducing serialization overhead.

The second scenario in which Memcached has an advantage over Redis is in scaling. Because Memcached is multithreaded, you can easily scale up by giving it more computational resources, but you will lose part or all of the cached data (depending on whether you use consistent hashing). Redis, which is mostly single-threaded, can scale horizontally via clustering without loss of data. Clustering is an effective scaling solution, but it is comparatively more complex to set up and operate.

When to use Redis

You’ll almost always want to use Redis because of its data structures. With Redis as a cache, you gain a lot of power (such as the ability to fine-tune cache contents and durability) and greater efficiency overall. Once you use the data structures, the efficiency boost becomes tremendous for specific application scenarios.

Redis’ superiority is evident in almost every aspect of cache management. Caches employ a mechanism called data eviction to make room for new data by deleting old data from memory. Memcached’s data eviction mechanism employs a Least Recently Used algorithm and somewhat arbitrarily evicts data that’s similar in size to the new data.

Redis, by contrast, allows for fine-grained control over eviction, letting you choose from six different eviction policies. Redis also employs more sophisticated approaches to memory management and eviction candidate selection. Redis supports both lazy eviction (evicting data only when more space is needed) and active, proactive eviction; Memcached, on the other hand, provides lazy eviction only.
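For instance, eviction behavior in Redis is controlled by two directives in redis.conf; the values below are illustrative:

```
# redis.conf fragment (illustrative values)
# Cap memory use at 100 MB...
maxmemory 100mb
# ...and when the cap is reached, evict the least recently used
# keys across the whole keyspace (one of the available policies)
maxmemory-policy allkeys-lru
```

Other policies restrict eviction to keys with a TTL set, evict at random, or refuse writes entirely when memory is full.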

Redis gives you much greater flexibility regarding the objects you can cache. While Memcached limits key names to 250 bytes and works with plain strings only, Redis allows key names and values to be as large as 512MB each, and they are binary safe. Plus, Redis has five primary data structures to choose from, opening up a world of possibilities to the application developer through intelligent caching and manipulation of cached data.

Beyond caching

Using Redis data structures can simplify and optimize several tasks — not only while caching, but even when you want the data to be persistent and always available. For example, instead of storing objects as serialized strings, developers can use a Redis Hash to store an object’s fields and values, and manage them using a single key. Redis Hash saves developers the need to fetch the entire string, deserialize it, update a value, reserialize the object, and replace the entire string in the cache with its new value for every trivial update — that means lower resource consumption and increased performance.
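The difference is easy to see in a redis-cli session (the key names here are made up for illustration, and a Redis server is assumed to be running locally):

```shell
## Object cached as one serialized string: every update means fetching,
## deserializing, changing, reserializing, and rewriting the whole value
redis-cli SET user:1001 '{"name":"vivek","visits":41}'
## The same object as a Redis Hash: a single field is updated in place,
## server-side, with no round trip of the rest of the object
redis-cli HMSET user:1001:h name vivek visits 41
redis-cli HINCRBY user:1001:h visits 1
```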

Other data structures offered by Redis (such as lists, sets, sorted sets, hyperloglogs, bitmaps, and geospatial indexes) can be used to implement even more complex scenarios. Sorted sets for time-series data ingestion and analysis is another example of a Redis data structure that offers enormously reduced complexity and lower bandwidth consumption.

Another important advantage of Redis is that the data it stores isn’t opaque, so the server can manipulate it directly. A considerable share of the 180-plus commands available in Redis are devoted to data processing operations and embedding logic in the data store itself via server-side Lua scripting. These built-in commands and user scripts give you the flexibility of handling data processing tasks directly in Redis without having to ship data across the network to another system for processing.

Redis offers optional and tunable data persistence designed to bootstrap the cache after a planned shutdown or an unplanned failure. While we tend to regard the data in caches as volatile and transient, persisting data to disk can be quite valuable in caching scenarios. Having the cache’s data available for loading immediately after restart allows for much shorter cache warm-up and removes the load involved in repopulating and recalculating cache contents from the primary data store.
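As a sketch, either or both persistence modes are switched on with a few redis.conf directives (the values are illustrative):

```
# redis.conf fragment (illustrative values)
# Append-only file: log every write, fsync once per second
appendonly yes
appendfsync everysec
# RDB snapshot: also dump to disk if 1000 keys changed within 60 seconds
save 60 1000
```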

Data replication too

Redis can also replicate the data that it manages. Replication can be used for implementing a highly available cache setup that can withstand failures and provide uninterrupted service to the application. A cache failure falls only slightly short of application failure in terms of the impact on user experience and application performance, so having a proven solution that guarantees the cache’s contents and service availability is a major advantage in most cases.

Last but not least, in terms of operational visibility, Redis provides a slew of metrics and a wealth of introspective commands with which to monitor and track usage and abnormal behavior. Real-time statistics about every aspect of the database, the display of all commands being executed, the listing and managing of client connections — Redis has all that and more.

When developers realize the effectiveness of Redis’ persistence and in-memory replication capabilities, they often use it as a first-responder database, usually to analyze and process high-velocity data and provide responses to the user while a secondary (often slower) database maintains a historical record of what happened. When used in this manner, Redis can also be ideal for analytics use cases.

Redis for analytics

Three analytics scenarios come immediately to mind. In the first scenario, when using something like Apache Spark to iteratively process large data sets, you can use Redis as a serving layer for data previously calculated by Spark. In the second scenario, using Redis as your shared, in-memory, distributed data store can accelerate Spark processing speeds by a factor of 45 to 100. Finally, an all too common scenario is one in which reports and analytics need to be customizable by the user, but retrieving data from inherently batch data stores (like Hadoop or an RDBMS) takes too long. In this case, an in-memory data structure store such as Redis is the only practical way of getting submillisecond paging and response times.

When using extremely large operational data sets or analytics workloads, running everything in-memory might not be cost effective. To achieve submillisecond performance at lower cost, Redis Labs created a version of Redis that runs on a combination of RAM and flash, with the option to configure RAM-to-flash ratios. While this opens up several new avenues to accelerate workload processing, it also gives developers the option to simply run their “cache on flash.”

Open source software continues to provide some of the best technologies available today. When it comes to boosting application performance through caching, Redis and Memcached are the most established and production-proven candidates. However, given its richer functionality, more advanced design, many potential uses, and greater cost efficiency at scale, Redis should be your first choice in nearly every case.

Top Linux Server Distributions

You know that Linux is a hot data center server. You know it can save you money in licensing and maintenance costs. But that still leaves the question of what your best options are for Linux as a server operating system.

We have listed the top Linux Server distributions based on the following characteristics:

  1. Ease of installation and use
  2. Cost
  3. Available commercial support
  4. Data center reliability
Ubuntu LTS

At the top of almost every Linux-related list, the Debian-based Ubuntu is in a class by itself. Canonical’s Ubuntu surpasses all other Linux server distributions — from its simple installation to its excellent hardware discovery to its world-class commercial support, Ubuntu sets a strong standard that is hard to match.

The latest release of Ubuntu, Ubuntu 16.04 LTS “Xenial Xerus,” debuted in April 2016 and ups the ante with OpenStack Mitaka support, the LXD pure-container hypervisor, and Snappy, an optimized packaging system developed specifically for working with newer trends and technologies such as containers, mobile and the Internet of Things (IoT).

The LTS in Ubuntu 16.04 LTS stands for Long Term Support. The LTS versions are released every two years and include five years of commercial support for the Ubuntu Server edition.

Red Hat Enterprise Linux

While Red Hat started out as the “little Linux company that could,” its Red Hat Enterprise Linux (RHEL) server operating system is now a major force in the quest for data center rack space. The Linux darling of large companies throughout the world, Red Hat’s innovations and non-stop support, including ten years of support for major releases, will keep you coming back for more.

RHEL is based on the community-driven Fedora, which Red Hat sponsors. Fedora is updated more frequently than RHEL and serves as more of a bleeding-edge Linux distro in terms of features and technology, but it doesn’t offer the stability or the length and quality of commercial support that RHEL is renowned for.

In development since 2010, Red Hat Enterprise Linux 7 (RHEL 7) made its official debut in June 2014, and the major update offers scalability improvements for enterprises, including a new filesystem that can scale to 500 terabytes, as well as support for Docker container virtualization technology. The most recent release of RHEL, version 7.2, arrived in November 2015.
SUSE Linux Enterprise Server

The Micro Focus-owned (but independently operated) SUSE Linux Enterprise Server (SLES) is stable, easy to maintain and offers 24×7 rapid-response support for those who don’t have the time or patience for lengthy troubleshooting calls. And the SUSE consulting teams will have you meeting your SLAs and making your accountants happy to boot.

Similar to how Red Hat’s RHEL is based on the open-source Fedora distribution, SLES is based on the open-source openSUSE Linux distro, with SLES focusing on stability and support over leading-edge features and technologies.

The most recent major release, SUSE Linux Enterprise Server 12 (SLES 12), debuted in late October 2014 and introduced new features like a framework for Docker, full system rollback, live kernel patching enablement and software modules for “increasing data center uptime, improving operational efficiency and accelerating the adoption of open source innovation,” according to SUSE.

SLES 12 SP1 (Service Pack 1) followed the initial SLES 12 release in December 2015, and added support for Docker, Network Teaming, Shibboleth and JeOS images.
CentOS

If you operate a website through a web hosting company, there’s a very good chance your web server is powered by CentOS Linux. This low-cost clone of Red Hat Enterprise Linux isn’t strictly commercial, but since it’s based on RHEL, you can leverage commercial support for it.

Short for Community Enterprise Operating System, CentOS has largely operated as a community-driven project that used the RHEL code, removed all of Red Hat’s trademarks, and made the Linux server OS available for free use and distribution.

In 2014 the focus shifted after Red Hat and CentOS announced they would collaborate going forward and that CentOS would serve to address the gap between the community-innovation-focused Fedora platform and the enterprise-grade, commercially deployed Red Hat Enterprise Linux platform.

CentOS will continue to deliver a community-oriented operating system with a mission of helping users develop and adopt open source technologies on a Linux server distribution that is more consistent and conservative than Fedora’s more innovative role. At the same time, CentOS will remain free, with support provided by the community-led CentOS project rather than through Red Hat. CentOS released CentOS 7.2 in December 2015, which is derived from Red Hat Enterprise Linux 7.2.
Debian

If you’re confused by Debian’s inclusion here, don’t be. Debian doesn’t have formal commercial support but you can connect with Debian-savvy consultants around the world via theirConsultants page. Debian originated in 1993 and has spawned more child distributions than any other parent Linux distribution, including Ubuntu, Linux Mint and Vyatta.
Debian
Debian remains a popular option for those who value stability over the latest features. The latest major stable version of Debian, Debian 8 “jessie,” was released in April 2015, and it will be supported for five years.Debian 8 marks the switch to the systemd init system over the old SysVinit init system, and includes the latest releses of the Linux Kernel, Apache, LibreOffice, Perl, Python, Xen Hypervisor, GNU Compiler Collection and the GNOME and Xfce desktop environments.The latest update for Debian 8, version 8.4, debuted on April 2nd, 2016.
Oracle Linux

If you didn’t know that Oracle produces its own Linux distribution, you’re not alone. Oracle Linux (formerly Oracle Enterprise Linux) is Red Hat Enterprise Linux fortified with Oracle’s own special Kool-Aid as well as various Oracle logos and art added in.Oracle’s Linux competes directly with Red Hat’s Linux server distributions, and does so quite effectively since purchased support through Oracle is half the price of Red Hat’s equivalent model.
 Oracle Linux Server
Optimized for Oracle’s database services, Oracle Linux is a heavy contender in the enterprise Linux market. If you run Oracle databases and want to run them on Linux, you know the drill: Call Oracle.The latest release of Oracle Linux, version 7.2, arrived in November 2015 and is based on RHEL 7.2.
Mageia / Mandriva

Mageia is an open-source-based fork of Mandriva Linux that made its debut in 2011. The most recent release, Mageia 5, became available in June 2015, and Mageia 6 is expected to debut in late June 2016.

For U.S.-based executive or technical folks, Mageia and its predecessor Mandriva might be a bit foreign. The incredibly well-constructed Mandriva Linux distribution hails from France and enjoys wide acceptance in Europe and South America. The Mandriva name and its construction derive from the Mandrake Linux and Conectiva Linux distributions.

Mageia maintains the strengths of Mandriva while continuing its development with new features and capabilities, as well as support from the community organization Mageia.Org. Mageia updates are typically released on a 9-month release cycle, with each release supported for two cycles (18 months).

As for Mandriva Linux, the Mandriva SA company continues its business Linux server projects, which are now based on Mageia code.

An Introduction to Securing your Linux VPS

Introduction

Taking control of your own Linux server is an opportunity to try new things and leverage the power and flexibility of a great platform. However, Linux server administrators must take the same caution that is appropriate with any network-connected machine to keep it secure and safe.

There are many different security topics that fall under the general category of “Linux security” and many opinions as to what an appropriate level of security looks like for a Linux server.

The main thing to take away from this is that you will have to decide for yourself what security protections are necessary. Before you do, you should be aware of the risks and trade-offs, and decide on the balance between usability and security that makes sense for you.

This article is meant to help orient you with some of the most common security measures to take in a Linux server environment. This is not an exhaustive list, and does not cover recommended configurations, but it will provide links to more thorough resources and discuss why each component is an important part of many systems.

Blocking Access with Firewalls

One of the easiest steps to recommend to all users is to enable and configure a firewall. Firewalls act as a barrier between the general traffic of the internet and your machine. They look at traffic headed into and out of your server and decide whether the information should be delivered.

They do this by checking the traffic in question against a set of rules that are configured by the user. Usually, a server will only be using a few specific networking ports for legitimate services. The rest of the ports are unused, and should be safely protected behind a firewall, which will deny all traffic destined for these locations.

This allows you to drop data that you are not expecting and, in some cases, even restrict how your real services are used. Sane firewall rules provide a good foundation for network security.

There are quite a few firewall solutions available. We’ll briefly discuss some of the more popular options below.

UFW

UFW stands for Uncomplicated Firewall. Its goal is to provide good protection without the complicated syntax of other solutions.

UFW, as well as most Linux firewalls, is actually a front-end to control the netfilter firewall included with the Linux kernel. This is usually a simple firewall to use for people not already familiar with Linux firewall solutions and is generally a good choice.
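A minimal UFW setup might look like the following; it must be run as root, and the SSH rule assumes you connect on the default port 22:

```shell
## Deny all incoming traffic by default, allow outgoing ##
ufw default deny incoming
ufw default allow outgoing
## Allow SSH *before* enabling the firewall, so you do not lock yourself out ##
ufw allow 22/tcp
## Turn the firewall on and review the active rules ##
ufw enable
ufw status verbose
```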

You can learn how to enable and configure the UFW firewall in the UFW documentation for your distribution.

IPTables

Perhaps the most well-known Linux firewall solution is iptables. IPTables is another component used to administer the netfilter firewall included in the Linux kernel. It has been around for a long time and has undergone intense security audits to ensure its safety. There is a version of iptables called ip6tables for creating IPv6 restrictions.

You will likely come across iptables configurations during your time administering Linux machines. The syntax can be complicated to grasp at first, but it is an incredibly powerful tool that can be configured with very flexible rule sets.

You can learn more about how to implement iptables firewall rules on Ubuntu or Debian systems, or on CentOS/Fedora/RHEL-based distros, in the guides for those platforms.
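As a taste of the syntax, a minimal IPv4 ruleset for a web server might look like this (run as root, and adjust the ports to your own services):

```shell
## Accept traffic belonging to already-established connections, and loopback ##
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
## Accept new SSH and HTTP connections ##
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
## Drop everything else that comes in ##
iptables -P INPUT DROP
```

The order matters: rules are evaluated top to bottom, and the chain policy (the last line) decides the fate of anything no rule matched.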

IP6Tables

As mentioned above, iptables is used to manipulate the tables that contain IPv4 rules. If you have IPv6 enabled on your server, you will also need to pay attention to the IPv6 equivalent: ip6tables.

The netfilter firewall that is included in the Linux kernel keeps IPv4 and IPv6 traffic completely separate. These are stored in different tables. The rules that dictate the ultimate fate of a packet are determined by the protocol version that is being used.

What this means for the server’s administrator is that a separate rule set must be maintained when version 6 is enabled. The ip6tables command shares the same syntax as the iptables command, so implementing the same set of restrictions in the version 6 table is usually straightforward. You must be sure to match traffic directed at your IPv6 addresses, however, for this to work correctly.
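Because the syntax is shared, mirroring a rule usually just means repeating it with the other command. One caveat worth noting: IPv6 relies on ICMPv6 for core functions such as neighbor discovery, so it should not be blocked wholesale:

```shell
## The same rule, once per protocol version ##
iptables  -A INPUT -p tcp --dport 22 -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
## IPv6 depends on ICMPv6 (e.g. neighbor discovery) -- keep it open ##
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
```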

NFTables

Although iptables has long been the standard for firewalls in a Linux environment, a new firewall called nftables has recently been added into the Linux kernel. This is a project by the same team that makes iptables, and is intended to eventually replace iptables.

The nftables firewall attempts to implement more readable syntax than that found in its iptables predecessor, and implements IPv4 and IPv6 support in the same tool. While most versions of Linux at this time do not ship with a kernel new enough to implement nftables, it will soon be very commonplace, and you should try to familiarize yourself with its usage.

Using SSH to Securely Login Remotely

When administering a server where you do not have local access, you will need to log in remotely. The standard, secure way of accomplishing this on a Linux system is through a protocol called SSH, which stands for Secure Shell.

SSH provides end-to-end encryption, the ability to tunnel insecure traffic over a secure connection, X-forwarding (graphical user interface over a network connection), and much more. Basically, if you do not have access to a local connection or out-of-band management, SSH should be your primary way of interacting with your machine.

While the protocol itself is very secure and has undergone extensive research and code review, your configuration choices can either aid or hinder the security of the service. We will discuss some options below.

Password vs SSH-Key Logins

SSH has a flexible authentication model that allows you to sign in using a number of different methods. The two most popular choices are password and SSH-key authentication.

While password authentication is probably the most natural model for most users, it is also the less secure of the two. Password logins allow a potential intruder to guess passwords continuously until a successful combination is found. This is known as brute-forcing and can easily be automated by would-be attackers with modern tools.

SSH-keys, on the other hand, operate by generating a secure key pair. A public key is created as a type of test to identify a user. It can be shared publicly without issues, and cannot be used for anything other than identifying a user and allowing a login to the user with the matching private key. The private key should be kept secret and is used to pass the test of its associated public key.

Basically, you can add your public SSH key on a server, and it will allow you to log in using the matching private key. These keys are so complex that brute-forcing is not practical. Furthermore, you can optionally add a long passphrase to your key for even more security.
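This workflow can be sketched with OpenSSH's standard tools; the comment string, file name, and server below are placeholder examples:

```shell
# Generate an Ed25519 key pair (empty passphrase here only for illustration;
# protect a real key with a strong passphrase)
ssh-keygen -t ed25519 -N "" -C "you@example.com" -f ./id_ed25519_demo

# The public half can be shared freely
cat ./id_ed25519_demo.pub

# Install the public key on a (hypothetical) server, then log in with the private key:
#   ssh-copy-id -i ./id_ed25519_demo.pub user@server.example.com
#   ssh -i ./id_ed25519_demo user@server.example.com
```

The private key file stays on your machine; only the `.pub` file is ever copied to servers.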

To learn more about how to use SSH, click here, and check out this link to learn how to set up SSH keys on your server.

Implement fail2ban to Ban Malicious IP Addresses

One step that will help with the general security of your SSH configuration is to implement a solution like fail2ban. Fail2ban is a service that monitors log files to determine whether a remote system is likely not a legitimate user, and then temporarily bans future traffic from the associated IP address.

Setting up a sane fail2ban policy allows you to flag computers that continuously try to log in unsuccessfully and to add firewall rules that drop their traffic for a set period of time. This is an easy way of hindering commonly used brute-force methods, because attackers are forced to take a long break whenever they are banned. This is usually enough to discourage further brute-force attempts.
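A minimal jail override might look like the following; jail names and defaults vary with your fail2ban version, and the values below are purely illustrative:

```ini
# /etc/fail2ban/jail.local -- local overrides for jail.conf (illustrative values)
[sshd]
enabled  = true
port     = ssh
maxretry = 5        ; failed attempts allowed before a ban...
findtime = 600      ; ...counted within this window (seconds)
bantime  = 3600     ; length of the ban, in seconds
```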

You can learn how to implement a fail2ban policy on Ubuntu here. There are similar guides for Debian and CentOS here.

Implement an Intrusion Detection System to Detect Unauthorized Entry

One important consideration to keep in mind is developing a strategy for detecting unauthorized usage. You may have preventative measures in place, but you also need to know if they’ve failed or not.

An intrusion detection system, also known as an IDS, catalogs configuration and file details when in a known-good state. It then runs comparisons against these recorded states to find out if files have been changed or settings have been modified.
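The catalog-and-compare principle can be illustrated with nothing more than sha256sum; this is a toy sketch of the idea, not a substitute for a real IDS, and the paths are examples:

```shell
# Record a known-good baseline of checksums for everything under /etc
find /etc -type f -exec sha256sum {} + > /var/tmp/baseline.sha256 2>/dev/null

# Later: re-check the current state against the baseline;
# any changed or missing file is reported as FAILED
sha256sum --quiet -c /var/tmp/baseline.sha256
```

Real IDS tools add tamper-proofing of the baseline itself, richer policies, and notification, which a plain checksum list cannot provide.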

There are quite a few intrusion detection systems. We’ll go over a few below.

Tripwire

One of the most well-known IDS implementations is tripwire. Tripwire compiles a database of system files and protects its configuration files and binaries with a set of keys. After configuration details are chosen and exceptions are defined, subsequent runs notify of any alterations to the files that it monitors.

The policy model is very flexible, allowing you to shape its properties to your environment. You can then configure tripwire runs via a cron job and even implement email notifications in the event of unusual activity.

Learn more about how to implement tripwire here.

Aide

Another option for an IDS is Aide. Similar to tripwire, Aide operates by building a database and comparing the current system state to the known-good values it has stored. When a discrepancy arises, it can notify the administrator of the problem.

Aide and tripwire both offer similar solutions to the same problem. Check out the documentation and try out both solutions to find out which you like better.

For a guide on how to use Aide as an IDS, check here.

Psad

The psad tool is concerned with a different portion of the system than the tools listed above. Instead of monitoring system files, psad keeps an eye on the firewall logs to try to detect malicious activity.

If a user is trying to probe for vulnerabilities with a port scan, for instance, psad can detect this activity and dynamically alter the firewall rules to lock out the offending user. This tool can register different threat levels and base its response on the severity of the problem. It can also optionally email the administrator.

To learn how to use psad as a network IDS, follow this link.

Bro

Another option for a network-based IDS is Bro. Bro is actually a network monitoring framework that can be used as a network IDS or for other purposes like collecting usage stats, investigating problems, or detecting patterns.

The Bro system is divided into two layers. The first layer monitors activity and generates what it considers events. The second layer runs the generated events through a policy framework that dictates what should be done, if anything, with the traffic. It can generate alerts, execute system commands, simply log the occurrence, or take other paths.

To find out how to use Bro as an IDS, click here.

RKHunter

While not technically an intrusion detection system, rkhunter operates on many of the same principles as host-based intrusion detection systems in order to detect rootkits and known malware.

While viruses are rare in the Linux world, malware and rootkits do exist and can compromise your box or allow continued access to a successful attacker. RKHunter downloads a list of known exploits and then checks your system against the database. It also alerts you if it detects unsafe settings in some common applications.

You can check out this article to learn how to use RKHunter on Ubuntu.

General Security Advice

While the above tools and configurations can help you secure portions of your system, good security does not come from just implementing a tool and forgetting about it. Good security manifests itself in a certain mindset and is achieved through diligence, scrutiny, and engaging in security as a process.

There are some general rules that can help set you in the right direction in regards to using your system securely.

Pay Attention to Updates and Update Regularly

Software vulnerabilities are found all of the time in just about every kind of software that you might have on your system. Distribution maintainers generally do a good job of keeping up with the latest security patches and pushing those updates into their repositories.

However, having security updates available in the repository does your server no good if you have not downloaded and installed the updates. Although many servers benefit from relying on stable, well-tested versions of system software, security patches should not be put off and should be considered critical updates.

Most distributions provide security mailing lists and separate security repositories that allow you to download and install only security patches.
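On Debian and Ubuntu, for example, the unattended-upgrades package can apply security patches automatically; a sketch of the APT settings it relies on (file path and values as commonly documented):

```
# /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which origins count as "security" is then controlled in /etc/apt/apt.conf.d/50unattended-upgrades.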

Take Care When Downloading Software Outside of Official Channels

Most users will stick with the software available from the official repositories for their distribution, and most distributions offer signed packages. Users generally can trust the distribution maintainers and focus their concern on the security of software acquired outside of official channels.

You may choose to trust packages from your distribution or software that is available from a project’s official website, but be aware that unless you are auditing each piece of software yourself, there is risk involved. Most users feel that this is an acceptable level of risk.

On the other hand, software acquired from random repositories and PPAs that are maintained by people or organizations that you don’t recognize can be a huge security risk. There are no set rules, and the majority of unofficial software sources will likely be completely safe, but be aware that you are taking a risk whenever you trust another party.

Make sure you can explain to yourself why you trust the source. If you cannot do this, consider weighing your security risk as more of a concern than the convenience you’ll gain.

Know your Services and Limit Them

Although the entire point of running a server is likely to provide services that you can access, limit the services running on your machine to those that you use and need. Consider every enabled service to be a possible threat vector and try to eliminate as many threat vectors as you can without affecting your core functionality.

This means that if you are running a headless (no monitor attached) server and don’t run any graphical (non-web) programs, you should disable and probably uninstall your X display server. Similar measures can be taken in other areas. No printer? Disable the “lp” service. No Windows network shares? Disable the “samba” service.

You can discover which services you have running on your computer through a variety of means. This article covers how to detect enabled services under the “create a list of requirements” section.
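As a quick first pass, you can list listening TCP sockets directly. The snippet below decodes /proc/net/tcp (column 4 holds the socket state; 0A means LISTEN) and needs no extra tools; on systems with iproute2, ss -tlnp shows the same information along with the owning process:

```shell
# Print locally listening TCP port numbers from /proc/net/tcp
# Column 2 is local address:port in hex; column 4 is socket state (0A = LISTEN)
awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' /proc/net/tcp |
  sort -u |
  while read -r hex; do printf '%d\n' "0x$hex"; done
```

Every port printed here belongs to some service that is accepting connections and is therefore part of your attack surface.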

Do Not Use FTP; Use SFTP Instead

This might be a hard one for many people to come to terms with, but FTP is a protocol that is inherently insecure. All authentication is sent in plain-text, meaning that anyone monitoring the connection between your server and your local machine can see your login details.

There are only a few instances where FTP is probably okay to implement. If you are running an anonymous, public, read-only download mirror, FTP is a decent choice. Another case where FTP is an okay choice is when you are simply transferring files between two computers that are behind a NAT-enabled firewall, and you trust your network is secure.

In almost all other cases, you should use a more secure alternative. The SSH suite includes an alternative protocol called SFTP that operates in a similar way on the surface, but is built on the same security as the SSH protocol.

This allows you to transfer information to and from your server in the same way that you would traditionally use FTP, but without the risk. Most modern FTP clients can also communicate with SFTP servers.

To learn how to use SFTP to transfer files securely, check out this guide.

Implement Sensible User Security Policies

There are a number of steps that you can take to better secure your system when administering users.

One suggestion is to disable root logins. Since the root user is present on any POSIX-like system and is an all-powerful account, it is an attractive target for many attackers. Disabling root logins is often a good idea after you have configured sudo access, or if you are comfortable using the su command. Many people disagree with this suggestion, but examine whether it is right for you.

It is possible to disable remote root logins within the SSH daemon; to restrict local root logins, you can make restrictions in the /etc/securetty file. You can also set the root user's shell to a non-shell to disable root shell access, and set up PAM rules to restrict root logins as well. RedHat has a great article on how to disable root logins.
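With OpenSSH, disabling remote root logins comes down to a single sshd_config directive (reload the SSH daemon after editing):

```
# /etc/ssh/sshd_config
PermitRootLogin no

# Or, less strict: allow root with keys only, never with a password
# PermitRootLogin prohibit-password
```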

Another good policy to implement with user accounts is creating unique accounts for each user and service, and giving them only the bare minimum permissions to get the job done. Lock down everything that they don't need access to and take away all privileges short of crippling them.

This is an important policy because if one user or service gets compromised, it doesn't lead to a domino effect that allows the attacker to gain access to even more of the system. This system of compartmentalization helps you to isolate problems, much like a system of bulkheads and watertight doors can help prevent a ship from sinking when there is a hull breach.

In a similar vein to the services policies we discussed above, you should also take care to disable any user accounts that are no longer necessary. This may happen when you uninstall software, or if a user should no longer have access to the system.

Pay Attention to Permission Settings

File permissions are a huge source of frustration for many users. Finding a balance for permissions that allow you to do what you need to do while not exposing yourself to harm can be difficult and demands careful attention and thought in each scenario.

Setting up a sane umask policy (the property that defines default permissions for new files and directories) can go a long way in creating good defaults. You can learn about how permissions work and how to adjust your umask value here.
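The effect is easy to see in practice: the umask bits are subtracted from the defaults (666 for new files, 777 for new directories). A quick sketch:

```shell
# umask 022 removes group/other write: new files are 644, new directories 755
umask 022
touch umask_demo_f && mkdir umask_demo_d
stat -c '%a %n' umask_demo_f umask_demo_d    # prints 644 and 755

# umask 027 also removes all access for "other": 640 and 750
umask 027
touch umask_demo_f2 && mkdir umask_demo_d2
stat -c '%a %n' umask_demo_f2 umask_demo_d2  # prints 640 and 750

rm -r umask_demo_f umask_demo_d umask_demo_f2 umask_demo_d2
```

A mask like 027 is a common server default: group members can read, everyone else gets nothing.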

In general, you should think twice before setting anything to be world-writable, especially if it is accessible in any way from the internet. This can have extreme consequences. Additionally, you should not set the SGID or SUID bits in permissions unless you absolutely know what you are doing. Also, check that your files have a valid owner and group.
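You can audit for exactly these conditions with find; a sketch, where the scan paths are examples (a full scan of / may take a while):

```shell
# World-writable regular files (mode includes the 0002 bit), one filesystem only
find /var/www -xdev -type f -perm -0002 2>/dev/null

# SUID (4000) or SGID (2000) executables on the root filesystem
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null

# Files with no valid owner or group (often left over from deleted accounts)
find / -xdev \( -nouser -o -nogroup \) 2>/dev/null
```

Anything these commands report deserves a second look: either the permissions are intentional and documented, or they should be tightened.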

Your file permissions settings will vary greatly based on your specific usage, but you should always try to see if there is a way to get by with fewer permissions. This is one of the easiest things to get wrong and an area where there is a lot of bad advice floating around on the internet.

Regularly Check for Malware on your Servers

While Linux is generally less targeted by malware than Windows, it is by no means immune to malicious software. In conjunction with implementing an IDS to detect intrusion attempts, scanning for malware can help identify traces of activity that indicate that illegitimate software is installed on your machine.

There are a number of malware scanners available for Linux systems that can be used to regularly validate the integrity of your servers. Linux Malware Detect, also known as maldet or LMD, is one popular option that can be easily installed and configured to scan for known malware signatures. It can be run manually to perform one-off scans and can also be daemonized to run regularly scheduled scans. Reports from these scans can be emailed to the server administrators.

How To Secure the Specific Software you are Using

Although this guide is not large enough to go through the specifics of securing every kind of service or application, there are many tutorials and guidelines available online. You should read the security recommendations of every project that you intend to implement on your system.

Furthermore, popular server software like web servers or database management systems have entire websites and databases devoted to security. In general, you should read up on and secure every service before putting it online.

You can check our security section for more specific advice for the software you are using.

Conclusion

You should now have a decent understanding of general security practices you can implement on your Linux server. While we’ve tried hard to mention many areas of high importance, at the end of the day, you will have to make many decisions on your own. When you administer a server, you have to take responsibility for your server’s security.

This is not something that you can configure in one quick spree in the beginning; it is an ongoing process of auditing your system, implementing solutions, evaluating logs and alerts, reassessing your needs, and so on. You need to be vigilant in protecting your system and always evaluate and monitor the results of your solutions.

Changing Password of Specific User Account In Linux

Rules for changing passwords for user accounts

  1. A normal user may only change the password for his/her own account.
  2. The superuser (root) may change the password for any account.
  3. The passwd command can also change the validity (aging) period for an account's password.

First, login as the root user. Use sudo -s or su - command to login as root. To change password of specific user account, use the following syntax:

passwd userNameHere

To change the password for user called vivek, enter:
# passwd vivek
Sample outputs:

Change Users Local Linux Password Command Line


To see the password status of any user account, enter:
# passwd -S userNameHere
# passwd -S vivek

Sample outputs:

vivek P 05/05/2012 0 99999 7 -1

The status information consists of 7 fields as follows:

  1. vivek : Account login name (username)
  2. P : Indicates whether the account has a locked password (L), no password (NP), or a usable password (P)
  3. 05/05/2012 : Date of the last password change.
  4. 0 : Minimum password age, in days.
  5. 99999 : Maximum password age, in days.
  6. 7 : Password expiry warning period, in days.
  7. -1 : Inactivity period for the password (see the chage command for more information).

To get more info about password aging for a specific user called vivek, enter:
# chage -l vivek

Checking Disk Space in Linux

Linux offers the following commands to check disk space usage:

Linux command to check disk space

  1. df command – Shows the amount of disk space used and available on Linux file systems.
  2. du command – Displays the amount of disk space used by the specified files and for each subdirectory.
  3. btrfs fi df /device/ – Shows disk space usage information for a btrfs based mount point/file system.

Linux check disk space with df command

  1. Open the terminal and type the following commands to check disk space.
  2. The basic syntax for df is:
    df [options] [devices]
  3. To see usage for all mounted filesystems, type:
    df
  4. To see sizes in a human-readable format (powers of 1000), type:
    df -H

Sample outputs:

Fig.01: df command in action


The items in square brackets are optional. You can simply type the df command (i.e. no arguments) to see a table that lists disk usage for each mounted device on the system.

See information about specific filesystem

You can give a device or mount point as an argument, and df will report data only for the filesystem physically residing on that device. For example, the following commands provide information only for the specified device or mount point:
$ df /dev/sda
$ df -h /dev/sdc1
$ df /data/

Sample outputs:

Filesystem      1K-blocks     Used  Available Use% Mounted on
/dev/sda       2930266584 69405248 2859579472   3% /data

Understanding df command output

The valid fields are as follows:

Display name  Valid field name (for --output)  Description
Filesystem    source                           The source of the mount point, usually a device.
1K-blocks     size                             Total number of blocks.
Used          used                             Number of used blocks.
Available     avail                            Number of available blocks.
Use%          pcent                            Percentage of USED divided by SIZE.
Mounted on    target                           The mount point.

You can pass the output format defined by ‘valid field name’ as follows:
$ df --output=field1,field2,...
$ df --output=source,used,avail /data/

Sample outputs:

Filesystem                    Used Avail
/dev/md0                      5.4G  115G
udev                             0   11M
tmpfs                         6.2M  414M
tmpfs                         4.1k  1.1G
tmpfs                         4.1k  5.3M
tmpfs                            0  1.1G
/dev/md2                      818G  688G
tmpfs                            0  210M
tmpfs                            0  210M
/dev/mapper/cryptvg-mybackup   77G  526G

To print all available fields, enter:
$ df --output
Sample outputs:

Filesystem     Type     Inodes  IUsed  IFree IUse%  1K-blocks     Used      Avail Use% File Mounted on
udev           devtmpfs 379248    333 378915    1%      10240        0      10240   0% -    /dev
tmpfs          tmpfs    381554    498 381056    1%     610488     9704     600784   2% -    /run
/dev/sdc1      ext3     956592 224532 732060   24%   14932444  7836056    6331204  56% -    /
tmpfs          tmpfs    381554      1 381553    1%    1526216        0    1526216   0% -    /dev/shm
tmpfs          tmpfs    381554      4 381550    1%       5120        0       5120   0% -    /run/lock
tmpfs          tmpfs    381554     14 381540    1%    1526216        0    1526216   0% -    /sys/fs/cgroup
/dev/sda       btrfs         0      0      0     - 2930266584 69405248 2859579472   3% -    /data
tmpfs          tmpfs    381554      4 381550    1%     305244        0     305244   0% -    /run/user/0

Express df output in human readable form

Pass the -h option to see output in a human-readable format. You will see device sizes in gigabytes, terabytes, or megabytes:
$ df -h ### Human format
$ df -m ### Show output size in one-megabyte
$ df -k ### Show output size in one-kilobyte blocks (default)

Display output using inode usage instead of block usage

An inode is a data structure on a Linux file system that stores all information about a file. To list inode information, enter:
$ df -i
$ df -i -h

Sample outputs:

Filesystem     Inodes IUsed IFree IUse% Mounted on
udev             371K   333  371K    1% /dev
tmpfs            373K   498  373K    1% /run
/dev/sdc1        935K  220K  715K   24% /
tmpfs            373K     1  373K    1% /dev/shm
tmpfs            373K     4  373K    1% /run/lock
tmpfs            373K    14  373K    1% /sys/fs/cgroup
/dev/sda            0     0     0     - /data
tmpfs            373K     4  373K    1% /run/user/0

Find out the type of each file system displayed

Pass the -T option to display the type of each filesystem listed, such as ext4, btrfs, ext2, nfs4, fuse, cgroup, cpuset, and more:
$ df -T
$ df -T -h
$ df -T -h /data/

Sample outputs:

Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda       btrfs  2.8T   67G  2.7T   3% /data

Limit listing to file systems of given type

The syntax is:
$ df -t ext3 #Only see ext3 file system
$ df -t ext4 #Only see ext4 file system
$ df -t btrfs #Only see btrfs file system

Exclude given file system type

To list all filesystems except ext2, pass the -x TYPE option:
$ df -x ext2

Show all file systems

Pass the -a or --all option to the df command to include in its output filesystems that have a size of zero blocks, run:
$ df -a

Filesystem      1K-blocks     Used  Available Use% Mounted on
sysfs                   0        0          0    - /sys
proc                    0        0          0    - /proc
udev                10240        0      10240   0% /dev
devpts                  0        0          0    - /dev/pts
tmpfs              610488     9708     600780   2% /run
/dev/sdc1        14932444  7836084    6331176  56% /
securityfs              0        0          0    - /sys/kernel/security
tmpfs             1526216        0    1526216   0% /dev/shm
tmpfs                5120        0       5120   0% /run/lock
tmpfs             1526216        0    1526216   0% /sys/fs/cgroup
cgroup                  0        0          0    - /sys/fs/cgroup/systemd
pstore                  0        0          0    - /sys/fs/pstore
cgroup                  0        0          0    - /sys/fs/cgroup/cpuset
cgroup                  0        0          0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                  0        0          0    - /sys/fs/cgroup/blkio
cgroup                  0        0          0    - /sys/fs/cgroup/memory
cgroup                  0        0          0    - /sys/fs/cgroup/devices
cgroup                  0        0          0    - /sys/fs/cgroup/freezer
cgroup                  0        0          0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                  0        0          0    - /sys/fs/cgroup/perf_event
systemd-1               -        -          -    - /proc/sys/fs/binfmt_misc
fusectl                 0        0          0    - /sys/fs/fuse/connections
debugfs                 0        0          0    - /sys/kernel/debug
mqueue                  0        0          0    - /dev/mqueue
hugetlbfs               0        0          0    - /dev/hugepages
/dev/sda       2930266584 69405248 2859579472   3% /data
rpc_pipefs              0        0          0    - /run/rpc_pipefs
tmpfs              305244        0     305244   0% /run/user/0
binfmt_misc             0        0          0    - /proc/sys/fs/binfmt_misc

These file systems are omitted by default.

Getting more help about the df command

Pass the --help option to see a brief help message:
$ df --help
Or read its man page by typing the following command:
$ man df

Say hello to the du command

The du command is very useful to track down disk space hogs. It is useful to find out the names of directories and files that consume large amounts of space on a disk. The basic syntax is:
du
du /path/to/dir
du [options] [directories and/or files]

To see the names and space consumption of each of the directories including all subdirectories in the directory tree, enter:
$ du
Sample outputs:

16	./.aptitude
12	./.ssh
56	./apcupsd
8	./.squidview
4	./kernel.build
12	./.elinks
8	./.vim
8	./.config/htop
12	./.config
648	.

The first column is the size in kilobytes and the second column is the filename or directory name.

See du output in human readable format

Pass the -h option to display size in K (kilobytes), M (megabytes), G (gigabytes) instead of the default kilobytes:
$ du -h
Sample outputs:

16K	./.aptitude
12K	./.ssh
56K	./apcupsd
8.0K	./.squidview
4.0K	./kernel.build
12K	./.elinks
8.0K	./.vim
8.0K	./.config/htop
12K	./.config
648K	.

Finding information about any directory trees or files

To find out /etc/ directory space usage, enter:
# du /etc/
# du -h /etc/

The following will report the sizes of the three files named hdparm, iptunnel and ifconfig that are located in the /sbin directory:
$ du /sbin/hdparm /sbin/iptunnel /sbin/ifconfig
$ du -h /sbin/hdparm /sbin/iptunnel /sbin/ifconfig

Sample outputs:

112K	/sbin/hdparm
24K	/sbin/iptunnel
72K	/sbin/ifconfig

How do I summarize disk usage for given directory name?

Pass the -s option to the du command. In this example, we ask du to report only the total disk space occupied by a directory tree and to suppress subdirectories:
# du -s /etc/
# du -sh /etc/

Sample outputs:

6.3M	/etc/

Pass the -a (all) option to see all files, not just directories:
# du -a /etc/
# du -a -h /etc/

Sample outputs:

4.0K	/etc/w3m/config
4.0K	/etc/w3m/mailcap
12K	/etc/w3m
4.0K	/etc/ConsoleKit/run-seat.d
4.0K	/etc/ConsoleKit/seats.d/00-primary.seat
8.0K	/etc/ConsoleKit/seats.d
4.0K	/etc/ConsoleKit/run-session.d
20K	/etc/ConsoleKit
...
4.0K	/etc/ssh/ssh_host_rsa_key
4.0K	/etc/ssh/ssh_host_rsa_key.pub
4.0K	/etc/ssh/ssh_host_dsa_key
244K	/etc/ssh/moduli
4.0K	/etc/ssh/sshd_config
272K	/etc/ssh
4.0K	/etc/python/debian_config
8.0K	/etc/python
0	/etc/.pwd.lock
4.0K	/etc/ldap/ldap.conf
8.0K	/etc/ldap
6.3M	/etc/

You can also use the star ( * ) wildcard, which matches any string of characters. For example, to see the size of each png file in the current directory, enter:
$ du -ch *.png

 52K	CIQTK4FUAAAbjDw.png-large.png
 68K	CX23RezWEAA0QY8.png-large.png
228K	CY32cShWkAAaNLD.png-large.png
 12K	CYaQ3JqU0AA-amA.png-large.png
136K	CYywxDfU0AAP2py.png
172K	CZBoXO1UsAAw3zR.png-large.png
384K	Screen Shot 2016-01-19 at 5.49.21 PM.png
324K	TkamEew.png
8.0K	VQx6mbH.png
 64K	fH7rtXE.png
 52K	ipv6-20-1-640x377.png
392K	unrseYB.png
1.8M	total

The -c option tells du to display a grand total.

Putting it all together

To list the top 10 directories consuming disk space in /etc/, enter:
# du -a /etc/ | sort -n -r | head -n 10
Sample outputs:

8128	/etc/
928	/etc/ssl
904	/etc/ssl/certs
656	/etc/apache2
544	/etc/apache2/mods-available
484	/etc/init.d
396	/etc/php5
336	/etc/sane.d
308	/etc/X11
268	/etc/ssl/certs/ca-certificates.crt

For more information on the du command, type:
$ man du
$ du --help

Dealing with btrfs file system

For a btrfs filesystem, use the btrfs fi df command to see space usage information for a mount point. The syntax is:

btrfs filesystem df /path/
btrfs fi df /dev/path
btrfs fi df [options] /path/

Examples

# btrfs fi df /data/
# btrfs fi df -h /data/

Sample outputs:

Data, RAID1: total=71.00GiB, used=63.40GiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=4.00GiB, used=2.29GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

To see raw numbers in bytes, run:
# btrfs fi df -b /data/
OR
# btrfs fi df -k /data/ ### show sizes in KiB ##
# btrfs fi df -m /data/ ### show sizes in MiB ##
# btrfs fi df -g /data/ ### show sizes in GiB ##
# btrfs fi df -t /data/ ### show sizes in TiB ##


How To Install and Setup Postfix on Ubuntu 14.04

Introduction

Postfix is a very popular open source Mail Transfer Agent (MTA) that can be used to route and deliver email on a Linux system. It is estimated that around 25% of public mail servers on the internet run Postfix.

In this guide, we’ll teach you how to get up and running quickly with Postfix on an Ubuntu 14.04 server.

Prerequisites

In order to follow this guide, you should have a Fully Qualified Domain Name pointed at your Ubuntu 14.04 server. You can find help on setting up your domain name with DigitalOcean by clicking here.

Install the Software

The installation process of Postfix on Ubuntu 14.04 is easy because the software is in Ubuntu’s default package repositories.

Since this is our first operation with apt in this session, we’re going to update our local package index and then install the Postfix package:

sudo apt-get update
sudo apt-get install postfix

You will be asked what type of mail configuration you want to have for your server. For our purposes, we’re going to choose “Internet Site” because the description is the best match for our server.

Next, you will be asked for the Fully Qualified Domain Name (FQDN) for your server. This is your full domain name (like example.com). Technically, an FQDN is required to end with a dot, but Postfix does not need this. So we can just enter it like:

example.com

The software will now be configured using the settings you provided. This takes care of the installation, but we still have to configure other items that we were not prompted for during installation.

Configure Postfix

We are going to need to change some basic settings in the main Postfix configuration file.

Begin by opening this file with root privileges in your text editor:

sudo nano /etc/postfix/main.cf

First, we need to find the myhostname parameter. During the configuration, the FQDN we selected was added to the mydestination parameter, but myhostname remained set to localhost. We want to point this to our FQDN too:

myhostname = example.com

If you would like to configure mail to be forwarded to other domains, or wish to deliver to addresses that don't map 1-to-1 with system accounts, we can remove the alias_maps parameter and replace it with virtual_alias_maps. We would then need to change the location of the hash to /etc/postfix/virtual:

virtual_alias_maps = hash:/etc/postfix/virtual

As we said above, the mydestination parameter has been modified with the FQDN you entered during installation. This parameter holds any domains that this installation of Postfix is going to be responsible for. It is configured for the FQDN and the localhost.

One important parameter to mention is the mynetworks parameter. This defines the computers that are able to use this mail server. It should be set to local only (127.0.0.0/8 and the other representations). Modifying this to allow other hosts to use this is a huge vulnerability that can lead to extreme cases of spam.

To be clear, the line should be set like this. This should be set automatically, but double check the value in your file:

mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128

Configure Additional Email Addresses

We can configure additional email addresses by creating aliases. These aliases can be used to deliver mail to other user accounts on the system.

If you wish to utilize this functionality, make sure that you configured the virtual_alias_maps directive like we demonstrated above. We will use this file to configure our address mappings. Create the file by typing:

sudo nano /etc/postfix/virtual

In this file, you can specify the email addresses that you wish to create on the left-hand side, and the username to deliver the mail to on the right-hand side, like this:

blah@example.com username1

For our installation, we’re going to create a few email addresses and route them to some user accounts. We can also set up certain addresses to forward to multiple accounts by using a comma-separated list:

blah@example.com        demouser
dinosaurs@example.com   demouser
roar@example.com        root
contact@example.com     demouser,root

Save and close the file when you are finished.

Now, we can implement our mapping by calling this command:

sudo postmap /etc/postfix/virtual

Now, we can reload our service to read our changes:

sudo service postfix restart

Test your Configuration

You can test that your server can receive and route mail correctly by sending mail from your regular email address to one of your user accounts on the server or one of the aliases you set up.

Once you send an email to:

demouser@your_server_domain.com

You should get mail delivered to a file that matches the delivery username in /var/mail. For instance, we could read this message by looking at this file:

nano /var/mail/demouser

This will contain all of the email messages, including the headers, in one big file. If you want to consume your email in a more friendly way, you might want to install a few helper programs:

sudo apt-get install mailutils

This will give you access to the mail program that you can use to check your inbox:

mail

This will give you an interface to interact with your mail.

Conclusion

You should now have basic email functionality configured on your server.

It is important to secure your server and make sure that Postfix is not configured as an open relay. Mail servers are heavily targeted by attackers because they can send out massive amounts of spam email, so be sure to set up a firewall and implement other security measures to protect your server. You can learn about some security options here.

How To Run Your Own Mail Server

When setting up a web site or application under your own domain, it is likely that you will also want a mail server to handle the domain’s incoming and outgoing email. While it is possible to run your own mail server, it is often not the best option for a variety of reasons.

A typical mail server consists of many software components that provide a specific function. Each component must be configured and tuned to work nicely together and provide a fully-functioning mail server. Because they have so many moving parts, mail servers can become complex and difficult to set up.

Here is a list of required components in a mail server:

  • Mail Transfer Agent
  • Mail Delivery Agent
  • IMAP and/or POP3 Server

In addition to the required components, you will probably want to add these components:

  • Spam Filter
  • AntiVirus
  • Webmail

While some software packages include the functionality of multiple components, the choice of each component is often left up to you. In addition to the software components, mail servers need a domain name, the appropriate DNS records, and an SSL certificate.

Let’s take a look at each component in more detail.

Mail Transfer Agent

A Mail Transfer Agent (MTA), which handles Simple Mail Transfer Protocol (SMTP) traffic, has two responsibilities:

  1. To send mail from your users to an external MTA (another mail server)
  2. To receive mail from an external MTA

Examples of MTA software: Postfix, Exim, and Sendmail.

Mail Delivery Agent

A Mail Delivery Agent (MDA), which is sometimes referred to as the Local Delivery Agent (LDA), retrieves mail from an MTA and places it in the appropriate mail user’s mailbox.

There are a variety of mailbox formats, such as mbox and Maildir. Each MDA supports specific mailbox formats. The choice of mailbox format determines how the messages are actually stored on the mail server which, in turn, affects disk usage and mailbox access performance.

Examples of MDA software: Postfix and Dovecot.

IMAP and/or POP3 Server

IMAP and POP3 are protocols that are used by mail clients, i.e. any software that is used to read email, for mail retrieval. Each protocol has its own intricacies but we will highlight some key differences here.

IMAP is the more complex protocol that allows, among other things, multiple clients to connect to an individual mailbox simultaneously. The email messages are copied to the client, and the original message is left on the mail server.

POP3 is simpler, and moves email messages to the mail client’s computer, typically the user’s local computer, by default.

Examples of software that provide IMAP and/or POP3 server functionality: Courier, Dovecot, Zimbra.

Spam Filter

The purpose of a spam filter is to reduce the amount of incoming spam, or junk mail, that reaches users’ mailboxes. Spam filters accomplish this by applying spam detection rules–which consider a variety of factors such as the server that sent the message, the message content, and so forth–to incoming mail. If a message’s “spam level” reaches a certain threshold, it is marked and treated as spam.

Spam filters can also be applied to outgoing mail. This can be useful if a user’s mail account is compromised, to reduce the amount of spam that can be sent using your mail server.

SpamAssassin is a popular open source spam filter.

Antivirus

Antivirus is used to detect viruses, trojans, malware, and other threats in incoming and outgoing mail. ClamAV is a popular open source antivirus engine.

Webmail

Many users expect their email service to provide webmail access. Webmail, in the context of running a mail server, is basically a mail client that users can access via a web browser–Gmail is probably the most well-known example of this. The webmail component, which requires a web server such as Nginx or Apache, can run on the mail server itself.

Examples of software that provide webmail functionality: Roundcube and Citadel.


Now that you are familiar with the mail server components that you have to install and configure, let’s look at why maintenance can become overly time-consuming. There are the obvious maintenance tasks, such as continuously keeping your antivirus signatures, spam filtering rules, and all of the mail server components up to date, but there are some other things you might not have thought of.

Staying Off Blacklists

Another challenge with maintaining a mail server is keeping your server off of the various blacklists, also known as DNSBL, blocklists, or blackhole lists. These lists contain the IP addresses of mail servers that were reported to send spam or junk mail (or for having improperly configured DNS records). Many mail servers subscribe to one or more of these blacklists, and filter incoming messages based on whether the mail server that sent the messages is on the list(s). If your mail server gets listed, your outgoing messages may be filtered and discarded before they reach their intended recipients.

If your mail server gets blacklisted, it is often possible to get it unlisted (or removed from the blacklist). You will want to determine the reason for being blacklisted, and resolve the issue. After this, you will want to look up the blacklist removal process for the particular list that your mail server is on, and follow it.

Troubleshooting is Difficult

Although most people use email every day, it is easy to overlook the fact that it is a complex system that can be difficult to troubleshoot. For example, if your sent messages are not being received, where do you start to resolve the issue? The issue could be caused by a misconfiguration in one of the many mail server components, such as a poorly tuned outgoing spam filter, or by an external problem, such as being on a blacklist.


Now here are some alternatives to running your own mail server. These mail services will probably meet your needs, and will allow you and your applications to send and receive email from your own domain.

This list doesn’t include every mail service; there are many out there, each with their own features and prices. Be sure to choose the one that has the features that you need, at a price that you want.

Easy Alternatives — Postfix for Outgoing Mail

If you simply need to send outgoing mail from an application on your server, you don’t need to set up a complete mail server. You can set up a simple Mail Transfer Agent (MTA) such as Postfix.

You can then configure your application to use the sendmail interface on your server as the mail transport for its outgoing messages.

An Introduction to Securing your Linux VPS

Introduction

Taking control of your own Linux server is an opportunity to try new things and leverage the power and flexibility of a great platform. However, Linux server administrators must take the same caution that is appropriate with any network-connected machine to keep it secure and safe.

There are many different security topics that fall under the general category of “Linux security” and many opinions as to what an appropriate level of security looks like for a Linux server.

The main thing to take away from this is that you will have to decide for yourself what security protections will be necessary. Before you do this, you should be aware of the risks and the trade-offs, and decide on the balance between usability and security that makes sense for you.

This article is meant to help orient you with some of the most common security measures to take in a Linux server environment. This is not an exhaustive list, and does not cover recommended configurations, but it will provide links to more thorough resources and discuss why each component is an important part of many systems.

Blocking Access with Firewalls

One of the easiest steps to recommend to all users is to enable and configure a firewall. Firewalls act as a barrier between the general traffic of the internet and your machine. They look at traffic headed in and out of your server, and decide if it should allow the information to be delivered.

They do this by checking the traffic in question against a set of rules that are configured by the user. Usually, a server will only be using a few specific networking ports for legitimate services. The rest of the ports are unused, and should be safely protected behind a firewall, which will deny all traffic destined for these locations.

This allows you to drop data that you are not expecting and, in some cases, to place conditions on access to your real services. Sane firewall rules provide a good foundation for network security.

There are quite a few firewall solutions available. We’ll briefly discuss some of the more popular options below.

UFW

UFW stands for Uncomplicated Firewall. Its goal is to provide good protection without the complicated syntax of other solutions.

UFW, as well as most Linux firewalls, is actually a front-end to control the netfilter firewall included with the Linux kernel. This is usually a simple firewall to use for people not already familiar with Linux firewall solutions and is generally a good choice.

You can learn how to enable and configure the UFW firewall and find out more by clicking this link.
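As a sketch of a common starting policy (assuming SSH on its default port 22), UFW lets you express a default-deny stance in just a few commands:

```shell
# Deny all incoming traffic by default, allow all outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the services you actually run (add 80/tcp and 443/tcp for a web server)
sudo ufw allow 22/tcp

# Enable the firewall and review the resulting rules
sudo ufw enable
sudo ufw status verbose
```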

IPTables

Perhaps the most well-known Linux firewall solution is iptables. IPTables is another component used to administer the netfilter firewall included in the Linux kernel. It has been around for a long time and has undergone intense security audits to ensure its safety. There is a version of iptables called ip6tables for creating IPv6 restrictions.

You will likely come across iptables configurations during your time administering Linux machines. The syntax can be complicated to grasp at first, but it is an incredibly powerful tool that can be configured with very flexible rule sets.

You can learn more about how to implement some iptables firewall rules on Ubuntu or Debian systems here, or learn how to use iptables on CentOS/Fedora/RHEL-based distros here.
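For flavor, a minimal iptables ruleset along the same lines might look like the following (a sketch only; adapt the ports to your services, and take care not to lock yourself out of SSH):

```shell
# Keep traffic that belongs to established connections
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow loopback traffic and incoming SSH
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Drop everything else by default
sudo iptables -P INPUT DROP
```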

IP6Tables

As mentioned above, iptables is used to manipulate the tables that contain IPv4 rules. If you have IPv6 enabled on your server, you will need to also pay attention to the IPv6 equivalent: ip6tables.

The netfilter firewall that is included in the Linux kernel keeps IPv4 and IPv6 traffic completely separate. These are stored in different tables. The rules that dictate the ultimate fate of a packet are determined by the protocol version that is being used.

What this means for the server’s administrator is that a separate ruleset must be maintained when IPv6 is enabled. The ip6tables command shares the same syntax as the iptables command, so implementing the same set of restrictions in the IPv6 tables is usually straightforward. For this to work correctly, however, you must be sure to match traffic directed at your IPv6 addresses.
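Mirroring a basic IPv4 policy for IPv6 looks almost identical, with one important caveat: ICMPv6 must be allowed for IPv6 to function properly (a sketch):

```shell
# The same shape as a basic IPv4 ruleset, entered with ip6tables
sudo ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo ip6tables -A INPUT -i lo -j ACCEPT
sudo ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT

# IPv6 depends on ICMPv6 (neighbor discovery, path MTU); do not drop it wholesale
sudo ip6tables -A INPUT -p ipv6-icmp -j ACCEPT

sudo ip6tables -P INPUT DROP
```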

NFTables

Although iptables has long been the standard for firewalls in a Linux environment, a new firewall called nftables has recently been added into the Linux kernel. This is a project by the same team that makes iptables, and is intended to eventually replace iptables.

The nftables firewall attempts to implement more readable syntax than that found in its iptables predecessor, and integrates IPv4 and IPv6 support into the same tool. While most versions of Linux at this time do not ship with a kernel new enough to implement nftables, it will soon be very commonplace, and you should try to familiarize yourself with its usage.
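To give a feel for the newer syntax, here is a sketch of an equivalent policy written for the nft command; note that a single table in the inet family covers IPv4 and IPv6 together:

```shell
# Create a table in the "inet" family (IPv4 and IPv6 in one place)
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Allow loopback, established traffic, and SSH
sudo nft add rule inet filter input iif lo accept
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input tcp dport 22 accept
```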

Using SSH to Securely Login Remotely

When administering a server where you do not have local access, you will need to log in remotely. The standard, secure way of accomplishing this on a Linux system is through a protocol called SSH, which stands for secure shell.

SSH provides end-to-end encryption, the ability to tunnel insecure traffic over a secure connection, X-forwarding (graphical user interface over a network connection), and much more. Basically, if you do not have access to a local connection or out-of-band management, SSH should be your primary way of interacting with your machine.

While the protocol itself is very secure and has undergone extensive research and code review, your configuration choices can either aid or hinder the security of the service. We will discuss some options below.

Password vs SSH-Key Logins

SSH has a flexible authentication model that allows you to sign in using a number of different methods. The two most popular choices are password and SSH-key authentication.

While password authentication is probably the most natural model for most users, it is also the less secure of these two choices. Password logins allow a potential intruder to continuously guess passwords until a successful combination is found. This is known as brute-forcing and can easily be automated by would-be attackers with modern tools.

SSH keys, on the other hand, operate by generating a secure key pair. The public key serves as a kind of test to identify a user: it can be shared publicly without issue, and it cannot be used for anything other than identifying a user and allowing a login to the holder of the matching private key. The private key should be kept secret and is used to answer the challenge posed by its associated public key.

Basically, you can add your public SSH key on a server, and it will allow you to log in by using the matching private key. These keys are so complex that brute-forcing is not practical. Furthermore, you can optionally protect your key with a long passphrase, which adds even more security.

To learn more about how to use SSH click here, and check out this link to learn how to set up SSH keys on your server.
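In practice, setting this up is usually a two-step process on your local machine (demouser and your_server_ip below are placeholders):

```shell
# Generate a key pair; consider adding a passphrase when prompted
ssh-keygen -t ed25519

# Append the public key to ~/.ssh/authorized_keys on the server
ssh-copy-id demouser@your_server_ip

# Subsequent logins will use the key instead of a password
ssh demouser@your_server_ip
```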

Implement fail2ban to Ban Malicious IP Addresses

One step that will help with the general security of your SSH configuration is to implement a solution like fail2ban. Fail2ban is a service that monitors log files in order to determine whether a remote system is likely not a legitimate user, and then temporarily bans future traffic from the associated IP address.

Setting up a sane fail2ban policy can allow you to flag computers that are continuously trying to log in unsuccessfully and add firewall rules to drop traffic from them for a set period of time. This is an easy way of hindering commonly used brute-force methods, because attackers will have to take a break for quite a while when banned. This is usually enough to discourage further brute-force attempts.

You can learn how to implement a fail2ban policy on Ubuntu here. There are similar guides for Debian and CentOS here.
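To make this concrete, here is a sketch of what a minimal override might look like in /etc/fail2ban/jail.local (the values are illustrative, not recommendations; the jail is named [sshd] on current fail2ban releases and [ssh] on some older Debian/Ubuntu packages):

```ini
[sshd]
enabled = true
# failed attempts before a ban
maxretry = 5
# window, in seconds, in which those attempts must occur
findtime = 600
# how long, in seconds, the offending IP stays banned
bantime = 3600
```

After editing, restart the service (for example, sudo service fail2ban restart) so the new policy takes effect.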

Implement an Intrusion Detection System to Detect Unauthorized Entry

One important consideration to keep in mind is developing a strategy for detecting unauthorized usage. You may have preventative measures in place, but you also need to know if they’ve failed or not.

An intrusion detection system, also known as an IDS, catalogs configuration and file details when in a known-good state. It then runs comparisons against these recorded states to find out if files have been changed or settings have been modified.

There are quite a few intrusion detection systems. We’ll go over a few below.

Tripwire

One of the most well-known IDS implementations is tripwire. Tripwire compiles a database of system files and protects its configuration files and binaries with a set of keys. After configuration details are chosen and exceptions are defined, subsequent runs notify of any alterations to the files that it monitors.

The policy model is very flexible, allowing you to shape its properties to your environment. You can then configure tripwire runs via a cron job and even implement email notifications in the event of unusual activity.

Learn more about how to implement tripwire here.

Aide

Another option for an IDS is Aide. Similar to tripwire, Aide operates by building a database and comparing the current system state to the known-good values it has stored. When a discrepancy arises, it can notify the administrator of the problem.

Aide and tripwire both offer similar solutions to the same problem. Check out the documentation and try out both solutions to find out which you like better.

For a guide on how to use Aide as an IDS, check here.

Psad

The psad tool is concerned with a different portion of the system than the tools listed above. Instead of monitoring system files, psad keeps an eye on the firewall logs to try to detect malicious activity.

If a user is trying to probe for vulnerabilities with a port scan, for instance, psad can detect this activity and dynamically alter the firewall rules to lock out the offending user. This tool can register different threat levels and base its response on the severity of the problem. It can also optionally email the administrator.

To learn how to use psad as a network IDS, follow this link.

Bro

Another option for a network-based IDS is Bro. Bro is actually a network monitoring framework that can be used as a network IDS or for other purposes like collecting usage stats, investigating problems, or detecting patterns.

The Bro system is divided into two layers. The first layer monitors activity and generates what it considers events. The second layer runs the generated events through a policy framework that dictates what should be done, if anything, with the traffic. It can generate alerts, execute system commands, simply log the occurrence, or take other paths.

To find out how to use Bro as an IDS, click here.

RKHunter

While not technically an intrusion detection system, rkhunter operates on many of the same principles as host-based intrusion detection systems in order to detect rootkits and known malware.

While viruses are rare in the Linux world, malware and rootkits do exist that can compromise your box or allow an attacker continued access after a successful exploit. RKHunter downloads a list of known exploits and then checks your system against its database. It also alerts you if it detects unsafe settings in some common applications.

You can check out this article to learn how to use RKHunter on Ubuntu.

General Security Advice

While the above tools and configurations can help you secure portions of your system, good security does not come from just implementing a tool and forgetting about it. Good security manifests itself in a certain mindset and is achieved through diligence, scrutiny, and engaging in security as a process.

There are some general rules that can help set you in the right direction in regards to using your system securely.

Pay Attention to Updates and Update Regularly

Software vulnerabilities are found all of the time in just about every kind of software that you might have on your system. Distribution maintainers generally do a good job of keeping up with the latest security patches and pushing those updates into their repositories.

However, having security updates available in the repository does your server no good if you have not downloaded and installed the updates. Although many servers benefit from relying on stable, well-tested versions of system software, security patches should not be put off and should be considered critical updates.

Most distributions provide security mailing lists and separate security repositories that let you download and install only security patches.
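On Debian or Ubuntu systems, for example, keeping current is a two-command habit:

```shell
# Refresh the package index, then install the available upgrades
sudo apt-get update
sudo apt-get upgrade

# The equivalent on CentOS/Fedora/RHEL-based systems:
# sudo yum update
```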

Take Care When Downloading Software Outside of Official Channels

Most users will stick with the software available from the official repositories for their distribution, and most distributions offer signed packages. Users generally can trust the distribution maintainers and focus their concern on the security of software acquired outside of official channels.

You may choose to trust packages from your distribution or software that is available from a project’s official website, but be aware that unless you are auditing each piece of software yourself, there is risk involved. Most users feel that this is an acceptable level of risk.

On the other hand, software acquired from random repositories and PPAs that are maintained by people or organizations that you don’t recognize can be a huge security risk. There are no set rules, and the majority of unofficial software sources will likely be completely safe, but be aware that you are taking a risk whenever you trust another party.

Make sure you can explain to yourself why you trust the source. If you cannot do this, consider weighing your security risk as more of a concern than the convenience you’ll gain.

Know your Services and Limit Them

Although the entire point of running a server is likely to provide services that you can access, limit the services running on your machine to those that you use and need. Consider every enabled service to be a possible threat vector and try to eliminate as many threat vectors as you can without affecting your core functionality.

This means that if you are running a headless (no monitor attached) server and don’t run any graphical (non-web) programs, you should disable and probably uninstall your X display server. Similar measures can be taken in other areas. No printer? Disable the “lp” service. No Windows network shares? Disable the “samba” service.

You can discover which services you have running on your computer through a variety of means. This article covers how to detect enabled services under the “create a list of requirements” section.
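A quick way to see what is actually listening for connections is the ss utility (netstat on older systems); anything here that you do not recognize deserves a closer look:

```shell
# List listening TCP and UDP sockets, numerically, with the owning process
sudo ss -plunt

# Equivalent on older systems that ship netstat instead of ss:
# sudo netstat -plunt
```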

Do Not Use FTP; Use SFTP Instead

This might be a hard one for many people to come to terms with, but FTP is a protocol that is inherently insecure. All authentication is sent in plain-text, meaning that anyone monitoring the connection between your server and your local machine can see your login details.

There are only a few instances where FTP is probably okay to implement. If you are running an anonymous, public, read-only download mirror, FTP is a decent choice. Another case where FTP is an okay choice is when you are simply transferring files between two computers that are behind a NAT-enabled firewall and you trust your network to be secure.

In almost all other cases, you should use a more secure alternative. The SSH suite comes complete with an alternative protocol called SFTP that operates in a similar way on the surface, but is built on the same security as the SSH protocol.

This allows you to transfer information to and from your server in the same way that you would traditionally use FTP, but without the risk. Most modern FTP clients can also communicate with SFTP servers.

To learn how to use SFTP to transfer files securely, check out this guide.
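A session looks much like classic command-line FTP (the user and host below are placeholders):

```shell
# Connect over SSH; nothing beyond the SSH daemon is required server-side
sftp demouser@your_server_domain.com

# Inside the session, the commands mirror FTP:
#   ls             list remote files
#   get file.txt   download a file
#   put file.txt   upload a file
#   bye            end the session
```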

Implement Sensible User Security Policies

There are a number of steps that you can take to better secure your system when administering users.

One suggestion is to disable root logins. Since the root user is present on any POSIX-like system and is an all-powerful account, it is an attractive target for many attackers. Disabling root logins is often a good idea after you have configured sudo access, or if you are comfortable using the su command. Many people disagree with this suggestion, but examine whether it is right for you.

It is possible to disable remote root logins within the SSH daemon; to restrict local logins, you can make restrictions in the /etc/securetty file. You can also set the root user’s shell to a non-shell to disable root shell access, and set up PAM rules to restrict root logins as well. RedHat has a great article on how to disable root logins.
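As a sketch of the SSH daemon approach (be certain a sudo-capable user can already log in before applying this, or you may lock yourself out):

```shell
# Set PermitRootLogin to "no" in the SSH daemon configuration
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config

# Reload the daemon to apply the change (the service is named sshd on some distributions)
sudo service ssh restart
```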

Another good policy to implement with user accounts is to create unique accounts for each user and service, and to give them only the bare minimum permissions needed to get the job done. Lock down everything that they don’t need access to and take away all privileges short of crippling them.

This is an important policy because if one user or service gets compromised, it doesn’t lead to a domino effect that allows the attacker to gain access to even more of the system. This system of compartmentalization helps you to isolate problems, much like a system of bulkheads and watertight doors can help prevent a ship from sinking when there is a hull breach.

In a similar vein to the services policies we discussed above, you should also take care to disable any user accounts that are no longer necessary. This may happen when you uninstall software, or if a user should no longer have access to the system.

Pay Attention to Permission Settings

File permissions are a huge source of frustration for many users. Finding a balance for permissions that allow you to do what you need to do while not exposing yourself to harm can be difficult and demands careful attention and thought in each scenario.

Setting up a sane umask policy (the property that defines default permissions for new files and directories) can go a long way in creating good defaults. You can learn about how permissions work and how to adjust your umask value here.
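To illustrate: the umask is a mask of permission bits removed from the base modes (666 for new files, 777 for new directories), so a umask of 027 produces mode 640 files and mode 750 directories. A quick sketch you can try in a scratch directory:

```shell
# Work in a throwaway directory so nothing important is affected
dir=$(mktemp -d)
cd "$dir"

# With a umask of 027, group loses write access and others lose everything
umask 027
touch file.txt
mkdir subdir

# GNU stat prints the octal mode: 640 for the file, 750 for the directory
stat -c '%a' file.txt
stat -c '%a' subdir
```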

In general, you should think twice before setting anything to be world-writeable, especially if it is accessible in any way to the internet. This can have extreme consequences. Additionally, you should not set the SGID or SUID bit in permissions unless you absolutely know what you are doing. Also, check that your files have an owner and a group.
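The find command can audit for these risky cases; a sketch (scanning / can take a while, so you may prefer to start with a specific directory):

```shell
# World-writable regular files on the local filesystem
sudo find / -xdev -type f -perm -0002

# Files with the SUID or SGID bit set
sudo find / -xdev -type f \( -perm -4000 -o -perm -2000 \)

# Files whose owner or group no longer exists
sudo find / -xdev \( -nouser -o -nogroup \)
```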

Your file permissions settings will vary greatly based on your specific usage, but you should always try to see if there is a way to get by with fewer permissions. This is one of the easiest things to get wrong and an area where there is a lot of bad advice floating around on the internet.

Regularly Check for Malware on your Servers

While Linux is generally less targeted by malware than Windows, it is by no means immune to malicious software. In conjunction with implementing an IDS to detect intrusion attempts, scanning for malware can help identify traces of activity that indicate that illegitimate software is installed on your machine.

There are a number of malware scanners available for Linux systems that can be used to regularly validate the integrity of your servers. Linux Malware Detect, also known as maldet or LMD, is one popular option that can be easily installed and configured to scan for known malware signatures. It can be run manually to perform one-off scans and can also be daemonized to run regularly scheduled scans. Reports from these scans can be emailed to the server administrators.
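If you go with LMD, the typical workflow is to update signatures and then scan a path (a sketch; the maldet command comes from installing Linux Malware Detect itself, which is distributed outside most distributions’ repositories):

```shell
# Fetch the latest malware signature set
sudo maldet -u

# Scan a directory tree and report anything suspicious
sudo maldet -a /home

# Review a previous scan report by its ID
# sudo maldet --report SCANID
```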

How To Secure the Specific Software you are Using

Although this guide is not large enough to go through the specifics of securing every kind of service or application, there are many tutorials and guidelines available online. You should read the security recommendations of every project that you intend to implement on your system.

Furthermore, popular server software like web servers or database management systems have entire websites and databases devoted to security. In general, you should read up on and secure every service before putting it online.

You can check our security section for more specific advice for the software you are using.

Conclusion

You should now have a decent understanding of general security practices you can implement on your Linux server. While we’ve tried hard to mention many areas of high importance, at the end of the day, you will have to make many decisions on your own. When you administer a server, you have to take responsibility for your server’s security.

This is not something that you can configure in one quick spree in the beginning; it is a process and an ongoing exercise in auditing your system, implementing solutions, evaluating logs and alerts, and reassessing your needs. You need to be vigilant in protecting your system and always be evaluating and monitoring the results of your solutions.

How To Install an SSL Certificate from a Commercial Certificate Authority

Introduction

This tutorial will show you how to acquire and install an SSL certificate from a trusted, commercial Certificate Authority (CA). SSL certificates allow web servers to encrypt their traffic, and also offer a mechanism to validate server identities to their visitors. The main benefit of using a purchased SSL certificate from a trusted CA, over self-signed certificates, is that your site’s visitors will not be presented with a scary warning about not being able to verify your site’s identity.

This tutorial covers how to acquire an SSL certificate from the following trusted certificate authorities:

  • GoDaddy
  • RapidSSL (via Namecheap)

You may also use any other CA of your choice.

After you have acquired your SSL certificate, we will show you how to install it on Nginx and Apache HTTP web servers.

Prerequisites

There are several prerequisites that you should satisfy before attempting to obtain an SSL certificate from a commercial CA. This section will cover what you will need in order to be issued an SSL certificate from most CAs.

Money

SSL certificates that are issued by commercial CAs have to be purchased. The best free alternative is a certificate issued by Let’s Encrypt. Let’s Encrypt is a new certificate authority that issues free SSL/TLS certificates that are trusted in most web browsers.

Registered Domain Name

Before acquiring an SSL certificate, you must own or control the registered domain name that you wish to use the certificate with. If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.).

Domain Validation Rights

For the basic domain validation process, you must have access to one of the email addresses on your domain’s WHOIS record or to an “admin type” email address at the domain itself. Certificate authorities that issue SSL certificates will typically validate domain control by sending a validation email to one of the addresses on the domain’s WHOIS record, or to a generic admin email address at the domain itself. Some CAs provide alternative domain validation methods, such as DNS- or HTTP-based validation, which are outside the scope of this guide.

If you wish to be issued an Organization Validation (OV) or Extended Validation (EV) SSL certificate, you will also be required to provide the CA with paperwork to establish the legal identity of the website’s owner, among other things.

Web Server

In addition to the previously mentioned points, you will need a web server to install the SSL certificate on. This is the server that is reachable at the domain name for which the SSL certificate will be issued. Typically, this will be an Apache HTTP, Nginx, HAProxy, or Varnish server. If you need help setting up a web server that is accessible via your registered domain name, follow these steps:

  1. Set up a web server of your choice. For example, a LEMP (Nginx) or LAMP (Apache) server–be sure to configure the web server software to use the name of your registered domain
  2. Configure your domain to use the appropriate nameservers. If your web server is hosted on DigitalOcean, this guide can help you get set up: How To Point to DigitalOcean’s Nameservers from Common Domain Registrars
  3. Add DNS records for your web server to your nameservers. If you are using DigitalOcean’s nameservers, follow this guide to learn how to add the appropriate records: How To Set Up a Host Name with DigitalOcean

Choose Your Certificate Authority

If you are not sure which Certificate Authority to use, there are a few important factors to consider. At an overview level, the most important thing is that the CA you choose provides the features you want at a price that you are comfortable with. This section will focus more on the features that most SSL certificate buyers should be aware of, rather than prices.

Root Certificate Program Memberships

The most crucial point is that the CA you choose is a member of the root certificate programs of the most commonly used operating systems and web browsers, i.e. it is a “trusted” CA, and its root certificate is trusted by common browsers and other software. If your website’s SSL certificate is signed by a “trusted” CA, its identity is considered valid by software that trusts that CA. This is in contrast to self-signed SSL certificates, which also provide encryption capabilities but are accompanied by identity validation warnings that are off-putting to most website visitors.

Most commercial CAs that you will encounter will be members of the common root CA programs, and will claim compatibility with 99% of browsers, but it does not hurt to check before making your certificate purchase. For example, Apple publishes its list of trusted SSL root certificates for iOS 8.

Certificate Types

Ensure that you choose a CA that offers the certificate type that you require. Many CAs offer variations of these certificate types under a variety of, often confusing, names and pricing structures. Here is a short description of each type:

  • Single Domain: Used for a single domain, e.g. example.com. Note that additional subdomains, such as www.example.com, are not included
  • Wildcard: Used for a domain and any of its subdomains. For example, a wildcard certificate for *.example.com can also be used for www.example.com and store.example.com
  • Multiple Domain: Known as a SAN or UC certificate, these can be used with multiple domains and subdomains that are added to the Subject Alternative Name field. For example, a single multi-domain certificate could be used with example.com, www.example.com, and example.net
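To see what the Subject Alternative Name field of a multi-domain certificate looks like, you can generate a throwaway self-signed certificate locally and inspect it. This is only a sketch: it requires OpenSSL 1.1.1 or newer (for the -addext and -ext options), and all domain names are examples.

```shell
# Generate a throwaway self-signed cert with several Subject Alternative Names
# (a stand-in for a real multi-domain certificate from a CA)
openssl req -x509 -newkey rsa:2048 -nodes -keyout san.key -out san.crt -days 1 \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com,DNS:example.net" \
  2>/dev/null

# Print just the SAN extension -- every name listed is covered by the certificate
openssl x509 -in san.crt -noout -ext subjectAltName
```

A real SAN certificate from a CA works the same way: clients accept the certificate for any name listed in this extension.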

In addition to the aforementioned certificate types, there are different levels of validations that CAs offer. We will cover them here:

  • Domain Validation (DV): DV certificates are issued after the CA validates that the requestor owns or controls the domain in question
  • Organization Validation (OV): OV certificates can be issued only after the issuing CA validates the legal identity of the requestor
  • Extended Validation (EV): EV certificates can be issued only after the issuing CA validates the legal identity, among other things, of the requestor, according to a strict set of guidelines. The purpose of this type of certificate is to provide additional assurance of the legitimacy of your organization’s identity to your site’s visitors. EV certificates can be single or multiple domain, but not wildcard

This guide will show you how to obtain a single domain or wildcard SSL certificate from GoDaddy and RapidSSL, but obtaining the other types of certificates is very similar.

Additional Features

Many CAs offer a large variety of “bonus” features to differentiate themselves from the rest of the SSL certificate-issuing vendors. Some of these features can end up saving you money, so it is important that you weigh your needs against the offerings carefully before making a purchase. Examples of features to look out for include free certificate reissues, or a single domain-priced certificate that works for both www. and the domain basename, e.g. www.example.com with a SAN of example.com.

Generate a CSR and Private Key

After you have all of your prerequisites sorted out, and you know the type of certificate you want to get, it’s time to generate a certificate signing request (CSR) and private key.

If you are planning on using Apache HTTP or Nginx as your web server, use openssl to generate your private key and CSR on your web server. In this tutorial, we will just keep all of the relevant files in our home directory but feel free to store them in any secure location on your server:

cd ~

To generate a private key, called example.com.key, and a CSR, called example.com.csr, run this command (replace the example.com with the name of your domain):

openssl req -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

At this point, you will be prompted for several lines of information that will be included in your certificate request. The most important part is the Common Name field, which should match the name that you want to use your certificate with: for example, example.com, www.example.com, or (for a wildcard certificate request) *.example.com. If you are planning on getting an OV or EV certificate, ensure that all of the other fields accurately reflect your organization or business details.

For example:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:sammy@example.com
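If you prefer to skip the interactive prompts, the same fields can be passed on the command line with the -subj option. The values below mirror the example answers above; substitute your own:

```shell
# Generate the private key and CSR non-interactively; all field values are examples
openssl req -newkey rsa:2048 -nodes \
  -keyout example.com.key -out example.com.csr \
  -subj "/C=US/ST=New York/L=New York/O=My Company/CN=example.com" \
  2>/dev/null

# Confirm the subject that was embedded in the request
openssl req -in example.com.csr -noout -subject
```

This is handy for scripting, but double-check the printed subject before submitting the CSR to your CA.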

This will generate a .key and a .csr file. The .key file is your private key, and should be kept secure. The .csr file is what you will send to the CA to request your SSL certificate.

You will need to copy and paste your CSR when submitting your certificate request to your CA. To print the contents of your CSR, use this command (replace the filename with your own):

cat example.com.csr
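Before submitting the CSR, it is worth confirming that it really was generated from your private key: the RSA modulus embedded in both files must be identical. The check below is a sketch that generates a fresh stand-in pair so it can be run anywhere; with a real request, skip the first command and point the comparison at your existing example.com.key and example.com.csr.

```shell
# Stand-in key/CSR pair so the check can be demonstrated; skip this line
# if you already have a real example.com.key and example.com.csr
openssl req -newkey rsa:2048 -nodes -keyout example.com.key \
  -out example.com.csr -subj "/CN=example.com" 2>/dev/null

# The MD5 fingerprint of the modulus must be identical for the key and the CSR
csr_md5=$(openssl req -noout -modulus -in example.com.csr | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in example.com.key | openssl md5)

if [ "$csr_md5" = "$key_md5" ]; then
    echo "CSR matches private key"
else
    echo "MISMATCH: CSR was not generated from this key" >&2
fi
```

A mismatch usually means the CSR was generated against a different (perhaps older) key file; the certificate a CA issues from that CSR would not work with your key.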

Now we are ready to buy a certificate from a CA. We will show two examples, GoDaddy and RapidSSL via Namecheap, but feel free to get a certificate from any other vendor.

Example CA 1: RapidSSL via Namecheap

Namecheap provides a way to buy SSL certificates from a variety of CAs. We will walk through the process of acquiring a single domain certificate from RapidSSL, but you can deviate if you want a different type of certificate.

Note: If you request a single domain certificate from RapidSSL for the www subdomain of your domain (e.g. www.example.com), they will issue the certificate with a SAN of your base domain. For example, if your certificate request is for www.example.com, the resulting certificate will work for both www.example.com and example.com.

Select and Purchase Certificate

Go to Namecheap’s SSL certificate page: https://www.namecheap.com/security/ssl-certificates.aspx.

Here you can start by selecting your validation level, certificate type (“Domains Secured”), or CA (“Brand”).

For our example, we will click on the Compare Products button in the “Domain Validation” box. Then we will find “RapidSSL”, and click the Add to Cart button.

At this point, you must register or log in to Namecheap. Then finish the payment process.

Request Certificate

After paying for the certificate of your choice, go to the Manage SSL Certificates link, under the “HiUsername” section.

[Image: Namecheap: SSL]

Here, you will see a list of all of the SSL certificates that you have purchased through Namecheap. Click on the Activate Now link for the certificate that you want to use.

[Image: Namecheap: SSL Management]

Now select the software of your web server. This will determine the format of the certificate that Namecheap will deliver to you. Commonly selected options are “Apache + MOD SSL”, “nginx”, or “Tomcat”.

Paste your CSR into the box then click the Next button.

You should now be at the “Select Approver” step in the process, which will send a validation request email to an address in your domain’s WHOIS record or to an administrator type address of the domain that you are getting a certificate for. Select the address that you want to send the validation email to.

Provide the “Administrative Contact Information”. Click the Submit order button.

Validate Domain

At this point, an email will be sent to the “approver” address. Open the email and approve the certificate request.

Download Certificates

After approving the certificate, the certificate will be emailed to the Technical Contact. The certificate issued for your domain and the CA’s intermediate certificate will be at the bottom of the email.

Copy and save them to your server in the same location that you generated your private key and CSR. Name the certificate with the domain name and a .crt extension, e.g. example.com.crt, and name the intermediate certificate intermediate.crt.

The certificate is now ready to be installed on your web server.
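Before installing it, you can sanity-check the certificate's subject, issuer, and validity dates with openssl. This is a sketch: the self-signed certificate generated in the first command is only a stand-in so the inspection can be demonstrated; with a real certificate, run only the final command against your example.com.crt.

```shell
# Stand-in certificate for demonstration; skip this line with a real
# CA-issued example.com.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout example.com.key \
  -out example.com.crt -days 90 -subj "/CN=example.com" 2>/dev/null

# Show who the certificate was issued to, who issued it, and when it expires
openssl x509 -in example.com.crt -noout -subject -issuer -dates
```

The subject CN should match the Common Name from your CSR, and the notBefore/notAfter dates should cover the term you purchased.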

Example CA 2: GoDaddy

GoDaddy is a popular CA, and has all of the basic certificate types. We will walk through the process of acquiring a single domain certificate, but you can deviate if you want a different type of certificate.

Select and Purchase Certificate

Go to GoDaddy’s SSL certificate page: https://www.godaddy.com/ssl/ssl-certificates.aspx.

Scroll down and click on the Get Started button.

[Image: GoDaddy: Get started]

Select the type of SSL certificate that you want from the drop down menu: single domain, multidomain (UCC), or wildcard.

[Image: GoDaddy: Certificate Type]

Then select your plan type: domain, organization, or extended validation.

Then select the term (duration of validity).

Then click the Add to Cart button.

Review your current order, then click the Proceed to Checkout button.

Complete the registration and payment process.

Request Certificate

After you complete your order, click the SSL Certificates button (or click on My Account > Manage SSL Certificates in the top-right corner).

Find the SSL certificate that you just purchased and click the Set Up button. If you have not used GoDaddy for SSL certificates before, you will be prompted to set up the “SSL Certificates” product, and associate your recent certificate order with the product (Click the green Set Up button and wait a few minutes before refreshing your browser).

After the “SSL Certificates” Product is added to your GoDaddy account, you should see your “New Certificate” and a “Launch” button. Click on the Launch button next to your new certificate.

Provide your CSR by pasting it into the box. The SHA-2 algorithm will be used by default.

Tick the I agree checkbox, and click the Request Certificate button.

Validate Domain

Now you will have to verify that you have control of the domain, and provide GoDaddy with a few documents. GoDaddy will send a domain ownership verification email to the address that is on your domain’s WHOIS record. Follow the directions in the emails that are sent to you, and authorize the issuance of the certificate.

Download Certificate

After verifying to GoDaddy that you control the domain, check your email (the one that you registered with GoDaddy with) for a message that says that your SSL certificate has been issued. Open it, and follow the download certificate link (or click the Launch button next to your SSL certificate in the GoDaddy control panel).

Now click the Download button.

Select the server software that you are using from the Server type dropdown menu (if you are using Apache HTTP or Nginx, select “Apache”), then click the Download Zip File button.

Extract the ZIP archive. It should contain two .crt files: your SSL certificate (which should have a random name) and the GoDaddy intermediate certificate bundle (gd_bundle-g2-1.crt). Copy both to your web server. Rename the certificate to the domain name with a .crt extension, e.g. example.com.crt, and rename the intermediate certificate bundle intermediate.crt.

The certificate is now ready to be installed on your web server.

Install Certificate On Web Server

After acquiring your certificate from the CA of your choice, you must install it on your web server. This involves adding a few SSL-related lines to your web server software configuration.

We will cover basic Nginx and Apache HTTP configurations on Ubuntu 14.04 in this section.

We will assume the following things:

  • The private key, SSL certificate, and, if applicable, the CA’s intermediate certificates are located in a home directory at /home/sammy
  • The private key is called example.com.key
  • The SSL certificate is called example.com.crt
  • The CA intermediate certificate(s) are in a file called intermediate.crt
  • If you have a firewall enabled, be sure that it allows port 443 (HTTPS)

Note: In a real environment, these files should be stored somewhere that only the user that runs the web server master process (usually root) can access. The private key should be kept secure.
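For example, you can restrict the key file so that only its owner can read or write it. The sketch below uses a stand-in file so the commands can be demonstrated; apply the same chmod to your real example.com.key.

```shell
# Stand-in for the real private key file
touch example.com.key

# Owner read/write only; group and world get no access
chmod 600 example.com.key

# The permission string should now read -rw-------
ls -l example.com.key | cut -c1-10
```

In production you would also move the key under a root-owned directory (such as /etc/ssl/private) and chown it to root.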

Nginx

If you want to use your certificate with Nginx on Ubuntu 14.04, follow this section.

With Nginx, if your CA included an intermediate certificate, you must create a single “chained” certificate file that contains your certificate and the CA’s intermediate certificates.

Change to the directory that contains your private key, certificate, and the CA intermediate certificates (in the intermediate.crt file). We will assume that they are in your home directory for the example:

cd ~

Assuming your certificate file is called example.com.crt, use this command to create a combined file called example.com.chained.crt (replace example.com with your own domain):

cat example.com.crt intermediate.crt > example.com.chained.crt
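The order inside the chained file matters: your certificate must come first, followed by the intermediates. Since openssl x509 reads only the first certificate in a file, you can use it to confirm that the leaf is on top. The sketch below generates stand-in certificates so the check can be run anywhere; with real files, run only the cat and the final openssl command.

```shell
# Stand-ins for the real files; skip these two commands if you already
# have a real example.com.crt and intermediate.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout leaf.key \
  -out example.com.crt -days 1 -subj "/CN=example.com" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout int.key \
  -out intermediate.crt -days 1 -subj "/CN=Example Intermediate CA" 2>/dev/null

# Your certificate first, then the intermediate(s)
cat example.com.crt intermediate.crt > example.com.chained.crt

# Prints the subject of the FIRST certificate in the file -- it should be yours
openssl x509 -in example.com.chained.crt -noout -subject
```

If the printed subject is the intermediate CA rather than your domain, the files were concatenated in the wrong order and Nginx will serve a broken chain.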

Now go to your Nginx server block configuration directory. Assuming that it is located at /etc/nginx/sites-enabled, use this command to change to it:

cd /etc/nginx/sites-enabled

Assuming you want to add SSL to your default server block file, open the file for editing:

sudo vi default

Find the listen directive and modify it so it looks like this:

    listen 443 ssl;

Then find the server_name directive, and make sure that its value matches the common name of your certificate. Also, add the ssl_certificate and ssl_certificate_key directives to specify the paths of your certificate and private key files (replace the paths with the actual locations of your files):

    server_name example.com;
    ssl_certificate /home/sammy/example.com.chained.crt;
    ssl_certificate_key /home/sammy/example.com.key;

To allow only the most secure SSL protocols and ciphers, add the following lines to the file:

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

If you want HTTP traffic to redirect to HTTPS, you can add this additional server block at the top of the file (replace example.com with your own domain):

server {
    listen 80;
    server_name example.com;
    rewrite ^/(.*) https://example.com/$1 permanent;
}

Then save and quit.

Now restart Nginx to load the new configuration and enable TLS/SSL over HTTPS!

sudo service nginx restart

Test it out by accessing your site via HTTPS, e.g. https://example.com.

Apache

If you want to use your certificate with Apache on Ubuntu 14.04, follow this section.

Make a backup of your configuration file by copying it. Assuming your server is running on the default virtual host configuration file, /etc/apache2/sites-available/000-default.conf, use these commands to make a copy:

cd /etc/apache2/sites-available
cp 000-default.conf 000-default.conf.orig

Then open the file for editing:

sudo vi 000-default.conf

Find the <VirtualHost *:80> entry and modify it so your web server will listen on port 443:

<VirtualHost *:443>

Then add the ServerName directive, if it doesn’t already exist (substitute your domain name here):

ServerName example.com

Then add the following lines to specify your certificate and key paths (substitute your actual paths here):

SSLEngine on
SSLCertificateFile /home/sammy/example.com.crt
SSLCertificateKeyFile /home/sammy/example.com.key

If you are using Apache 2.4.8 or greater, the SSLCertificateChainFile directive is deprecated; instead, append the CA intermediate bundle to your certificate file so that SSLCertificateFile serves the full chain (substitute the paths):

cat /home/sammy/intermediate.crt >> /home/sammy/example.com.crt

If you are using an older version of Apache, specify the CA intermediate bundle with this line (substitute the path):

SSLCertificateChainFile /home/sammy/intermediate.crt

At this point, your server is configured to listen on HTTPS only (port 443), so requests to HTTP (port 80) will not be served. To redirect HTTP requests to HTTPS, add the following to the top of the file (substitute the name in both places):

<VirtualHost *:80>
   ServerName example.com
   Redirect permanent / https://example.com/
</VirtualHost>

Save and exit.

Enable the Apache SSL module by running this command:

sudo a2enmod ssl

Now restart Apache to load the new configuration and enable TLS/SSL over HTTPS!

sudo service apache2 restart

Test it out by accessing your site via HTTPS, e.g. https://example.com. You will also want to try connecting via HTTP, e.g. http://example.com, to ensure that the redirect is working properly!

Conclusion

Now you should have a good idea of how to add a trusted SSL certificate to secure your web server. Be sure to shop around for a CA that you are happy with!