Agile for a constantly changing world

In my years in software development, I have experienced many projects that ran into challenges keeping to their schedules and meeting their deadlines.

One major reason is constantly changing requirements.

There is this saying:

“In life, as well as in software development, the only thing that is constant is change.”

You may ask: why are the requirements always changing? There are several reasons:

  • The clients or users are not sure what they want.
  • They have difficulty stating all they want and know.
  • Many details of what they want will only be revealed during development.
  • The details are overwhelmingly complex for people.
  • As they see the product develop, they change their minds.
  • External forces (such as a competitor’s product or service) lead to changes or enhancements in requests.

Luckily, a group of industry experts created a revolutionary way to do software development. They called this revolutionary methodology “Agile”.


Agile development methods apply timeboxed iterative and evolutionary development and adaptive planning, promote evolutionary delivery, and include other values and practices that encourage agility—a rapid and flexible response to change.


If agile methods have a motto, it is embrace change. If agile methods have a strategic point, it is maneuverability.

To understand it fully, read the Agile Manifesto at http://agilemanifesto.org/


The Agile Manifesto is based on twelve principles:

  1. Customer satisfaction by rapid delivery of useful software
  2. Welcome changing requirements, even late in development
  3. Working software is delivered frequently (weeks rather than months)
  4. Close, daily cooperation between business people and developers
  5. Projects are built around motivated individuals, who should be trusted
  6. Face-to-face conversation is the best form of communication (co-location)
  7. Working software is the principal measure of progress
  8. Sustainable development, able to maintain a constant pace
  9. Continuous attention to technical excellence and good design
  10. Simplicity—the art of maximizing the amount of work not done—is essential
  11. Self-organizing teams
  12. Regular adaptation to changing circumstances

There is much to learn about Agile and how to implement it, and this post is not meant to be a comprehensive guide. It is an introductory note that a method already exists to address the common problem of changing requirements in software development.

Today, when I encounter a requirement change in a project, I do not complain anymore. Instead, I adapt to it.

Now is the time to adapt to the changing world. Start being Agile!


Using your camera to search for an image or item in Google

Have you ever passed by an item in a store or a mall and wanted to know its name and other information, like what it is for, how much it costs, and where else you can buy it?

Or been on a trip to a museum, or on vacation somewhere foreign, and seen an artifact, a piece of art, or perhaps a historical landmark whose background you wanted to know?

Or wanted to check on a book in a bookstore, or some new food items in a grocery store?

Well, there is an app that will make your life much easier by getting you information using just your phone camera. Point your phone camera at something and Google Goggles will find out all about it for you.


While tapping the search box on the home screen and typing a word or phrase is straightforward, Google Goggles provides a different way to search: by using the camera. This has many benefits, and one of its great features is its ability to recognise famous landmarks. Just point your device at the landmark and in seconds Goggles has analysed the scene and recognised it. It then shows one or more links to information on the web, Wikipedia entries and so on.

Google Goggles can also recognise books and DVDs. Just point the camera at the item and it scans it, then provides a series of links at the foot of the screen, such as the author, artist and so on. It can recognise barcodes on products, and by using these to identify the item, you can follow the links to discover where you can buy it online and who has the best shopping deals.

Finally, it recognises text and can add contacts by scanning business cards, and paintings and photos are recognised too. It’s a handy way to search.

How to Use:

1: Choose the network


If you want to use Goggles when you are out and might not have a Wi-Fi connection, select the first option. Bear in mind that it uploads photos and uses a fair amount of data.

2: Point and scan


Point the camera at something and hold it steady for a second. If it recognises items in the frame, it automatically takes a snapshot, but you can also tap the camera button.

3: View the results


One or more items in the image may be recognised. Here it has detected three and displayed them as a series of links at the bottom of the screen.

4: Start a search


Tapping one of the recognised items at the bottom performs a web search. Here we tapped on the photo, which it recognised, and there are Wikipedia entries and more.

5: Share your results


You can share your results with your friends and family, in case they have the same interest or have asked you to research that item.

6: Scan barcodes


Any item with a barcode can be scanned and this enables the item to be instantly recognised.

7: Go shopping


It assumes you might be interested in buying this item and it therefore shows links to places where you can buy it online. There are options to show videos, images and more.

So before your next trip to the mall, around the streets, or to another country, get Google Goggles installed and ready to bring out the best information that search can provide at a click of the camera shutter.

A perfect time to start a career in SEO

Looking for a relatively new career or field to enter in IT, or should I say one where the current players are not so far ahead that it would be hard to reach their level within a few years? I suggest you venture in and invest some time learning SEO, which stands for Search Engine Optimization.


Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine’s search results. Generally, the higher a site ranks on the search results page, the more visitors it will receive from the search engine’s users.

The practice has existed since the mid-1990s, when webmasters began optimizing sites for the first search engines, and it soon became a career after Google came into the picture and became users’ leading search engine of choice.


You would probably wonder why I say that the people already working in this field are not far ahead of newbies just entering it, given that the field has existed for more than a decade.

Well, the answer lies in what happened in February 2011, when Google released the Panda update. Google’s search algorithms were optimized and updated, and the update was designed to filter low-quality web pages out of the index. It left webmasters and site owners not knowing why their sites had been penalized and had lost their rankings in the search results.

Then, on April 24, 2012, Google unleashed Penguin. This update completely changed the way people need to think about SEO; just about everything that had been taught about SEO over the previous ten years was thrown in the trash can.

Another update came on May 22, 2013: Penguin 2.0. This was another major update to Penguin itself and raised the standard much higher.

And finally there was Hummingbird, an update to Google’s algorithm implemented around August 20, 2013. By that time, all webmasters and SEO specialists were back at square one, figuring out how to make their sites rank well again in the search results.

Now is the perfect time for anyone who wants to enter this field and make a career out of it, while the field is, in effect, back in its infancy.

How to get started in Android Development

One of the challenges of shifting to another programming language or platform is figuring out how to set up the right environment and tools for that language or platform.

For Android, the people at Google made this task as painless as possible. To get started developing for Android, you only need to download the ADT (Android Developer Tools) Bundle from http://developer.android.com/sdk/index.html#download

This bundle includes the following:

Eclipse
an integrated development environment used for Android development. Because Eclipse is also written in Java, you can install it on a PC, a Mac, or a Linux computer. The Eclipse user interface follows the “native look-and-feel” of your machine, so it may not look exactly the same on every computer.

Android Developer Tools
a plug-in for Eclipse. As of this writing (July 2014), the current version of ADT (Android Developer Tools) is 23.0.2. You should make sure you have that version or higher.

Android SDK
the latest version of the Android SDK tools and Android platform-tools for debugging and testing your apps, plus a system image for the Android emulator that lets you create and test your apps on different virtual devices.

After downloading, you just need to extract the bundle to your local drive and run the Eclipse executable located in the eclipse folder inside the bundle.

That simple!

The only extra effort needed, beyond the steps above, is when the Java Runtime Environment (JRE) installed on your local machine is not compatible with the minimum requirement of the Android SDK. In that case, you only need to upgrade or install the latest JRE (check https://www.java.com/en/download/).
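
As a minimal sketch of the whole setup on a Linux machine (the bundle filename below is an example; yours will match the version you downloaded):

$ java -version ### verify the installed JRE meets the SDK's minimum requirement
$ unzip adt-bundle-linux-x86_64.zip -d ~/android ### extract the ADT Bundle to your local drive
$ ~/android/adt-bundle-linux-x86_64/eclipse/eclipse & ### launch the bundled Eclipse IDE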

The next stop in this journey is to learn Android development itself.

Checking Disk Space in Linux

Linux offers the following commands to check disk space usage:

Linux commands to check disk space

  1. df command – Shows the amount of disk space used and available on Linux file systems.
  2. du command – Displays the amount of disk space used by the specified files and for each subdirectory.
  3. btrfs fi df /device/ – Shows disk space usage information for a btrfs based mount point/file system.

Checking disk space with the df command

  1. Open the terminal.
  2. The basic syntax for df is:
    df [options] [devices]
  3. To check disk space, type:
    df
    df -H

Sample outputs:

Fig.01: df command in action

The items in square brackets are optional. You can simply type the df command (i.e. no arguments) to see a table of disk usage for every mounted filesystem on the system.

See information about a specific filesystem

You can give a device or mount point as an argument, and df will report data only for the filesystem physically residing on that device. For example, the following commands provide information for the device /dev/sda, the partition /dev/sdc1, and the mount point /data/:
$ df /dev/sda
$ df -h /dev/sdc1
$ df /data/

Sample outputs:

Filesystem      1K-blocks     Used  Available Use% Mounted on
/dev/sda       2930266584 69405248 2859579472   3% /data

Understanding df command output

The valid fields are as follows:

Display name   Valid field name (for --output)   Description
Filesystem     source                            The source of the mount point, usually a device.
1K-blocks      size                              Total number of blocks.
Used           used                              Number of used blocks.
Available      avail                             Number of available blocks.
Use%           pcent                             Percentage of USED divided by SIZE.
Mounted on     target                            The mount point.

You can pass the output format defined by ‘valid field name’ as follows:
$ df --output=field1,field2,...
$ df --output=source,used,avail /data/

Sample outputs:

Filesystem                    Used Avail
/dev/md0                      5.4G  115G
udev                             0   11M
tmpfs                         6.2M  414M
tmpfs                         4.1k  1.1G
tmpfs                         4.1k  5.3M
tmpfs                            0  1.1G
/dev/md2                      818G  688G
tmpfs                            0  210M
tmpfs                            0  210M
/dev/mapper/cryptvg-mybackup   77G  526G

To print all available fields, enter:
$ df --output
Sample outputs:

Filesystem     Type     Inodes  IUsed  IFree IUse%  1K-blocks     Used      Avail Use% File Mounted on
udev           devtmpfs 379248    333 378915    1%      10240        0      10240   0% -    /dev
tmpfs          tmpfs    381554    498 381056    1%     610488     9704     600784   2% -    /run
/dev/sdc1      ext3     956592 224532 732060   24%   14932444  7836056    6331204  56% -    /
tmpfs          tmpfs    381554      1 381553    1%    1526216        0    1526216   0% -    /dev/shm
tmpfs          tmpfs    381554      4 381550    1%       5120        0       5120   0% -    /run/lock
tmpfs          tmpfs    381554     14 381540    1%    1526216        0    1526216   0% -    /sys/fs/cgroup
/dev/sda       btrfs         0      0      0     - 2930266584 69405248 2859579472   3% -    /data
tmpfs          tmpfs    381554      4 381550    1%     305244        0     305244   0% -    /run/user/0

Express df output in human readable form

Pass the -h option to see output in human readable format. You will see device sizes in megabytes, gigabytes, or terabytes:
$ df -h ### Human format
$ df -m ### Show output in one-megabyte blocks
$ df -k ### Show output in one-kilobyte blocks (default)

Display output using inode usage instead of block usage

An inode is a data structure on a Linux file system that stores all information about a file. To list inode information, enter:
$ df -i
$ df -i -h

Sample outputs:

Filesystem     Inodes IUsed IFree IUse% Mounted on
udev             371K   333  371K    1% /dev
tmpfs            373K   498  373K    1% /run
/dev/sdc1        935K  220K  715K   24% /
tmpfs            373K     1  373K    1% /dev/shm
tmpfs            373K     4  373K    1% /run/lock
tmpfs            373K    14  373K    1% /sys/fs/cgroup
/dev/sda            0     0     0     - /data
tmpfs            373K     4  373K    1% /run/user/0

Find out the type of each file system displayed

Pass the -T option to display the type of each filesystem listed, such as ext4, btrfs, ext2, nfs4, fuse, cgroup, cpuset, and more:
$ df -T
$ df -T -h
$ df -T -h /data/

Sample outputs:

Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda       btrfs  2.8T   67G  2.7T   3% /data

Limit listing to file systems of given type

The syntax is:
$ df -t ext3 #Only see ext3 file system
$ df -t ext4 #Only see ext4 file system
$ df -t btrfs #Only see btrfs file system

Exclude given file system type

To list all filesystems but exclude ext2, pass the -x TYPE option:
$ df -x ext2

Show all file systems

Pass the -a or --all option to the df command to include filesystems that have a size of zero blocks in the output:
$ df -a

Filesystem      1K-blocks     Used  Available Use% Mounted on
sysfs                   0        0          0    - /sys
proc                    0        0          0    - /proc
udev                10240        0      10240   0% /dev
devpts                  0        0          0    - /dev/pts
tmpfs              610488     9708     600780   2% /run
/dev/sdc1        14932444  7836084    6331176  56% /
securityfs              0        0          0    - /sys/kernel/security
tmpfs             1526216        0    1526216   0% /dev/shm
tmpfs                5120        0       5120   0% /run/lock
tmpfs             1526216        0    1526216   0% /sys/fs/cgroup
cgroup                  0        0          0    - /sys/fs/cgroup/systemd
pstore                  0        0          0    - /sys/fs/pstore
cgroup                  0        0          0    - /sys/fs/cgroup/cpuset
cgroup                  0        0          0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                  0        0          0    - /sys/fs/cgroup/blkio
cgroup                  0        0          0    - /sys/fs/cgroup/memory
cgroup                  0        0          0    - /sys/fs/cgroup/devices
cgroup                  0        0          0    - /sys/fs/cgroup/freezer
cgroup                  0        0          0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                  0        0          0    - /sys/fs/cgroup/perf_event
systemd-1               -        -          -    - /proc/sys/fs/binfmt_misc
fusectl                 0        0          0    - /sys/fs/fuse/connections
debugfs                 0        0          0    - /sys/kernel/debug
mqueue                  0        0          0    - /dev/mqueue
hugetlbfs               0        0          0    - /dev/hugepages
/dev/sda       2930266584 69405248 2859579472   3% /data
rpc_pipefs              0        0          0    - /run/rpc_pipefs
tmpfs              305244        0     305244   0% /run/user/0
binfmt_misc             0        0          0    - /proc/sys/fs/binfmt_misc

These file systems are omitted by default.

Getting more help about the df command

Pass the --help option to see a brief help message:
$ df --help
Or read its man page by typing the following command:
$ man df

Say hello to the du command

The du command is very useful for tracking down disk space hogs: the directories and files that consume large amounts of space on a disk. The basic syntax is:
du
du /path/to/dir
du [options] [directories and/or files]

To see the names and space consumption of each of the directories including all subdirectories in the directory tree, enter:
$ du
Sample outputs:

16	./.aptitude
12	./.ssh
56	./apcupsd
8	./.squidview
4	./kernel.build
12	./.elinks
8	./.vim
8	./.config/htop
12	./.config
648	.

The first column is expressed in kilobytes (file size) and the second column is the filename or directory name.

See du output in human readable format

Pass the -h option to display size in K (kilobytes), M (megabytes), G (gigabytes) instead of the default kilobytes:
$ du -h
Sample outputs:

16K	./.aptitude
12K	./.ssh
56K	./apcupsd
8.0K	./.squidview
4.0K	./kernel.build
12K	./.elinks
8.0K	./.vim
8.0K	./.config/htop
12K	./.config
648K	.

Finding information about specific directory trees or files

To find out /etc/ directory space usage, enter:
# du /etc/
# du -h /etc/

The following will report the sizes of the three files named hdparm, iptunnel, and ifconfig that are located in the /sbin directory:
$ du /sbin/hdparm /sbin/iptunnel /sbin/ifconfig
$ du -h /sbin/hdparm /sbin/iptunnel /sbin/ifconfig

Sample outputs:

112K	/sbin/hdparm
24K	/sbin/iptunnel
72K	/sbin/ifconfig

How do I summarize disk usage for a given directory?

Pass the -s option to the du command. In this example, we ask du to report only the total disk space occupied by a directory tree and to suppress subdirectories:
# du -s /etc/
# du -sh /etc/

Sample outputs:

6.3M	/etc/

Pass the -a (all) option to see all files, not just directories:
# du -a /etc/
# du -a -h /etc/

Sample outputs:

4.0K	/etc/w3m/config
4.0K	/etc/w3m/mailcap
12K	/etc/w3m
4.0K	/etc/ConsoleKit/run-seat.d
4.0K	/etc/ConsoleKit/seats.d/00-primary.seat
8.0K	/etc/ConsoleKit/seats.d
4.0K	/etc/ConsoleKit/run-session.d
20K	/etc/ConsoleKit
...
....
..
...
4.0K	/etc/ssh/ssh_host_rsa_key
4.0K	/etc/ssh/ssh_host_rsa_key.pub
4.0K	/etc/ssh/ssh_host_dsa_key
244K	/etc/ssh/moduli
4.0K	/etc/ssh/sshd_config
272K	/etc/ssh
4.0K	/etc/python/debian_config
8.0K	/etc/python
0	/etc/.pwd.lock
4.0K	/etc/ldap/ldap.conf
8.0K	/etc/ldap
6.3M	/etc/

You can also use the star (*) wildcard, which will match any characters. For example, to see the size of each PNG file in the current directory, enter:
$ du -ch *.png

 52K	CIQTK4FUAAAbjDw.png-large.png
 68K	CX23RezWEAA0QY8.png-large.png
228K	CY32cShWkAAaNLD.png-large.png
 12K	CYaQ3JqU0AA-amA.png-large.png
136K	CYywxDfU0AAP2py.png
172K	CZBoXO1UsAAw3zR.png-large.png
384K	Screen Shot 2016-01-19 at 5.49.21 PM.png
324K	TkamEew.png
8.0K	VQx6mbH.png
 64K	fH7rtXE.png
 52K	ipv6-20-1-640x377.png
392K	unrseYB.png
1.8M	total

The -c option tells du to display a grand total.

Putting it all together

To list the top 10 directories eating disk space in /etc/, enter:
# du -a /etc/ | sort -n -r | head -n 10
Sample outputs:

8128	/etc/
928	/etc/ssl
904	/etc/ssl/certs
656	/etc/apache2
544	/etc/apache2/mods-available
484	/etc/init.d
396	/etc/php5
336	/etc/sane.d
308	/etc/X11
268	/etc/ssl/certs/ca-certificates.crt
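
Going a step further, here is a minimal sketch of a disk-space alert (it assumes GNU df with --output support; the 90% threshold is an arbitrary example) that warns about nearly full filesystems:

$ df -h --output=pcent,target | awk 'NR>1 && int($1) > 90 {print "WARNING:", $2, "is", $1, "full"}'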

For more information on the du command, type:
$ man du
$ du --help

Dealing with btrfs file system

For a btrfs filesystem, use the btrfs fi df command to see space usage information for a mount point. The syntax is:

btrfs filesystem df /path/
btrfs fi df /dev/path
btrfs fi df [options] /path/

Examples

# btrfs fi df /data/
# btrfs fi df -h /data/

Sample outputs:

Data, RAID1: total=71.00GiB, used=63.40GiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=4.00GiB, used=2.29GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

To see raw numbers in bytes, run:
# btrfs fi df -b /data/
OR
# btrfs fi df -k /data/ ### show sizes in KiB ##
# btrfs fi df -m /data/ ### show sizes in MiB ##
# btrfs fi df -g /data/ ### show sizes in GiB ##
# btrfs fi df -t /data/ ### show sizes in TiB ##


How To Set Up a Host Name with DigitalOcean

Setup

Before you get started, you do need to have the following:

  • A Droplet (virtual private server) from DigitalOcean. If you don’t have one, you can register and set one up in under a minute.
  • A registered domain name. As yet, you cannot register a domain through DigitalOcean.

Step One—Look Up Information with WHOIS

The first thing you need to do to set up your host name is to change your domain name server to point to the DigitalOcean name servers. You can do this through your domain registrar’s website. If you do not remember where you registered your name, you can look it up using “WHOIS”, a protocol that displays a site’s identifying information, such as the IP address and registration details.

Open up the command line and type:

whois example.com

WHOIS will display all of the details associated with the site, including the Technical Contact, which includes your domain registrar.

Step Two—Change Your Domain Server

Access the control panel of your domain registrar and find the fields called “Domain Name Server.”

Point your name servers to DigitalOcean by filling in the three Domain Name Server fields. Once done, save your changes and exit.

The DigitalOcean name servers are:

  • ns1.digitalocean.com
  • ns2.digitalocean.com
  • ns3.digitalocean.com

You can verify that the new name servers are registered by running WHOIS again; the output should include the updated information:

Domain Name: EXAMPLE.COM
   Registrar: ENOM, INC.
   Whois Server: whois.enom.com
   Referral URL: http://www.enom.com
   Name Server: NS1.DIGITALOCEAN.COM
   Name Server: NS2.DIGITALOCEAN.COM
   Name Server: NS3.DIGITALOCEAN.COM
   Status: ok

Although the name servers are visible through WHOIS, it may take an hour or two for the changes to be reflected on your site.
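
You can also query DNS directly to watch the change take effect. As a quick sketch, assuming the dig utility (from the dnsutils or bind-utils package) is installed:

dig NS example.com +short

Once the switch has propagated, this should list the three digitalocean.com name servers.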

Step Three—Configure your Domain

Now we need to move into the DigitalOcean control panel.

Within the Networking section, click on Add Domain, and fill in the domain name field and the IP address of the server you want to connect it to on the subsequent page. Note: the domain name does not have a www at the beginning.


You will reach a page where you can enter all of your site details. To make a new hostname, you only need to fill in the A record. If you are using an IPv6 address, you should enter it into the AAAA record.

A Records: Use this space to enter the IP address of the droplet that you want to host your domain name on, plus the host name itself, a name prepended to your domain name. For example:

test.example.com

To accomplish this, create a new hostname with the word “test” in the hostname field.

Save by clicking “Add new A record.”

You can also connect your IP to a domain name with nothing before it (this should also occur by default when you add a domain):

http://example.com

To accomplish this, create a new hostname with the symbol “@” in the hostname field.

You can save by pressing enter after making the required changes on the line.

AAAA Records: Use this space to enter the IPv6 address of the droplet that you want to host your domain name on, plus the host name itself, a name prepended to your domain name. As with A records, you can also connect your IPv6 address to the bare domain by creating a new hostname with the symbol “@” in the hostname field.

Save by clicking “Create.”

CNAME Records: The CNAME record works as an alias of the A record, pointing a subdomain to an A record; if the A record’s IP address changes, the CNAME will follow to the new address. To prepend www to your URL, click on “Add a new CNAME record” and fill out the two fields.


You can also set up a catchall or wildcard CNAME record that will direct any subdomain to the designated A record (for example, if a visitor accidentally types in wwww instead of www). This can be accomplished with an asterisk in the CNAME name field.


If you need to set up a mail server on your domain, you can do so in the MX Records.

MX Records: The MX Records fields should be filled in with the hostname of your mail server and its priority, a value designating the order in which the mail servers should be tried. Records always end with a “.”. A generic MX record looks something like this: mail1.example.com.

Below is an example of MX records set up for a domain that uses Google mail servers (note the period at the end of each record):
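
For illustration, these are the classic Google Apps MX values (check Google’s current documentation for the authoritative list):

1  ASPMX.L.GOOGLE.COM.
5  ALT1.ASPMX.L.GOOGLE.COM.
5  ALT2.ASPMX.L.GOOGLE.COM.
10 ALT3.ASPMX.L.GOOGLE.COM.
10 ALT4.ASPMX.L.GOOGLE.COM.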

Finish Up

Once you have filled in all of the required fields, your information will take a while to propagate, and the Name Server information will be filled in automatically. Your domain name should be up and supported in a few hours.

You can confirm, after some time has passed, that the new host name has been registered by pinging it:

ping test.example.com

You should see something like:

# ping test.example.com
PING test.example.com (12.34.56.789) 56(84) bytes of data.
64 bytes from 12.34.56.789: icmp_req=1 ttl=63 time=1.47 ms
64 bytes from 12.34.56.789: icmp_req=2 ttl=63 time=0.674 ms

You should also be able to access the site in the browser.

How To Install and Setup Postfix on Ubuntu 14.04

Introduction

Postfix is a very popular open source Mail Transfer Agent (MTA) that can be used to route and deliver email on a Linux system. It is estimated that around 25% of public mail servers on the internet run Postfix.

In this guide, we’ll teach you how to get up and running quickly with Postfix on an Ubuntu 14.04 server.

Prerequisites

In order to follow this guide, you should have a Fully Qualified Domain Name pointed at your Ubuntu 14.04 server. You can find help on setting up your domain name with DigitalOcean by clicking here.

Install the Software

The installation process of Postfix on Ubuntu 14.04 is easy because the software is in Ubuntu’s default package repositories.

Since this is our first operation with apt in this session, we’re going to update our local package index and then install the Postfix package:

sudo apt-get update
sudo apt-get install postfix

You will be asked what type of mail configuration you want to have for your server. For our purposes, we’re going to choose “Internet Site” because the description is the best match for our server.

Next, you will be asked for the Fully Qualified Domain Name (FQDN) for your server. This is your full domain name (like example.com). Technically, a FQDN is required to end with a dot, but Postfix does not need this. So we can just enter it like:

example.com

The software will now be configured using the settings you provided. This takes care of the installation, but we still have to configure other items that we were not prompted for during installation.

Configure Postfix

We are going to need to change some basic settings in the main Postfix configuration file.

Begin by opening this file with root privileges in your text editor:

sudo nano /etc/postfix/main.cf

First, we need to find the myhostname parameter. During the configuration, the FQDN we selected was added to the mydestination parameter, but myhostname remained set to localhost. We want to point this to our FQDN too:

myhostname = example.com

If you would like to configure mail to be forwarded to other domains, or wish to deliver to addresses that don’t map 1-to-1 with system accounts, we can remove the alias_maps parameter and replace it with virtual_alias_maps. We would then need to change the location of the hash to /etc/postfix/virtual:

virtual_alias_maps = hash:/etc/postfix/virtual

As we said above, the mydestination parameter has been modified with the FQDN you entered during installation. This parameter holds any domains that this installation of Postfix is going to be responsible for. It is configured for the FQDN and the localhost.

One important parameter to mention is the mynetworks parameter. This defines the computers that are able to use this mail server. It should be set to local only (127.0.0.0/8 and the other representations). Modifying it to allow other hosts to use the server is a huge vulnerability that can lead to extreme cases of spam.

To be clear, the line should be set like this. This should be set automatically, but double check the value in your file:

mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128

Configure Additional Email Addresses

We can configure additional email addresses by creating aliases. These aliases can be used to deliver mail to other user accounts on the system.

If you wish to utilize this functionality, make sure that you configured the virtual_alias_maps directive like we demonstrated above. We will use this file to configure our address mappings. Create the file by typing:

sudo nano /etc/postfix/virtual

In this file, you can specify the email addresses that you wish to create on the left-hand side, and the username to deliver the mail to on the right-hand side, like this:

blah@example.com username1

For our installation, we’re going to create a few email addresses and route them to some user accounts. We can also set up certain addresses to forward to multiple accounts by using a comma-separated list:

blah@example.com        demouser
dinosaurs@example.com   demouser
roar@example.com        root
contact@example.com     demouser,root

Save and close the file when you are finished.

Now, we can implement our mapping by calling this command:

sudo postmap /etc/postfix/virtual

Now, we can reload our service to read our changes:

sudo service postfix restart

Test your Configuration

You can test that your server can receive and route mail correctly by sending mail from your regular email address to one of your user accounts on the server or one of the aliases you set up.

Once you send an email to:

demouser@your_server_domain.com

You should get mail delivered to a file that matches the delivery username in /var/mail. For instance, we could read this message by looking at this file:

nano /var/mail/demouser

This will contain all of the email messages, including the headers, in one big file. If you want to consume your email in a more friendly way, you might want to install a few helper programs:

sudo apt-get install mailutils

This will give you access to the mail program that you can use to check your inbox:

mail

This will give you an interface to interact with your mail.
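
As a quick sketch (the address is one of the example aliases from above; substitute your own), you can also send a test message locally with the same mail program:

echo "This is a test." | mail -s "Test message" contact@example.com

The message should be routed according to your virtual mappings and land in the mailboxes of demouser and root.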

Conclusion

You should now have basic email functionality configured on your server.

It is important to secure your server and make sure that Postfix is not configured as an open relay. Mail servers are heavily targeted by attackers because they can send out massive amounts of spam email, so be sure to set up a firewall and implement other security measures to protect your server. You can learn about some security options here.

How To Run Your Own Mail Server

When setting up a web site or application under your own domain, it is likely that you will also want a mail server to handle the domain’s incoming and outgoing email. While it is possible to run your own mail server, it is often not the best option for a variety of reasons.

A typical mail server consists of many software components that provide a specific function. Each component must be configured and tuned to work nicely together and provide a fully-functioning mail server. Because they have so many moving parts, mail servers can become complex and difficult to set up.

Here is a list of required components in a mail server:

  • Mail Transfer Agent
  • Mail Delivery Agent
  • IMAP and/or POP3 Server

In addition to the required components, you will probably want to add these components:

  • Spam Filter
  • AntiVirus
  • Webmail

While some software packages include the functionality of multiple components, the choice of each component is often left up to you. In addition to the software components, mail servers need a domain name, the appropriate DNS records, and an SSL certificate.

Let’s take a look at each component in more detail.

Mail Transfer Agent

A Mail Transfer Agent (MTA), which handles Simple Mail Transfer Protocol (SMTP) traffic, has two responsibilities:

  1. To send mail from your users to an external MTA (another mail server)
  2. To receive mail from an external MTA

Examples of MTA software: Postfix, Exim, and Sendmail.

Mail Delivery Agent

A Mail Delivery Agent (MDA), which is sometimes referred to as the Local Delivery Agent (LDA), retrieves mail from a MTA and places it in the appropriate mail user’s mailbox.

There are a variety of mailbox formats, such as mbox and Maildir. Each MDA supports specific mailbox formats. The choice of mailbox format determines how the messages are actually stored on the mail server which, in turn, affects disk usage and mailbox access performance.

Examples of MDA software: Postfix and Dovecot.

IMAP and/or POP3 Server

IMAP and POP3 are protocols that are used by mail clients, i.e. any software that is used to read email, for mail retrieval. Each protocol has its own intricacies but we will highlight some key differences here.

IMAP is the more complex protocol that allows, among other things, multiple clients to connect to an individual mailbox simultaneously. The email messages are copied to the client, and the original message is left on the mail server.

POP3 is simpler, and moves email messages to the mail client’s computer, typically the user’s local computer, by default.

Examples of software that provide IMAP and/or POP3 server functionality: Courier, Dovecot, Zimbra.

Spam Filter

The purpose of a spam filter is to reduce the amount of incoming spam, or junk mail, that reaches users’ mailboxes. Spam filters accomplish this by applying spam detection rules–which consider a variety of factors such as the server that sent the message, the message content, and so forth–to incoming mail. If a message’s “spam level” reaches a certain threshold, it is marked and treated as spam.

Spam filters can also be applied to outgoing mail. This can be useful if a user’s mail account is compromised, to reduce the amount of spam that can be sent using your mail server.

SpamAssassin is a popular open source spam filter.

Antivirus

Antivirus is used to detect viruses, trojans, malware, and other threats in incoming and outgoing mail. ClamAV is a popular open source antivirus engine.

Webmail

Many users expect their email service to provide webmail access. Webmail, in the context of running a mail server, is basically a mail client that can be accessed by users via a web browser–Gmail is probably the most well-known example of this. The webmail component, which requires a web server such as Nginx or Apache, can run on the mail server itself.

Examples of software that provide webmail functionality: Roundcube and Citadel.


Now that you are familiar with the mail server components that you have to install and configure, let’s look at why maintenance can become overly time-consuming. There are the obvious maintenance tasks, such as continuously keeping your antivirus and spam filtering rules, and all of the mail server components, up to date, but there are some other things you might not have thought of.

Staying Off Blacklists

Another challenge with maintaining a mail server is keeping your server off of the various blacklists, also known as DNSBL, blocklists, or blackhole lists. These lists contain the IP addresses of mail servers that were reported to send spam or junk mail (or for having improperly configured DNS records). Many mail servers subscribe to one or more of these blacklists, and filter incoming messages based on whether the mail server that sent the messages is on the list(s). If your mail server gets listed, your outgoing messages may be filtered and discarded before they reach their intended recipients.

If your mail server gets blacklisted, it is often possible to get it unlisted (or removed from the blacklist). You will want to determine the reason for being blacklisted, and resolve the issue. After this, you will want to look up the blacklist removal process for the particular list that your mail server is on, and follow it.

Troubleshooting is Difficult

Although most people use email every day, it is easy to overlook the fact that it is a complex system that can be difficult to troubleshoot. For example, if your sent messages are not being received, where do you start to resolve the issue? The issue could be caused by a misconfiguration in one of the many mail server components, such as a poorly tuned outgoing spam filter, or by an external problem, such as being on a blacklist.


Now here are some alternatives to running your own mail server. These mail services will probably meet your needs, and will allow you and your applications to send and receive email from your own domain.

This list doesn’t include every mail service; there are many out there, each with their own features and prices. Be sure to choose the one that has the features that you need, at a price that you want.

Easy Alternatives — Postfix for Outgoing Mail

If you simply need to send outgoing mail from an application on your server, you don’t need to set up a complete mail server. You can set up a simple Mail Transfer Agent (MTA) such as Postfix.

You can then configure your application to use sendmail, on your server, as the mail transport for its outgoing messages.
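
As a minimal sketch (the recipient address is a placeholder; /usr/sbin/sendmail is the conventional compatibility interface Postfix provides), a script can hand off a message like this:

printf "Subject: Test\n\nHello from the application.\n" | /usr/sbin/sendmail user@example.com

Postfix reads the message on standard input: headers first, then a blank line, then the body.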

Automating file transfer via SFTP using WinSCP

A few years ago I received a request to automate file transfer between an FTP server and a development system, both of them Windows-based. The FTP server was running FTP over SSL only, which automatically eliminated the built-in Windows command-line FTP client. As I had used WinSCP in the past, I did a quick check to see whether it can be used in batch mode, so that I could create a script and run it from Windows Task Scheduler to automate the whole process.

Prepare environment – application and folders

Here is the folder structure and the script files I created on the destination machine (to which files were transferred):

  • C:\Apps\WinSCP – folder containing the binaries of the portable WinSCP application
  • C:\Data\Scripts – folder containing the batch script that runs the FTP transfer and the script with commands for FTP
  • C:\FTP – transfer folder to which files will be transferred from the remote site
  • C:\Data\Scripts\ftprun.cmd – batch file with the command that starts WinSCP with certain parameters
  • C:\Data\Scripts\ftpscript.txt – text file containing commands for WinSCP; it is passed to WinSCP as a parameter by the ftprun.cmd script

Then download the portable version of WinSCP (http://winscp.net/eng/download.php) and unzip it to the C:\Apps\WinSCP folder.

Create connection profile in WinSCP

First we need to define a connection profile in WinSCP. That connection profile will be used later in ftpscript.txt to establish the connection with the FTP server. To define the connection profile, run winscp.exe from the C:\Apps\WinSCP folder and populate the WinSCP Login window with all the details of the connection (including the password). Once all details are filled in, click Save…

In the Save session as… window, name your session (that name will be used later in ftpscript.txt to open the connection) and make sure the Save password (not recommended) checkbox is ticked. We want the password remembered in the connection profile (WinSCP stores passwords in encrypted form); otherwise we would have to enter a clear-text password in the ftpscript.txt file.

Once the connection profile is saved, WinSCP will show it in the list in the WinSCP Login window. To make sure all details are correct, highlight the connection you just created and click the Login button. WinSCP will then attempt to establish a connection using the parameters just provided.

If the connection is initiated for the first time and the FTP server has been defined as FTP with TLS, expect that WinSCP may request acceptance of the certificate presented by the server. In that case a Warning window will show up with all the details of the certificate (in my case, the default system-generated certificate assigned to the FTP site on IIS). Accept the certificate by clicking Yes, so WinSCP can store the certificate information in its configuration file for future use.

NOTE: It is important to establish the connection to the FTP server with TLS at least once before the transfer task is scheduled. That way we can accept and save the certificate information, so WinSCP will not ask about it later.

Once the certificate is accepted and the other connection details are fine, we should see two panels with files. One panel shows the files on our computer; the second shows the files on the remote FTP server (in this case there is only one file, Test file.txt). That indicates we can establish the connection successfully and that all parameters were entered correctly in the connection profile.

Now that we have a connection profile created and saved in the WinSCP configuration, we can proceed to the next step, in which we will create some scripts in preparation for automated transfers.

Prepare script to run FTP transfer and script with commands for FTP

In here we will create two scripts which will help to automate file transfer with WinSCP.

  • ftprun.cmd – batch file responsible for starting WinSCP with the appropriate parameters
  • ftpscript.txt – list of commands for WinSCP to execute

NOTE: Both files will be placed in C:\Data\Scripts folder.

ftprun.cmd

C:\Apps\WinSCP\winscp.com /script=C:\Data\Scripts\ftpscript.txt

ftpscript.txt

option batch continue
option confirm off
open lab-net-01
lcd C:\FTP
synchronize both -delete
synchronize both C:\FTP /
exit

NOTE: In the ftpscript.txt file, the line open lab-net-01 indicates the connection profile name that will be used to establish the connection.

Before scheduling the script, it is worth checking that it works and gives the expected results. From a Command Prompt you can just run ftprun.cmd and see how it behaves and whether everything is transferred according to plan.

Here is example output of the ftprun.cmd command run before scheduling:

C:\>C:\Data\Scripts\ftprun.cmd
C:\>C:\Apps\WinSCP\winscp.com /script=C:\Data\Scripts\ftpscript.txt
batch           continue
confirm         off
Connecting to 172.16.90.41 ...
Connected with 172.16.90.41, negotiating SSL connection...
SSL connection established. Waiting for welcome message...
Connected
Starting the session...
Reading remote directory...
Session started.
Active session: [1] lab-net-01
C:\FTP
Comparing...
Local 'C:\FTP'  Remote '/'
Synchronizing...
Local 'C:\FTP'  Remote '/'
C:\FTP\Test file.txt      |          0 KiB |    0.0 KiB/s | ascii  | 100%
Comparing...
Local 'C:\FTP'  Remote '/'
Synchronizing...
Local 'C:\FTP'  Remote '/'
Test file.txt             |          0 KiB |    0.0 KiB/s | ascii  | 100%
C:\>

If everything went well, we are ready to create a scheduled task to trigger the file transfer automatically. If the test connection failed for some reason, it needs troubleshooting first.

Creating task in Task Scheduler

In order to schedule the file transfer we will use Task Scheduler, a built-in Windows tool. Task Scheduler is located in Start / All Programs / Accessories / System Tools.

When the Task Scheduler window appears on the screen, locate Create Task… in the Actions panel and click it.

Next, the Create Task window will appear with the General tab active by default. Here we can name the task (e.g. FTP Transfer) and add a description explaining what the task is supposed to do. Make sure the option Run whether user is logged on or not is selected; that will allow the task to run in the background.

Next, click the Triggers tab and then the New… button. That will allow you to create a schedule for the task.

You will see the New Trigger window on the screen. In Begin the task, indicate that we want to run the task On a schedule, then define the schedule for the file transfer; in my example, the schedule runs the task every day, every hour, indefinitely. Also make sure the task will be Enabled (checkbox at the bottom of the New Trigger window).

When you confirm the New Trigger with the OK button, the schedule will be added to the Triggers list.

Now we can move to the Actions tab, where we define which command should be triggered by the task. Click New… to define the command.

In the New Action window, set Action to Start a program. Then, in the Settings section, enter C:\Data\Scripts\ftprun.cmd in the Program/script textbox (this is the batch script that will execute WinSCP with our parameters).

Once the action is defined, click OK and you should see the command added to the Actions list.

All parameters have been defined, and now we can confirm the task configuration by clicking OK. Just after that, the system will request credentials for the task: the username and password of the account we want to run the task under. In my example I just used a domain administrator account.

After the credentials are entered, you can see the task added in Task Scheduler with all the parameters defined.

Now it is only a matter of time to see whether the task runs according to the schedule we defined and whether all files are transferred correctly.
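
Alternatively, as a sketch of doing the same thing without the GUI (the task name, schedule, and account are examples; adjust them to your environment), the task can be created from an elevated Command Prompt with the built-in schtasks tool:

schtasks /Create /TN "FTP Transfer" /TR "C:\Data\Scripts\ftprun.cmd" /SC HOURLY /RU MYDOMAIN\Administrator /RP *

The /RP * switch makes schtasks prompt for the account password instead of taking it on the command line.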

An Introduction to Securing your Linux VPS

Introduction

Taking control of your own Linux server is an opportunity to try new things and leverage the power and flexibility of a great platform. However, Linux server administrators must take the same caution that is appropriate with any network-connected machine to keep it secure and safe.

There are many different security topics that fall under the general category of “Linux security” and many opinions as to what an appropriate level of security looks like for a Linux server.

The main thing to take away from this is that you will have to decide for yourself what security protections will be necessary. Before you do this, you should be aware of the risks and the trade offs, and decide on the balance between usability and security that makes sense for you.

This article is meant to help orient you with some of the most common security measures to take in a Linux server environment. This is not an exhaustive list, and does not cover recommended configurations, but it will provide links to more thorough resources and discuss why each component is an important part of many systems.

Blocking Access with Firewalls

One of the easiest steps to recommend to all users is to enable and configure a firewall. Firewalls act as a barrier between the general traffic of the internet and your machine. They look at traffic headed in and out of your server, and decide whether the information should be delivered.

They do this by checking the traffic in question against a set of rules that are configured by the user. Usually, a server will only be using a few specific networking ports for legitimate services. The rest of the ports are unused, and should be safely protected behind a firewall, which will deny all traffic destined for these locations.

This allows you to drop data that you are not expecting, and even to conditionalize the usage of your real services in some cases. Sane firewall rules provide a good foundation for network security.

There are quite a few firewall solutions available. We’ll briefly discuss some of the more popular options below.

UFW

UFW stands for uncomplicated firewall. Its goal is to provide good protection without the complicated syntax of other solutions.

UFW, as well as most Linux firewalls, is actually a front-end to control the netfilter firewall included with the Linux kernel. This is usually a simple firewall to use for people not already familiar with Linux firewall solutions and is generally a good choice.

You can learn how to enable and configure the UFW firewall and find out more by clicking this link.
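
As a minimal sketch of a sane starting policy (assuming an Ubuntu-style system with UFW installed and SSH on its default port 22):

$ sudo ufw default deny incoming ### drop unsolicited inbound traffic
$ sudo ufw allow ssh ### keep port 22 open so you are not locked out
$ sudo ufw enable ### turn the firewall on
$ sudo ufw status verbose ### review the active rules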

IPTables

Perhaps the most well-known Linux firewall solution is iptables. IPTables is another component used to administer the netfilter firewall included in the Linux kernel. It has been around for a long time and has undergone intense security audits to ensure its safety. There is a version of iptables called ip6tables for creating IPv6 restrictions.

You will likely come across iptables configurations during your time administering Linux machines. The syntax can be complicated to grasp at first, but it is an incredibly powerful tool that can be configured with very flexible rule sets.

You can learn more about how to implement some iptables firewall rules on Ubuntu or Debian systems here, or learn how to use iptables on CentOS/Fedora/RHEL-based distros here.
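
For a taste of the syntax, here is a rough sketch of the same idea expressed as iptables rules (the port and policy are examples only, not a complete ruleset):

$ sudo iptables -A INPUT -i lo -j ACCEPT ### always allow loopback traffic
$ sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT ### keep existing connections alive
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT ### allow inbound SSH
$ sudo iptables -P INPUT DROP ### drop everything else by default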

IP6Tables

As mentioned above, iptables is used to manipulate the tables that contain IPv4 rules. If you have IPv6 enabled on your server, you will also need to pay attention to the IPv6 equivalent: ip6tables.

The netfilter firewall that is included in the Linux kernel keeps IPv4 and IPv6 traffic completely separate. These are stored in different tables. The rules that dictate the ultimate fate of a packet are determined by the protocol version that is being used.

What this means to the server’s administrator is that a separate ruleset must be maintained when version 6 is enabled. The ip6tables command shares the same syntax as the iptables command, so implementing the same set of restrictions in the version 6 table is usually straightforward. You must be sure to match traffic directed at your IPv6 addresses, however, for this to work correctly.

NFTables

Although iptables has long been the standard for firewalls in a Linux environment, a new firewall called nftables has recently been added into the Linux kernel. This is a project by the same team that makes iptables, and is intended to eventually replace iptables.

The nftables firewall attempts to implement more readable syntax than that found in its iptables predecessor, and implements IPv4 and IPv6 support in the same tool. While most versions of Linux at this time do not ship with a kernel new enough to implement nftables, it will soon be very commonplace, and you should try to familiarize yourself with its usage.

Using SSH to Securely Login Remotely

When administering a server where you do not have local access, you will need to log in remotely. The standard, secure way of accomplishing this on a Linux system is through a protocol called SSH, which stands for secure shell.

SSH provides end-to-end encryption, the ability to tunnel insecure traffic over a secure connection, X-forwarding (graphical user interface over a network connection), and much more. Basically, if you do not have access to a local connection or out-of-band management, SSH should be your primary way of interacting with your machine.

While the protocol itself is very secure and has undergone extensive research and code review, your configuration choices can either aid or hinder the security of the service. We will discuss some options below.

Password vs SSH-Key Logins

SSH has a flexible authentication model that allows you to sign in using a number of different methods. The two most popular choices are password and SSH-key authentication.

While password authentication is probably the most natural model for most users, it is also the less secure of these two choices. Password logins allow a potential intruder to continuously guess passwords until a successful combination is found. This is known as brute-forcing and can easily be automated by would-be attackers with modern tools.

SSH-keys, on the other hand, operate by generating a secure key pair. A public key is created as a type of test to identify a user. It can be shared publicly without issues, and cannot be used for anything other than identifying a user and allowing a login to the user with the matching private key. The private key should be kept secret and is used to pass the test of its associated public key.

Basically, you can add your public SSH key on a server, and it will allow you to login by using the matching private key. These keys are so complex that brute-forcing is not practical. Furthermore, you can optionally add long passphrases to your key that adds even more security.
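
As a minimal sketch (assuming an OpenSSH client; substitute your own username and server address):

$ ssh-keygen -t rsa -b 4096 ### generate a key pair, optionally protected by a passphrase
$ ssh-copy-id user@your_server_ip ### append the public key to the server's authorized_keys
$ ssh user@your_server_ip ### future logins now authenticate with the key pair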

To learn more about how to use SSH click here, and check out this link to learn how to set up SSH keys on your server.

Implement fail2ban to Ban Malicious IP Addresses

One step that will help with the general security of your SSH configuration is to implement a solution like fail2ban. Fail2ban is a service that monitors log files in order to determine whether a remote system is likely not a legitimate user, and then temporarily bans future traffic from the associated IP address.

Setting up a sane fail2ban policy allows you to flag computers that are continuously trying to log in unsuccessfully, and to add firewall rules that drop their traffic for a set period of time. This is an easy way of hindering commonly used brute-force methods, because attackers will have to take a break for quite a while when banned. That is usually enough to discourage further brute-force attempts.
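
As a sketch on Ubuntu or Debian (the ban time shown is an arbitrary example):

$ sudo apt-get install fail2ban
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local ### keep overrides in a local copy
$ sudo nano /etc/fail2ban/jail.local ### e.g. set bantime = 3600 in the [DEFAULT] section
$ sudo service fail2ban restart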

You can learn how to implement a fail2ban policy on Ubuntu here. There are similar guides for Debian and CentOS here.

Implement an Intrusion Detection System to Detect Unauthorized Entry

One important consideration to keep in mind is developing a strategy for detecting unauthorized usage. You may have preventative measures in place, but you also need to know if they’ve failed or not.

An intrusion detection system, also known as an IDS, catalogs configuration and file details when in a known-good state. It then runs comparisons against these recorded states to find out if files have been changed or settings have been modified.

There are quite a few intrusion detection systems. We’ll go over a few below.

Tripwire

One of the most well-known IDS implementations is tripwire. Tripwire compiles a database of system files and protects its configuration files and binaries with a set of keys. After configuration details are chosen and exceptions are defined, subsequent runs notify of any alterations to the files that it monitors.

The policy model is very flexible, allowing you to shape its properties to your environment. You can then configure tripwire runs via a cron job and even implement email notifications in the event of unusual activity.

Learn more about how to implement tripwire here.

Aide

Another option for an IDS is Aide. Similar to tripwire, Aide operates by building a database and comparing the current system state to the known-good values it has stored. When a discrepancy arises, it can notify the administrator of the problem.

Aide and tripwire both offer similar solutions to the same problem. Check out the documentation and try out both solutions to find out which you like better.

For a guide on how to use Aide as an IDS, check here.

Psad

The psad tool is concerned with a different portion of the system than the tools listed above. Instead of monitoring system files, psad keeps an eye on the firewall logs to try to detect malicious activity.

If a user is trying to probe for vulnerabilities with a port scan, for instance, psad can detect this activity and dynamically alter the firewall rules to lock out the offending user. This tool can register different threat levels and base its response on the severity of the problem. It can also optionally email the administrator.
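
Because psad works by analyzing firewall log messages, your iptables rules need to actually log the traffic you want it to see. A typical setup adds LOG rules at the end of the relevant chains, roughly like this:

    # Log packets that reach the end of the INPUT and FORWARD chains
    # so that psad has data to analyze
    sudo iptables -A INPUT -j LOG
    sudo iptables -A FORWARD -j LOG

    # Check psad's current detection status
    sudo psad -S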

To learn how to use psad as a network IDS, follow this link.

Bro

Another option for a network-based IDS is Bro. Bro is actually a network monitoring framework that can be used as a network IDS or for other purposes like collecting usage stats, investigating problems, or detecting patterns.

The Bro system is divided into two layers. The first layer monitors activity and generates what it considers to be events. The second layer runs the generated events through a policy framework that dictates what, if anything, should be done with the traffic. It can generate alerts, execute system commands, simply log the occurrence, or take other paths.

To find out how to use Bro as an IDS, click here.

RKHunter

While not technically an intrusion detection system, rkhunter operates on many of the same principles as host-based intrusion detection systems in order to detect rootkits and known malware.

While viruses are rare in the Linux world, malware and rootkits do exist that can compromise your box or allow a successful attacker continued access. RKHunter downloads a list of known exploits and then checks your system against that database. It also alerts you if it detects unsafe settings in some common applications.
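
A typical rkhunter session, sketched under the assumption of a default installation, looks something like this:

    # Refresh rkhunter's data files of known threats
    sudo rkhunter --update

    # Record a baseline of current file properties to compare against later
    sudo rkhunter --propupd

    # Run the checks (--sk skips the keypress prompts between test groups)
    sudo rkhunter --check --sk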

You can check out this article to learn how to use RKHunter on Ubuntu.

General Security Advice

While the above tools and configurations can help you secure portions of your system, good security does not come from implementing a tool and then forgetting about it. Good security manifests itself in a certain mindset and is achieved through diligence, scrutiny, and treating security as an ongoing process.

There are some general rules that can help set you in the right direction in regards to using your system securely.

Pay Attention to Updates and Update Regularly

Software vulnerabilities are found all of the time in just about every kind of software that you might have on your system. Distribution maintainers generally do a good job of keeping up with the latest security patches and pushing those updates into their repositories.

However, having security updates available in the repository does your server no good if you have not downloaded and installed the updates. Although many servers benefit from relying on stable, well-tested versions of system software, security patches should not be put off and should be considered critical updates.

Most distributions provide security mailing lists, as well as separate security repositories that let you download and install only the security patches.
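
On a Debian or Ubuntu system, for instance, a routine update pass looks roughly like the following; RPM-based distributions have equivalent commands:

    # Refresh the package index and install available updates
    sudo apt-get update
    sudo apt-get upgrade

    # Optionally, have Ubuntu apply security patches automatically
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades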

Take Care When Downloading Software Outside of Official Channels

Most users will stick with the software available from the official repositories for their distribution, and most distributions offer signed packages. Users generally can trust the distribution maintainers and focus their concern on the security of software acquired outside of official channels.

You may choose to trust packages from your distribution or software that is available from a project’s official website, but be aware that unless you are auditing each piece of software yourself, there is risk involved. Most users feel that this is an acceptable level of risk.

On the other hand, software acquired from random repositories and PPAs that are maintained by people or organizations that you don’t recognize can be a huge security risk. There are no set rules, and the majority of unofficial software sources will likely be completely safe, but be aware that you are taking a risk whenever you trust another party.

Make sure you can explain to yourself why you trust the source. If you cannot, weigh the security risk as a greater concern than whatever convenience you stand to gain.

Know your Services and Limit Them

Although the entire point of running a server is likely to provide services that you can access, limit the services running on your machine to those that you use and need. Consider every enabled service to be a possible threat vector and try to eliminate as many threat vectors as you can without affecting your core functionality.

This means that if you are running a headless (no monitor attached) server and don’t run any graphical (non-web) programs, you should disable and probably uninstall your X display server. Similar measures can be taken in other areas. No printer? Disable the “lp” service. No Windows network shares? Disable the “samba” service.

You can discover which services you have running on your computer through a variety of means. This article covers how to detect enabled services under the “create a list of requirements” section.
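
On a modern systemd-based distribution, a quick audit might look like this (cups is just an example of a service you may not need):

    # List services enabled to start at boot
    systemctl list-unit-files --state=enabled

    # See which processes are actually listening on network ports
    sudo ss -tulpn

    # Stop and disable a service you don't need, e.g. the printing service
    sudo systemctl disable --now cups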

Do Not Use FTP; Use SFTP Instead

This might be a hard one for many people to come to terms with, but FTP is an inherently insecure protocol. All authentication is sent in plain text, meaning that anyone monitoring the connection between your server and your local machine can see your login details.

There are only a few instances where FTP is probably okay to implement. If you are running an anonymous, public, read-only download mirror, FTP is a decent choice. Another acceptable case is when you are simply transferring files between two computers that are behind a NAT-enabled firewall and you trust your network to be secure.

In almost all other cases, you should use a more secure alternative. The SSH suite comes with an alternative protocol called SFTP that, on the surface, operates in a similar way but is built on the same security as the SSH protocol itself.

This allows you to transfer information to and from your server in the same way that you would traditionally use FTP, but without the risk. Most modern FTP clients can also communicate with SFTP servers.
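
A quick sketch of an SFTP session (the username, host, and file names are placeholders):

    # Connect using your normal SSH credentials or keys
    sftp username@your_server_ip

    # Inside the session, transfers work much like FTP
    get remote_file.txt
    put local_file.txt
    bye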

To learn how to use SFTP to transfer files securely, check out this guide.

Implement Sensible User Security Policies

There are a number of steps that you can take to better secure your system when administering users.

One suggestion is to disable root logins. Since the root user is present on virtually all POSIX-like systems and is an all-powerful account, it is an attractive target for many attackers. Disabling root logins is often a good idea after you have configured sudo access, or if you are comfortable using the su command. Many people disagree with this suggestion, but examine whether it is right for you.

It is possible to disable remote root logins within the SSH daemon; to disable local root logins, you can add restrictions to the /etc/securetty file. You can also set the root user's shell to a non-shell to disable root shell access, and set up PAM rules to restrict root logins as well. RedHat has a great article on how to disable root logins.
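
As a minimal sketch, disabling remote root logins over SSH comes down to one directive in the SSH daemon's configuration, followed by a restart:

    # In /etc/ssh/sshd_config: refuse direct root logins over SSH
    PermitRootLogin no

    # Apply the change (the service is "ssh" on Debian/Ubuntu, "sshd" elsewhere)
    sudo systemctl restart sshd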

Another good policy to implement with user accounts is creating unique accounts for each user and service, and giving them only the bare minimum permissions needed to get the job done. Lock down everything they don't need access to, and take away all privileges short of crippling them.

This is an important policy because if one user or service is compromised, it doesn't lead to a domino effect that allows the attacker to gain access to even more of the system. This compartmentalization helps you isolate problems, much like a system of bulkheads and watertight doors can help prevent a ship from sinking when there is a hull breach.

In a similar vein to the services policies we discussed above, you should also take care to disable any user accounts that are no longer necessary. This may happen when you uninstall software, or if a user should no longer have access to the system.
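
A sketch of both policies, using hypothetical account names (on RPM-based systems the nologin shell typically lives at /sbin/nologin instead):

    # Create a dedicated, unprivileged system account for a service
    # (-r: system account; the nologin shell prevents interactive logins)
    sudo useradd -r -s /usr/sbin/nologin myservice

    # Lock the password of an account that should no longer log in...
    sudo usermod -L formeruser

    # ...and remove its shell access as well
    sudo usermod -s /usr/sbin/nologin formeruser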

Pay Attention to Permission Settings

File permissions are a huge source of frustration for many users. Finding a balance for permissions that allow you to do what you need to do while not exposing yourself to harm can be difficult and demands careful attention and thought in each scenario.

Setting up a sane umask policy (the property that defines default permissions for new files and directories) can go a long way in creating good defaults. You can learn about how permissions work and how to adjust your umask value here.

In general, you should think twice before making anything world-writable, especially if it is in any way accessible from the internet, as this can have extreme consequences. Additionally, you should not set the SUID or SGID permission bits unless you absolutely know what you are doing. Also, check that your files have a valid owner and group.
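
A few commands that illustrate these checks; the umask value is only an example, and -xdev keeps find on the local filesystem:

    # Restrictive default permissions for new files (640) and directories (750)
    umask 027

    # Find world-writable regular files
    sudo find / -xdev -type f -perm -0002

    # Find files with no valid owner or group
    sudo find / -xdev \( -nouser -o -nogroup \)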

Your file permissions settings will vary greatly based on your specific usage, but you should always try to see if there is a way to get by with fewer permissions. This is one of the easiest things to get wrong and an area where there is a lot of bad advice floating around on the internet.

Regularly Check for Malware on your Servers

While Linux is generally less targeted by malware than Windows, it is by no means immune to malicious software. In conjunction with implementing an IDS to detect intrusion attempts, scanning for malware can help identify traces of activity indicating that illegitimate software is installed on your machine.

There are a number of malware scanners available for Linux systems that can be used to regularly validate the integrity of your servers. Linux Malware Detect, also known as maldet or LMD, is one popular option that can be easily installed and configured to scan for known malware signatures. It can be run manually to perform one-off scans and can also be daemonized to run regularly scheduled scans. Reports from these scans can be emailed to the server administrators.
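
A typical maldet workflow, sketched under the assumption of a default installation (the scan path and scan ID are placeholders):

    # Update the malware signature database
    sudo maldet -u

    # Run a one-off scan of a directory tree, e.g. user home directories
    sudo maldet -a /home

    # Review the report for a scan ID printed by the scan
    sudo maldet --report SCANID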

How To Secure the Specific Software you are Using

Although this guide is not large enough to go through the specifics of securing every kind of service or application, there are many tutorials and guidelines available online. You should read the security recommendations of every project that you intend to implement on your system.

Furthermore, popular server software like web servers or database management systems have entire websites and databases devoted to security. In general, you should read up on and secure every service before putting it online.

You can check our security section for more specific advice for the software you are using.

Conclusion

You should now have a decent understanding of general security practices you can implement on your Linux server. While we’ve tried hard to mention many areas of high importance, at the end of the day, you will have to make many decisions on your own. When you administer a server, you have to take responsibility for your server’s security.

This is not something that you can configure in one quick spree at the beginning; it is an ongoing process of auditing your system, implementing solutions, evaluating logs and alerts, and reassessing your needs. You need to be vigilant in protecting your system, and always evaluate and monitor the results of your solutions.