Author Archives: astaz3l

How to configure SSH on CentOS

How to configure secure SSH on CentOS

Hello everyone! In this tutorial I will show you how to increase server security by tuning up configuration of SSH.

Before you begin

There are basically two requirements for this tutorial:

  1. You need to have working SSH keys and you must be able to log in to your server with them. After completing this tutorial, SSH keys will be the only way to access the server. If key-based login doesn't work, you will lose access to your server. In order to add a user and configure keys you can follow this tutorial.
  2. Make sure that at least one of the users is in the wheel group (has access to sudo). Root should not be allowed to log in via SSH, so if you block that option and have no sudo user, you won't be able to do much on the server. Follow this tutorial in order to configure sudo (a quick check for both requirements is shown right after this list).
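
Assuming the user is named developer (replace the name and YOUR_SERVER_IP with your own values), the first command below should print a line containing wheel, and the second should log in without asking for a password:

id developer | grep wheel
ssh -o PreferredAuthentications=publickey developer@YOUR_SERVER_IP 'echo key login works'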

Disable password authentication for SSH on CentOS

Login to Your server/Vagrant Box and open SSH daemon configuration file:

sudo vi /etc/ssh/sshd_config

Now we need to find the line for password authentication and change it to:

PasswordAuthentication no

Unfortunately, disabling this option alone still leaves password logins possible through challenge-response (PAM-based) authentication. To fully disable password authentication, make sure this is disabled as well:

ChallengeResponseAuthentication no

We also need to make sure that this line is uncommented. It enables SSH login using a public key:

PubkeyAuthentication yes

Save the file and exit from the editor. In order to apply changes, you need to restart SSH daemon:

service sshd restart
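
Before restarting, it's also worth validating the syntax of sshd_config, since a typo there can lock you out. sshd -t prints nothing when the file is valid:

sudo sshd -t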

After that, try to open a new SSH session in a new window. Do not log out from your current session! If you can't log in with the new session, you can undo the changes from the existing one. If you can log in successfully, you can proceed.

How to secure SSH on CentOS even more?

There are still some things that will help you improve SSH security. Edit the same configuration file as before. Below You will find the configuration options that I usually use for SSH.

Disable root login

PermitRootLogin no

This option will disable root login via ssh. So it means that from now on you won't be able to login to your server as root via ssh.

Allow only specific users to be able to login via SSH

AllowUsers developer

By default you can log in as any user that exists on the system. This can easily be limited to particular users: just give a space-separated list after AllowUsers. The directive might not be present in your config, so you may need to add the line yourself (for instance at the end of the file).

AllowUsers developer vagrant

Enable protocol 2 for ssh

Protocol 2

This option is set by default in most CentOS installation, but just make sure that there's no version 1 instead. It's less secure protocol.

Ignore rhost

IgnoreRhosts yes

It will disable insecure access via RSH.

Disable login for users with empty passwords

PermitEmptyPasswords no

This line will disable login for users that have empty passwords. Make sure that your account has password set, before changing that!

Enable strict mode for ssh

StrictModes yes

SSH will check the permissions on the user's home directory and key files before accepting a login. It should be set to yes, because users may leave their directories or files world-writable. Again, this might be tricky. It's best to change it, restart the SSH daemon and try to log in from a new session. If you have any problems, you can undo the change from the existing session, or set correct permissions and ownership on your .ssh directory and the files inside:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/*
chown -R YOUR_USERNAME:YOUR_USERNAME ~/.ssh

Disable other authentication methods

GSSAPIAuthentication no
KerberosAuthentication no

If you don't plan to login with GSS API, or Kerberos you can disable them as well.

Disable X11 Forwarding

X11Forwarding no

If you don't use X11 you can safely disable it as well.

Show last login

PrintLastLog yes

A nice feature is showing the last successful login after you log in via SSH.

Restart SSH daemon

Remember that after any changes inside the file You need to restart sshd daemon:

sudo service sshd restart

SSH crypto

In addition to changes above that should be applied, you can increase SSH security even more by configuring ciphers and available algorithms (thanks to @Amar for the suggestion:)

This is usually safe to apply, but remember that not every SSH client supports every algorithm. Here you can find a great chart showing which tools support which algorithms. But let's be honest, most of you are probably using OpenSSH, which supports all the changes presented here. However, if you are using a different client and you can't log in to your server afterwards, check that page and enable additional algorithms.

These config options will probably not be listed in your config file. You need to just add them somewhere, like at the end of the file.

Configure server authentication

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key

The server must confirm its identity to the client. There are a bunch of host key algorithms available, but these are the most secure ones.

These lines might already be present in your configuration file, along with other uncommented HostKey entries. Leave only these two enabled and comment out the rest.
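
On CentOS 7 the host keys are normally generated automatically the first time sshd starts, but if the ed25519 key happens to be missing you can generate it manually (the path matches the HostKey line above):

sudo ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''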

Configure key exchange

KexAlgorithms curve25519-sha256@libssh.org

There are many more key exchange algorithms, but this is probably the most secure.

Configure ciphers

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com

Ciphers are used to encrypt the data. As with key exchange, there are multiple algorithms. These are the safest.

MACs - Message Authentication Codes

MACs hmac-ripemd160,hmac-ripemd160-etm@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-128@openssh.com,umac-128-etm@openssh.com

MACs are used for data integrity. Again, line above contains the safest algorithms only.
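
To check which key exchange algorithms, ciphers and MACs your own OpenSSH build supports, you can query the ssh binary:

ssh -Q kex
ssh -Q cipher
ssh -Q mac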

After these changes, don't forget to restart sshd daemon.

Easier way?

You can use our Ansible LAMP on Steroids project to make configuration of your server easier!

If you don't know what Ansible is, check our tutorial first.

Clone our repository and setup your server faster with LAMP on steroids.

CentOS, users, groups, sudo and SSH keys

Create sudo users and groups in CentOS

I will show you how to organise groups and users in CentOS.

As an example we will create account for user named developer. The purpose of this account is:

  • logging via SSH instead of using root account
  • access to sudo command for management tasks
  • write access to website files
  • read access to logs

Create list with available groups

If you are on a fresh system, it's handy to create a list of all available groups and which users belong to them. It helps later on, when you need to decide whether a given group was there from the beginning or can be safely removed. The getent command can help you with that.

getent group > /etc/initial-group-list
cat /etc/initial-group-list

Remove group from the system

If you need to remove group from CentOS simply use following command:

sudo groupdel NAME_OF_THE_GROUP_TO_DELETE

Create new group

In order to create group you need to use groupadd command:

sudo groupadd NAME_OF_THE_NEW_GROUP

I usually add group named www (or www-data, whatever works for you). To this group I add php daemons, nginx workers etc. It makes life easier with writing to files. In order to create such group execute following command:

sudo groupadd www

Create list with available users

Same like with group, I like to have list of initial users. In order to create such list you can use getent too:

getent passwd > /etc/initial-users-list
cat /etc/initial-users-list

Delete user from CentOS

In case you would like to remove any user from the system, use following command:

sudo userdel -r USERNAME_TO_REMOVE

-r flag will remove also his home directory. If you wish to delete the user, but to keep his files, omit this flag.

Create new user in CentOS

Let's create new user developer that we mentioned at the beginning:

sudo adduser developer

and create the password for his account:

sudo passwd developer

If you want to add the developer user to the www group created before, use the usermod command (the -aG flags append the user to a supplementary group without changing the primary group):

sudo usermod -aG www developer

If you want to give this user sudo powers (and you should, if you want to use it instead of root), add it to the wheel group. wheel is a special group in CentOS, configured in the sudoers file: whoever belongs to it gets sudo powers.

sudo usermod -aG wheel developer
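
To double-check the group memberships and sudo access (assuming the developer user from above):

id developer
sudo -l -U developer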

Optional parameters to useradd command

There are lot of additional parameters for useradd command but there are two especially useful.

The first one is helpful when you don't want to create a home directory for the user, meaning the user will not get its own place under /home to store files. This is useful when you are creating a user for a system service like Apache httpd. So in order to create a user with no home directory, use --no-create-home:

sudo useradd httpd --no-create-home

Another useful option is specifying the user's shell. It's handy when you want to cut off the possibility of logging in to the system via SSH, for instance. Add --shell /sbin/nologin to disable login for the given user, like so:

sudo useradd httpd --shell /sbin/nologin --no-create-home

You can use --shell and --no-create-home parameters separately:)

 

How to set up SSH keys for the newly created user?

Each user should have an SSH key pair. It makes life easier, and you should use keys if you want to log in to other servers, use Git etc. To create such a key pair you first need to log in as the user you created. Most probably you are using the root account to execute all commands, but you should never ever log in via SSH as root.

It's much better to create a separate user for system management and use only that account. Log in via SSH to the account you created. In my case it's the developer user, so my command looks like this:

ssh developer@IP_OF_THE_SERVER_HERE

Once you'll be logged in (after providing the password), you can create RSA key pair. Execute following command:

ssh-keygen -t rsa -b 4096

-t rsa means that it will be an RSA key, which is the standard type for SSH keys. The interesting part is the key strength, -b 4096. The default is 2048 bits (3072 in newer OpenSSH releases), but to make the key harder to break I usually go with 4096. It's not strictly necessary, but you should do it. Some services require a minimum key length of 2048, and it's better to create an even longer one.

Generator will ask you some questions, but you should generally confirm them with enter and leave the defaults. When it comes to SSH on the server, I usually don't set the password. It makes life easier in automated scripts etc.

After that private and public key should be generated as expected. You can find them in ~/.ssh directory.

Add authorized key to user

To log in to the server with SSH keys instead of a password, you need to add an authorized key to the developer user. In my opinion it's a must-have, as password login is super risky. Again, been there, done that: I was hacked even though my password was strong. With key-based login even the strongest brute-force attack will fail:)

You need to add your key to ~/.ssh/authorized_keys on the server. If You have ssh-copy-id command available just execute:

ssh-copy-id developer@IP_OF_YOUR_SERVER

Make sure that you are executing this command from your computer, not from the server. If you don't have SSH key created locally, you can generate it in the same way as on the server, by using ssh-keygen command.

If you don't have ssh-copy-id available (for instance on Windows), you can do it manually.

ssh developer@IP_OF_YOUR_SERVER
cd ~/.ssh
vi authorized_keys
# Press "i" to enter insert mode, paste your public key (usually with a right click of the mouse), then type :wq (colon, w, q) to save and quit vi
chmod 600 authorized_keys
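
The key you paste into authorized_keys is the content of your local public key file. Assuming the default RSA key name, you can print it on your own machine and copy the output:

cat ~/.ssh/id_rsa.pub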

So here is how it goes:

  1. ssh to the server as usual with password.
  2. Change location to .ssh directory.
  3. Create authorized_keys file with vi
  4. Paste there your local public key, save the file and quit
  5. Set permissions on authorized_keys.

Test ssh login with keys

Now You can try to log in with Your key.

ssh developer@IP_OF_YOUR_SERVER -i path/to/your/PRIVATE/key/file

You shouldn't be prompted for your account password!

Easier way?

If you don't want to spend your precious time executing each of these commands by hand, you can use Ansible and our LAMP on steroids project to speed things up!

If you don't know what Ansible is - you can read our tutorial about it here.

LAMP on steroids project is available on GitHub here.

Iptables for CentOS

How to secure server with iptables?

Hi there! In this tutorial I would like to show you how to increase server security by using iptables as a firewall. To be honest, not many people actually use iptables or any firewall. I think that this is bad practice, because they allow all traffic to go in and out. You should always limit the possible entry points to your server.

Firewalld vs iptables

Since CentOS 7 we have a new tool called firewalld. It is not really an alternative to iptables; firewalld is a wrapper around iptables. Many people say that it's easier to use, but to be honest I believe it's not flexible enough. Maybe I'm wrong, and I'd love to see an advanced example of how to translate the iptables rules below into firewalld 🙂 If you want to use firewalld instead of iptables, you unfortunately need a different tutorial; here is a great article about firewalld from DigitalOcean.

How to install iptables on CentOS7?

Before we will install iptables, we need to get rid of firewalld first :

sudo yum remove firewalld -y

Next, we can install iptables:

sudo yum install iptables iptables-services -y

iptables-services is a small package with scripts that help us save and restore firewall rules.
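
For example, once your rules are loaded you can persist the current in-memory ruleset to /etc/sysconfig/iptables (note that this overwrites the file with whatever is currently active):

sudo service iptables save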

Secure iptables rules for CentOS

First, let's check if there are any rules by executing following command:

sudo iptables -S

If you will get following output:

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

It means that you allow all traffic to your server, both incoming and outgoing. However, if you have anything more than the output above, copy it to a separate file as a backup.
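
For instance, you can dump the current rules to a file in your home directory:

sudo iptables-save > ~/iptables-backup-$(date +%F).rules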

The easiest way of adding rules is by editing the iptables rules file. Open the file, or create it if it doesn't exist:

sudo vi /etc/sysconfig/iptables

I will describe whole file line by line, but at the bottom of this post you can find whole content that I'm using for iptables.

Opening and closing tags

*filter

The file must contain two markers:

  • start of the ruleset *filter
  • end of the ruleset COMMIT

You need to have both in order to get iptables configured properly. Between these two lines, you can add iptables rules.

Clear all existing rules

-X
-F
-Z

At the very beginning I like to clear all existing rules, in other words enable all traffic. The reason is that I want to be able to apply this file over and over again and always end up with exactly the rules defined in it. No other rules will remain (for instance rules added from the command line).

Allowing loopback

-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT
-A OUTPUT -d 127.0.0.0/8 -j REJECT

The next thing is to allow loopback traffic. These are local connections, and blocking them might cause errors in some services. In addition we reject traffic to 127.0.0.0/8 that doesn't use the lo interface.

Keep established connections

-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

All connections that are currently active should remain untouched. This prevents interrupting established services.

PING command

-A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
-A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
-A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT

In most cases you will want ping to work. These rules allow two things: first, you'll be able to ping your own server; second, you'll be able to run ping from the server itself. Both are usually needed and quite useful.

Protection from PING of Death attack

-N PING_OF_DEATH
-A PING_OF_DEATH -p icmp --icmp-type echo-request -m hashlimit --hashlimit 1/s --hashlimit-burst 10 --hashlimit-htable-expire 300000 --hashlimit-mode srcip --hashlimit-name t_PING_OF_DEATH -j RETURN
-A PING_OF_DEATH -j DROP
-A INPUT -p icmp --icmp-type echo-request -j PING_OF_DEATH

Ping is cool, however you might get attacked with Ping of Death attack. Here is simple protection.

Prevent some nasty attacks

-N PORTSCAN
-A PORTSCAN -p tcp --tcp-flags ACK,FIN FIN -j DROP
-A PORTSCAN -p tcp --tcp-flags ACK,PSH PSH -j DROP
-A PORTSCAN -p tcp --tcp-flags ACK,URG URG -j DROP
-A PORTSCAN -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
-A PORTSCAN -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
-A PORTSCAN -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL ALL -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL NONE -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
-A INPUT -f -j DROP
-A INPUT -p tcp ! --syn -m state --state NEW -j DROP

This is a really nice set of rules that helps prevent port scanning, SYN floods, invalid packets, malformed XMAS packets, NULL packets, etc.

UDP traffic

-A INPUT -p udp --sport 53 -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A INPUT -p udp --sport 123 -j ACCEPT
-A OUTPUT -p udp --dport 123 -j ACCEPT

For UDP I usually open only the ports needed for outgoing traffic (from our server to the outside world). There are two ports that I like to open:

  • 53 - DNS. It's a must if you want to use curl or yum; with it closed you will not resolve any domain names.
  • 123 - NTP. If you are using chrony or ntpd, you need this port for NTP daemon synchronisation.

TCP traffic

# Open TCP ports for incoming traffic
-A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

# Open TCP ports for outgoing traffic
-A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT

With TCP it's more complicated, but not that hard. First, think about what traffic your server needs to initiate (outgoing traffic). I usually allow only SSH, HTTP and HTTPS. Yum requires the HTTP and HTTPS ports for pulling new packages, and you will also need them for wget or curl. Outgoing SSH is not mandatory, but if you want to pull code from git over the SSH protocol, you will need it as well.

I usually enable the same ports for incoming traffic. If you have httpd or nginx installed, you need to open port 80, and if you serve HTTPS you need 443 as well. In addition to these two ports you must open port 22 for SSH. If you block it, you won't be able to access your server!

Block everything else

-A INPUT -j DROP
-A FORWARD -j DROP
-A OUTPUT -j DROP

At the very end, before closing COMMIT tag I add these three rules. So everything that was not specified above will be dropped. Both incoming and outgoing traffic.

How to apply rules?

There are two ways to apply the rules; in both cases, save your changes to the iptables file first. The first method is not permanent, which makes it a good way to test your firewall before applying it for good: if anything goes wrong, you can just restart the server and all traffic will be open again. Make sure you check SSH access with these rules in place. Log out and try to log in again after applying them.

So non permanent way of applying rules is:

sudo iptables-restore < /etc/sysconfig/iptables

Try to check rules with iptables -S to see the difference:) Check if everything is working fine. If so, you can set them permanently. After each server restart, rules will be applied automatically.

sudo systemctl start iptables.service
sudo systemctl enable iptables.service

If you want to reload rules, simply edit the file, add what you need and restart iptables service:

sudo systemctl restart iptables.service

You can use our Ansible LAMP on Steroids project to make configuration of your server easier!

It is based on Ansible. If you don't know what Ansible is, check our tutorial first.

Clone our repository and setup your server faster with LAMP on steroids.

Whole content of iptables rules

*filter

# Clear all iptables rules (everything is open)
-X
-F
-Z

# Allow loopback interface (lo0) and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT
-A OUTPUT -d 127.0.0.0/8 -j REJECT

# Keep all established connections
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow ping
-A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
-A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
-A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT

# Protect from ping of death
-N PING_OF_DEATH
-A PING_OF_DEATH -p icmp --icmp-type echo-request -m hashlimit --hashlimit 1/s --hashlimit-burst 10 --hashlimit-htable-expire 300000 --hashlimit-mode srcip --hashlimit-name t_PING_OF_DEATH -j RETURN
-A PING_OF_DEATH -j DROP
-A INPUT -p icmp --icmp-type echo-request -j PING_OF_DEATH

# Prevent port scanning
-N PORTSCAN
-A PORTSCAN -p tcp --tcp-flags ACK,FIN FIN -j DROP
-A PORTSCAN -p tcp --tcp-flags ACK,PSH PSH -j DROP
-A PORTSCAN -p tcp --tcp-flags ACK,URG URG -j DROP
-A PORTSCAN -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
-A PORTSCAN -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
-A PORTSCAN -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL ALL -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL NONE -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
-A PORTSCAN -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

# Drop fragmented packages
-A INPUT -f -j DROP

# SYN packets check
-A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# Open ports for outgoing UDP traffic
-A INPUT -p udp --sport 53 -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A INPUT -p udp --sport 123 -j ACCEPT
-A OUTPUT -p udp --dport 123 -j ACCEPT


# Open TCP ports for incoming traffic
-A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

# Open TCP ports for outgoing traffic
-A INPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT


# Drop all other traffic
-A INPUT -j DROP
-A FORWARD -j DROP
-A OUTPUT -j DROP

COMMIT

What's next?

We secured our system with basic firewall. That will increase the security of our server. In one of the next episodes we will change the configuration of our SSH and therefore make it more secure.

As always You can use our Ansible playbook for faster provisioning of our server. You can find it on GitHub.

Install MySQL community server on CentOS

How to install and configure latest version of MySQL on CentOS?

Hi there! Today I want to show you how to install latest version of MySQL Community server (5.7.16) on CentOS 7. I will show you how to install it, set root password, configure server and optimize it for performance. Also I will show you how to create databases and assign users to them.

How to install the latest version of MySQL on CentOS

When you try to install MySQL on a bare CentOS, you will actually get MariaDB (a fork of MySQL) instead, in version 5.5.x. It's up to you what you want to use, but I prefer MySQL, especially version 5.7. It brings a great performance boost and a lot of new features compared to 5.5 or 5.6.

If you have any data in MySQL, it's best to create a backup before you change anything. Here is a tutorial on how to perform a backup of MySQL databases.

Before you actually install MySQL, make sure that MariaDB is not installed; you can just remove it. TIP: removing MariaDB will not remove any databases, they will remain on your hard drive.
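
You can first check whether any MariaDB packages are present at all:

rpm -qa | grep -i mariadb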

sudo yum remove mariadb -y

So let's get to installation. First you need to enable MySQL repository. It's easy by adding following file to yum repositories:

sudo vi /etc/yum.repos.d/mysql-community.repo

and place there following content:

[mysql57-community]
name=MySQL 5.7 Community Server
baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.mysql.com/RPM-GPG-KEY-mysql

Save the changes. It will enable MySQL repository where from you can install latest version.

Installation is pretty easy:

sudo yum install mysql-community-server -y

That's it! You have MySQL installed in latest version!

Setting root password in MySQL

Setting root password for MySQL is important. You should not leave your database server unprotected, even if you only plan to use localhost connection.

Before you will start anything, make sure that MySQL is up and running:

sudo systemctl start mysqld

If this is fresh installation of MySQL (MariaDB or older version of MySQL was never installed) , MySQL will generate temporary password.

You can find it by using following command:

sudo grep 'temporary password' /var/log/mysqld.log

You will get a random password, for instance something like wG]_xj8tus. This is the current password for the root account. Keep in mind that this is only a temporary password; it will expire, so you need to change it as soon as possible.

In order to change root password I like to use mysqladmin tool. So execute following command:

mysqladmin -u root -pwG]_xj8tus password NEW_STRONG_ROOT_PASSWORD

You need to replace wG]_xj8tus with the password grepped from the log file. Note that there is no space after the -p argument. Also replace NEW_STRONG_ROOT_PASSWORD with your new root password. Keep in mind that it needs to be a strong password, containing lower and upper case characters, numbers and special characters. MySQL 5.7 comes with a plugin that validates password strength; it won't let you change the password to something that is easy to guess or brute force.
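
If you are curious about the current password policy, you can inspect the validate_password settings (the plugin is loaded by default in a fresh 5.7 installation):

mysql -u root -p -e "SHOW VARIABLES LIKE 'validate_password%';"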

If you are just upgrading MySQL/MariaDB, your root password stays the same. You can of course change it with mysqladmin if you wish.

However if you are upgrading from MySQL/MariaDB version prior to 5.7 it's nice to execute following command:

mysql_upgrade -u root -p

Type your root password. It will check all the data and alter tables for 5.7 rules.

You might ask - how about mysql_secure_installation? You can also use this command for setting root password after installation. However when you install fresh version of MySQL 5.7 there is no test database, anonymous users etc. Everything is clean and secured.

Configuration of MySQL 5.7

I like to tune up default MySQL configuration. With 5.7 version, lot of defaults are set in order to provide high security and performance. However there are few things I like to change. Most of the stuff is stored under /etc/my.cnf file. MySQL is reading configuration from there during startup. You can also change it dynamically, but here I will focus only on this file.

Edit the file first:

sudo vi /etc/my.cnf

There will be some defaults, you can leave them as they are. Most probably, all options here will be missing, so you need to add them under [mysqld] section.

Temporary in-memory tables

tmp-table-size=32M
max-heap-table-size=32M

I increase the size of temporary in-memory tables created by MySQL. If you are doing a lot of advanced GROUP BY queries, you will probably have to increase it further. Make sure you set both tmp-table-size and max-heap-table-size to the same value; the effective limit is the lower of the two.
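
To see whether temporary tables are spilling to disk (a hint that these limits are too low for your workload), you can check the status counters:

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"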

Query caching

Query caching is disabled by default. It caches result sets from most frequent queries to database. It usually give nice performance boost to MySQL server:

query-cache-type=1
query-cache-size=32M
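
After the server has been running for a while, you can check how the query cache is doing:

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"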

InnoDB buffer

Another performance boost is the InnoDB buffer pool. It keeps data in RAM, so reads are much faster. Make sure not to set these values higher than your RAM allows; the pool should be large enough to hold as much of your data as possible.

innodb-buffer-pool-instances=1
innodb-buffer-pool-size=128M

Logging slow queries

Slow queries can kill your server. It's nice to know earlier that something is going on. I usually turn this on and check it frequently for such queries. If there are some complicated / not optimal queries, you should fix them as soon as possible.

slow-query-log=1
slow-query-log-file=/var/lib/mysql/localhost-slow-query.log

TIMESTAMP fields behavior

If you would check MySQL log during startup, you can note that there is a warning regarding TIMESTAMP fields. You can read more about these changes here. If you want to suppress that warning, use following option

explicit-defaults-for-timestamp=1

In order to apply changes, you need to restart MySQL daemon:

sudo systemctl restart mysqld

 

A few notes about playing with the configuration. You must know that it's not easy and it highly depends on the number of tables, queries, inserts and lots of other factors. I usually use two tools. The first is MySQLTuner, which can give you some insights about what to change in order to gain performance, usually just tuning the query cache and the InnoDB buffer size. However, practice shows that it doesn't always suggest correct values; sometimes you get slower performance instead.

So I use another tool, Datadog. You can also use New Relic or any other monitoring tool that generates charts. I usually change one thing at a time, restart the MySQL server and wait around 48h to see if it's actually performing better or not. This might seem like a slow approach, but it's definitely safe:)

But before you start playing with your configuration, make sure that you don't have slow queries or large tables that you scan without indexes. Sometimes just adding an index will reduce query time, with no need to change the MySQL configuration files:)

MySQL group

During installation MySQL creates its own group, mysql. Most of the data files and logs have mysql:mysql ownership, which makes it hard to read the logs as a non-root user; you need sudo to access them. You can make it simpler by adding your user to the mysql group, after which the logs can be read easily. In my case the user is named developer.

sudo usermod -a -G mysql developer

Create databases and users

Probably the most important thing for a MySQL user is how to create databases and users. First log in to MySQL with your root account:

mysql -u root -p

And then start with creating databases. You need to enter following query:

CREATE DATABASE database_name_here;

Replace database_name_here with your database name. I like to derive the database name from the domain, so for instance for blacksaildivision.com the database name would be blacksaildivision. Remember that you can't use a . in a database name.

Once you have your database, you should create dedicated user that only has access to this database. Having separate user per database is good approach from security perspective.

In order to create user in database execute following query:

CREATE USER 'blacksaildivision'@'localhost' IDENTIFIED BY 'NotEasyToGuessPassword123^#';

It will create a blacksaildivision user that can connect to the MySQL server only via localhost, which means no access from outside. After IDENTIFIED BY you type the password for the user you want to create. The same rules apply as for the root password: it can't be easy to guess.

After user is created it's time to give him access to database you created before. Query goes like this:

GRANT ALL PRIVILEGES ON blacksaildivision.* TO 'blacksaildivision'@'localhost';

So you give user full access to blacksaildivision database. .* means that he should have access to all tables in blacksaildivision database. After TO you need to specify user that you created in step before.

To apply privileges, you need to reload them. Fire following query:

FLUSH PRIVILEGES;

And that's it! Create as many databases and users as you wish:) You can test the connection to the database with the new credentials from the command line. First exit the current MySQL session and then use the following command:

mysql -u blacksaildivision -p blacksaildivision

-u stands for user, -p means that user will connect by using password. Last thing is database name you want to connect to.

Start MySQL on system boot

Last thing is to add MySQL to boot list. So after CentOS will start, MySQL will start as well:

sudo systemctl enable mysqld

Remember that you can use our lamponsteroids project based on Ansible that will automate whole server setup:)

Install PHP from source on CentOS

PHP – how to install from source on CentOS

In this tutorial I would like to show you how to install latest version of PHP on CentOS 7. If you are using PHP you most probably will want to have latest version of PHP7. PHP 5 support officially ends this year. Version 7 is now commonly used. It gives lot of performance boost and new features.

Unfortunately, the default version that comes from the CentOS 7 repo is PHP 5.4, so you can't use yum without a custom repo like Remi. Including a custom repository is one way of installing the desired PHP version. Another option is to compile it from source code. This tutorial will show you how to do that. It's not as hard as it might sound:)

Install required tools for compilation

In order to compile PHP from source you need to install few tools and libraries. First you need EPEL repository to be enabled. This repository contains more recent version of packages. Most probably you have it installed already, but just to be sure, execute following command:

sudo yum install epel-release -y

Once you have it installed execute following command to install required packages:

sudo yum install autoconf libtool re2c bison libxml2-devel bzip2-devel libcurl-devel libpng-devel libicu-devel gcc-c++ libmcrypt-devel libwebp-devel libjpeg-devel openssl-devel -y

Download and unpack PHP Source code

Next step is downloading PHP source code. Easiest option is to download it from GitHub PHP releases. Choose the version you would like to install. In my case it's 7.2.3. Copy link to tar.gz archive and execute following commands:

curl -O -L https://github.com/php/php-src/archive/php-7.2.3.tar.gz
tar -zxvf php-7.2.3.tar.gz
cd php-src-php-7.2.3

It will download the archive from GitHub, unpack the sources and change working directory to unpacked sources.

Compile PHP

Now it's time to compile PHP. First we need to build configure command. In order to do that execute following command:

./buildconf --force

Once configure command is created we can use it to configure PHP installation. This process will enable certain PHP extensions such as PDO, FPM, OPCache, GD library etc. If you need any libraries that are not provided here, you can execute ./configure --help option and check if there is something you need. Following command will install PHP with most common extensions:

./configure --prefix=/usr/local/php --enable-fpm --disable-short-tags --with-openssl --with-pcre-regex --with-pcre-jit --with-zlib --enable-bcmath --with-bz2 --enable-calendar --with-curl --enable-exif --with-gd --enable-intl --enable-mbstring --with-mysqli --enable-pcntl --with-pdo-mysql --enable-soap --enable-sockets --with-xmlrpc --enable-zip --with-webp-dir --with-jpeg-dir --with-png-dir

Apart from enabling extensions command above will also set where PHP will be installed. In my case it's /usr/local/php location. If you will want to remove compiled PHP you will simply have to remove entire directory given under --prefix option.

Next it's time to compile PHP. Please be aware that it takes few minutes:

make clean
make
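
If your server has more than one CPU core, you can optionally speed the build up by running make in parallel:

make -j"$(nproc)"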

Install compiled PHP

Once PHP is compiled it is time to install it. Simply execute following command:

sudo make install

PHP Configuration

PHP-FPM setup

Before we will be able to run PHP from Apache we need to set up the PHP-FPM worker. After installation there should be a default PHP-FPM configuration file in the installation directory. We will copy that file and then change it a bit.

cd /usr/local/php/etc
mkdir fpm.d
cp php-fpm.conf.default php-fpm.conf
vi php-fpm.conf

We need to  uncomment/change these lines:

include=etc/fpm.d/*.conf
pid = /var/run/php-fpm.pid
error_log = log/php-fpm.log

COPY EVERYTHING UNDER Pool Definitions TO CLIPBOARD AND REMOVE IT FROM php-fpm.conf FILE
;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;

include=etc/fpm.d/*.conf - by default there is one pool defined directly inside the php-fpm.conf file. The best way to handle it is the same way we handled Apache vhosts: we will keep each pool in a separate file inside the fpm.d directory, which gives better control over the pools. So we need to delete the default pool definition from php-fpm.conf and put it inside fpm.d. The easiest way is just to cut it from this file and paste it into the new one.

Now let's create the file inside fpm.d for our example.com domain:

cd fpm.d
vi example.com.conf

PASTE TEXT FROM CLIPBOARD HERE AND CHANGE THESE LINES:

[www] -> [example_com] //Must be unique per file
user = apache
group = www
listen = 127.0.0.1:9000 //Port must be unique per file
catch_workers_output = yes
slowlog = /var/www/example.com/logs/php-fpm.slow.log
request_slowlog_timeout = 30s
php_flag[display_errors] = off
php_admin_value[error_log] = /var/www/example.com/logs/php-fpm.error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 64M
php_admin_value[open_basedir] = /var/www/example.com/htdocs

Each pool must have different name. So we need to change it from [www] to something else, for instance to domain name. It'll be easier to find the issues inside log files.

We set user and group to the same user as apache to have access to files.

Port will be different per pool. Standard way is to start from port 9000. Next will be 9001 etc.

We will catch errors and log them to file. In addition we set logging for  slow requests.

Nice part is that we can overwrite the settings from php.ini here. So we can overwrite error_log or memory_limit for instance. We should also set open_basedir so PHP will have access only to files inside our htdocs directory. Our server will be more secure with this setting.

php.ini and OPCache configuration

The second thing is the php.ini file. After installation php.ini should be located in /usr/local/php/lib, but that is only the expected location; after compiling from source you won't find anything there, so we need to copy it from the unpacked sources (adjust the path below to wherever you unpacked them).

cd /usr/local/php/lib
cp ~/sources/php-src-php-7.2.3/php.ini-development ./php.ini
vi php.ini

This is pretty large file with lot of configuration settings. Fortunately we only need to change some of the options:

short_open_tag = On
open_basedir = /var/www
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
expose_php = Off
max_execution_time = 30
memory_limit = 64M
date.timezone = Europe/Warsaw
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
post_max_size = 5M
upload_max_filesize = 4M

opcache.enable=1
opcache.memory_consumption=64
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=7000
opcache.validate_timestamps=0 ;set this to 1 on a development machine so code changes are picked up automatically
opcache.fast_shutdown=1

So we set a few things here: enable the short <? tag, limit access to files at the PHP level, disable dangerous functions, adjust the timezone, set max execution time, configure error handling etc. In addition we have enabled OPcache for PHP.

Each one of these options are well commented inside php.ini file. If You don't like the settings here or You need something else, feel free to change it for Your purposes.

Useful shell scripts for PHP

/etc/init.d/php-fpm

As You probably remember during Apache setup we create script so we can use service command to start / stop Apache process. Now we will do the same for PHP-FPM

The PHP source code ships with a ready-made script for that purpose (again, adjust the path to wherever you unpacked the sources).

cd /etc/init.d
cp ~/sources/php-src-php-7.2.3/sapi/fpm/init.d.php-fpm php-fpm
vi php-fpm

Now we need to setup configuration for the file:

prefix=/usr/local/php
exec_prefix=${prefix}

php_fpm_BIN=${exec_prefix}/sbin/php-fpm
php_fpm_CONF=${prefix}/etc/php-fpm.conf
php_fpm_PID=/var/run/php-fpm.pid

Save the file and add executable permission.

chmod +x php-fpm
service php-fpm status
service php-fpm start
service php-fpm status

After that we should have php-fpm process up and running!
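
You can also confirm that it is listening on the port defined in the pool (9000 in the example above):

sudo ss -tlnp | grep :9000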

Add PHP to $PATH

We can do one more thing to make our life easier:) Add PHP executable to PATH, so we'll be able to call php command from every directory.

echo 'pathmunge /usr/local/php/bin' | sudo tee /etc/profile.d/php.sh

Execute such command, log out, log in and You'll be able to execute:

php -v

Setup Apache for PHP-FPM

Now is the time to finally setup Apache for .php files. Let's edit one of the Virtual Hosts now.

vi /usr/local/apache2/conf/vhosts/example.com.conf

<VirtualHost *:80>
    ServerName example.com

    <LocationMatch "^/(.*\.php(/.*)?)$">
        ProxyPass fcgi://127.0.0.1:9000/var/www/example.com/htdocs/$1
    </LocationMatch>

# Rest of the file below

So basically we need to proxy all files with the .php extension to our PHP-FPM process. We also need to make sure PHP-FPM is running and then restart the Apache httpd server:

service php-fpm start
service httpd restart

How to test if PHP is working?

We need to test if our PHP installation works. The easiest way to debug and check what's going on would be to create test.php file inside our /var/www directory.

vi /var/www/example.com/htdocs/test.php

and paste phpinfo() function there:

<?php

phpinfo();

Save the file and open the file in Your browser, assuming that your vagrant setup is correct. For instance http://example.com/test.php or 192.168.99.99/test.php

If everything is OK you should get information about PHP installation. Well done!

What's next

We are one step closer to our LAMP server. The only thing we are missing now is MySQL which we will install in upcoming episodes.

If You are running Ansible for provisioning You can find everything from this series inside my GitHub.

Hardening Apache with Mod Security

Apache hardening with mod_security

In this part of setting up a complete webserver we will harden Apache with the popular mod_security. ModSecurity is a module that works like an application firewall; it protects the app against the most common attacks and vulnerabilities. It's good to have such a thing on the webserver.

I assume that You have Apache already installed, if not check out previous part - How to install Apache on CentOS?

LAMP on steroids

This is part of our series LAMP on steroids. Check the links below to learn how to setup awesome webserver!

  1. Choosing VPS
  2. Install EPEL
  3. Install and configure Apache HTTPD server
  4. Harden Apache with ModSecurity and OWASP Core Rule Set
  5. Install and configure PHP
  6. Install and configure MySQL server
  7. Configure firewall based on iptables
  8. Create developer user and setup SSH key-pair
  9. Configure SSH
  10. Install and configure Varnish to speed up websites
  11. More to come...

How to install mod_security on Apache httpd server

First thing is installation of required tools. We need them to compile mod_security.

yum install automake libtool libxml2-devel

Next thing that we need is to download and decompress mod_security. Download links can be found on official mod_security page.

cd ~/sources
wget https://www.modsecurity.org/tarball/2.9.0/modsecurity-2.9.0.tar.gz
tar -zxvf modsecurity-2.9.0.tar.gz

Now it's time to compile mod_security. When running ./configure we need to pass the paths to the apxs and apr binaries. All of them should be inside the bin directory of the Apache installation path.

cd modsecurity-2.9.0
./autogen.sh
./configure --with-apxs=/usr/local/apache2/bin/apxs --with-apr=/usr/local/apache2/bin/apr-1-config --with-apu=/usr/local/apache2/bin/apu-1-config
make
make install
cp /usr/local/modsecurity/lib/mod_security2.so /usr/local/apache2/modules

If there was no error, mod_security is ready to use. It can be found in /usr/local/modsecurity. We need to copy the generated .so file to the Apache modules directory, which is what the last command above does.

ModSecurity and OWASP rules

mod_security is nothing without rules that tell it which attacks should be blocked. Fortunately there is a great package with lots of rules provided by OWASP. We will use that rule package to harden Apache HTTPD. So let's download, unpack and copy the rules to the Apache configuration directory.

cd ~/sources
wget -O owasp.tar.gz https://github.com/SpiderLabs/owasp-modsecurity-crs/tarball/master
mkdir /usr/local/apache2/conf/crs
tar -zxvf owasp.tar.gz -C /usr/local/apache2/conf/crs --strip 1
cd /usr/local/apache2/conf/crs
cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf

Now we are ready to use ModSecurity!

ModSecurity configuration

In previous article we added httpd-security.conf file with some basic rules that improves Apache security.  We will modify this file to load mod_security with OWASP rules and add some basic configuration. You need to know that mod_security is pretty large module with tons of configuration option. You can find them in ModSecurity reference manual.

vi /usr/local/apache2/conf/extra/httpd-security.conf

Once You open the file add these lines somewhere in the file:

LoadModule security2_module modules/mod_security2.so

<IfModule security2_module>
      Include conf/crs/modsecurity_crs_10_setup.conf
      Include conf/crs/base_rules/*.conf
      # Include conf/crs/experimental_rules/*.conf
      # Include conf/crs/optional_rules/*.conf

      SecRuleEngine On
      SecRequestBodyAccess On
      SecResponseBodyAccess On 
      SecResponseBodyMimeType text/plain text/html text/xml application/octet-stream
      SecDataDir /tmp

      # Debug log
      SecDebugLog /usr/local/apache2/logs/modsec_debug.log
      SecDebugLogLevel 3

      SecAuditEngine RelevantOnly
      SecAuditLogRelevantStatus ^2-5
      SecAuditLogParts ABCIFHZ
      SecAuditLogType Serial
      SecAuditLog /usr/local/apache2/logs/modsec_audit.log
</IfModule>

So from the top:

  • First we load the mod_security module.
  • Next are the rules from OWASP that we include into ModSecurity. We need to include the setup file and the base rules. The OWASP Core Rule Set comes with a lot more rules marked as optional or experimental. We can enable those as well, but remember that they might not play well with every website; it's trial and error rather than one rule set working everywhere. In general, including base_rules is OK.
  • SecRuleEngine enables detection and blocking of malicious attacks.
  • SecRequestBodyAccess enables inspection of data transported in request bodies.
  • SecResponseBodyAccess buffers response bodies whose MIME type matches SecResponseBodyMimeType.
  • SecDataDir is the working directory ModSecurity uses for temporary data.
  • Next is the debug log. By default all errors go to the Apache error log, but we can set a separate path for the debug log. Best practice would be to change it per domain inside the particular VirtualHost file. In the previous article we set up a directory structure with a logs directory; it would be wise to use it for the debug log as well.
  • The audit log complements the debug log; it has detailed information about every relevant event. It's disabled by default, so we enable it and turn on logging of relevant issues (warnings and errors). The remaining options configure the audit log. In general there is a lot more that could be said on this topic.

If you want to learn more about how to set up and read mod_security logs, here is a really great article about mod_security logging by the Infosec Institute.

Now we just need to save the file and restart Apache, and our httpd server has better security.

service httpd restart

What's next?

If You are following our series, You should have now part of LAMP stack (Linux Apache MySQL PHP). Apache is secured with mod_security.

A small note for those who would like to install mod_evasive as well to increase security: to be honest, it's really not worth installing it on Apache. Why? Because when you run multiple processes via an MPM, mod_evasive doesn't share its state between those processes. It means that one Apache process can block the attacker while the others won't. So if you have many MPM workers, mod_evasive is just about useless.

As always, if You are using Ansible for server provisioning You can use ready playbook, that will cover everything in this series. You can find it on GitHub.

In next episode we will add P to our LAMP server.

How to install apache from source on CentOS

How to install latest Apache HTTPD on CentOS

Hi there! Today I'd like to show you how I install and configure Apache HTTPD on CentOS 7. I like to have it installed in a minimal and secure way. Beware! This is a pretty long tutorial covering lots of aspects, from compilation through configuration, SSL/HTTPS/HTTP2, basic hardening etc.

How to install Apache HTTPD on CentOS using yum - easy way

There are two ways to install Apache HTTPD on CentOS. First is with yum and it is the simplest version:

sudo yum install httpd -y

Voila! You have httpd installed. However, if you check the version:

httpd -v

You will most probably get version 2.4.6 or slightly newer. If you check the Apache website, you will notice that version 2.4.33 is available. So, if you want access to the latest features such as HTTP/2 support and the latest bugfixes, you will have to try the more difficult method, which is compiling Apache from source. It might seem complicated, but it's really not.

OpenSSL - do you have latest version?

If you want to enable HTTP/2 in Apache HTTPD which I strongly recommend for increased performance you need to have latest version of OpenSSL installed in your system. Older version does not support it, so you need to compile new version from source. I have separate tutorial how to install latest OpenSSL - make sure that you follow that first and then get back to this tutorial.

Remove old HTTPD first

Make sure that you don't have httpd installed. On some machines it comes by default, or you might be using older version. In order to avoid complications later I advise you to remove it first. However you must know that if you have some websites online that are using Apache, they will have some downtime before you setup new Apache. Execute following command to remove current Apache httpd from your system:

sudo yum remove httpd -y

Install EPEL

There are lots of different libraries in EPEL, but for compiling Apache HTTPD with HTTP/2 support we need one thing that EPEL provides - libnghttp2

In order to install EPEL repository execute following command:

sudo yum install epel-release -y

Install required tools for compilation

You need to install some tools that will help us compile Apache. It's basic stuff like compiler, required libraries etc:

sudo yum install autoconf expat-devel libtool libnghttp2-devel pcre-devel -y

Download and unpack source code

The next thing you need are the packages with the source files. For compiling Apache you will need three different packages: httpd itself, apr and apr-util. The last two are the Apache Portable Runtime libraries, which are required by Apache HTTPD.

I like to download the packages from GitHub releases. Click the tar.gz icon, copy the link to the package and download it with curl or wget, or simply copy the commands below:

curl -O -L https://github.com/apache/httpd/archive/2.4.33.tar.gz
curl -O -L https://github.com/apache/apr/archive/1.6.3.tar.gz
curl -O -L https://github.com/apache/apr-util/archive/1.6.1.tar.gz

Unpack downloaded sources:

tar -zxvf 2.4.33.tar.gz
tar -zxvf 1.6.3.tar.gz
tar -zxvf 1.6.1.tar.gz

APR and APR-Util

Apache requires APR library for compilation. You need to copy the source codes to correct directory:

cp -r apr-1.6.3 httpd-2.4.33/srclib/apr
cp -r apr-util-1.6.1 httpd-2.4.33/srclib/apr-util

It's important not to include the version number in the APR directory names. If you just copy apr-1.6.3 without changing the name, you will get a warning about a missing apr directory.

Compile source code

Now you are ready to compile Apache httpd. It's important that you do not use the root user for compilation; it can lead to serious security issues. I described this in more detail in my other tutorial about installing Git. In short, imagine that you downloaded a package from the wrong source with malicious code. If you compile it as root, anything can happen to your server, including losing your root access. I'm not saying that it's not possible to compile packages as root, because it is; it's just not safe. If you want to create a separate user with sudo powers, you can read this tutorial.

So get inside httpd directory and compile your Apache version:

cd httpd-2.4.33
./buildconf
./configure --enable-ssl --enable-so --enable-http2 --with-mpm=event --with-included-apr --with-ssl=/usr/local/openssl --prefix=/usr/local/apache2
make

First command ./buildconf will build ./configure file required for configuration of the build.

./configure command will setup everything for compilation of Apache HTTPD. Here are the options that I use:

  • --enable-ssl will build Apache with SSL support, so you can enable HTTPS on your websites.
  • --enable-so will enable dynamically loaded modules, so you can enable and disable modules without recompiling (I will describe modules in the configuration part).
  • --enable-http2 will enable HTTP/2 support.
  • --with-mpm sets the multi-processing module for Apache. I'm using event, but you can use worker or prefork instead. event works best for me and I think it's the MPM that will give you the most performance.
  • --with-included-apr will use the APR libraries that you copied into the srclib directory.
  • --with-ssl points the compiler to the newer version of OpenSSL. Make sure that you compiled it first!
  • --prefix is the installation path for the compiled Apache httpd package.

Whole process might take a while. It depends how fast your server is.

Install HTTPD

After it's compiled you can install it. For that you need sudo or root account:

sudo make install

Apache should be installed in the directory you specified with --prefix option.

Cleanup

Last thing you can do now is to remove downloaded files. You won't need them now. It's not mandatory, but it's nice to keep server clean.

cd ..
rm -rf 1.6.3.tar.gz 1.6.1.tar.gz 2.4.33.tar.gz apr-1.6.3 apr-util-1.6.1 httpd-2.4.33

Add Apache executables to PATH

If you try to type httpd -v in your command line, it will result in command not found. That's because httpd is not on your $PATH. I'd like to have all executables from Apache available from everywhere. In order to achieve that, create file

sudo vi /etc/profile.d/httpd.sh

and paste there following contents:

pathmunge /usr/local/apache2/bin

Save the file, log out and log in from your current session to reload your profile. After that you should be able to use httpd -v command:)

Add Systemd entry

Being able to start, restart, and enable Apache on boot via the systemctl command is very important. You need to create another file:

sudo vi /etc/systemd/system/httpd.service

and paste the following contents there:

[Unit]
Description=The Apache HTTP Server
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/apache2/bin/apachectl -k start
ExecReload=/usr/local/apache2/bin/apachectl -k graceful
ExecStop=/usr/local/apache2/bin/apachectl -k graceful-stop
PIDFile=/usr/local/apache2/logs/httpd.pid
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Save the file and reload the systemd daemon:

sudo systemctl daemon-reload

Now you can try to start your Apache httpd server with the following command:

sudo systemctl start httpd

It should start properly. If you see any warnings, don't worry about them for now. I will show you the proper configuration in the next step.

Once it's up and running, type your server's IP address into your browser, like http://43.184.89.190/, and check if you see the It works! message:) If so, you have Apache httpd running fine!
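
If you prefer the command line, a quick curl against the same address (the IP above is just an example, use your own) should return the default page headers, roughly:

curl -I http://43.184.89.190/
# HTTP/1.1 200 OK
# Server: Apache/2.4.33 (Unix)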

Create dedicated user and group for Apache

I usually create an additional user and group for the httpd daemon. It's good security practice: each service should run as a separate user. It limits the possible damage from attacks, httpd exploitation etc.

If you want to learn more about creating users and groups I recommend reading this tutorial. Here I'll simply create a group and a user without a shell.

sudo groupadd www
sudo useradd httpd -g www --no-create-home --shell /sbin/nologin

You can change the names as you wish. For example, I like to use a www group instead of an httpd group. I usually add other services to it as well, like nginx or php-fpm.
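
You can confirm that the user and group were created as expected with id; the numeric IDs will differ on your system:

id httpd
# uid=1001(httpd) gid=1001(www) groups=1001(www)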

Adjust main config file httpd.conf

httpd.conf is the main Apache httpd configuration file. You should start by editing this file with the editor of your choice. I like to use vi, but you can use nano etc.:

sudo vi /usr/local/apache2/conf/httpd.conf

I usually don't remove any of the configuration directives from that file. If something needs to be deleted, I comment it out with # at the beginning of the line. It makes it easier to revert changes.

Now I scroll through the file from top to bottom looking for particular lines and check whether they have the correct values. If I don't find a given line, I add it, usually at the bottom of the file.

Here are the important values:

# Make sure that ServerRoot is set to the same value as --prefix during ./configure
ServerRoot /usr/local/apache2

# Set ServerName to prevent warning on Apache start
ServerName localhost

# Default port set to 80 - HTTP protocol
Listen 80

# Set user and group
User httpd
Group www

# Configure entry file for your application. If you plan to use PHP make sure that it's as first possible file
DirectoryIndex index.php index.html

# Hide Apache version from header and from error files
ServerTokens prod
ServerSignature off

# Disable ETag to prevent exposing sensitive values like inode numbers
FileETag none

Save the file. After each change to the configuration file you must restart httpd to apply the changes. To do so, execute the following command:

sudo systemctl restart httpd

After restarting, make sure that Apache is working fine!
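
You can also validate the configuration syntax at any time with apachectl from the compiled package, which saves you from a failed restart:

sudo /usr/local/apache2/bin/apachectl configtest
# Syntax OK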

Configure loaded modules

During compilation we set the --enable-so option, which means that we can enable and disable modules in the configuration files.

The list of modules is pretty long. Some of them are disabled (they have # at the beginning of the line). You should know that the more modules are enabled, the "slower" Apache httpd is. I'm not saying that it's super slow, but you can google some benchmarks comparing different configurations.

Here is what I like to do. First of all, I comment out all modules, so everything is disabled. Then I enable only the modules that I really use, plus the modules required for Apache httpd to function properly. It has a few benefits - Apache is faster, eats fewer resources (CPU and RAM) and is more resistant to certain attacks. Usually when a new security issue pops up, it's connected to one of the modules rather than to httpd as a whole. So you have a better chance of avoiding potential security risks by keeping some things disabled.

So edit main configuration file one more time:

sudo vi /usr/local/apache2/conf/httpd.conf

And set modules that you use. Here is my go-to list of enabled modules:

# These modules must be enabled if you want Apache to start
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule dir_module modules/mod_dir.so

# If you are using PHP with PHP-FPM which I highly suggest enable proxy modules 
LoadModule proxy_module modules/mod_proxy.so 
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so 

# Enable pretty links and mod_rewrite that is highly used in all frameworks and CMSes 
LoadModule rewrite_module modules/mod_rewrite.so 

# Useful for WordPress sites - enables Require for setting up access to given resources. 
LoadModule access_compat_module modules/mod_access_compat.so 

# One more useful thing for WordPress and Let's Encrypt - enables Alias. If you are using composer and wpackagist it's a must, otherwise if you don't plan to use aliases, leave that disabled 
LoadModule alias_module modules/mod_alias.so 

# Enable gzip extension for compressing static files 
LoadModule deflate_module modules/mod_deflate.so 
LoadModule filter_module modules/mod_filter.so 

# Enable expires header for caching assets on browser side 
LoadModule expires_module modules/mod_expires.so 

# Enable SSL 
LoadModule http2_module modules/mod_http2.so 
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so 
LoadModule ssl_module modules/mod_ssl.so 

# Enable status module for monitoring Apache. If you don't plan to use that, leave that commented out 
LoadModule authz_host_module modules/mod_authz_host.so 
LoadModule status_module modules/mod_status.so

Save the changes and restart Apache. Make sure that it works after changes!

Configure MPM

Configuring the Multi-Processing Module consists of two parts. First I override the default Apache configuration for the event MPM that I picked during compilation. That way you will know what is actually configured. So let's start with that.

At the bottom of httpd.conf you need to uncomment the line:

Include conf/extra/httpd-mpm.conf

It will enable advanced Apache httpd MPM configuration and it will override the defaults.

Now edit the enabled file:

sudo vi /usr/local/apache2/conf/extra/httpd-mpm.conf

This file contains configuration for every MPM module, so make sure that you are setting values in the correct section. I enabled the event MPM, so this is the section I care about:

<IfModule mpm_event_module>
    StartServers 5
    MinSpareThreads 75
    MaxSpareThreads 250
    ThreadsPerChild 25
    MaxRequestWorkers 400
    MaxConnectionsPerChild 0
</IfModule>

Save the file and restart Apache. This was the easiest part of MPM configuration.

The real MPM configuration work comes later, when you start running your production website. When you feel that your website/application starts struggling under higher traffic, you can start playing around with these values.

An important thing you need to know - there is no one config to rule them all. A config that works fine on one server might not work as well on another. There are dozens of factors: application specifics, CPU and RAM, traffic etc. I encourage you to play around with these values to find the optimal settings for your server.
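
To make the relationship between those values less abstract, here is a rough back-of-the-envelope sketch based on the httpd-mpm.conf section above (not a recommendation for your server):

# MaxRequestWorkers = 400  -> at most 400 requests served simultaneously
# ThreadsPerChild   = 25   -> each child process runs 25 worker threads
# 400 / 25 = 16            -> Apache will run at most 16 child processes
# If you raise MaxRequestWorkers above ServerLimit * ThreadsPerChild
# (ServerLimit defaults to 16 for event/worker), raise ServerLimit as well.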

Here is a great post that explains MPM configuration in depth. I have two pieces of advice about performance tuning.

First, you probably don't even need to change anything if you have a regular website without a huge amount of traffic. It's just good to know what settings the MPM configuration applies. When the httpd-mpm.conf file was commented out, you didn't have any idea what the settings were. Once it's enabled, you at least know what is configured.

Second - do it slowly. Performance tuning is a lengthy process, and as I said, it highly depends on various factors. I like to change one setting, StartServers for instance, then wait a day or two and monitor response times, CPU and RAM usage etc. Sometimes even if you increase something you won't see a difference in response time, but you will get higher CPU usage. Then you can just roll back the change. If you modify 3 or 4 values at a time, it's hard to say which one brings the best (or any) result.

Setup GZIP compression

Using GZIP compression has a serious impact on performance and your website's loading time. Let's turn it on in a few simple steps. In order to use it, make sure that mod_deflate and mod_filter are enabled.

First create a new file that will contain the GZIP settings for Apache:

sudo vi /usr/local/apache2/conf/extra/httpd-deflate.conf

Paste the following content there and save the file:

<IfModule mod_deflate.c>
    <IfModule mod_filter.c>
        AddOutputFilterByType DEFLATE application/ecmascript
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/x-javascript
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/xml
    </IfModule>
</IfModule>

It will add GZIP compression to the most popular file types such as HTML, CSS, JS etc. If anything else needs compressing, simply add more MIME types.

Once you save the file you need to include it in the main Apache configuration. So open httpd.conf one more time; near the bottom of the file there is a section where you include different things, such as the MPM config. In this section add the following line:

Include conf/extra/httpd-deflate.conf

Save changes and restart HTTPD. Gzip compression will be enabled!
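
You can verify that compression is applied by requesting a page with an Accept-Encoding header and checking the response headers (replace the domain with your own):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://blacksaildivision.com/ | grep -i content-encoding
# Content-Encoding: gzip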

Setup assets caching with expires headers

Another important thing that has a serious impact on performance is caching. If you want to use cache headers, make sure that you enabled mod_expires for that purpose. The rest of the procedure is similar to enabling GZIP compression.

Start by creating a file that will contain the caching configuration:

sudo vi /usr/local/apache2/conf/extra/httpd-expires.conf

Paste the following configuration and add/remove MIME types or adjust the expiration time if you need to:

<IfModule mod_expires.c>
    # Enable expirations
    ExpiresActive On

    # Expirations for given mime type
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType image/ico "access plus 1 month"
    ExpiresByType image/jpg "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType text/javascript "access plus 1 month"
</IfModule>

Same as before, you need to add an entry to the httpd.conf file and restart the server afterwards:

Include conf/extra/httpd-expires.conf
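
After the restart you can check the headers of any matching asset (the CSS path below is just an example from your own site). mod_expires sets both an Expires date and a matching Cache-Control max-age; one month is 2592000 seconds:

curl -s -o /dev/null -D - http://blacksaildivision.com/css/style.css | grep -i -E "expires|cache-control"
# Cache-Control: max-age=2592000
# Expires: (current date plus one month)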

Enable SSL configuration

If you plan to use HTTPS, which I highly recommend, you need to include the SSL configuration. By default it is commented out in the main configuration file. Before including this file, let's edit it first:

sudo vi /usr/local/apache2/conf/extra/httpd-ssl.conf

The file contains a secure configuration and sensible defaults that you don't need to change. It will provide a secure connection that works with most browsers. However, this file also contains a default VirtualHost for HTTPS, which I always remove. The reason is that I like to know exactly which entry points to the server are available. Since this page doesn't display anything useful we can simply remove it.

So scroll to the point where the VirtualHost section starts and remove the whole section. It's pretty long, so make sure that you remove it completely, including the <VirtualHost ...> and </VirtualHost> lines:

## Remove this part:
##
## SSL Virtual Host Context
##
<VirtualHost _default_:443>
#...
</VirtualHost>

Save the changes to the file. Now it's time to include this file in httpd.conf. Edit it and scroll to the bottom. SSL is commented out by default, so all you need to do is uncomment it like so:

# Secure (SSL/TLS) connections
Include conf/extra/httpd-ssl.conf

Save the changes and restart the httpd service. This is only the initial part of adding HTTPS to your website. There are a few more steps before you can successfully connect to your server over SSL.

Add Let's Encrypt integration

If you are not using free SSL certificates from Let's Encrypt, you can skip this part.

When you install certbot to obtain Let's Encrypt certificates, you must somehow validate the domain. I use webroot as the authenticator. It means that certbot will create a .well-known directory inside your website directory and try to put some files there. It works in most cases, unless you start playing around with the root path of your website, like placing it in a subdirectory. Sometimes there are also permission issues.

There is a neat trick to solve all these issues. When you create a new certificate, point the webroot to the /var/lib/letsencrypt directory instead of your domain directory like /var/www/blacksaildivision.com/htdocs.
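
With that approach, a certificate request might look roughly like this (a sketch - certbot must already be installed and the domains adjusted to your own):

sudo certbot certonly --webroot -w /var/lib/letsencrypt -d blacksaildivision.com -d www.blacksaildivision.com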

Next, create new file for Apache configuration:

sudo vi /usr/local/apache2/conf/extra/httpd-acme.conf

and paste the following contents there:

Alias /.well-known/acme-challenge/ "/var/lib/letsencrypt/.well-known/acme-challenge/"
<Directory "/var/lib/letsencrypt/">
    AllowOverride None
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require method GET POST OPTIONS
</Directory>

Please note that mod_alias must be enabled. What this does: every request to the .well-known/acme-challenge path that certbot uses will be pointed to the /var/lib/letsencrypt directory. It solves the problem for all domains globally.

The last thing is to enable this in the main Apache configuration. Edit the file and add the following section near the bottom of the file, where all the other Includes are placed:

# Let's Encrypt integration
Include conf/extra/httpd-acme.conf

Restart Apache to apply the new changes.

Enable extended status

If you want to get more insight into Apache or you plan to use monitoring tools like Datadog, it's wise to enable the extended version of the status. It will give you basic info about the workers and httpd itself. To enable it, edit the main configuration file and add the following code somewhere (I like to place it just before all the Include sections):

# Enable extended status
<IfModule status_module>
    ExtendedStatus On
</IfModule>

Remember that you need to enable mod_status and mod_authz_host in order to get the status working. mod_authz_host will also be required later on when I show you how to configure localhost. The mod_status response should be restricted to avoid exposing sensitive information.
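
If you also want to expose the status page itself, a common way to restrict it is a Location block limited to trusted addresses - a minimal sketch (adjust the IP list to your needs):

<IfModule status_module>
    <Location "/server-status">
        SetHandler server-status
        Require ip 127.0.0.1
    </Location>
</IfModule>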

In order to apply changes, please restart httpd.

Setting up root directory

Let's start the final part - setting up Virtual Hosts. There are a couple of approaches. It depends on how many websites you plan to host on a single server, whether all your apps have domains or are just microservices, etc. I'll show you two of my approaches. Regardless of the approach, you need a directory where you will store your app/website files and logs.

On CentOS there should be a /var/www directory, which is the recommended place to store website files. If you don't have such a directory, create one first:

sudo mkdir /var/www

Make sure that this directory has valid permissions and a valid owner, which should be root:

sudo chmod 755 /var/www
sudo chown root:root /var/www

Setting up website directories for files and logs

Once the root directory is ready, it's time to set up directories for websites. Let's assume that I want to create the folder structure for the blacksaildivision.com website. First I create a directory inside /var/www and name it after the domain:

sudo mkdir /var/www/blacksaildivision.com

Inside this directory I need two separate directories: one for my website files, like index.html etc., and one for logs. There are many ways of keeping logs on the server. Some people use native directories like /usr/local/apache2/logs or /var/log, but I prefer to keep everything in one place. So all nginx/httpd/php logs will go to the logs directory inside my domain directory.

sudo mkdir /var/www/blacksaildivision.com/htdocs
sudo mkdir /var/www/blacksaildivision.com/logs

Once the directories are created, it's time to set a valid owner and permissions. Let's start with ownership:

sudo chown root:root /var/www/blacksaildivision.com
sudo chown developer:www /var/www/blacksaildivision.com/htdocs
sudo chown developer:www /var/www/blacksaildivision.com/logs

So the main directory for the domain should be owned by root. However, the two directories inside should be owned by a custom user. In this case the main user for the server is called developer. He has sudo powers, has access to the website files and can perform commands like git pull to download the latest code.

Why not the httpd user that we created for Apache? The reason is permissions. When you download files with git pull as the developer user but the directory is owned by the httpd user, it most often leads to unexpected errors and mismatched owners.

If you need to create a separate account for the developer/server owner (and you should), read this tutorial.

The group is set to www, which I usually use as a generic group for all website-related applications such as httpd, php-fpm etc. Please note that the developer user also belongs to that group.

Now it's time to set directory permissions:

sudo chmod 755 /var/www/blacksaildivision.com
sudo chmod 2775 /var/www/blacksaildivision.com/htdocs
sudo chmod 2775 /var/www/blacksaildivision.com/logs

The leading 2 sets the setgid bit, which means that every new file or subdirectory created inside will inherit the group of the parent directory (www).
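
You can confirm that the setgid bit is in place by listing the directories - the s in the group permissions indicates it (owner, size and dates will obviously differ on your server):

ls -ld /var/www/blacksaildivision.com/htdocs
# drwxrwsr-x. 2 developer www 4096 May 10 12:00 /var/www/blacksaildivision.com/htdocs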

So what if I need to host more than one domain? Simply create a new directory inside /var/www, like mywebsite.com, and repeat the steps: htdocs and logs directories, ownership and permissions.

Do I need to create a domain directory at all? It depends. I usually create such a directory even if I host a single website only. There are some cases, like microservices, that don't have a domain and run behind a load balancer. In that case you can skip creating the domain directory and use /var/www/htdocs and /var/www/logs. Just remember to change the paths accordingly in the next steps.

Create basic Virtual Host file for HTTP website

The directories are created, so now comes the most important part - setting up the VirtualHost. Again, there are many different approaches to storing virtual hosts etc. I'll show you mine. Let's create a basic Virtual Host file that will only serve static files for a given domain. As an example I use the blacksaildivision.com domain.

First I create a file that will contain the configuration for the HTTP version of the website. I always add the domain to the filename so it's easier to find a given configuration if you have multiple domains:

sudo vi /usr/local/apache2/conf/extra/httpd-vhost-blacksaildivision.com.conf

Paste the Virtual Host config there and adjust the domain and directory structure to your own:

<VirtualHost *:80>

ServerName blacksaildivision.com

# Directory settings
DocumentRoot /var/www/blacksaildivision.com/htdocs
<Directory /var/www/blacksaildivision.com/htdocs>
    AllowOverride All
    Require all granted
    Options +FollowSymLinks -Indexes -Includes
</Directory>

# Logging
ErrorLog "/var/www/blacksaildivision.com/logs/httpd-error.log"
CustomLog "/var/www/blacksaildivision.com/logs/httpd-access.log" common

</VirtualHost>

The file starts and ends with the block defining the VirtualHost. Inside this block you place your domain configuration. The first directive is ServerName, which is your domain.

The next part is the directory settings. You need to point httpd to read files from the correct directory, which is defined by DocumentRoot. Just below that line there are settings for the root directory. AllowOverride allows .htaccess files to override the default instructions in this block. Require all granted grants general access to this directory; it's a must if you want files to be served from this location. Lastly, I allow following symlinks, which are often used by various frameworks and CMSes. I disable Indexes and Includes for security reasons.

The last part is the logs. I define two: ErrorLog, which will contain details of all failed requests etc., and CustomLog, which is simply the access log containing all requests.

So in general, if you want to add a different or additional domain, simply create a file with configuration like the above and update ServerName, DocumentRoot and the paths to the logs.

Last part is to include this configuration file in main httpd configuration file:

sudo vi /usr/local/apache2/conf/httpd.conf

and at the very bottom of the file Include the configuration (the filename must match the one you created above):

Include conf/extra/httpd-vhost-blacksaildivision.com.conf

Save the changes and restart Apache. Now try to visit your website! In case of any issues, like 4xx/5xx errors, simply check the error log and see what the problem is. Usually typos or an incorrect domain configuration cause the issues.
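
A quick way to see what went wrong is to tail the error log defined in the VirtualHost:

tail -n 20 /var/www/blacksaildivision.com/logs/httpd-error.log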

Adding PHP support

Most of you will probably use Apache with PHP. I use PHP in PHP-FPM mode. In my opinion it gives greater control than mod_php when it comes to performance tuning, configuration etc. It also has one more benefit - if you ever need to switch from Apache to Nginx, it's much easier, as the preferred way of using nginx + PHP is PHP-FPM.

If you want to read more about how to install and configure PHP - please follow this tutorial.

Edit the VirtualHost file where you want to add PHP support and add the following lines there (I prefer to add them just before the logs section):

# PHP-FPM settings
<Proxy "fcgi://127.0.0.1:9001" timeout=30>
</Proxy>

It defines a Proxy for the PHP-FPM daemon. In my case the FPM process is running on localhost (127.0.0.1) on port 9001. The timeout parameter is optional. It's not required, but sometimes it's helpful. Imagine a situation where one of your scripts has a bug like an infinite loop, or its execution takes way more than 30s. Apache will have to wait until the proxy returns something. Now imagine that someone called this script 10000 times. It will kill your httpd daemon. It's safe to add a timeout as an additional guard. You can experiment with the timeout value for your application.

The Proxy is defined, now it's time to use it. Just after the Proxy section add the following lines:

<FilesMatch "\.php$">
    <If "-f %{SCRIPT_FILENAME}">
        SetHandler "proxy:fcgi://127.0.0.1:9001"
    </If>
</FilesMatch>

First I use FilesMatch to detect whether the requested file has a .php extension. If it does, I check whether the file actually exists. If you don't check this, the request will be passed to the FPM process anyway, which can produce unexpected results and unnecessary lines in the log files.

If everything is OK, I set the handler to the defined proxy and PHP will handle the rest of the request.

You can save the file and restart Apache and check if everything works fine:)

If you are using status or ping from PHP-FPM for performance debugging or for tools such as Datadog, and you don't want to expose an additional port in your firewall, you can achieve it with httpd as well. Simply add the following lines before the <FilesMatch> section:

<LocationMatch "/(status|ping)">
    RewriteEngine Off
    Require all denied
    Require ip 127.0.0.1 60.120.72.81
    SetHandler "proxy:fcgi://127.0.0.1:9001"
</LocationMatch>

I check whether the requested path matches /status or /ping. If a match is found, I turn off the RewriteEngine in case of a redirect in .htaccess etc. Then I deny access for everyone but allow access from a list of IPs. I allow access from localhost, but I also allow access from my server's IP address. So for instance, if my server has IP 60.120.72.81, I add it as well.

The last thing is to set the handler to the proxy for the PHP-FPM daemon.

Enable HTTPS inside VirtualHost

It's time to enable HTTPS for our website! I assume that you already have SSL certificates purchased and downloaded or generated via Let's Encrypt.

Edit the file first:

sudo vi /usr/local/apache2/conf/extra/httpd-vhost-blacksaildivision.com.conf

For now you can leave the regular HTTP version and add the following contents below the existing VirtualHost section. That way you will have both HTTP and HTTPS enabled for a single website. Later on I'll show you how to redirect HTTP to HTTPS. So just paste the following contents at the end of the edited file:

<IfModule mod_ssl.c>
<VirtualHost *:443>

ServerName blacksaildivision.com
ServerAlias www.blacksaildivision.com

# Redirect www to non-www (RedirectMatch only matches the URL path, not the
# Host header, so mod_rewrite is used here instead)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^ https://blacksaildivision.com%{REQUEST_URI} [R=301,L]

# Directory settings
DocumentRoot /var/www/blacksaildivision.com/htdocs
<Directory /var/www/blacksaildivision.com/htdocs>
    AllowOverride All
    Require all granted
    Options +FollowSymLinks -Indexes -Includes
</Directory>

# PHP-FPM settings
<Proxy "fcgi://127.0.0.1:9001" timeout=30>
</Proxy>

<LocationMatch "/(status|ping)">
    RewriteEngine Off
    Require all denied
    Require ip 127.0.0.1
    SetHandler "proxy:fcgi://127.0.0.1:9001"
</LocationMatch>

<FilesMatch "\.php$">
    <If "-f %{SCRIPT_FILENAME}">
        SetHandler "proxy:fcgi://127.0.0.1:9001"
    </If>
</FilesMatch>

# Logging
ErrorLog "/var/www/blacksaildivision.com/logs/httpd-error.log"
CustomLog "/var/www/blacksaildivision.com/logs/httpd-access.log" common

# SSL configuration
SSLEngine on
SSLCertificateFile "/etc/letsencrypt/live/blacksaildivision.com/cert.pem"
SSLCertificateKeyFile "/etc/letsencrypt/live/blacksaildivision.com/privkey.pem"
SSLCertificateChainFile "/etc/letsencrypt/live/blacksaildivision.com/chain.pem"

</VirtualHost>
</IfModule>

Everything is wrapped in an IfModule section. I usually check whether mod_ssl is enabled. In case someone accidentally removes that module from the list of enabled modules, the section with the VirtualHost simply won't be loaded and will not throw an error.

This is a new VirtualHost, so it must be wrapped in a <VirtualHost *:443> section. Please note that the port is now set to 443 instead of 80 - it's the default port for HTTPS. You might see some similarities between the HTTP and HTTPS versions. Most things are just copied from the HTTP version, so I won't cover those parts here. Instead I will focus on the new things.

You already know the ServerName directive. But below it there is another one - ServerAlias. You can basically access your website in two ways - with www. (https://www.blacksaildivision.com) and without www. (https://blacksaildivision.com). I prefer the version without www. Less typing and the URL looks friendlier IMHO:) So I want to redirect all www traffic to non-www. We don't have such a redirect in the HTTP version, but we will add it later on when I show you how to redirect from HTTP to HTTPS.

ServerAlias creates an alias for the VirtualHost identified by ServerName. So no matter which URL you use - the one from ServerName or from the ServerAlias(es) - Apache will use the configuration from this VirtualHost. For instance, we could add another alias, like ServerAlias blacksaildivisioncopy.com, and it would point to the same configuration. So your website could be accessible from two different URLs. It's a very rare case, but this is just an example.

What is important is that your website should have a redirect so there is always only a single copy of your website. For instance, if you have both www and non-www versions available without any redirect, your content will be indexed twice and seen as two different websites. That's bad for SEO and your ranking in search engines. To redirect from www to non-www I use mod_rewrite with a condition on the Host header, as in the VirtualHost above (a plain RedirectMatch would not work here because it only matches the URL path, not the hostname). This is a permanent redirect, so it won't (and should not) change in the future. Also please make sure that your SSL cert is valid for the www. domain as well. If it isn't, you will get an error when visiting the version with the www. prefix.

The last new part is the SSL configuration at the bottom of the file. First we need to tell Apache that we will use SSL with SSLEngine on. Next are the paths to the Certificate, Key and Chain. The Chain is not mandatory with some SSL providers, so if you don't have a chain file, skip this directive. I'm using Let's Encrypt, so the SSL paths are set to /etc/letsencrypt/live/blacksaildivision.com/. If you bought certificates from an SSL provider instead, simply create a new directory inside your domain like so:

sudo mkdir /var/www/blacksaildivision.com/ssl

and place all cert files inside this directory. Remember to update paths inside your VirtualHosts!

Once it's done, simply save the changes and restart Apache. Try to visit your website with HTTPS and HTTP. Both versions should work just fine. You can also check your server with https://www.ssllabs.com/ssltest/. If you configured everything correctly, you should get an A score. You can aim for A+, but in my opinion there are too many disadvantages, like limited access to your website from older devices etc.

Redirect HTTP to HTTPS

As we want to serve traffic over HTTPS only, we need to redirect all HTTP requests to HTTPS. Edit the file with your domain configuration and replace the <VirtualHost *:80> section with the following code:

<VirtualHost *:80>

ServerName blacksaildivision.com
ServerAlias www.blacksaildivision.com

# Redirect everything permanently to https://blacksaildivision.com/
Redirect permanent "/" "https://blacksaildivision.com/"

# Turn off logging
ErrorLog /dev/null
CustomLog /dev/null common

</VirtualHost>

It's much shorter than the previous version. I added a ServerAlias with the www. prefix. Using the Redirect directive I redirect all traffic to the https:// version. And the last thing is turning off logging. It's not necessary to log redirects from HTTP to HTTPS. If you have logging in the HTTPS VirtualHost, the requests will be logged there anyway. There's no need to duplicate the logs.

Save changes, restart Apache and check if redirect is working fine:)
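
A simple check is to request the HTTP version with curl and look at the response headers; a permanent redirect shows up as a 301 with a Location header:

curl -I http://blacksaildivision.com/
# HTTP/1.1 301 Moved Permanently
# Location: https://blacksaildivision.com/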

Run Apache httpd on system start

The last thing is to make the Apache httpd daemon start with the system boot, so that httpd runs automatically after a server start/restart:

sudo systemctl enable httpd

So that's it. You have a fully working, latest-version Apache httpd installed on your system 🙂 This process might take some time, but you will have full control over httpd.

As always, you can use our LampOnSteroids project (based on Ansible) to speed up and automate everything!

How to install EPEL on CentOS

How to install EPEL repository on CentOS

We need to install the EPEL repository on our CentOS. This is the first step in setting up our working web server. EPEL is an additional repository for yum, with tons of useful packages which you won't find in the default base repository. We will use it in the future, so let's start using it now;)

In this post we will also set up a local Vagrant box. We will be using it for future operations like installing Apache or Varnish.

LAMP on steroids

This is part of our LAMP on steroids series. Check the links below to learn how to set up an awesome web server!

  1. Choosing VPS
  2. Install EPEL
  3. Install and configure Apache HTTPD server
  4. Harden Apache with ModSecurity and OWASP Core Rule Set
  5. Install and configure PHP
  6. Install and configure MySQL server
  7. Configure firewall based on iptables
  8. Create developer user and setup SSH key-pair
  9. Configure SSH
  10. Install and configure Varnish to speed up websites
  11. More to come...

Vagrant setup

The best way to experiment without messing anything up is to use Vagrant.

If you don't have Vagrant installed, please download and install it first. After installation you should be ready to use Vagrant. Create a directory wherever you want to keep your Vagrantfile.

mkdir vms
cd vms

The next thing we need to do is download a CentOS box. We need it to replace the default box that comes with Vagrant. We will use chef/centos-6.6 from the Vagrant boxes directory. You can search for more boxes here if you want; for instance, you can use 6.5 instead of 6.6. The decision is yours:)

vagrant box add chef/centos-6.6

Vagrant will download the image of the OS you selected. It might take a while, depending on your network performance.

Next we need to initialize the Vagrant instance.

vagrant init chef/centos-6.6

It will create a Vagrantfile in your directory. You can check this file out in any editor. It holds the basic Vagrant configuration; you can adjust options like forwarded ports, synced folders etc. For now, let's leave this configuration file alone and bring our machine to life!

vagrant up

It will initialize the machine. After that you can log in via SSH and start playing around:)

vagrant ssh

tl;dr

mkdir vms
cd vms
vagrant init chef/centos-6.6
vagrant up
vagrant ssh


How to install EPEL

Installing EPEL on CentOS is really easy. First of all you need the URL of the EPEL repo. If you are just following the instructions in this tutorial, you can execute the command below. But if, for instance, you have the 32-bit version instead of 64-bit, you need to replace x86_64 with i386:

yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Depending on your Vagrant/server setup you might need to execute this command as root, or at least as a sudo user. If you are just following the steps in this tutorial, you are probably the vagrant user. If you execute this command you will see a permission denied error. Start the command with sudo (sudo yum install http://....) and everything should be OK:)

The next thing we need to do is import the GPG key for this repo:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

And basically that's it! You are now able to use EPEL packages with yum!

To validate the installation you can execute the yum repolist command, and you should see epel Extra Packages for Enterprise Linux 6 - x86_64 listed:

yum repolist

It means that everything is OK. Now, if you want to install any package from the EPEL repository, you need to use:

yum --enablerepo=epel install PACKAGE_NAME
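
For example, to install a package that lives in EPEL (htop here is just an example package, pick whatever you need):

sudo yum --enablerepo=epel install htop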

tl;dr

yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6


Using Ansible to install EPEL

If you are using Ansible for server provisioning as I do, you can find the ansible-webserver project on GitHub:

https://github.com/astaz3l/ansible-webserver

It has an EPEL role that will help you install EPEL via Ansible.

Drupal and Amazon SES

Drupal and Amazon SES SMTP server

Hello everyone!

Today I would like to write a bit about configuring Amazon SES as an SMTP server for Drupal. We set up our Drupal installation on an Amazon EC2 server. We wanted to switch Drupal's default mail() method to an SMTP server, and as we already used Amazon, we wanted to try SES. Here is a short description of how we managed to get it running and configured.

Setup Amazon SES account for Drupal

For the purpose of this tutorial I assume that you already have an Amazon account. If you don't have one, you need to sign up to use Amazon services first.

After you log in, go and visit this page. This is a simple dashboard that shows the statistics for your SMTP server. SES has some limits, and you can check them on this page as well.

Drupal - Amazon SES console

Let's set up an SMTP account for Drupal. On the left side of the Dashboard page you should see a sidebar menu. Go to SMTP Settings. On this page you will have access to the data required for SMTP access (server name, port).

Drupal - Amazon SES settings for SMTP

The only thing that we don't have yet are the SMTP credentials - username and password. Click on the "Create My SMTP Credentials" button. It will take us to the IAM page where you can create a new user account. You can leave the default name or provide your own. Click the create button. After that you can download the credentials. But keep in mind that these are not the credentials that you'll be using in Drupal!

Drupal - Amazon SES - create user

After the new user is created, we need to set up its credentials for Drupal. Go to the IAM users page and click on the user you just created.

Under Security credentials there are Sign-In credentials. Click on Manage Password. You can choose between a password generated by IAM or your own. Pick one of the options and remember the password. That's the one you'll be using in Drupal.

Drupal - Amazon SES - manage password


Setup From: account on SES for Drupal

We already have the user credentials for connecting Drupal to the SMTP server. There is one more thing we need to take care of. In order to be able to set the From: field, we need to authorize the email address. Go to the SES console or click this link. There are two ways of verifying a sender: you can authorize specific emails or a whole domain.

If you want to authorize a specific email (we will use this option), click Verify a New Email Address. Type your email and click Verify This Email Address. You will receive an email with an activation link. After activation you should see the status verified.

Drupal - Amazon SES - verify sender email

If you want to authenticate a whole domain, you need to set up DKIM. You can read more about this process here.


Setup Your Drupal installation for Amazon SES

Now comes the final step. We need to configure Drupal for Amazon SES. First, we need a module that can switch Drupal's native mail() function to the SMTP protocol. In our example we will use the SMTP Authentication Support module. It's a really nice, easy-to-implement SMTP module, also with support for Drupal 8. Download and install this module, then go to its configuration page (Menu Toolbar -> Configuration -> System -> SMTP Authentication Support).

The fields you need to set:

  • Turn this module on or off - set to ON
  • SMTP server - get the information from SMTP settings page on Amazon
  • SMTP port - 25, 465 or 587 (also listed on SMTP settings page)
  • Use encrypted protocol - Use TLS
  • Username - the username you created on Amazon SES
  • Password - the password for this user (not the authentication keys!)
  • E-mail from address - the email address that you verified

Drupal - SMTP settings for Amazon SES

The rest of the fields are optional. The really useful ones are:

  • E-mail address to send a test e-mail to - You can test if SMTP is working
  • Enable debugging - use it only for testing; disable it in the production environment. If the setup doesn't work, you will get debugging messages with information about what went wrong.


Basically that's it. After that you can start sending emails from Drupal via Amazon SES, from forms etc. This module overrides the default Drupal functionality, so you don't have to change anything else. No need to change your old code for sending emails etc.

Tips for choosing right server

How to choose right server for your app?

I need to start with a short statement - you probably don't need hosting or a server. This article is dedicated mostly to tech people. If you are looking for tips on how to pick a server for your website and you don't know what shell, SSH or FTP is, please read only the chapter below:) If you want to jump to strictly tech stuff - click here.

You don't need hosting/server

If you are just starting a blog, small business website or e-commerce, don't bother looking for hosting and a developer who can create your website. These days we have great alternatives that will take care of your website from start to finish.

They will take care of server infrastructure, security, backups, server performance, CMS updates, bug fixes and everything you need to keep a website up and running. You won't have to ask a developer to fix a bug or change a text on your website.

If you are just starting and you don't want to spend hundreds of $$$ on a website, you can try some of the services I will mention. All you need is some time to configure everything. Most of them have a free trial, so you can try them first for free. Some of them offer free features that are sufficient for a lot of websites. Remember, you can always switch to a self-hosted website in the future.

If you plan to go with a blog and you want to focus on writing I suggest you get yourself familiar with https://wordpress.com and https://medium.com/

Medium is an awesome alternative to the popular WordPress. IMHO they have a much nicer editor. If you care only about writing, try it first before going with WordPress:) You can always try both and decide which works better for you.

If you want to sell stuff online, go with https://www.shopify.com/ - a super useful e-commerce platform with a number of options. In general you need to buy a plan. The cheapest one is $29 per month, but it's still cheaper and safer than running a custom e-commerce site like WooCommerce or Magento. They also have a trial version:)

If you want to create a website for your business/company, try WIX first - http://www.wix.com/ It's a website builder where you can create your own site with drag & drop. A lot of small businesses have websites built with WIX.

If you are fine with the features that the services mentioned above offer, you can stop reading here. Such services are great until you really need something custom - a feature that none of them can offer. In that case you need to find someone who can build your website, and you will need hosting or a server. If you need to look for a server for your small website, make sure that you choose the right hosting plan.

If the website is small and won't receive a lot of traffic, you only need to pay attention to a few details. I know that price will be important here, but remember: sometimes cheap turns out to be expensive.

If you decide to go with shared hosting, your website will be slow or even inaccessible if a lot of visitors hit it at the same time. You will need to change the plan to something better or go with a VPS/dedicated server. Make sure that you choose a managed server (your server will have a dedicated administrator). Trust me, if you don't know a thing about server administration and security, you don't want to choose an unmanaged server:)

Make sure that they have a backup plan, or make your own backups from time to time. Recently in Poland we had a situation where one of the bigger hosting providers deleted all websites and thousands of people lost their websites permanently. Their hosting plans were quite cheap, so they had a lot of clients. But after the incident all websites were down for 2 months. People who made backups on their own rescued their websites quite fast, but the rest lost their websites forever.

You can stop reading here. The rest is for tech people! Unless you want to get some knowledge about servers:)

To cloud, or not to cloud?

Yes, you should go with the cloud, but only if your website earns money. And I mean MONEY, not a few dollars every month. Cloud is not cheap. It's expensive. And sometimes it can get really expensive - like hundreds of dollars every month. You pay the price, but you get value.

We had a website that was mostly image based and got quite a lot of traffic. We were using Amazon AWS: two EC2 instances (prod + stage), Elastic IP, S3. We paid around $500-600 every month. Most of the cost was traffic (around 2-3TB every month) and EC2. I felt like that was a lot of money for such a website. Fortunately we didn't pay for it ourselves - our client was big, was cloud oriented and money didn't matter, so we used AWS. But for my personal website and for a lot of other websites, $500 every month is a killer.

So can you lower the costs somehow and still use the cloud? Yes, you can. But first things first.

Serverless maybe?

First you need to think about two important points.

  1. Do you have time to take care of server architecture, regular updates, configuration etc.?
  2. Think about your application. Does it need a specific server architecture, or do you only care about having it up and running?

Most people think that installing a LAMP stack and setting some basic configuration is a no-brainer. Well, server management is unfortunately not that easy. Of course, your server will work for some time, until you get hacked or you run into performance issues. And trust me, you will if you don't pay attention to your servers. We've been hacked twice because we thought that server management doesn't require time and attention. Our servers had enormous load because of older versions of a few libraries. We were not even able to SSH into our servers;/

So if you only care about your app, try Heroku first. It runs in the cloud (on AWS actually), has support for tons of technologies and you don't need to care about server infrastructure at all. They will take care of everything for you. It feels like it might be expensive, but think about whether setting up a custom server + hiring a server admin wouldn't be more expensive.

Heroku is super convenient and powerful. You can set up and deploy your app within minutes. Their uptime is pretty good, scalability is nice and easy to configure. Give it a shot (they have a free trial), maybe it will be much better than setting up a server 😀

Our solution

We go the old-fashioned way, with servers that we manage on our side. Downsides? We need to spend time configuring and maintaining them. But on the other hand we have full control over our servers. And we don't pay $$$$$$$$$$$ for them:)

We adapt the infrastructure to our projects. But in most cases it goes like this - VPS/dedicated server/cloud-based server + Amazon S3 + CDN (CloudFlare or Amazon CloudFront).

Server features

When we are looking for a server, we mostly look at these features:

  1. SSH+Root
  2. OS and OS version
  3. CPU
  4. RAM
  5. SSD
  6. Transfer
  7. Additional IP address

SSH+Root - the server must have SSH access. The era of FTP and manually uploading files to the server is fortunately behind us. So if a server doesn't have SSH access, it's not the right choice. Also, pay attention to full root access. We saw servers where you can log in via SSH and do only basic stuff like Git, creating files etc., but it was not possible to install anything. If it doesn't have root, you shouldn't use it.

OS - it's important to see which operating systems they support. We are mostly looking for CentOS 7, as this is our OS of choice. If you want to go with Ubuntu, that's OK; we prefer CentOS. The version of the system is really important. Sometimes you can't install the latest available version, like CentOS 7. If, for instance, the latest supported version for a server is CentOS 6, we look for something different. Also make sure that it is a 64-bit OS. 32-bit has a lot of limitations and you should avoid it.

CPU - we mostly look for processors with 2 or more cores. 1 core is enough if you want to run a small app or only one website. But if you plan to go with something bigger, 2 or more cores will give you a significant performance boost.

RAM - 1GB is the minimum, but the more RAM it has, the better. We avoid 512MB; it's way too small and you can run into issues when you try to compile PHP from source, for instance. Basically, for optimization, caching etc., it should have as much RAM as possible. We usually go with 2GB and above, but 1GB is the minimum you should aim for.

SSD - if it has an SSD, that's great. We switched all our servers from HDD to SSD. The machines are much faster and more performant. The downside is that they are usually a bit more expensive and have smaller disk sizes. If you need a lot of disk space, you should go with SSD + HDD.

Transfer - this is a really important factor. If you will serve images from your server, or you expect a lot of traffic and you won't use a CDN, you should go with 1-2TB per month. It really depends on the website and traffic, so it's hard to say how much transfer you will really need.

Additional IP addresses - if you plan to host multiple websites on a single server, make sure that it allows you to add additional IP addresses. If you want to go with SSL, and I'm sure you will, one IP address used to be a hard limit - historically you couldn't serve two SSL websites from a single IP address. With SNI that is no longer strictly true, but very old clients don't support it, so an extra IP can still come in handy.

Your server does not have to be in the cloud, unless you want easier access to automatic scalability etc. We mostly use two providers. For smaller apps/websites, we use DigitalOcean. Simple pricing, nice servers, easy to use. Most of our servers are $10 or $20 per month. We don't go higher, as it becomes more costly.

However, if we need more horsepower, more RAM or CPU cores, we look for something more performant. We use dedicated servers for that. We tried multiple solutions, but our favourite so far is Hetzner. They offer really nice value for the money. For instance, for around $50 you can get 4 cores, 32GB of RAM and 30TB of traffic. Uptime is pretty high and their support is awesome. Highly recommended:)

How about pure cloud solutions?

You can of course go 100% with AWS, but as I mentioned above, it will cost you a significant amount of money. It has its benefits, but I feel that AWS is dedicated to large companies and websites. I especially don't like their pricing, as it's hard to predict how much you will pay for a given configuration.

We however use AWS for two purposes.

Backup - this is an important factor. Most users that run their own servers usually forget about making backups. Guys, if you don't do backups, start today. Hard drives can break, you or someone else can delete something etc. Backups are a super important thing! We use S3/Glacier from AWS for this purpose. Once per day, or every couple of hours, we make a full backup of DBs/images/important data to AWS. It's really easy to set up and use. After an hour or so, you should have your backups up and running. Glacier is relatively cheap and Amazon takes care of the persistence of our data.

CDN - if MaxCDN or CloudFront is not enough, we go with CloudFlare. IMHO they offer an enormous number of features and the configuration gives us much more power.

Should I use CDN?

Speaking about CDNs - should you use one? The answer is yes. It will give your website a significant boost if your users connect from different countries. Hetzner has their servers in Germany, but some of our websites are worldwide oriented. A CDN is a must here.

You can have a CDN for free by using CloudFlare. They don't offer many options, but it's free:) If you have a blog with a lot of images, it will also take some of the transfer off your server - images will be served from the CDN instead of your server.

If the free plan is not enough, you can use a paid plan from CloudFlare or try MaxCDN.

If you need detailed control, we suggest going with Amazon and their CloudFront:)

Summary

So it all comes down to how much time and money you want to spend here. If you don't plan to run your own servers for your app, you should go with Heroku. If you want to play around with servers and want more control, go with DigitalOcean/dedicated server + S3/backup server + CDN. It's the best value for money. If you have a lot of money and you want the best scalability and performance, go with AWS and all the features it offers;)

If you go with DigitalOcean or a custom server, you should learn about Ansible. It will save you a lot of time when it comes to configuring the server. Read the tutorial about Ansible.

If you want to learn how to configure your own server, you should read our series about setting up LAMP on steroids.