VPS / (Personal) Virtual Private Server mini HowTo

 Introduction

I moved from a web hosting service (Dreamhost) to a VPS: Ramnode, then Scaleway (unreliable) and finally Hetzner (great). It was surprisingly easy and I now want to share this experience.

Why a VPS

For less than 6€/month you can have a virtual server with 40 GB of SSD disk, 20 TB of traffic and 4 GB of memory. You can do whatever legal things you want, even change the kernel. Moving to another provider is easy because you have total control of a standard server.

The Linux/Debian choice

Linux is very good as a server and is widespread. I have been using Linux since 1993. At first I hopped between distributions, always picking the shiniest one of the moment. Debian is not the brightest of the moment (that is Ubuntu, which is based on Debian), but after several moves I know that stability is a great quality, and for that quality Debian is by far the brightest distribution. For the other qualities Debian is also a very good choice, so…

Choosing a domain name

If you plan to have several projects, it's easier to choose a short, neutral name and add sub-domains under it as needed. Note that .org (and .com) domain prices now grow each year; also be careful about the registry running the TLD (for example EURid for .eu), because you will be captive.

Installed services

Firewall (nftables) and fail2ban

root@bobu ~ # cat /etc/nftables.conf
#!/usr/sbin/nft -f

flush ruleset

table inet filter {

    chain input {
        type filter hook input priority 20; policy drop;

        # accept any localhost traffic
        iif lo accept

        # accept traffic originated from us
        ct state established,related accept

        # drop invalid packets
        ct state invalid counter drop

        # accepted incoming ports
        tcp dport { ssh, http, https, domain, pop3, pop3s, smtp, submission } accept
        udp dport { domain } accept

        # icmpv6 for ipv6 connections
        ip6 nexthdr icmpv6 icmpv6 type {
            destination-unreachable, packet-too-big, time-exceeded,
            parameter-problem, nd-router-advert, nd-neighbor-solicit,
            nd-neighbor-advert, echo-request
        } limit rate 100/second accept

        # icmp for ipv4 connections
        ip protocol icmp icmp type {
            destination-unreachable, router-advertisement,
            time-exceeded, parameter-problem, echo-request
        } limit rate 100/second accept

        # log the rest, with a rate limit
        limit rate 20/hour log prefix "Input rejected: " reject
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}
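On Debian the nftables service simply loads this file, so applying the ruleset by hand and making it persistent is just:

nft -f /etc/nftables.conf        # load the ruleset now
systemctl enable --now nftables  # reload it at every boot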

Fail2ban is not very useful (except against flooding) and will be even less useful with IPv6 (attackers can change their IP easily). The best security is good passwords and up-to-date, robust daemons. But I feel better when I react to these attacks.
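If you want it anyway, a minimal /etc/fail2ban/jail.local is enough. This is only a sketch: the nftables-multiport ban action is shipped by recent fail2ban versions, and the thresholds are my own arbitrary values:

[DEFAULT]
banaction = nftables-multiport   # ban through nft instead of iptables
bantime   = 3600                 # seconds
findtime  = 600
maxretry  = 5

[sshd]
enabled = true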

DNS

I used bind9, but it crashed twice, and the logs showed attacks at the time of the crashes. Bind also has more security advisories than PowerDNS. So I switched to PowerDNS (packages pdns-server and pdns-backend-mysql on Debian) with the MySQL backend. I use pdnsutil to configure the domains. Some commands:

  • show a domain:
    pdnsutil list-zone bobu.eu
  • add an entry:
    pdnsutil add-record bobu.eu jhon A "$IP_NOW"
  • delete an entry or a list of matching entries:
    pdnsutil delete-rrset bobu.eu jhon TXT
  • modify an IP:
    pdnsutil replace-rrset bobu.eu jhon A "$IP_NOW"
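To create a zone in the first place, something along these lines works (the nameserver and record names are only examples):

pdnsutil create-zone bobu.eu ns1.bobu.eu   # creates the zone with a default SOA and NS
pdnsutil add-record bobu.eu www A "$IP_NOW"
pdnsutil check-zone bobu.eu                # sanity check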

The secondary DNS (it is mandatory to have one) is provided for free by Bookmyname, which is my registrar. It pulls one update every 15 minutes.
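Since the secondary pulls the zone itself, it only has to be allowed to do zone transfers. In pdns.conf this is one setting (the last address below is a placeholder for the secondary's transfer address):

# /etc/powerdns/pdns.conf (extract)
allow-axfr-ips=127.0.0.0/8,::1,192.0.2.53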

Web server

Apache

Apache is a good and smart piece of software that has given me no problems.
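On Debian, modules and per-domain configuration files are enabled with the usual helpers; for example (the site file name is just an example):

a2enmod ssl rewrite
a2ensite bobu.eu.conf      # a vhost file in /etc/apache2/sites-available/
systemctl reload apache2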

SSL

I use https://letsencrypt.org/ to have automatic and free certificates.

From a batch script (e.g. run by cron), renew a certificate covering two wildcard domains:

certbot certonly -n --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --agree-tos --manual --preferred-challenges=dns --manual-auth-hook /root/local/bin/letsencrypt-dns-hook.sh -d "*.bobu.eu" -d "*.emmanuel.bobu.eu"

root@bobu ~ # cat /root/local/bin/letsencrypt-dns-hook.sh
#!/bin/bash

ZONE=$(echo -n $CERTBOT_DOMAIN|rev|cut -d. -f-2|rev)
NAME=$(echo -n $CERTBOT_DOMAIN|rev|cut -d. -f3-|rev)

pdnsutil replace-rrset $ZONE _acme-challenge.$NAME TXT \"$CERTBOT_VALIDATION\" >/dev/null 2>/dev/null # no --quiet option, so I use "/dev/null"
# pdns_control notify # Commented out because ns-slave.free.org (my secondary server) doesn't care about "notify"
sleep 2s # Let's Encrypt seems to check only the primary DNS right now; otherwise use "20m" to give the secondary DNS time to catch up.
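Certbot also accepts a --manual-cleanup-hook to remove the challenge record once validation is over; a symmetric sketch (the file name is just a suggestion, the zone/name convention is the same as in the hook above):

root@bobu ~ # cat /root/local/bin/letsencrypt-dns-cleanup-hook.sh
#!/bin/bash

ZONE=$(echo -n $CERTBOT_DOMAIN|rev|cut -d. -f-2|rev)
NAME=$(echo -n $CERTBOT_DOMAIN|rev|cut -d. -f3-|rev)

# delete the whole _acme-challenge TXT rrset for this name
pdnsutil delete-rrset $ZONE _acme-challenge.$NAME TXT >/dev/null 2>/dev/null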

Statistics

Privacy is a big concern; people use uBlock Origin and "do not track" settings more and more. I use AWStats, so I don't miss hits, and with this solution I don't hand free data to the big companies that hoover up everything.
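The per-site setup is one small file plus a cron job; a sketch (the config name and log path are examples, Debian ships a fully commented model in /etc/awstats/awstats.conf):

# /etc/awstats/awstats.bobu.eu.conf
LogFile="/var/log/apache2/access.log"
LogFormat=1                        # 1 = Apache combined log format
SiteDomain="bobu.eu"
HostAliases="www.bobu.eu localhost"
DirData="/var/lib/awstats"

# run from cron to update the statistics
/usr/lib/cgi-bin/awstats.pl -config=bobu.eu -update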

To see the keywords used in search engine queries I use Google Search Console, but there I don't give away any data.

SMTP / POP3

Why postfix

The real choice today is between Exim and Postfix. I have known Exim for a long time. Exim offers a smart, flexible configuration language, and Debian adds a smart split configuration full of macros and ifdefs. Smartness is good sometimes, and sometimes it's just an awful pain. You don't need to know the physical properties of water to drink a glass of it, and likewise you probably don't want to use Exim, except if you need some very special things you can't find in Postfix, or if your needs are simple and you are sure to stay within the Debian pre-configured cases.

postfix & dovecot together

I want to use a flat users file because I don't have many users, and I want to share this file between Postfix and Dovecot. I found a solution using PAM (not updated for years; perhaps a simpler solution exists nowadays). You must use "imap" as the PAM service name; this is imposed by saslauthd, which is only configured through /etc/default/saslauthd (extract):

MECHANISMS="pam"
OPTIONS="-c -r -m /var/spool/postfix/var/run/saslauthd"

/var/spool/postfix/var/run/saslauthd is the socket path imposed by Postfix (smtpd runs chrooted in /var/spool/postfix).
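On the Postfix side, the usual Cyrus SASL glue looks roughly like this (adjust the restrictions to your policy; on Debian the postfix user must also be in the sasl group to reach the socket):

# /etc/postfix/sasl/smtpd.conf
pwcheck_method: saslauthd
mech_list: plain login

# main.cf (extract)
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = cyrus
smtpd_sasl_path = smtpd
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

# give postfix access to the saslauthd socket
adduser postfix sasl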

Modify /etc/dovecot/conf.d/10-auth.conf so that the "!include auth-system.conf.ext" line is the one enabled.

The auth-system.conf.ext:

passdb {
 driver = pam
 # [session=yes] [setcred=yes] [failure_show_msg=yes] [max_requests=<n>]
 # [cache_key=<key>] [<service name>]
 args = max_requests=10 imap
}
userdb {
 driver = static
 args = uid=vmail gid=vmail home=/var/spool/vmail/%d/%n
}

The "imap" PAM configuration file (/etc/pam.d/imap):

#%PAM-1.0
auth required pam_userdb.so crypt=none db=/etc/postfix/users 
account required pam_userdb.so crypt=none db=/etc/postfix/users

The corresponding /etc/postfix/users.txt file:

toto@emmanuel.bobu.eu
longPassword
titi@emmanuel.bobu.eu
longPassword2
...

Convert it to users.db (the .db extension is omitted in the PAM configuration file):

db_load -T -f /etc/postfix/users.txt -t hash /etc/postfix/users.db

We store the mails in Maildirs (postfix main.cf extract):

# Virtual domain
virtual_mailbox_domains = /etc/postfix/vhosts
virtual_mailbox_base = /var/spool/vmail
virtual_mailbox_maps = hash:/etc/postfix/vmaps
virtual_minimum_uid = 1000
virtual_uid_maps = static:1002
virtual_gid_maps = static:1002

where 1002 is the uid and gid of the vmail user and group.
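For reference, the two lookup files above are plain text; with the example user used later in this section they look like this:

# /etc/postfix/vhosts : one virtual domain per line
emmanuel.bobu.eu

# /etc/postfix/vmaps : address -> maildir path (relative to virtual_mailbox_base, trailing slash = Maildir)
toto@emmanuel.bobu.eu  emmanuel.bobu.eu/toto/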

Don't forget to run postmap on /etc/postfix/vmaps and /etc/postfix/virtual (and newaliases for /etc/aliases).

For example, to add a user from my home I use:

USER=toto; DOMAIN=emmanuel.bobu.eu; echo "$USER@$DOMAIN $DOMAIN/$USER/" | ssh root@global.bobu.eu "cat >>/etc/postfix/vmaps; postmap /etc/postfix/vmaps"

PS: I know that root login is "baaaad", but my firewall limits brute-force ssh attacks and my passwords are very long. So if an attacker manages to log in as root, it's because ssh itself is compromised.

Infinite email in one email

Add the line "recipient_delimiter = ." in postfix/main.cf and every address like "toto.cat@example.com" or "toto.dog@example.com" is delivered to the mailbox of "toto@example.com". When I write to a company like docCorp, I use toto.en.docCorp@example.com, so I can filter emails easily.

Backup storage service

I like to back up my computers to a server far away: if my house burns down, I still have my data. I use duplicity and an sftp account on the server.

To activate the sftp subsystem of ssh, put this line in sshd_config:

Subsystem sftp /usr/lib/openssh/sftp-server
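On the client side (my home computer) a duplicity run to that account looks roughly like this (user, target path and passphrase are placeholders):

export PASSPHRASE="a long secret"   # used by duplicity to encrypt the backup
duplicity /home/toto sftp://backup@global.bobu.eu//var/backup/home-toto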

Backup of the server

I use duplicity: it has been simple, efficient, secure and reliable for years.

The script is triggered from my home (to be sure my home computer is up); it calls /root/local/bin/backup, which contains:

#!/bin/bash

source /etc/backup-duplicity/main-env

# Delete a potential old lock
find $ARCHIVE_DIR/$ARCHIVE_NAME -name lockfile.lock -mtime +6 -print0|xargs -0 --no-run-if-empty rm

# Count incremental backups, counter stored in $MARK_FILE
if [ ! -f $MARK_FILE ]; then
  N=1
else
  N=$(cat $MARK_FILE)
  N=$((N+1))
fi

if [ $N -ge $MAX_INCREMENTAL ]; then
  CMD=full # full backup
  N=0
else
  CMD="" # incremental backup
fi

echo $N >$MARK_FILE

mysqldump -u root --all-databases |bzip2 > /tmp/all-database.sql.bz2

nice duplicity $CMD $DUP_OPT --exclude-filelist /etc/backup-duplicity/main-exclude.list --include-filelist /etc/backup-duplicity/main-include.list --exclude "**" / $DUP_TARGET
RET=$?   # keep duplicity's exit code before rm overwrites $?
rm /tmp/all-database.sql.bz2
if [ $RET -ne 0 ]; then
        echo "Duplicity error $RET" | mailx -s "Duplicity error" root
fi

if [ "$CMD" = "full" ]; then
  duplicity remove-all-but-n-full 1 $DUP_OPT --force $DUP_TARGET
  duplicity cleanup $DUP_OPT --force $DUP_TARGET
fi
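The sourced /etc/backup-duplicity/main-env only defines the variables used above; a sketch with placeholder values:

# /etc/backup-duplicity/main-env (example values)
ARCHIVE_NAME=bobu-main
ARCHIVE_DIR=/root/.cache/duplicity             # where duplicity keeps its local archive (and lockfile.lock)
MARK_FILE=/var/lib/backup-duplicity/main.counter
MAX_INCREMENTAL=30                             # one full backup every 30 runs
DUP_OPT="--name $ARCHIVE_NAME --archive-dir $ARCHIVE_DIR"
DUP_TARGET="sftp://toto@my-home-host/backup"   # the chrooted sftp account described below
export PASSPHRASE="a long secret"              # duplicity encryption passphrase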

The password for mysqldump is stored in the /root/.my.cnf file.
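That file is just the standard MySQL client option format (the password is of course a placeholder):

# /root/.my.cnf
[client]
user     = root
password = "theMysqlRootPassword"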

I want to protect my home computer if the server is compromised, so I use a “chrooted” sftp connection to my home (sshd_config extract):

Subsystem sftp internal-sftp
Match User toto
    ChrootDirectory /var/backup/bobu/
    AllowTcpForwarding yes
    X11Forwarding no
    ForceCommand internal-sftp

I needed to change the umask for toto in the PAM configuration /etc/pam.d/sshd:

session optional pam_umask.so umask=0027

WordPress

The multi-site wordpress choice

Like any good admin I'm lazy, so I hate repeating updates and installations. How can one wordpress be shared between hosts? There are two solutions: the wordpress one and the Debian one.

The wordpress solution (called a "wordpress network") handles multiple sub-domains but not multiple domains, and shares one database for all sites. You can use a hack to get multiple domains, but it is a hack and it seems weird (and apparently not open source).

The Debian solution is light: a special wp-config.php that redirects to /etc/wordpress/config-example.com.php, where example.com is the domain being accessed, plus a few separate directories to share the wordpress program, the plugins, the languages and the themes.

Debian advantages: the databases are independent, real multi-domain support, and the wordpress sites are almost completely independent.

WordPress advantages: graphic integration.

Debian disadvantages: plugins, themes and languages must be installed manually, which is very simple: download, unzip, copy, activate (in the wordpress interface). Also, wordpress sites must live at the root of the URL (http://www.example.com/ but not http://www.example.com/home/); this is mandatory because wp-config.php only uses the domain name.

WordPress disadvantages: no real multi-domain support, less isolation, a more complex solution.

The multi-domain requirement made me choose the Debian solution.

Debian wordpress multi-domains architecture

There is some documentation in /usr/share/doc/wordpress/README.Debian and at http://www.byteme.org.uk/2013/12/02/wordpress-debian-multisite/

What I add here is what I had to figure out by myself and is not explained there:

Directories:

  • /usr/share/wordpress : the wordpress program and the special config file
  • /var/lib/wordpress/wp-content/ : the plugins, themes and languages
  • /srv/www/wp-content/$HOSTNAME/ : the specific part of wordpress (wp-content) for a given host name.

Apache configuration:

The Debian Apache file shipped with wordpress assumes that every web site not configured earlier (in the sites-enabled directory) is a wordpress site. I don't like this kind of unclean configuration (it's not symmetric: two packages can't both use this trick) and I prefer one configuration file per domain, even if it's more verbose (you can use the Include directive); see the sketch below.
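A sketch of such a per-domain file, following the directory layout above (the domain name is of course an example):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /usr/share/wordpress
    # the writable, host-specific part (uploads, plugins, themes)
    Alias /wp-content /srv/www/wp-content/www.example.com
    <Directory /usr/share/wordpress>
        Options FollowSymLinks
        AllowOverride Limit Options FileInfo
        Require all granted
    </Directory>
    <Directory /srv/www/wp-content/www.example.com>
        Options FollowSymLinks
        Require all granted
    </Directory>
</VirtualHost>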

Why not use a frontend like webmin?

Frontends are often easier at the start: you learn the frontend and that's it. But after a while you need to push the limits, and then you have to learn what is behind the frontend and how the frontend and the backend interact. In the end you have done the learning work twice.