The Developer's Handbook for VPS Setup PDF

Summary

This document is a handbook for setting up a Virtual Private Server (VPS). It covers choosing and renting a VPS, initial setup, securing the server, and more.

Full Transcript


The Developer's Handbook for VPS Setup
Secure, Deploy, and Automate in Just One Day
Marco Melilli

TABLE OF CONTENTS

CHOOSING AND RENTING YOUR VPS
1.1 Understanding VPS Specifications
1.2 How to Rent a VPS
Next Steps

SETTING UP YOUR VPS
2.1 Logging into Your VPS
2.2 Updating Your System
Next Steps

SECURE YOUR VPS
3.1 Create a New User with Admin Privileges
3.1.1 Create a new user
3.1.2 Add the new user to the sudo group
3.1.3 Test the new user's sudo access
3.1.4 Deny Root Login to the Server
3.2 Setting up SSH Keys and Disabling Password Authentication
3.2.1 Creating the Key Pair
3.2.2 Copying the Public Key to the Server
3.2.3 Testing the SSH Key
3.2.4 Disabling Password Authentication
3.3 Changing the SSH Port
3.4 Hardening OpenSSH (Optional)
3.5 Setting Up UFW (Uncomplicated Firewall)
Useful UFW Commands and Tips
3.6 Setting Up Fail2Ban
Monitor and Manage
3.7 Keeping Ubuntu Updated
Next Steps

BUY A DOMAIN AND CONNECT IT TO YOUR VPS
Buy a Domain on Cloudflare Registrar
4.2 Set Up DNS to Point to Your VPS
4.3 Enable DNSSEC for Your Domain
4.4 Enabling SSL with Cloudflare

CONFIGURING DOCKER AND THE REVERSE PROXY
5.1 Installing Docker
5.2 Installing Docker Compose
5.3 Setting Up Traefik as Reverse Proxy
5.3.1 Creating the Traefik Network in Docker
5.3.2 Setup Traefik Dashboard
5.4 Deploying Applications with Traefik
Example Hello World App
Next Steps

DEPLOYING AND AUTOMATING
6.1 Preparing Your VPS for Deployment
6.1.1 Create a Deployment User
6.1.2 Set Up SSH Key for the Deployment User
6.2 Setting Up Your Project on the VPS
6.3 Setting Up GitHub Actions
6.3.1 Set Up GitHub Secrets
6.3.2 Create a GitHub Actions Workflow
6.4 Deploying Your Application
Tips: Accessing Your Database
Next Steps

BONUS TIPS AND TOOLS
7.1 How to Implement Service Auditing on Your VPS
7.2 Automate Database Backups
7.2.1 Create a Cloudflare R2 Bucket
7.2.2 Installing rclone
7.2.3 Creating the Backup Script
7.2.4 Scheduling the Automatic Backup
7.3 Set Up a Telegram Bot for Alert Notifications
7.3.1 Creating a Telegram Bot
7.3.2 Retrieving the Chat ID
7.3.3 Sending Messages via the Bot
Stay Connected

INTRODUCTION

Ever considered having your own space on the internet to host all your projects? A Virtual Private Server (VPS) offers you exactly that: your personal sandbox in the cloud. It provides the freedom and flexibility to deploy, manage, and secure your projects without relying on shared hosting limitations.

This guide is designed for developers looking to set up their own VPS quickly, securely, and efficiently. By the end of this handbook, you'll be able to deploy and automate your side projects with ease. Whether you're a beginner or an experienced developer, you'll find valuable insights on server setup, security, and automation.

By the end of this book, you will be able to:
Configure a VPS.
Implement key security measures to protect your server and data.
Use Docker and Traefik to manage multiple applications.
Automate your deployments using GitHub Actions.
Set up database backups and system monitoring.

I believe in learning by doing, so you won't just be reading - you'll be actively setting things up. By the time you finish this book, you'll have your VPS up and running, with your projects deployed and everything secured.
While server management might not sound exciting at first, there's real satisfaction in setting up your server, deploying your projects, and seeing them live on the internet. It's a rewarding experience that combines technical skills with creativity.

Are you ready to take the leap into the world of VPS management? Let's get started!

CHAPTER 1
Choosing and Renting Your VPS

Before we dive into the technical aspects of setting up and managing a VPS, let's start with the basics: how to choose and rent a VPS that suits your needs.

1.1 Understanding VPS Specifications

When choosing a VPS, you'll encounter various specifications. Here's what they mean:
CPU: The number of virtual cores available for your server. More cores mean more computing power.
RAM: The amount of memory your VPS can use for processing tasks. More RAM helps with running more applications and handling larger workloads.
Storage: The disk space available, usually SSD for better performance.

Your needs will depend on the size of your projects. For most beginners and small projects, a basic plan is usually sufficient. This typically includes:
1–2 CPU cores
2–4 GB of RAM
20–50 GB of SSD storage

As your projects grow or your needs change, you can always upgrade your plan, but many people underestimate how much traffic and how many requests a small VPS can handle.

1.2 How to Rent a VPS

Here are some well-known VPS providers to consider:
Hetzner: Competitive pricing and reliable service, particularly in Europe.
OVH: A wide range of services with data centers worldwide.
DigitalOcean: Popular among developers for its simplicity and extensive documentation.
Linode: High-performance SSD-based VPS solutions.

You can compare them to see what they offer and at what price. Once you've chosen your provider:
1. Sign up for an account on the provider's website.
2. Choose a plan that suits your needs.
3. Choose Ubuntu as the operating system (it's the one we are using in this guide).
4. Choose a data center location close to your target audience for better latency.
5. Complete the payment and wait for the VPS setup (usually takes a few minutes).
6. You'll receive your VPS details, including the IP address and root password.

Next Steps

Once you have your VPS details, you're ready to move on to the next chapter, where we'll guide you through the initial setup process. Remember, the beauty of a VPS is its flexibility. If you find that your chosen plan doesn't meet your needs, you can usually upgrade or downgrade easily. Start small, get comfortable with managing your VPS, and scale as needed.

CHAPTER 2
Setting Up Your VPS

Now that you have your own VPS, it's time to set it up. This chapter will walk you through the initial setup steps, including logging into your VPS and running essential updates.

2.1 Logging into Your VPS

At this point you should have received an email with your server's IP address and root password. Here's how to log in for the first time:
1. Download an SSH client. I recommend a graphical SSH client because it offers a user-friendly interface, session management, and advanced features like file transfer, making it easier and more versatile than using the terminal. Here are some popular free tools:
Termius: macOS, Windows, Linux
MobaXterm: Windows
PuTTY: Windows
2. Set the host and port. Open the client and enter the host (IP) and port provided by your hosting provider.
3. Click Connect and enter your username and password when prompted.
4. If this is your first time connecting to this server, you'll see a prompt asking you to confirm the server's fingerprint. Type 'yes' to continue.

Or, if you don't want to use an SSH client, you can use the terminal:
1. Open your terminal (Command Prompt on Windows, Terminal on macOS or Linux).
2. Use the SSH command to connect to your server:
ssh root@your_server_ip
Replace your_server_ip with the IP address provided by your VPS host.
3. Enter the root password provided by your VPS host when prompted.
4. If this is your first time connecting to this server, you'll see a prompt asking you to confirm the server's fingerprint. Type 'yes' to continue.

You're now logged into your VPS as the root user.

2.2 Updating Your System

WHY UPDATE YOUR SYSTEM?
Keeping your system updated is crucial for several reasons:
Security: Updates often include patches for recently discovered vulnerabilities.
Stability: Bug fixes in updates can improve the overall stability of your system.
Compatibility: Keeping your system updated ensures better compatibility with new software you might want to install later.
Regular updates are a key part of maintaining a healthy and secure VPS.

Updating ensures that you have the latest security patches and software versions. Use the following commands:
sudo apt update
sudo apt upgrade -y

This will update the package lists and install newer versions of packages. After upgrading, reboot the server to apply all changes:
sudo reboot

Your SSH session will be disconnected. Wait a minute or two, then log back in using the SSH command from section 2.1.

Next Steps

In the next chapter, we'll dive into securing your VPS, which is crucial before you start using it for your projects.

CHAPTER 3
Secure Your VPS

Now that our VPS is set up and updated, it's time to prioritize security. In this chapter, I've spent a lot of time summarising all the best practices to help you avoid getting hacked and losing control of your server. While it may seem like a lot to take in, the good news is that these steps only need to be done once. Afterward, you can rest easy knowing your server is secure and well-protected.

3.1 Create a New User with Admin Privileges

WHY CREATE A NEW USER?
Creating a new user with sudo privileges instead of always using root is a best practice for several reasons:
1. Limited scope: Regular users can only make changes to their home directories and personal files by default, reducing the risk of accidental system-wide changes.
2. Audit trail: When multiple people use the server, having individual user accounts helps track who did what.
3. Security: If someone gains unauthorised access to a regular user account, they still won't have full system access without the sudo password.
4. Principle of least privilege: Users should only have the minimum level of access necessary to perform their tasks. Sudo allows for this by requiring a password for privileged operations.

3.1.1 Create a new user

In this example, we'll call the user "sammy", but you can choose any username you prefer:
sudo adduser sammy

You'll be prompted to set and confirm a password for this new user. Make sure to choose a strong, unique password. You'll also be asked to fill in some information about the user (like full name, phone number, etc.). You can skip these by pressing Enter.
3.1.2 Add the new user to the sudo group

To give your new user admin privileges, we need to add them to the sudo group:
usermod -aG sudo sammy

This command adds the user sammy to the sudo group, which allows them to run commands with superuser privileges by using the sudo prefix.

3.1.3 Test the new user's sudo access

Now, let's switch to the new user and test their sudo access. Switch to the new user with the following command (since you're running it as root, you won't be asked for a password):
su - sammy

Once logged in as the new user, test the sudo command:
sudo ls -la /root

This command attempts to list the contents of the root directory, which requires sudo privileges. The first time you use sudo in a session, you'll be prompted for the password of the current user's account. This is a security feature of sudo.

If you can see the contents of the /root directory, congratulations! Your new user has been successfully created and granted sudo privileges. In the next section, we'll look at further securing your VPS by disabling root login and setting up SSH keys.

3.1.4 Deny Root Login to the Server

Now that we have created a new user with sudo privileges, it's time to take our security a step further by disabling root login.

WHY DISABLE ROOT LOGIN?
Disabling root login is a crucial step in securing your VPS for several reasons:
1. Reduced attack surface: The root user is the most powerful account on a Linux system. By disabling direct root login, you're removing a primary target for attackers.
2. Two-step authentication: Even if an attacker guesses a user's password, they still won't have root access without also knowing the sudo password.
3. Audit trail: When root login is disabled, all administrative actions must be performed through sudo, which leaves a clear audit trail of who did what and when.
4. Principle of least privilege: Users should only have the minimum level of access necessary to perform their tasks. Disabling root login enforces this principle.

First, make sure you're logged in as the non-root user we created in the previous section. If you're still logged in as root, exit and log back in:
ssh sammy@your_server_ip_address
Replace sammy with your username and your_server_ip_address with your actual server IP.

BEFORE CONTINUING
Be sure you are able to log in with the non-root user and that you remember its password, because we are now going to disable root access.

We need to modify the SSH daemon configuration file. Open it with a text editor like nano:
sudo nano /etc/ssh/sshd_config
If nano is not installed by default, run:
sudo apt install nano

In the file, look for the line that says PermitRootLogin. It might be commented out with a # at the start of the line. Change this line to:
PermitRootLogin no
If the line doesn't exist, you can add it at the end of the file. In nano, you can save and exit by pressing Ctrl + X, then Y, then Enter.

For the changes to take effect, we need to restart the SSH service:
sudo systemctl restart ssh

Now let's test that everything works. Open a new SSH client window (don't close the existing one yet) and try to log in as root:
ssh root@your_server_ip_address
You should receive a "Permission denied" message. This confirms that root login has been successfully disabled!

Remember, you can still perform administrative tasks using sudo. If you ever need prolonged root access, you can use sudo -i to start a root shell.
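If you want to double-check that the change is active, you can ask the SSH daemon to print its effective configuration (sshd -T dumps the settings sshd is actually running with):

sudo sshd -T | grep -i permitrootlogin
# expected output: permitrootlogin no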
In the next section, we'll further enhance our VPS security by setting up SSH keys for authentication.

3.2 Setting up SSH Keys and Disabling Password Authentication

To enhance security, we will set up SSH key-based authentication and disable password authentication.

WHY LOG IN WITH AN SSH KEY?
Using SSH keys is more secure than passwords because they use strong public key cryptography, making brute-force attacks nearly impossible. Bots frequently scan for server IPs and attempt to guess passwords, but SSH keys, stored locally and never transmitted, provide robust protection against these threats.

3.2.1 Creating the Key Pair

On your local machine (not the VPS), open a terminal and run, from any folder:
ssh-keygen

When prompted, you'll be asked to enter a file name for the key. By default, it suggests "id_rsa", but you can choose another name if desired (e.g., "my_ssh_key"). Next, enter a passphrase (password) to protect the private key: it adds an extra layer of security, ensuring that even if someone gains access to your private key, they can't use it without the passphrase. This is optional, so you can also leave it blank.

The command will automatically save the generated key pair in the default .ssh directory within your home folder (e.g., /home/username/.ssh/ on Linux or macOS, or C:\Users\username\.ssh on Windows). After completing these steps, two files will be generated:
Private key: This will be saved in /your_home/.ssh/id_rsa (or the name you specified). The private key should be kept secret and never shared. It's crucial to create a secure backup, as losing this key means you won't be able to connect to servers that rely on it.
Public key: Stored in /your_home/.ssh/id_rsa.pub. This key can be shared freely and will be uploaded to any servers you want to connect to via SSH.

IMPORTANT
Keep your private key safe, don't lose it, and avoid sharing it with anyone.

3.2.2 Copying the Public Key to the Server

You can transfer your public SSH key to the server in two ways:
1. Using ssh-copy-id (recommended). This is the easiest and most secure method. Run the following command from your local machine:
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@remote_host
-i ~/.ssh/id_rsa.pub: Specifies the path to your public key.
-p 22: Ensures the correct SSH port is used.
username@remote_host: Replace username with your VPS username and remote_host with your server's IP or hostname.
2. Manually. On the VPS, create or open the authorized_keys file to store your public key:
nano ~/.ssh/authorized_keys
Paste the contents of your id_rsa.pub file from your local machine into this file on a new line.

3.2.3 Testing the SSH Key

Once the public key is copied, test the connection by opening a new terminal and running:
ssh username@remote_host

If the setup is correct, you'll be able to log in without entering the VPS password, although you may be prompted to enter the passphrase for your private SSH key (if you set one during key generation).

BEFORE CONTINUING
Before disabling password authentication, ensure that your SSH key-based login is working properly. Otherwise, you could lock yourself out of your server.
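Optionally, you can also save the connection details in your local ~/.ssh/config so you don't have to type them every time. A minimal sketch, assuming the user sammy and the default key name (the host alias myvps is just an example):

Host myvps
    HostName your_server_ip
    User sammy
    IdentityFile ~/.ssh/id_rsa
    # Port 45273   # uncomment and adjust once you change the SSH port in section 3.3

With this in place, running ssh myvps is enough to connect.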
3.2.4 Disabling Password Authentication

To enhance security by ensuring only SSH key-based login is allowed, disable password authentication:
1. On your VPS, open the SSH config file:
sudo nano /etc/ssh/sshd_config
2. Find or add the following line, and set it to 'no':
PasswordAuthentication no
3. Ensure that the PubkeyAuthentication option is enabled (it should be set to "yes"), then save and exit the file:
PubkeyAuthentication yes
4. Restart the SSH service:
sudo systemctl restart ssh
5. Open a new terminal and try logging in again with your SSH key to verify that password authentication has been disabled.

IMPORTANT
Make sure there are no /etc/ssh/sshd_config.d/*.conf files that override your SSH configuration settings, as they could unintentionally re-enable password authentication.

By completing this process, your server will be protected from brute-force password attacks, and only SSH keys can be used to log in.

3.3 Changing the SSH Port

WHY CHANGE THE PORT NUMBER?
Changing the default SSH port (22) to a non-standard one can enhance security by reducing the likelihood of automated attacks. Many bots and malicious scripts frequently scan well-known ports, such as 22 (SSH), 80 (HTTP), 443 (HTTPS), and 21 (FTP). By using a less common port for SSH, your server becomes a less obvious target. Some commonly used alternative SSH ports that you should avoid are 2222, 2200, 222, and 2022. While this is not a foolproof security measure, it can significantly reduce the number of automated attacks.

For previous Ubuntu versions you can check this guide; for Ubuntu 22.10 or later you can follow these commands:
1. Edit the SSH socket file:
sudo nano /lib/systemd/system/ssh.socket
and change the ListenStream line, inserting a value between 1024 and 49151:
ListenStream=45273
2. Reload systemd and restart the SSH service to apply the changes:
sudo systemctl daemon-reload && sudo systemctl restart ssh.service
3. Open a new terminal and try logging in again with the new port number:
ssh -p 45273 username@remote_host

Do not close your current SSH session until you've confirmed that you can log in using the new port, as misconfiguring the port could lock you out of your server.

IMPORTANT
Remember to use the new port when connecting via SSH from now on:
ssh -p 45273 username@remote_host

3.4 Hardening OpenSSH (Optional)

For additional security, you can modify certain SSH settings to reduce potential attack vectors. Edit the SSH configuration file:
sudo nano /etc/ssh/sshd_config

Add or modify these lines:
MaxAuthTries 3
PermitEmptyPasswords no
ChallengeResponseAuthentication no
KerberosAuthentication no
GSSAPIAuthentication no
X11Forwarding no
PermitUserEnvironment no
AllowAgentForwarding no
AllowTcpForwarding no
PermitTunnel no
DebianBanner no

MaxAuthTries 3: Limits the number of failed login attempts before disconnecting, reducing the effectiveness of brute-force attacks.
PermitEmptyPasswords no: Disallows login with empty passwords.
ChallengeResponseAuthentication no: Disables keyboard-interactive authentication methods that can be vulnerable to attacks.
KerberosAuthentication no and GSSAPIAuthentication no: Disable rarely used authentication methods.
X11Forwarding no: Prevents forwarding of X11 (graphical) sessions, reducing attack surface.
PermitUserEnvironment no: Prevents users from modifying the environment variables used by SSH, mitigating the risk of privilege escalation.
AllowAgentForwarding no and AllowTcpForwarding no: Disable forwarding features that could be abused to create malicious tunnels.
PermitTunnel no: Disallows tunneling, which could be used to bypass firewalls.
DebianBanner no: Hides the SSH version and distribution details to prevent attackers from using this information to exploit known vulnerabilities.

These are functionalities you probably don't need and can disable. If you think you may need any of them in the future, you can keep them enabled.

After modifying the SSH configuration, always check for syntax errors before reloading the service to avoid issues:
sudo sshd -t
If no errors are returned, reload the SSH service:
sudo systemctl reload sshd.service

3.5 Setting Up UFW (Uncomplicated Firewall)

UFW is a simple and user-friendly tool for managing firewall rules, providing an easy interface for configuring iptables.

WHY INSTALL A FIREWALL?
UFW provides a straightforward way to control network traffic, allowing you to block unauthorized access to your server. By allowing only necessary connections, such as SSH, HTTP, and HTTPS, and denying everything else by default, you significantly reduce the risk of attacks. This helps protect your server from unauthorized access, brute-force attempts, and other network-based threats, enhancing overall security.

UFW is usually installed by default as part of the base installation. If for some reason it's not installed, you can easily install it with the following command:
sudo apt install ufw

Let's set up UFW for enhanced security:
1. Ensure IPv6 is enabled: To support both IPv4 and IPv6, ensure that IPv6 is enabled in UFW's configuration. Open the UFW configuration file:
sudo nano /etc/default/ufw
Make sure the following line is set to enable IPv6 support:
IPV6=yes
2. Set default firewall policies: This blocks all incoming connections by default, so only the connections you explicitly allow can reach your server:
sudo ufw default deny incoming
sudo ufw default allow outgoing
3. Allow SSH access: Since we changed the SSH port (e.g., to 45273), allow connections to that port:
sudo ufw allow 45273
4. Allow HTTP and HTTPS traffic: Enable access to your web server by allowing traffic on standard HTTP (port 80) and HTTPS (port 443):
sudo ufw allow 80
sudo ufw allow 443

WHY OPEN ONLY HTTP AND HTTPS PORTS?
Ports 80 and 443 will serve as the only entry points for external traffic. All client requests will flow through these ports, with our reverse proxy (Traefik) responsible for routing them to the appropriate applications running in Docker containers. This ensures that all incoming traffic is securely managed and properly directed to these ports, while all other entry points remain closed. We'll set up the proxy in Chapter 5.

5. Enable UFW: Once the rules are set, enable the firewall and don't close the current session window:
sudo ufw enable
6. Check the status: Verify that UFW is running and see the active rules with detailed information:
sudo ufw status verbose
This command displays which ports are open and actively monitored by the firewall.

WARNING
After enabling UFW, it's important to test your server connection from a different session or terminal to confirm that the firewall rules haven't accidentally blocked access. If you find that you're unable to connect in the new session, don't close your current one. From the open session, you can quickly disable UFW to regain access by running:
sudo ufw disable
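For reference, here is the whole firewall setup from this section collected in one place (assuming the example SSH port 45273; adjust it to your own):

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 45273
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable
sudo ufw status verbose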
Useful UFW Commands and Tips

To delete a rule: If you want to remove a specific rule (e.g., HTTP access), you can delete it by referencing the port:
sudo ufw delete allow 80
Alternatively, you can delete a rule by its rule number:
sudo ufw status numbered
sudo ufw delete [rule_number]

To disable or reset UFW: If needed, you can disable UFW temporarily, or reset it to clear all rules and start from scratch:
sudo ufw disable # Disable the firewall
sudo ufw reset # Reset UFW to default state, deleting all rules

3.6 Setting Up Fail2Ban

Fail2Ban is an essential security tool for any server exposed to the internet.

WHY USE FAIL2BAN?
This tool helps protect our VPS against brute-force attacks and other malicious activities by monitoring log files and automatically banning IP addresses that show signs of malicious behavior.

Fail2Ban watches your log files for the services you've set up in jails. A jail is a set of rules for monitoring a specific service (like SSH). It defines what to watch for and what action to take. For example, for SSH, if it sees too many failed attempts from an IP address, it automatically bans that IP for the set time.

Install Fail2Ban:
sudo apt install fail2ban

We are going to create a local configuration file by copying the default configuration:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now, edit the local configuration file:
sudo nano /etc/fail2ban/jail.local

Modify these settings in the [DEFAULT] section:
bantime = 30m
findtime = 10m
maxretry = 3
banaction = ufw
banaction_allports = ufw

The settings provided are a good starting point for fine-tuning the service:
bantime: How long an IP is banned (30 minutes)
findtime: The window of time to look for bad attempts (10 minutes)
maxretry: Number of failures allowed before banning (3 tries)
You may need to adjust bantime, findtime, and maxretry based on your specific needs and the level of attacks you experience.

In the [sshd] section, modify these lines, making sure the port setting matches your actual SSH port:
enabled = true
port = 45273
mode = aggressive

After configuration, start and enable Fail2Ban to run at boot:
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

Verify that Fail2Ban is running correctly:
sudo systemctl status fail2ban

Monitor and Manage

To check the SSH jail status:
sudo fail2ban-client status sshd

It will show the status of the sshd jail and the banned IPs, for example:
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed: 15
|  `- File list: /var/log/auth.log
`- Actions
   |- Currently banned: 2
   |- Total banned: 5
   `- Banned IP list: 10.23.123.123 10.23.123.124

To manually ban or unban an IP:
sudo fail2ban-client set sshd banip 10.23.123.123
sudo fail2ban-client set sshd unbanip 10.23.123.123
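Two more fail2ban-client commands that can come in handy while tuning the configuration:

sudo fail2ban-client status    # list all active jails, not just sshd
sudo fail2ban-client reload    # reload the configuration after editing jail.local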
3.7 Keeping Ubuntu Updated

Regularly updating your server is crucial to ensuring that security vulnerabilities are patched and your system remains stable. Ubuntu provides a convenient tool called unattended-upgrades that allows you to automatically apply security updates without manual intervention. Here's how to set it up:
1. First, ensure your package lists are up to date, then install the unattended-upgrades package:
sudo apt update
sudo apt install unattended-upgrades
This tool automatically installs security updates and can be configured to update all packages if desired.
2. After installation, check that the unattended-upgrades service is running and active:
sudo systemctl status unattended-upgrades.service
3. You should see a message confirming that the service is active. If it's inactive or failed, you can start or restart it:
sudo systemctl start unattended-upgrades.service

By default, unattended-upgrades is configured to install security updates only. If you want to tweak the behavior (e.g., apply all available updates, enable automatic reboots after updates, or set email notifications), you can modify its configuration file:
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

Next Steps

By following these steps, you've significantly improved the security of your VPS! In the next chapter we will set up a domain name that points to your VPS IP address.

CHAPTER 4
Buy a Domain and Connect It to Your VPS

Using a domain to point to your VPS offers several advantages. It makes your server easier to access, and it allows for flexibility: if your VPS IP changes, you can simply update the DNS settings for your domain rather than needing to inform users of a new IP. Most importantly, domains enable SSL certificates for secure connections, as many certificate authorities require a valid domain for issuance.

You can buy a domain from any registrar you want, but I suggest using Cloudflare because it is the cheapest (it only ever charges what it pays to the registry for your domain), and it gives you security features such as DDoS protection and SSL certificates for free. Cloudflare will act as a proxy between users and your server, so you can keep your IP address hidden.

Buy a Domain on Cloudflare Registrar

1. Create a Cloudflare account if you don't have one: https://dash.cloudflare.com/sign-up
2. Once logged in, navigate to "Domain Registration -> Register Domains" in the left sidebar.
3. Search for your desired domain name (e.g., "mydomain.com").
4. If available, add it to your cart and proceed to checkout.
5. Complete the purchase by providing the necessary information and payment.

NOTE
If you already have a domain, you can still add it to Cloudflare and follow the instructions to change the nameservers at your registrar to point to Cloudflare. This way you can take advantage of all the Cloudflare security features for free.

4.2 Set Up DNS to Point to Your VPS

Now that you are the owner of "mydomain.com", you can decide to point https://mydomain.com to your VPS or use a subdomain like "app.mydomain.com", "api.mydomain.com", etc.

TIPS
Using a subdomain for your app could be useful if you plan to use the root domain for a static landing page that you can host completely for free on other services like Cloudflare Pages.

Let's start:
1. After purchase, on your home page you should see the domain you have just bought. Click on it and then go to the DNS -> Records tab on the left.
2. Click Add record to create a new DNS record.
3. Choose A as the record type.
4. For Name, enter @ (represents the root domain) or a subdomain (e.g., "api").
5. For IPv4 address, enter your VPS IP address.
6. Set the proxy status to Proxied: this allows Cloudflare to optimize, cache, and protect all requests to your application, as well as protect your origin server from DDoS attacks.
7. Click Save.

It may take up to 24 hours for DNS changes to propagate globally, but it's often much quicker.

4.3 Enable DNSSEC for Your Domain

DNSSEC (Domain Name System Security Extensions) is an important security feature that adds an extra layer of protection to your domain.

WHY ENABLE DNSSEC?
DNSSEC ensures that the DNS responses your users receive are authentic and haven't been tampered with, and it protects against DNS spoofing attacks, where attackers try to redirect traffic to malicious sites.

1. Log into your Cloudflare dashboard and select your domain.
2. Go to the DNS -> Settings tab.
3. Click on the Enable DNSSEC button.
4. Cloudflare will generate the necessary records.

4.4 Enabling SSL with Cloudflare

Securing your website with SSL/TLS encryption is crucial for protecting user data and improving search engine rankings. Here's how to set it up with Cloudflare:
1. Log into your Cloudflare dashboard and select your domain.
2. Go to the SSL/TLS -> Overview tab.
3. Under Your SSL/TLS encryption mode, select Configure.
4. Select the option Custom SSL/TLS and then Full (strict). This mode ensures end-to-end encryption and verifies the certificate on your origin server, providing maximum security. It requires that the certificate is not only valid but also properly issued by a trusted Certificate Authority (CA).

How will we obtain an SSL certificate? In the upcoming chapters, we'll set up Traefik as a reverse proxy to manage SSL certificate issuance and renewal. Traefik will leverage Let's Encrypt to provide an SSL certificate for our server. To obtain this certificate, Traefik must complete a "challenge," which verifies our control over the domain. This challenge is handled through the ACME (Automatic Certificate Management Environment) protocol, which automates the certificate issuance and renewal process. Meanwhile, Cloudflare will manage the domain verification, confirming our ownership of the domain name.
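Before moving on, you can check that the A record from section 4.2 resolves. One way to do it (note that with the record Proxied, the answer will show Cloudflare edge IPs rather than your VPS IP):

dig +short A mydomain.com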
CHAPTER 5
Configuring Docker and the Reverse Proxy

In this chapter, we'll set up Docker, Docker Compose, and Traefik as a reverse proxy. This configuration will enable you to host multiple applications on your VPS efficiently and securely, with automatic SSL certificate management.

5.1 Installing Docker

Docker is a platform that allows developers to package applications and their dependencies into portable containers. These containers ensure that the application runs the same regardless of the underlying environment, whether on a developer's machine, in a testing environment, or in production. We choose Docker because it allows us to isolate applications, making it possible to run multiple projects on the same server without conflicts. Perfect for our use case!

1. Install prerequisite packages:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
2. Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
3. Add the Docker repository to APT sources:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Update the package database with Docker packages from the newly added repo:
sudo apt update
5. Install Docker:
sudo apt install docker-ce
6. Verify that Docker is running:
sudo systemctl status docker
7. (Optional) To run Docker without sudo, add your user to the docker group:
sudo usermod -aG docker ${USER}
Log out and back in for this to take effect.

5.2 Installing Docker Compose

Docker Compose is a tool that simplifies the management of multi-container Docker applications by defining all services, networks, and volumes in a single configuration file (typically docker-compose.yml). This makes it easy to orchestrate complex environments where multiple services need to interact, like web servers, databases, and background workers. It significantly speeds up the workflow by making service configuration reusable and easily shareable. Let's install it:
1. Download the latest version of Docker Compose. You can check the latest version available on their releases page. At the time of this writing, the most current stable version is 2.29.5.
mkdir -p ~/.docker/cli-plugins/
curl -SL https://github.com/docker/compose/releases/download/v2.29.5/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
2. Make the binary executable:
chmod +x ~/.docker/cli-plugins/docker-compose
3. Verify the installation:
docker compose version

5.3 Setting Up Traefik as Reverse Proxy

To serve multiple applications from a single server you will need a reverse proxy. That's because only one application can listen on a given port at a time. When someone types a URL in the browser, the request goes to the default internet ports:
Port 80: Handles regular, unencrypted HTTP traffic. This is important for initial connections or HTTP-to-HTTPS redirection.
Port 443: Manages encrypted HTTPS traffic, ensuring secure communication between the client and the server.
Of course it is possible to send a request from the browser to a different port if you specify it in the URL, for example mydomain.com:3000, but that's awkward for your visitors.

What is a Reverse Proxy?

A reverse proxy is a server that sits between client requests (such as a web browser) and one or more backend servers, forwarding client requests to the appropriate backend server and then returning the server's response to the client. It is often used to improve security by hiding the details of backend servers and their structure, making them less vulnerable to attacks. So a reverse proxy takes all incoming requests on the default internet ports and routes them to the respective application.

We are going to put the proxy between Docker and the outside world, meaning Traefik (the reverse proxy) will sit in front of our Docker containers, handling incoming client requests. This setup allows Traefik to route requests to the correct Docker service based on domain names, manage SSL certificates, and balance traffic across containers, all while securing and simplifying the interaction between users and our containerized applications.
Do you remember our UFW (firewall) setup? We explicitly left ports 80 (HTTP) and 443 (HTTPS) open because Traefik, as a reverse proxy, uses these ports to handle web traffic. Since Traefik handles the incoming requests on these ports, UFW allows traffic only through 80 and 443 while blocking others. This combination protects your server while ensuring that legitimate web traffic can reach the applications running in Docker containers behind Traefik.

WHY TRAEFIK?
One of Traefik's key strengths is its ability to automatically discover services from platforms like Docker and configure routes for them on the fly without requiring manual configuration. In contrast, Nginx typically requires manual configuration updates when new services are added or changed. Traefik also natively supports Let's Encrypt, providing automatic HTTPS certificate generation and renewal, which reduces the overhead of managing SSL/TLS certificates.

5.3.1 Creating the Traefik Network in Docker

Before setting up the proxy, we need to create a dedicated Docker network called "traefik". This network will allow Traefik to communicate with your applications securely and efficiently:
docker network create traefik

In this way all services/containers connected to it will be able to interact with Traefik, enabling it to route requests to the appropriate containers.

5.3.2 Setup Traefik Dashboard

The first container that we are going to create is the Traefik dashboard, the central place that shows you the current active routes handled by Traefik. To install it we are going to use Docker Compose and run a simple container with the dashboard.

Our Traefik configuration will be saved at the location /srv/traefik and will contain the following files:
1. .env: Contains variables for your specific setup, e.g. domain name and HTTP basic auth credentials for the Traefik dashboard.
2. acme.json: The file where Traefik stores all your HTTPS certificates.
3. certs: A folder containing the aforementioned certificates in a more common format.
4. docker-compose.yml: Docker configuration file containing the Traefik and Traefik cert dumper containers.

Let's start by creating a directory for the Traefik configuration and all the necessary files:
sudo mkdir /srv/traefik && cd /srv/traefik

1. Create a .env file:
sudo nano .env
Add the following content, replacing the values with your own:
CF_DNS_API_TOKEN=your_cloudflare_api_token
ACME_EMAIL=your_email@example.com
DOMAIN_TRAEFIK=traefik.mydomain.com
TRAEFIK_HTTP_BASIC_AUTH=username:$$hashed_password

CF_DNS_API_TOKEN: you need to generate an API token from the Cloudflare dashboard, with these permissions:
Zone / Zone / Read
Zone / DNS / Edit
DOMAIN_TRAEFIK: we need to choose a subdomain to access the dashboard. In this example I chose the subdomain "traefik": traefik.mydomain.com. To achieve that, go to the Cloudflare DNS settings (like we did in section 4.2) and add a new A record named "traefik" (or the one you chose) that points to the server IP (with proxy status enabled).

2. Create the acme.json file:
sudo nano acme.json
Add {} inside this file, and set the correct permissions by running:
sudo chown your_username acme.json # replace your_username with your user name
sudo chmod 600 acme.json
3. And finally we create the docker-compose.yml file:
sudo nano docker-compose.yml
Paste the following content (taking care to maintain the indentation, because YAML files use indentation to indicate the structure and hierarchy of the data):

version: '3.8'
services:
  traefik:
    image: 'traefik:v2.10'
    container_name: traefik
    restart: unless-stopped
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --providers.docker
      - --providers.docker.exposedByDefault=false
      - --providers.docker.network=traefik
      - --api.dashboard=true
      - --api.debug=true
      # Enable a dns challenge named "cfresolver"
      - --certificatesresolvers.cfresolver.acme.dnschallenge=true
      - --certificatesresolvers.cfresolver.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.cfresolver.acme.email=$ACME_EMAIL
      - --certificatesresolvers.cfresolver.acme.storage=/acme.json
      - --certificatesResolvers.cfresolver.acme.dnsChallenge.resolvers=1.1.1.1:53,1.0.0.1:53
    ports:
      - '80:80'
      - '443:443'
    environment:
      CF_DNS_API_TOKEN: '${CF_DNS_API_TOKEN}'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - './acme.json:/acme.json'
    networks:
      - traefik
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.traefik.rule=Host(`$DOMAIN_TRAEFIK`)'
      - 'traefik.http.routers.traefik.service=api@internal'
      - 'traefik.http.routers.traefik.tls.certresolver=cfresolver'
      - 'traefik.http.routers.traefik.entrypoints=websecure'
      - 'traefik.http.routers.traefik.middlewares=authtraefik'
      - 'traefik.http.middlewares.authtraefik.basicauth.users=$TRAEFIK_HTTP_BASIC_AUTH'
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"

networks:
  traefik:
    name: traefik
    external: true

If you know the basics of Docker and Docker Compose, you should be able to understand this configuration. This docker-compose file will create one container named traefik; here are the most important parameters:

traefik: This section configures the Traefik container.
image: 'traefik:v2.10': Specifies that Traefik version 2.10 will be used.
container_name: traefik: Names the container "traefik".
restart: unless-stopped: Ensures that Traefik automatically restarts unless manually stopped.
command: This block provides command-line arguments to Traefik when it starts. These include:
- Entry points for HTTP (:80) and HTTPS (:443)
- Use a Docker network named "traefik" (the one we created in section 5.3.1)
- Enable Traefik's web dashboard and debugging.
- Define a DNS challenge resolver for obtaining SSL certificates through Let's Encrypt.
- Store ACME certificates in a local acme.json file.
ports: Maps the host ports to the container ports, exposing Traefik's HTTP (80) and HTTPS (443) services to the outside world.
environment: CF_DNS_API_TOKEN passes the Cloudflare API token to Traefik, used for managing DNS records in the SSL certificate process. The value is provided via the .env file we created before.
volumes: Defines data shared between the host and container.
/var/run/docker.sock:/var/run/docker.sock:ro: This gives Traefik access to the Docker socket so it can detect and manage Docker services.
./acme.json:/acme.json: Mounts a local file to store SSL certificate data securely.
networks: Connects the Traefik container to the external Docker network named "traefik" (the one we created at the beginning of the chapter), enabling communication between Traefik and other Docker containers.
labels: These are Docker labels that configure Traefik's behavior for routing and security.
traefik.enable=true: Ensures Traefik will manage this container.
traefik.http.routers.traefik.rule=Host(`$DOMAIN_TRAEFIK`): Defines that the Traefik dashboard will be accessible via the domain defined in the .env file.
traefik.http.routers.traefik.tls.certresolver=cfresolver: Configures the certificate resolver for HTTPS connections (we use the DNS challenge method via Cloudflare to obtain and renew SSL certificates from Let's Encrypt).
traefik.http.routers.traefik.entrypoints=websecure: Ensures the dashboard is only accessible via HTTPS (port 443).
traefik.http.middlewares.authtraefik.basicauth.users=$TRAEFIK_HTTP_BASIC_AUTH: Configures basic authentication with credentials provided via the .env file.
traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https: Redirects HTTP requests to HTTPS for better security.
traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https: Adds a header so HTTPS requests are forwarded properly.
networks:
name: traefik: Defines the Docker network name.
external: true: Indicates that this network is created outside of this specific docker-compose file (it already exists; in fact we created it in section 5.3.1).

4. Now check that the docker-compose.yml is correct by running:
docker compose config
and start the container:
sudo docker compose up -d

You should be able to access the dashboard by opening your browser and going to traefik.mydomain.com; enter your credentials and you are in!

5.4 Deploying Applications with Traefik

With Traefik now set up and handling SSL certificates, routing, and automation, deploying additional containers becomes straightforward. All you need to do is add the necessary labels to your Docker Compose files, and Traefik will take care of the rest.

Example Hello World App

Here's how you can deploy a simple "Hello World" app using Docker and Traefik. Create a new docker-compose.yml file with the following content:

version: '3.8'
services:
  hello-world:
    image: nginxdemos/hello
    container_name: hello-world
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.services.hello-world.loadbalancer.server.port=80'
      - 'traefik.http.routers.hello-world.rule=Host(`hello.mydomain.com`)'
      - 'traefik.http.routers.hello-world.entrypoints=websecure'
      - 'traefik.http.routers.hello-world.tls=true'
      - 'traefik.http.routers.hello-world.tls.certresolver=cfresolver'
    networks:
      - traefik

networks:
  traefik:
    name: traefik
    external: true

This simple configuration will run a "hello world" container, and thanks to the labels Traefik will route requests for hello.mydomain.com to this container, using HTTPS with automatic certificate management. For each new application you have to add a new A record in the Cloudflare DNS settings to point the new subdomain at your server. So in this example you need to add the "hello" subdomain; this way the user will be directed to the server and the reverse proxy will take care of the request.

That's it! For each new application, you just need to add similar labels to make it accessible via Traefik with automated SSL certificate management.
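To try it out, start the stack from the folder containing this docker-compose.yml and check that the route answers over HTTPS. A quick check, assuming you already added the "hello" A record in Cloudflare:

sudo docker compose up -d
curl -I https://hello.mydomain.com    # should return an HTTP status line once the certificate has been issued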
Next Steps

Congratulations! You've configured your VPS with Traefik and Docker. These setup steps only need to be done once. Moving forward, deploying new applications will be as simple as adding the correct labels. In the next chapter, we'll explore automating deployments for your projects using GitHub Actions and Docker Compose.

CHAPTER 6
Deploying and Automating

In this chapter, we'll explore how to automate your deployment process using GitHub Actions and Docker Compose. This setup allows you to automatically build and deploy your application whenever you push changes to your GitHub repository. This is one of many ways to automate deployment, so choose the one you prefer or the tools you already use. Considering that your project is probably on GitHub, I think GitHub Actions are simple and straightforward, allowing us to do everything without external tools. In any case, I recommend you follow and understand this chapter so that you can learn, at a high level, the steps needed to automate deployments on your VPS.

How does it work?

GitHub Actions allow you to automate your software development workflows right in your GitHub repository. We'll set up a workflow that builds a Docker image of your application and pushes it to GitHub Container Registry (ghcr.io). This workflow will be triggered when you push a new commit to your repository on GitHub (the master branch in our example). The workflow follows these steps:
1. Build your project into a Docker image.
2. Push the image to the GitHub registry.
3. Connect to your VPS using SSH.
4. Run docker-compose up, which pulls the Docker image just built and runs all the containers needed by your project.

It's great, isn't it? This way, after the first setup you will not have to worry about anything except developing your project and committing to GitHub!

BEFORE CONTINUING
We assume that your project is already configured with a Dockerfile to be containerized, and that you already have a docker-compose.yml file that runs all the containers you need for your project, like the backend, db, etc. All this is beyond the scope of the book, but I will try to address it in as much detail as possible.

6.1 Preparing Your VPS for Deployment

Before setting up the GitHub Action, we need to allow GitHub to connect safely to our VPS. Of course we will not use your main user "sammy", because we don't want to share our private credentials with a third-party company. For this reason we will create a new user called github with limited permissions, which we can easily get rid of in case its credentials are compromised.

6.1.1 Create a Deployment User

Log in to your VPS and create a new user "github":
sudo adduser github
And add it to the Docker group, so that it has permission to create and run Docker containers:
sudo usermod -aG docker github

6.1.2 Set Up SSH Key for the Deployment User

To connect to our VPS, we need to generate an SSH key for the "github" user (remember that we have disabled password login). Follow the steps in section 3.2.1 again and run the ssh-keygen command to generate a private and public key on your local machine. This time, don't set a passphrase.
Private key: This will be saved in /your_home/.ssh/id_rsa (or the name you specified). Save this key and have it ready because we will need it in the next section.
Public key: Stored in /your_home/.ssh/id_rsa.pub. Following section 3.2.2, copy the public key to the github user's authorized_keys file on the VPS.
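One way to do this end to end is sketched below; it assumes a dedicated key named github_deploy and uses the manual copy method from section 3.2.2 (ssh-copy-id needs password login, which we disabled):

# On your local machine: generate a dedicated key pair with no passphrase
ssh-keygen -f ~/.ssh/github_deploy -N ""

# On the VPS (logged in as your sudo user): install the public key for the github user
sudo mkdir -p /home/github/.ssh
sudo nano /home/github/.ssh/authorized_keys   # paste the contents of github_deploy.pub here
sudo chown -R github:github /home/github/.ssh
sudo chmod 700 /home/github/.ssh
sudo chmod 600 /home/github/.ssh/authorized_keys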
On your VPS, create a directory for your first application "myapp", in the same folder where we put the Traefik configuration:

sudo mkdir /srv/myapp

and create two files inside:

1. .env: used to store sensitive data such as passwords, API credentials, and other information that should not be written directly in code.
2. docker-compose.yml: defines and runs multi-container Docker applications. It specifies the services, networks, and volumes your app requires, allowing you to manage them together.

This is an example for a small project with two containers, Node.js and MySQL:

1. .env:

# MYSQL
MYSQL_USER=XXXXX
MYSQL_ROOT_PASSWORD=XXXXX
MYSQL_PASSWORD=XXXXX
MYSQL_DATABASE=myapp_db

# NodeJS App
PORT=3000
DB_HOST=myapp-db # db container name
DB_PORT=3306
DB_SCHEMA=myapp_db
...

2. docker-compose.yml:

version: '3.8'

services:
  backend-api:
    container_name: backend-api
    image: ghcr.io//:master
    depends_on:
      - myapp-db
    env_file:
      - .env
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.nest-api.rule=Host(``)'
      - 'traefik.http.routers.nest-api.entrypoints=websecure'
      - 'traefik.http.routers.nest-api.tls=true'
      - 'traefik.http.routers.nest-api.tls.certresolver=cfresolver'
    networks:
      - traefik

  myapp-db:
    image: mysql
    container_name: myapp-db
    restart: always
    ports:
      - '127.0.0.1:3306:3306' # expose the port only to localhost so we can access the db through a tunnel or a VPN
    env_file:
      - .env
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - traefik

volumes:
  mysql:
    name: myapp-db

networks:
  traefik:
    name: traefik
    external: true

Replace the repository path and the container names with your actual configuration.

What can you notice here?

1. We created two containers: backend-api and myapp-db. You should be able to understand all the Docker keys, but the important ones worth noting are the labels related to Traefik:

- traefik.enable=true: Enables Traefik for this service, meaning Traefik will manage routing for the backend-api container.
- traefik.http.routers.nest-api.rule=Host(****): Defines the routing rule for Traefik. This rule says that if a request comes to the hostname api.myapp.com, Traefik should route the traffic to this service (backend-api).
- traefik.http.routers.nest-api.entrypoints=websecure: Specifies the entry point for this service as websecure, so it handles HTTPS traffic (port 443).
- traefik.http.routers.nest-api.tls.certresolver=cfresolver: Specifies the certificate resolver cfresolver; this is how Traefik automatically obtains and renews SSL certificates for api.myapp.com.

2. Networks: Neither container exposes its port (3000 and 3306) to the internet; the database port is only bound to 127.0.0.1 (more on this later), and Traefik manages the traffic and routes requests to the correct service internally. All the containers can communicate through the network named "traefik", which we configured in the previous chapters. In this scenario Traefik routes the traffic to the backend-api container (through the labels we set), and the backend-api container can communicate with the db.
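Once this stack is running (it will be started by the GitHub Action, or manually with docker-compose up -d as shown in section 6.4), you can verify that everything joined the shared network. A couple of illustrative commands, assuming the container names from the example above:

# list the containers attached to the "traefik" network
docker network inspect traefik --format '{{range .Containers}}{{.Name}} {{end}}'
# expected to include traefik, backend-api and myapp-db

# check that both services are up
cd /srv/myapp
docker-compose ps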
6.3 Setting Up GitHub Actions

6.3.1 Set Up GitHub Secrets

To use this workflow, we need to set up some secrets in your GitHub repository:

1. Go to your GitHub repository -> Settings -> Secrets and variables -> Actions
2. Add the following secrets:

- SSH_SERVER_HOST: Your VPS IP address
- SSH_SERVER_PORT: Your SSH port (we used 45273 for the tutorial in chapter 3)
- SSH_USERNAME: The username for SSH access (we created the user github for this purpose in the previous section)
- SSH_PRIVATE_KEY: The SSH private key for accessing your VPS (paste the private key we created in section 6.1.2)

6.3.2 Create a GitHub Actions Workflow

1. Open your project root folder in your IDE and create a new directory: .github/workflows
2. Inside the workflows directory, create a new file named build-and-deploy.yml
3. Copy the following content into the file:

name: Docker Image CI

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - name: Deploy to VPS
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_SERVER_HOST }}
          port: ${{ secrets.SSH_SERVER_PORT }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
            docker pull ghcr.io/mygithubuser/myrepository:${{ steps.meta.outputs.version }}
            cd /srv/myapp
            docker-compose up -d

Replace the repository path and the container names with your actual configuration. All the values under "secrets" and "github" are retrieved automatically by GitHub (we set these values in section 6.3.1).

This workflow does the following:

1. Triggers on pushes and pull requests to the master branch.
2. Logs in to the GitHub Container Registry using secrets.GITHUB_TOKEN, which is automatically created at the start of each workflow job by GitHub. If you go to your repository Settings -> Actions -> Workflow permissions, you can restrict the default permissions to read only, and then specify more granular permissions with the permissions: key. In this GitHub Action we set contents: read and packages: write.
3. Builds a Docker image from the Dockerfile in the root of the project (context: .) and pushes the image to GitHub Container Registry.
4. Connects to your VPS through SSH, logs in to the GitHub Container Registry, pulls the latest image just built, and brings up all the containers defined in the docker-compose.yml file on the server.

6.4 Deploying Your Application

Congratulations! With everything set up, your application will now deploy automatically whenever you push to the master branch of your GitHub repository. The GitHub Action will build a new Docker image, push it to GitHub Container Registry, and update your VPS to use the new image.

To manually trigger a deployment, you can run:

cd /srv/myapp
docker-compose up -d

This command pulls the latest image and starts (or restarts) your containers.

Tips: Accessing Your Database

In the example above the database runs in a container and no port is exposed to the internet. However, you may have noticed these lines:

ports:
  - '127.0.0.1:3306:3306'

This means that port 3306 is exposed only to the local machine, so you can access it from the VPS itself. You could still decide to expose it to the internet or route the traffic through Traefik, but since the backend runs on the same machine, I suggest you keep this configuration and access your database through a tunnel:

1. Use an SSH tool (like Termius on Mac) to create a tunnel, forwarding the database port from your VPS to your local machine (a plain ssh example is shown right after this list).
2. Once the tunnel is set up, you can use tools like MySQL Workbench (or other tools to interact with your db) to connect to the database as if it were running locally.
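If you prefer the plain OpenSSH client over a GUI tool, a minimal sketch of such a tunnel; the port 45273 and the user sammy are the examples used earlier in this guide, and your_server_ip is a placeholder:

# forward local port 3306 to port 3306 on the VPS loopback interface
ssh -p 45273 -L 3306:127.0.0.1:3306 sammy@your_server_ip
# while this session stays open, MySQL Workbench can connect to 127.0.0.1:3306
# as if the database were running on your own machine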
Alternatively, you could set up a VPN (like OpenVPN or Tailscale) to securely access your VPS network, allowing direct access to the database.

Next Steps

Your project is now up and running! In the next chapter, we'll discuss how to set up automated backups to ensure your data is always safe, along with other bonus tips.

CHAPTER 7
Bonus Tips and Tools

7.1 How to Implement Service Auditing on Your VPS

Regularly auditing your network services is crucial for maintaining security. To audit the network services running on your system, use the ss command to list all the TCP and UDP ports that are in use on the server. An example command that shows the program name, PID, and addresses being used for listening for TCP and UDP traffic is:

sudo ss -plunt

The p, l, u, n, and t options work as follows:

- p shows the specific process using a given socket.
- l shows only sockets that are actively listening for connections.
- u includes UDP sockets (in addition to TCP sockets).
- n shows numerical values instead of resolving service and host names.
- t includes TCP sockets (in addition to UDP sockets).

Key points to note:

- 0.0.0.0 means the service is listening on all IPv4 interfaces.
- [::] means the service is listening on all IPv6 interfaces.

In the output you should notice the sshd service listening on port 45273 and the reverse proxy listening on ports 443 and 80. Review this output regularly and disable any unnecessary services, or restrict them to specific interfaces if possible (being careful not to disable essential system services, or Docker container ports that are only accessible internally and not exposed to the outside).

7.2 Automate Database Backups

In this section, we'll walk through the process of automating database backups using rclone to upload backups directly to Cloudflare R2, as well as scheduling the process with cron jobs.

WHY CHOOSE CLOUDFLARE R2?
We've already set up a Cloudflare account, and its R2 service stands out as an excellent cloud storage option. R2 offers zero egress fees, meaning you won't be charged for data retrieval, leading to significant cost savings. Plus, its full S3 compatibility ensures seamless integration with existing tools and workflows.

7.2.1 Create a Cloudflare R2 Bucket

To interact with Cloudflare R2 for backups, you'll need your R2 Access Key and Secret Key. Follow these steps to retrieve them:

1. Log in to your Cloudflare dashboard.
2. Navigate to the "R2" section and create your R2 bucket named "myapp".
3. Go to "Manage R2 API Tokens". Here you can create a new API token or use an existing one. Ensure the token has permission to write to your R2 bucket ("Admin Read & Write") and limit it to the specific bucket where you want to store your backups.
4. Once created, you'll receive an Access Key, Secret Key, and an Endpoint URL. These will be used to configure rclone.
7.2.2 Installing rclone

On Ubuntu you can install rclone using the package manager:

sudo apt install rclone

Once rclone is installed, you'll need to configure it to connect to Cloudflare R2 storage:

rclone config

This starts an interactive configuration process. Follow the prompts:

1. Choose n for new remote.
2. Name your remote (e.g., cloudflareR2).
3. Select the storage type by choosing the S3 Compatible storage option (it's typically 5).
4. When prompted, choose Cloudflare R2 Storage as the provider.
5. You can choose to authenticate using environment variables or enter your Cloudflare R2 Access Key and Secret Key directly. Ensure that these are stored securely if entered directly (e.g., in a .env file).
6. Leave the region blank or set it to auto unless you have a specific region to use.
7. Enter the Cloudflare R2 endpoint: https://<account_id>.r2.cloudflarestorage.com (you can find it in your R2 dashboard).
8. Keep the default settings for the advanced configuration by typing n.
9. Confirm the configuration and quit (typing q).

Test the configuration:

rclone ls cloudflareR2:your-bucket-name

If the connection works, you should see the contents of your R2 bucket! If not, review your configuration for any issues with keys, endpoint, or permissions.

Now we are going to create the script that will perform the backup and upload it to the bucket.

7.2.3 Creating the Backup Script

To enhance the security and maintainability of your backup script, you can store sensitive information such as database credentials in a .env file, or you can reuse the existing .env file of your project. Example:

DB_HOST=
MYSQL_USER=
MYSQL_PASSWORD=
DB_SCHEMA=

Create and edit the backup script in any folder you want:

touch backup_script.sh
sudo nano backup_script.sh

Add the following content:

#!/bin/bash

# Load environment variables
source "$(dirname "$0")/.env"

# Backup file details
BACKUP_FILENAME="backup_$(date +%Y%m%d%H%M%S).sql.gz"
BACKUP_PATH="/srv/myapp/${BACKUP_FILENAME}"
RCLONE_REMOTE_NAME="cloudflareR2" # The name you gave your Cloudflare R2 remote in rclone config
BUCKET_NAME="myapp"
REMOTE_BUCKET_PATH="/backup_db"

# Backup and compress the database
docker exec ${DB_HOST} /usr/bin/mysqldump --no-tablespaces -u ${MYSQL_USER} --password=${MYSQL_PASSWORD} ${DB_SCHEMA} | gzip > ${BACKUP_PATH}

# Upload the backup to Cloudflare R2 using rclone
rclone copy ${BACKUP_PATH} ${RCLONE_REMOTE_NAME}:${BUCKET_NAME}${REMOTE_BUCKET_PATH}

# Check if rclone command was successful
if [ $? -eq 0 ]; then
  echo "Backup uploaded successfully."
  rm ${BACKUP_PATH} # Optionally, remove the local backup file if it exists

  # Deletes files older than 30 days in your specified bucket [optional]
  rclone delete --min-age 30d ${RCLONE_REMOTE_NAME}:${BUCKET_NAME}
else
  echo "Failed to upload backup."
  # [TODO]: Here you can choose to send an email or notification to yourself, so you know there is a problem with the backup
fi

Replace all the paths and the bucket name according to your configuration; the other secrets are read automatically from the .env file, making it easier to update your settings without modifying the script directly.
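Since this .env file holds database credentials, it is worth making sure only your user can read it. An illustrative command, assuming the script and its .env both live in /srv/myapp:

chmod 600 /srv/myapp/.env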
Now, if you want to run the backup script manually, make it executable:

sudo chmod +x backup_script.sh

and run it with:

./backup_script.sh

The script dumps the entire database to the specified folder, copies it to the Cloudflare bucket, and finally (optionally) deletes the local copy and cleans the bucket, keeping only the most recent files.

7.2.4 Scheduling the Automatic Backup

To automate the backup process, schedule the script using a cron job:

1. Open your crontab file:

crontab -e

2. Add a line to run the script at your desired time, e.g., 2 AM daily:

0 2 * * * /srv/myapp/backup_script.sh

7.3 Set Up a Telegram Bot for Alert Notifications

Telegram bots are an excellent way to receive instant notifications about your server's status.

7.3.1 Creating a Telegram Bot

1. Search for BotFather in Telegram and open its chat. Make sure it has the blue verification tick to confirm its authenticity.
2. Send /newbot to create a bot (check the output of the /start command first in case the command name has changed).
3. BotFather will reply with a chat link for the new bot and an HTTP API token. Keep the API token secure, as anyone with the token can send messages through your Telegram bot. E.g.: 532613213321:AAHDSdsdas_hmcdflasdoidsa

7.3.2 Retrieving the Chat ID

1. Send any message to the bot.
2. Get the chat ID of the conversation using the bot API. We will use curl from a terminal:

curl https://api.telegram.org/bot<YOUR_API_TOKEN>/getUpdates

3. Fetch the chat ID from the JSON output. It should be under result[0].message.chat.id

7.3.3 Sending Messages via the Bot

Now, every time you want to send a message notification to your phone, you can use this curl command:

curl -X POST https://api.telegram.org/bot<YOUR_API_TOKEN>/sendMessage -H 'Content-Type: application/json' -d '{"chat_id": "<YOUR_CHAT_ID>", "text": "Your message here"}'

You can incorporate this into your scripts to receive notifications about important events or errors. For example, in the backup_script.sh script we created before you can add the following lines:

......
# Check if rclone command was successful
if [ $? -eq 0 ]; then
...
else
  echo "Failed to upload backup."
  # Send Telegram Notification
  curl -X POST https://api.telegram.org/bot<YOUR_API_TOKEN>/sendMessage -H 'Content-Type: application/json' -d '{"chat_id": "139XXXX22", "text": "Upload Backup MyAPP DB Failed"}'
fi

CONCLUSION

Congratulations! You've made it to the end. Let's take a moment to reflect on the journey we've been through and the valuable skills you've acquired. Throughout this guide, we've covered a wide range of essential topics for managing a Virtual Private Server:

1. We started with the basics of choosing and renting a VPS.
2. We dove deep into setting up your VPS, covering everything from initial login to system updates.
3. Security was a major focus, as we explored creating new users, setting up SSH keys, and implementing firewalls.
4. Finally, we tackled the practical aspects of configuring Docker and setting up a reverse proxy, giving you the tools to deploy multiple projects efficiently.

By now, you should feel confident in your ability to set up, secure, and manage your own VPS. You've gained practical skills that are highly valued in the job market and essential for any serious developer working on web projects.

Remember, the world of server management is constantly evolving, so there's always more to learn. I will try to keep this guide updated, but I encourage you to keep exploring and experimenting with your own server!
Stay Connected

Did you find any errors, do you need some help, or do you just want to share some feedback? Please contact me at [email protected]

If you found this guide helpful and want to stay updated on more tips, tricks, and insights about DevOps, full stack development, indie hacking, and building digital products, follow me on X @marcomelilli. I regularly share valuable content, answer questions, and engage with the developer community.

I'm looking forward to receiving feedback about this guide and staying connected with like-minded professionals like you!

Thank you for choosing this guide. I hope it has been a valuable resource for your development journey and future projects!

Happy coding, and may your servers always stay up and secure!
