So, you have a Raspberry Pi or an extra computer, and you're thinking you'd like to set up a blog on it. Then you realize that your broadband connection may not have a static IP address. It's okay. You can do it all. It is important to note that while you absolutely can set up a website at home, not all ISPs support it. Some explicitly forbid it; others allow it so long as you aren't using tons of bandwidth. At any rate, read your service terms. The blog we will be setting up will take markdown files, render them to HTML on the fly, and updates are as easy as an upload: just use SFTP to drop the files into /var/www/public/posts to post to the site.
DNS
Every registrar has different features and different setup steps, so I can't cover all of them, but you can easily get dynamic DNS service at freedns.afraid.org. You can sign up for free, and there are free subdomains available. Most importantly, once you're set up you can use a simple curl command in cron to update your IP address:
crontab -e
And then:
0 0 * * * curl https://sync.afraid.org/u/CyTXMbtq5cPnLjEg5vKHTPDE/
Something similar to that (the exact URL depends on your account and subdomain and all that jazz) will update the IP address of whatever domain you choose. The key, if you want to use some other service, is to look for dynamic DNS: you need a way to auto-update your domain's IP address any time your home address changes.
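If you'd rather not hit the update endpoint on every cron run, a small wrapper can cache the last known address and only call out when it changes. This is just a sketch: ifconfig.me as the IP-lookup service, the cache path, and the YOUR_TOKEN_HERE update URL are all placeholders you'd swap for your own.

```shell
#!/bin/sh
# Sketch: only call the DDNS update URL when the public IP has changed.
# ifconfig.me and the update URL below are placeholders -- use your own.

CACHE="${CACHE:-/tmp/last_public_ip}"

# ip_changed ADDR -- returns 0 (changed) if ADDR differs from the cached
# value, and records the new value; returns 1 if nothing changed.
ip_changed() {
    previous=$(cat "$CACHE" 2>/dev/null)
    [ "$1" = "$previous" ] && return 1
    printf '%s' "$1" > "$CACHE"
    return 0
}

# update_ddns -- look up the current public IP and ping the DDNS endpoint
# only when it actually changed.
update_ddns() {
    current=$(curl -s https://ifconfig.me) || return 1
    if ip_changed "$current"; then
        curl -s "https://sync.afraid.org/u/YOUR_TOKEN_HERE/"
    fi
}
```

Save it as, say, /usr/local/bin/update-ddns.sh with a final line that calls update_ddns, and schedule it more aggressively than the daily curl, e.g. */5 * * * * /usr/local/bin/update-ddns.sh.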
SYSTEM SOFTWARE
For the most part, any Linux distribution will work for our purposes, but as most people use Debian/Ubuntu, I will assume that's what you're using.
sudo apt install nginx fcgiwrap certbot git markdown -y
Nginx is the web server, fcgiwrap lets nginx execute CGI scripts (over FastCGI) and capture their output so nginx can serve it on ports 80/443, certbot gets us TLS certificates, git will grab the code for the site that we need, and markdown translates markdown files to HTML.
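As a quick sanity check that the renderer works, you can feed it a file by hand; on Debian the markdown package installs the Discount implementation, which prints HTML on stdout. The /tmp paths here are just for the demo:

```shell
# Render a throwaway markdown file by hand -- the blog engine does the same
# thing per request. Guarded so the demo is a no-op where markdown is absent.
printf '# Hello\n\nA *test* post.\n' > /tmp/hello.md
if command -v markdown >/dev/null 2>&1; then
    markdown /tmp/hello.md    # emits <h1>Hello</h1> etc. on stdout
fi
```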
With that software installed, we now need to configure nginx and get our TLS certificate.
First, open the site configuration in an editor: sudo vim /etc/nginx/sites-available/example.com.conf
server {
    listen 80;
    listen [::]:80;
    root /var/www/public;
    server_name example.com www.example.com;
    location ~ ^/\.well-known/acme-challenge(.*) { default_type text/plain; allow all; }
    location ~ ^/(.*) { return 302 https://$host$request_uri; }
    access_log off;
    error_log off;
}
This configuration will just let us get our certificate and redirect all non-TLS traffic to TLS. Enable it with: sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/; sudo nginx -t && sudo nginx -s reload
and we can now proceed to get the certificate.
sudo certbot certonly --webroot -w /var/www/public -d example.com -d www.example.com --dry-run
If that worked, you can run it again without --dry-run:
sudo certbot certonly --webroot -w /var/www/public -d example.com -d www.example.com
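Let's Encrypt certificates expire every 90 days. On Debian/Ubuntu, certbot installs a systemd timer that renews them automatically, but nginx only reads certificates at startup or reload, so it helps to add a deploy hook that reloads nginx after each successful renewal. A sketch, to be run as root; HOOK_DIR defaults to certbot's standard deploy-hook directory, so override it if your layout differs:

```shell
# Install a certbot deploy hook that reloads nginx whenever a certificate
# is actually renewed. HOOK_DIR is certbot's default deploy-hook directory.
HOOK_DIR="${HOOK_DIR:-/etc/letsencrypt/renewal-hooks/deploy}"
mkdir -p "$HOOK_DIR"
cat > "$HOOK_DIR/reload-nginx.sh" <<'EOF'
#!/bin/sh
nginx -s reload
EOF
chmod +x "$HOOK_DIR/reload-nginx.sh"
```

Afterwards, sudo certbot renew --dry-run should run the renewal end to end without errors.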
With the certificate in place, we now need to add our TLS site configuration to nginx. Re-open that config file and add this to the end:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;
    root /var/www/public;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_ciphers ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES256:ECDH+AES128:!aNULL:!SHA1:!AESCCM;
    charset utf-8;
    index index.sh index.cgi index.html;
    location ~* \.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|html|woff)$ {
        expires 2d;
        add_header Cache-Control "public, no-transform";
    }
    location ~ (\.cgi|\.py|\.pl|\.lua|\.sh)$ {
        fastcgi_pass unix:/run/fcgiwrap.socket;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300s;
        fastcgi_send_timeout 600s;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    log_not_found off;
}
Then, just test the configuration and reload nginx: sudo nginx -t && sudo nginx -s reload
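Before installing the blog engine, it's worth confirming the nginx → fcgiwrap → CGI pipeline with a throwaway script. A CGI response is just an HTTP header block, a blank line, then the body; this writes a minimal one to /tmp so you can run it by hand first. To test the full stack, copy it into /var/www/public, make it executable, fetch /test.cgi in a browser, then delete it.

```shell
# Write a minimal CGI and execute it directly to see the raw response.
cat > /tmp/test.cgi <<'EOF'
#!/bin/sh
# Header block, blank line, body -- that's all a CGI response is.
printf 'Content-Type: text/html\r\n\r\n'
printf '<html><body><h1>CGI works</h1></body></html>\n'
EOF
chmod +x /tmp/test.cgi
/tmp/test.cgi
```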
SITE SOFTWARE
You now need to grab BashBlog.
git clone https://github.com/genbuckturgidson/BashBlog.git
cd BashBlog/
sudo make install
sudo find /var/www/public/ -type f -exec chmod 640 '{}' \;
sudo find /var/www/public/ -type f -iname "*.cgi" -exec chmod 750 '{}' \;
sudo find /var/www/public/ -type d -exec chmod 750 '{}' \;
sudo chown -R yourUserName:www-data /var/www/public
You can customize whatever parts you wish. With the default installation of this software, before any customization, you just toss your markdown files into /var/www/public/posts and the blog engine will handle everything else. That's how this site works. The find/chmod commands and the single chown command are security measures. On any site, you should always try to prevent the user/group that the web process runs as from having any write permissions to the filesystem. Read is required for static files, and execute for executable files (like CGIs) and directories, but that is all. If that is all that is required, then that is all that should be given.
For more information about BashBlog, you can visit its GitHub page.
OTHER CONSIDERATIONS
When running something at home, it is rather important to make sure someone can actually reach it. For this, you will need to set up a few port forwards in your router. In general, you need to make sure that ports 80 and 443 are forwarded to your server. In OpenWrt, this is under Network -> Firewall -> Port Forwards.
If you're not comfortable with opening up your home network, look into Cloudflare Tunnel (formerly Argo Tunnel).
But think about what you just did! You have now allowed people from outside your home onto your network. You now need to make sure that no user input or GET variable can be used to exploit your server and thereby gain further access to your home network. Have fun, be careful, keep on computing!
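To make that concrete for a shell CGI: never interpolate QUERY_STRING into a command or a file path directly. One defensive pattern (get_param here is a hypothetical helper, and the character whitelist is deliberately strict) is to pull out the one parameter you expect and strip every character you didn't ask for, which also kills ../ path-traversal attempts:

```shell
# get_param NAME -- print the value of NAME from QUERY_STRING, reduced to
# [A-Za-z0-9_-] so it can't smuggle shell metacharacters or "../" sequences.
# NAME must be a fixed literal in your script, never user input itself.
get_param() {
    printf '%s' "$QUERY_STRING" \
        | tr '&' '\n' \
        | sed -n "s/^$1=//p" \
        | head -n 1 \
        | tr -cd 'A-Za-z0-9_-'
}
```

A CGI would then do post=$(get_param post) and only ever touch /var/www/public/posts/$post.md; a request for post=../../etc/passwd collapses to the harmless string etcpasswd.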