Hosting a website has been a rite of passage since the 1990s, for those who wish to truly have a voice on the internet. It can be done inexpensively, on commonly available hardware. Any decent operating system (e.g. FreeBSD, OpenBSD, Linux distros) can easily run a web server, and many servers exist (nginx, lighttpd, Apache, OpenBSD’s own httpd and more).
This tutorial will teach you how to set up a secure web server on Arch Linux, using nginx. We will use Let’s Encrypt as our Certificate Authority, enabling the use of encryption via https:// URLs with nginx listening on port 443.
We will also enable HTTP/3, which can operate on UDP port 443 or 8443. HTTP/2 (with TLS) typically operates on TCP port 443, which you will also use.
Let’s Encrypt is a non-profit Certificate Authority run by the Internet Security Research Group. You can learn more about Let’s Encrypt on their website:
https://letsencrypt.org/
You can read about nginx here:
https://nginx.org/
This guide was last updated on 15 October 2025. Arch Linux changes a lot, so please do let us know if any part of this guide breaks. Nginx, though, is pretty conservative about changes, following a Principle of Least Surprise, and Arch is pretty vanilla in how it packages things, so it’s probably OK for the most part.
This guide talks about Arch Linux, but these instructions can easily be adapted for other distros, or FreeBSD. Always read the manual!
One IPv4 address and one IPv6 address for the host. Both IP addresses must be publicly routed, and pingable from the internet.
If you only have IPv4, or only have IPv6, you can adapt accordingly, but you should ideally have both; they should also both come from the same ISP, with the same AS number.
Port forwarding is also acceptable, but ill-advised; if possible, make sure you have direct IP routing.
You must also ensure that ports 80 and 443 are open. IP routing and packet filtering are beyond the scope of this article, but you might check the router section for further guidance.
Specifically, open these ports in your firewall: 80 (TCP), 443 (TCP) and 443 (UDP, for HTTP/3) - see the example below.
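For example, if you happen to use nftables, rules along these lines would open the required ports. This is a minimal sketch; the inet filter table and input chain are assumptions, and must match your actual ruleset:

nft add rule inet filter input tcp dport 80 accept
nft add rule inet filter input tcp dport 443 accept
nft add rule inet filter input udp dport 443 accept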
Nginx has supported HTTP/3 since release 1.25, and Arch always has the latest version of Nginx, so its nginx package does indeed support HTTP/3.
You need A (IPv4) and AAAA (IPv6) records in your DNS configuration, for your domain name, pointing to the IPv4 and IPv6 addresses of the host that will run your web server.
You might consider hosting your own DNS, using the guides provided by Fedfree.
It is assumed, by this tutorial, that you have configured the following:

- example.com. (bare domain) A and AAAA records
- www (www.example.com) A and AAAA records

Example entries (from the ISC BIND zone file used for libreboot.org):
libreboot.org. IN A 81.187.172.132
www IN A 81.187.172.132
libreboot.org. IN AAAA 2001:8b0:b95:1bb5::4
www IN AAAA 2001:8b0:b95:1bb5::4
You may wish to configure DNS CAA (Certificate Authority Authorization) for your domains. Something like this would be placed inside your DNS zone file (the syntax below is for an ISC BIND zone file):
example.com. IN CAA 0 issue "letsencrypt.org"
example.com. IN CAA 0 iodef "mailto:you@example.com"
Where example.com is specified, substitute your own domain name (change the email address as well).
More information is available here:
https://letsencrypt.org/docs/caa/
More information about ISC BIND zone files:
../dns/zonefile-bind.html
The specified email address should ideally match what you provide to certbot, when generating new certificates. By putting CAA records in your zone files, only Let’s Encrypt will be permitted to issue certificates for the domain name.
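Once your zone changes have propagated, you can verify the CAA records with a quick query (substituting your own domain):

dig +short example.com CAA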
Please also refer to Arch Wiki pages:
As root:
pacman -S nginx
You will also install certbot and openssl. As root:
pacman -S certbot openssl
Certbot supports use of ECDSA as opposed to RSA. See:
https://eff-certbot.readthedocs.io/en/latest/using.html#using-ecdsa-keys
That page recommends this document, which tells you how to change your configuration file:
https://eff-certbot.readthedocs.io/en/latest/using.html#config-file
You might put this (example) in your /etc/letsencrypt/cli.ini
config:
key-type = ecdsa
elliptic-curve = secp384r1
rsa-key-size = 4096
This will be relevant later, but please note: regardless of notes on this page about RSA key size, my testing indicates that the default in Arch Linux now is to use ECDSA, not RSA. Therefore, the RSA key size may be irrelevant.
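If you want to confirm which key type a given certificate actually uses, you can inspect it with openssl. The path below assumes a certificate already issued for example.com; adapt it to your own domain:

openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem -noout -text | grep -A1 'Public Key Algorithm'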
The Diffie-Hellman key is used for TLS handshakes between the client’s browser and the nginx server. You can learn more about Diffie-Hellman on Wikipedia:
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
And about TLS here:
https://en.wikipedia.org/wiki/Transport_Layer_Security
Run this command (if you will be serving TLSv1.2 users):
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
NOTE: If you’re using only modern TLSv1.3 (see later sections of the guide), then you will not use this file at all. This file is needed if you will be supporting TLSv1.2 in your setup.
You should now have this file: /etc/ssl/certs/dhparam.pem - please verify its contents. You will later make use of this, while configuring nginx.
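To verify it, you can print the parameters and confirm the key size (generation of a 4096-bit key takes a while, but inspection is instant):

openssl dhparam -in /etc/ssl/certs/dhparam.pem -text -noout | head -n 5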
You may find these resources insightful (read every page/thread), regarding key size for dhparams:
Changing this key every few months would be good practice. You could do it when you renew certificates in certbot.
A key size of 2048 bits is still secure enough, at least until around the year 2028. If you do want to use something stronger, write 4096 instead.
NOTE: You also need to use --rsa-key-size 4096 later on when running certbot, if using this higher RSA key size, but note again that ECDSA is preferable over RSA (for stronger encryption).
Certbot implements the ACME protocol used by LetsEncrypt, interacting with it for the creation, renewal and revocation of certificates. For more information, please refer to the following resources:
Certbot is the reference implementation, but alternative programs are available. This tutorial will make use of certbot, because that is the one recommended by Let’s Encrypt.
If you already have certificates in place, you can skip this step.
We will actually set this up first. If nginx was started when you installed it, you must stop it now:
systemctl stop nginx
Although certbot does provide nginx integration, we will not be using it, because it is not as flexible as is ideal. Instead, we will be using the certonly mode in certbot.
STOP! Before you continue, please ensure that DNS is working. You can try to ping your server on example.com and www.example.com, where example.com is to be substituted with your actual domain name.
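For example, you might check the records and basic reachability like so (substituting your own domain):

dig +short A example.com
dig +short AAAA example.com
ping -c 3 example.com
ping -6 -c 3 www.example.com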
If you’ve already got DNS properly configured, you’re ready to generate your brand new keys.
STOP! DO NOT run these commands straight away; read them first, and keep them for reference:
certbot certonly -d example.com --rsa-key-size 4096
certbot certonly -d www.example.com --rsa-key-size 4096
Read the following sections, and really learn about certbot. When you’re ready to continue, run the above commands (adapted for your purposes).
First, read the entire certbot manual:
https://eff-certbot.readthedocs.io/en/stable/using.html
WARNING: LetsEncrypt’s OCSP service is being shut down.
You may have previously used the --must-staple argument in certbot.
If you’ve previously used this option, you must revoke your certificate and then re-issue it, without OCSP Must Staple enabled. Otherwise, renewals via certbot would be rejected and your site would go offline.
See: https://letsencrypt.org/2024/12/05/ending-ocsp/
In certbot, the default size is 2048 bits. If you’ve generated a 2048-bit dhparam.pem, you should use the default RSA 2048-bit size in certbot as well. If you specified 4096 bits, then you should use that in certbot.
You can pass --rsa-key-size 4096 in certbot for the higher 4096-bit key size, but please do consider performance requirements (especially on high-traffic servers).
RSA key sizes of 2048 bits are still perfectly acceptable, until around the year 2028. Some of the elliptic curve-based ciphers that you’ll configure in nginx, for TLS, also have an equivalent strength of 7680-bit RSA.
When certbot generates a certificate, it will ask you whether you wish to spin up a temporary web server (provided by certbot), or place files in a webroot directory (served by your current httpd, in this case nginx).
This is because LetsEncrypt does, via the ACME protocol, verify that the host machine running certbot has the same IP address as specified by the A/AAAA records in the DNS zone file, for a given domain. It will place files in a directory relative to your document root, showing that you do actually have authority over the host. This is done precisely for authentication purposes.
You do not have to stop nginx every time. We only did it as a first-time measure, but we’ll later configure certbot to work in certonly mode with nginx running; stopping nginx will not be required when renewing certificates. More on this later.
If this is your first time using certbot on this server, certbot will ask you other questions, such as:
If all went well, certbot will tell you that it ran successfully.
You may notice I did two runs:
-d example.com
-d www.example.com
This is down to you. You could do it all in one run:
certbot certonly -d example.com -d www.example.com --rsa-key-size 4096
However, in this instance, it would mean that you have both domains (which are technically different domains) handled by one set of keys. While this may seem efficient, it may prove to be a headache later on.
Check inside this directory: /etc/letsencrypt/live. You should now see directories for example.com and www.example.com, or whatever your domain is.
MAKE SURE to always keep backups of /etc/letsencrypt, on external media. Use of rsync is recommended for backups, as it handles backups incrementally and is generally quite robust. Refer to the rsync manpage for info.
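For example, assuming a backup drive mounted at /mnt/backup (a hypothetical path; adapt to your own setup), something like this would keep an incremental mirror:

rsync -a --delete /etc/letsencrypt/ /mnt/backup/letsencrypt/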
Although it may not be readily apparent from the certbot output, you will now have an account on Let’s Encrypt, as defined by a generated set of keys; losing them could be a headache later, as it may prevent authentication, especially when renewing certificates.
Navigate to the directory /etc/nginx. Inside this directory, you will see a lot of files, and subdirectories.
More general documentation, specific to Arch Linux, can be found here:
https://wiki.archlinux.org/title/Nginx
When you see lines beginning with the # character, please know that they are comments. They do not alter any behaviour; instead, they are used to disable parts of a configuration or to provide annotation.
For example, if the following configuration line were commented like so:
# gzip on;
To uncomment that line, and enable it, you would change it to:
gzip on;
It is important that all config lines end with the ; character, as you will see in the following configuration examples.
Open this file, so that you can study it. It is important that you get to know the default configuration, so that you know what you’re doing later when you learn (from Fedfree) what to change.
For reference, this is the default file that was present on my Arch Linux system when I installed nginx, but with the non-interesting commented lines removed in this version for Fedfree:
#user http;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
# Load all installed modules
include modules.d/*.conf;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
Now we will go through each part of it:
Refer to this line:
#user http;
You should uncomment this, since http is the default http user on Arch Linux, when installing nginx.
Change it to:
user http;
See: https://nginx.org/en/docs/ngx_core_module.html#user
Now observe this line:
worker_processes 1;
You could change 1 to auto, which will make nginx run one worker per CPU thread (every core). Otherwise, 1 means that it will run a single worker; that is likely sufficient for a small personal instance, especially one that is private-only. I changed mine to auto. See: https://nginx.org/en/docs/ngx_core_module.html#worker_processes
Now, regarding these lines:
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
Note that these paths are relative, so they would be written to /etc/nginx/logs/error.log. They record error messages to a dedicated log file, as indicated.
Uncomment these if you wish, but I’ll leave them commented in my setup. Turn them on if you need to debug an issue; otherwise leave them off, because logs can get big fast.
Now refer to these lines:
#pid logs/nginx.pid;
# Load all installed modules
include modules.d/*.conf;
You can probably keep the pid line commented. By default, nginx on Arch writes its PID (process ID) to /run/nginx.pid, and that’s OK. Perhaps, as in the Debian version of this guide, you could explicitly set it to:
pid /run/nginx.pid;
This is what I did.
The include for modules.d loads whatever configs you have in /etc/nginx/modules.d/, which can be used to load various module configs. By default, there’s nothing in there.
Now observe this block:
events {
worker_connections 1024;
}
This specifies the maximum number of simultaneous connections per worker process. If you had eight worker processes, then you could have 8192 connections in total. This default setting is a nice conservative number, and you can probably just leave it alone; 1024 is plenty.
You might consider changing the block as follows:
events {
worker_connections 1024;
multi_accept on;
}
The multi_accept directive, when enabled, allows nginx to accept more than one new connection at a time, per worker process. On an Invidious instance, especially a busy public one that lots of people use, where there are going to be lots and lots of connections per user, you might consider turning this on. I’m turning it on regardless, for my setup.
Now, inside the http {} block, the big one that you see next, we will go through each part accordingly.
Observe these lines:
include mime.types;
default_type application/octet-stream;
Note that the include has a relative path, which is always relative to /etc/nginx when relative. Therefore, this would be the file /etc/nginx/mime.types - and in there, you see a types {} block, with a bunch of entries. These entries contain, per line, first a given MIME type and then (separated by any number of spaces or tabs) a list of file extensions. For example:
application/octet-stream bin lha lzh exe class so dll img iso;
text/plain txt asc text pm el c h cc hh cxx hxx f90 conf log;
In the above examples, a .exe file would be considered a binary, which in typical browsers means it would be presented as a file to download; meanwhile, the text/plain types would be treated as text to be displayed inside a browser window.
You can read more about MIME types here:
https://en.wikipedia.org/wiki/Media_type
The defaults here seem quite sensible and prudent. Change/add some if you wish, but I wouldn’t bother unless you really needed to for a specific application.
The default_type directive specifies the default MIME type, used where none is specified for a given file extension. Again, the defaults are quite sensible and boring. Boring is good.
Next, observe this line:
#access_log logs/access.log main;
If uncommented and therefore enabled, the access_log directive here would specify that server access logs are written to the relative path corresponding to /etc/nginx/logs/access.log - if you were just using this for your Invidious instance, for example, you probably want to leave this turned off. Who knows what crap your users will be watching? You don’t want to know. People are weird. Even people you think you know are probably alien to you, in reality. Leave it disabled.
Next, observe this line:
sendfile on;
Where possible, basically on static files (not dynamic pages or compressed pages/files), the sendfile() function will be used in your kernel. Supported on at least the Linux and FreeBSD kernels, this bypasses the userspace read() and write() functions, instead doing file I/O directly in kernelspace. It can offer significant performance benefits if you’re largely serving static sites, but only if you’re not also using gzip compression (more on this a bit later). More specifically, the sendfile method copies from disk directly to a network socket, rather than buffering anything in memory. This also reduces wear and tear on your memory, especially if hosting lots and lots of static sites with many millions of visitors, though the actual benefit in this regard is likely non-existent in reality.
More info on these pages:
NOTE: OpenBSD does not implement sendfile(), so nginx’s sendfile directive is useless there.
The sendfile directive is covered in more detail here: https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/
In a lot of self-hosting setups where bandwidth might be limited, you’re likely much better off using gzip anyway, in which case sendfile is useless. This is because in such setups, the bottleneck is likely your internet connection, not your CPU/RAM.
I simply left sendfile enabled. If I’m really serving a static file, without compression, then it will benefit me regardless. This might be useful on image files for example, which you typically would not compress with your httpd.
Next, observe this line:
#tcp_nopush on;
^ This option pertains to your network’s MTU and MSS. The MTU defines the maximum size of packets on your network, measured in bytes (on a typical PPPoE network, this might be 1492 or 1500 if jumbo frames are supported). MTU is short for Maximum Transmission Unit.
If a packet exceeds MTU size, it will become fragmented, sending part of it in a new packet. This is handled automatically, by your router.
MSS (Maximum Segment Size) is defined in the TCP header, for any given connection on the internet; it defines the largest size, in bytes, of data that can be sent in a single TCP segment, where the segment consists of a header and then the data part. It is this context that we are most interested in.
The tcp_nopush directive makes nginx wait until the data part is full, per the MSS rule, before sending, so that lots of data can be sent at once, instead of pushing out additional partial packets.
Related: tcp_nodelay, while not set here, can also be set. If set, the last packet will be sent immediately.
Information:
I turned tcp_nopush on, in my setup. I also added tcp_nodelay, turning it on.
Next, observe these lines:
#keepalive_timeout 0;
keepalive_timeout 65;
Leave this as-is. It’s a sensible default. It defines the number of seconds after which a keep-alive session will time out. As it says on the tin. More info: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
Next, observe:
#gzip on;
For your private server, you might just leave it commented, and therefore disabled. Uncomment it to enable gzip compression. See: https://nginx.org/en/docs/http/ngx_http_gzip_module.html
If you do turn on gzip, you might also add these sensible configurations:
gzip_vary on;
gzip_proxied any;
These are useful for reverse proxies; I’m setting up Nginx on Arch Linux so that I can run my own private Invidious instance, with Nginx as its reverse proxy, so I turned these and gzip all on.
Now I’ll copy the whole server block for you to consider:
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
The listen line means it listens on port 80, as in the above example. You might also have it listen on IPv6.
Below the listen 80 line, add this:
listen [::]:80;
Next, regarding the commented charset line:
See: http://nginx.org/en/docs/http/ngx_http_charset_module.html
The charset option (in nginx.conf) is not enabled by this guide, or by Arch Linux. The commented line is an example of what can be set. It defines a charset, which nginx would provide in the Content-Type field of a response header.
Fedfree recommends that you do not worry about the charset. The default will be fine.
The access_log line specifies access logs for this specific host, written to /etc/nginx/logs/host.access.log - there is no point enabling logging here, since we won’t use the default server block.
The location / block specifies rules for the document root, which this block defines as the directory /usr/share/nginx/html, with possible index files being index.html or index.htm - these are conservative defaults, and totally OK.
The error_page directive is self-explanatory. With it, you can then create a custom status page corresponding to each error code, in the document root. Unless you’re trying to be fancy, the default boring status pages are fine.
Now here are some more settings you might want to add, in this file:

- server_tokens off; - you should add this globally. It prevents people from being able to see what nginx version or operating system you use.
- types_hash_max_size 2048; - size of the hash table storing MIME types. The default value of 1024 is too low. Increasing it will allow you to configure more MIME types. See: https://nginx.org/en/docs/http/ngx_http_core_module.html#types_hash_max_size

Regarding types_hash_max_size: even at 2048, I got this warning when doing nginx -t (test config):

2025/10/12 21:15:12 [warn] 3349#3349: could not build optimal types_hash, you should increase either types_hash_max_size: 2048 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size

Setting it to 4096 silenced the warning.
More config items to consider adding, in nginx.conf:
server_names_hash_bucket_size 128;
^ See: https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
Leave this at the default, or not set at all. Possible values: 32, 64 and 128.
If you have a particularly long domain name in use, you might consider increasing this to 128. For example:
extra.ludicrously.long.to.the.point.of.being.comically.absurd.sub.example.com
For blog.johndoe.com, the default setting is fine. I set mine to 128.
server_name_in_redirect off;
^ Leave this turned off. It may even be prudent to set it explicitly, like so. When set to off, the primary server_name (default landing page) will not be used in redirects; instead, the one specified in the “Host” request header field, or the IP address of the server, will be used.
The default is off anyway. We need this turned off, because we’ll be using virtual hosts, and redirects.
More info: https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name_in_redirect
Gzip compression, continued:
gzip_comp_level 6;
gzip_buffers 16 8k;
If using gzip, you might enable these configs.
^ The default is 1; this sets the compression level on gzip-compressed responses. Here, you must take into account the capabilities of the clients. The suggested value of 6 here may be a nice compromise. You will see little benefit setting this to 9, in most cases.
NOTE: nginx does not cache gzipped files in memory, so compression is run every time, but the overhead of gzip is quite low. With a lower compression level set, you would have lower CPU usage, if that became a problem on high-traffic servers. You should tweak this according to your needs.
Think about it. Most files that a web server will serve are text files, and text compresses easily. Text files are typically small, and so it makes more sense to compress them, in terms of CPU cycles. The savings on bandwidth usage are easily measurable. On the other hand, most binaries that you serve are going to be things like images and videos, many of which are already compressed. Ergo, it makes sense to disable compression for binaries. For example, compressing (in nginx) a JPEG file would likely yield little benefit in terms of compression ratio, while wasting more CPU cycles. Relying on read() and write() also makes little sense, for large files, if the sendfile() function is available!
Level 6 is a reasonable compromise between performance and high compression ratio.
As for gzip_buffers:
Set the number of buffers to send per gzip-compressed response, and the size of each buffer. The buffer size should ideally be set to the same size as BUFSIZ on your system, which on most 64-bit hosts (at least x86_64) is 8KB. The BUFSIZ number is a #define provided by your libc, which is most likely the GNU C Library if you’re running Arch Linux.
If you have the Arch package named glibc installed, you shall find BUFSIZ defined (as a number of bytes) in /usr/include/stdio.h. Example entry:
/* Default buffer size. */
#define BUFSIZ 8192
It is recommended that you always set the buffer size the same as BUFSIZ, and specify the number of buffers as required. Set this explicitly; you might try 16 8k first, as suggested above.
If you’re running a 32-bit host, it may be more efficient to use 32 4KB buffers instead. Nginx defaults to 32 4k or 16 8k, depending on the host platform.
In some situations, you might set this much higher, e.g. 32 8k, but it is recommended to use a more conservative configuration (for your setup).
gzip_http_version 1.1;
Defaulting to 1.1, this says that the client must support a minimum HTTP version to receive a gzip-compressed response. It can be set to 1.0 or 1.1.
Nginx also supports operating as an HTTP/2 server, which this guide will later show you how to do, but HTTP/1.1 clients are compatible with HTTP/2-compliant servers (via backwards compatibility, in the HTTP/2 specification).
It is recommended that you explicitly set this to 1.1, as that will ensure maximum compatibility. Later in this guide, you will also be advised to disable client usage of TLS prior to version 1.2; TLS 1.2 was first defined in 2008:
https://www.ietf.org/rfc/rfc5246.txt
The HTTP/1.1 specification became canon in 1999, so it’s more than likely that your clients will support it, and since we’ll be mandating use of TLS 1.2 or newer, there is little point in gzip_http_version being set to 1.0. The HTTP/1.1 specification is defined here:
https://www.ietf.org/rfc/rfc2616.txt
The newer HTTP/2 specification is defined here, as of 2015:
https://www.ietf.org/rfc/rfc7540.txt
We’ll have more to say about this, later in the guide.
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
^ Where gzip is enabled, files conforming to the text/html MIME type will always be sent out compressed, if the client supports it (per your configuration of MIME types, as already described above).
See: https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_types
The gzip_types directive specifies additional MIME types that you wish to compress in responses.
In the above example, MIME types are declared explicitly. The value * declares that files of all MIME types are to be compressed.
The default value for this directive is text/html, which means that Arch’s default nginx configuration only compresses text/html, because the directive is not set by default. Therefore, you should add the line above. Again, you do not need to write text/html in here, as nginx will always compress that MIME type when gzip is enabled.
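Later, once a site is up, you can check that compressed responses are actually being served, with something like this (substituting your own domain); look for Content-Encoding: gzip in the output:

curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' https://example.com/ | grep -i content-encoding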
Now we can begin configuring Nginx!
As stated earlier, make sure you have this in your nginx.conf
:
server_name_in_redirect off;
Also add these:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/redirects-enabled/*;
Add all of these inside the http block of your nginx.conf, but not inside the server block.
The first one, server_name_in_redirect, must be turned off, to enable use of SNI. This way, you can host many web sites on your server.
The include for conf.d lets you add whatever other modifications you want later on, without modifying the main config. This helps keep track of things. The asterisk means that anything in that directory will be included.
The sites-enabled directory is where you’ll place configs per website. Doing a separate config per site makes them easier to manage. You could also do one per subdomain if you want, though I normally just have one file for e.g. libreboot.org, with its subdomains specified in that file as well.
The redirects-enabled directory will contain per-site redirects, for HTTP to HTTPS; in general, we assume that you will only want HTTPS on each site, with HSTS turned on. Therefore, having a server block per site just to do an HTTPS redirect seems pointless. However, you may well want to do just that.
It should be noted that the sites-enabled and redirects-enabled directories won’t contain files. Everything in there will be a symlink, to the corresponding file in sites-available and redirects-available respectively. This is how it’s done in my Debian nginx guide, and it makes managing lots of sites really easy. If you need to take a specific site offline, you can just remove a symlink (see the example below)!
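For example, to take a site offline and apply the change (assuming the symlink names used later in this guide):

rm /etc/nginx/sites-enabled/example.com
nginx -t && systemctl reload nginx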
You may as well now create these directories, as root:
mkdir -p /etc/nginx/sites-enabled /etc/nginx/sites-available \
/etc/nginx/redirects-enabled /etc/nginx/redirects-available \
/etc/nginx/conf.d
Just to make sure we’re on the right track, your nginx.conf should, in total, look something like this:
user http;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid /run/nginx.pid;
# Load all installed modules
include modules.d/*.conf;
events {
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
types_hash_max_size 4096;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
server_tokens off;
server_names_hash_bucket_size 128;
server_name_in_redirect off;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/redirects-enabled/*;
# default server, if no domain / unconfigured domain host:
server {
listen 80;
listen [::]:80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
Yours may look slightly different, depending on what options you chose from the above (and you may have added even more).
Let’s tidy this up, shall we? Now that we have sites-available and sites-enabled, we can move that server block to its own file, perhaps /etc/nginx/sites-available/default?
Create the file /etc/nginx/sites-available/default with these contents:
# default server, if no domain / unconfigured domain host:
server {
listen 80;
listen [::]:80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Now remove that corresponding section, so that your main /etc/nginx/nginx.conf looks something like this:
user http;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid /run/nginx.pid;
# Load all installed modules
include modules.d/*.conf;
events {
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
types_hash_max_size 4096;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
server_tokens off;
server_names_hash_bucket_size 128;
server_name_in_redirect off;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/redirects-enabled/*;
}
Now you’re talking.
Make sure it all works, before you continue. As root again, try this command:
nginx -t
If all went well, there should be no errors, and you’ll see something like this:
[root@archlinux ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Now, in your main nginx.conf, you should prep for TLS configuration on your domains. We will use a server-wide configuration for this, while specifying which certificates to use per-host.
Have a look here: https://ssl-config.mozilla.org/
This generates the configuration for you. If you’re just doing a private instance, that only you access, you may as well pick Modern.
For maximum compatibility, a public instance should probably pick Intermediate, but the choice is yours.
On this day, 12 October 2025, I picked Modern and got a seemingly plausible-looking configuration. However, the config they gave me enabled a bunch of OCSP-related functions, which would break your server, because LetsEncrypt no longer supports OCSP.
The generated configs also included several server blocks, which I ignored, because these will be done separately. Remember: we configure certs per-host, but we configure encryption server-wide.
Therefore, for the Modern option, I ended up with:
# modern configuration
ssl_protocols TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_prefer_server_ciphers off;
ssl_stapling off;
ssl_stapling_verify off;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
For Intermediate, I ended up with:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
ssl_stapling off;
ssl_stapling_verify off;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m; # about 40000 sessions
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
ssl_dhparam "/etc/ssl/certs/dhparam.pem";
Pay attention to this line, in both setups:
ssl_ecdh_curve X25519:prime256v1:secp384r1;
X25519 (a Montgomery curve) is the fastest of the three, and some believe it more secure than prime256v1; prime256v1 (NIST P-256) is the most widely supported. The most secure (and the slowest) is secp384r1 (NIST P-384).
For a highly secure setup, you might limit it to only secp384r1, like so:
ssl_ecdh_curve secp384r1;
If you want all three, it’s best to go in this order:
ssl_ecdh_curve secp384r1:X25519:prime256v1;
For compatibility reasons, you may want to at least have prime256v1 and secp384r1. In my case, I opted to only support secp384r1 in my setup. I was setting this up for my private Invidious instance, and it was only for my personal use, so a lower level of compatibility was deemed acceptable. In practice, secp384r1 is supported by almost everything these days.
You’ll note that the pem file is the path you saved your Diffie-Hellman parameters to earlier.
You may have also noticed that we’ve added HSTS (HTTP Strict Transport Security). This means that visitors to your sites will know to always use HTTPS, instead of relying on HTTP to HTTPS redirects; the latter could be exploited for downgrade attacks, which must be avoided if possible. (On that note, use of a browser configuration that does HTTPS-first would be quite prudent.)
NOTE: the always option on the add_header lines forces those headers to always be added, to all HTTP responses that go out.
See: https://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header - as you can see, there are conditions under which add_header is actually applied. We want HSTS, nosniff and X-Frame-Options DENY to always apply, no matter what!
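Once a site is live, you can confirm that these headers are present on responses, with something like this (substituting your own domain):

curl -sI https://example.com/ | grep -iE 'strict-transport-security|x-frame-options|x-content-type-options'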
We will enable HTTP/3. See:
https://nginx.org/en/docs/http/ngx_http_v3_module.html#quic_retry
HTTP/3 was standardized in 2022 (RFC 9114), and nginx supports it.
Add this inside the http block in /etc/nginx/nginx.conf:
add_header Alt-Svc 'h3=":443"; ma=2592000; persist=1' always;
Further configuration will be applied, in your per-host configs later on in this guide.
This add_header directive tells clients that an HTTP/3 service is available, and what port to use.
If doing this with the above Modern setup example, you would have something that looks like:
# modern configuration
ssl_protocols TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_prefer_server_ciphers off;
ssl_stapling off;
ssl_stapling_verify off;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header Alt-Svc 'h3=":443"; ma=2592000; persist=1' always;
Adapt accordingly, if you’re doing something else (e.g. Intermediate setup).
Doing this globally means that all your websites would then require HTTP/3 to be configured. Therefore, you might wish to add this per-host instead.
HTTP/3 can run on port 443 or 8443, but NOTE: it’s UDP, not TCP.
Therefore, please open port 443 on both TCP and UDP in your firewall.
This also means that you cannot use HTTP/3 over an SSH tunnel, which only works for TCP connections.
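If your curl build has HTTP/3 support (not all builds do), you can later verify that a site responds over HTTP/3, substituting your own domain:

curl --http3 -sI https://example.com/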
Before continuing, make sure /etc/nginx/nginx.conf
was configured correctly (according to your needs), based on all of the previous sections.
Including modern TLSv1.3-only setup, my setup ended up looking like this:
user http;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid /run/nginx.pid;
# Load all installed modules
include modules.d/*.conf;
events {
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
types_hash_max_size 4096;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
server_tokens off;
server_names_hash_bucket_size 128;
server_name_in_redirect off;
##
# TLSv1.3 configuration (modern hardening)
##
ssl_protocols TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_prefer_server_ciphers off;
ssl_stapling off;
ssl_stapling_verify off;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header Alt-Svc 'h3=":443"; ma=2592000; persist=1' always;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/redirects-enabled/*;
}
You’ll note that in any case, we explicitly set OCSP stapling to off. This is done specifically on the theory that it may be enabled by default in the future, so we make sure it’s definitely turned off.
The max-age setting for HSTS specifies the number of seconds for which your browser should remember it; here, your server is telling clients to remember use of HTTPS as a preference for at least two years.
OCSP used to be provided by LetsEncrypt, but they shut it down, as per the following article:
https://letsencrypt.org/2024/12/05/ending-ocsp/
Remember: earlier, we included the -enabled directories in nginx.conf, but we place actual files inside the -available directories, and symlink them from -enabled, to enable them.
Therefore, having created these directories earlier on, now do this as root:
cd /etc/nginx/sites-enabled/
ln -s ../sites-available/default default
Later on, we will be adding a 2nd website, after nginx is up, and generating TLS certificates without shutting down nginx like we did before. This section is to be followed, in preparation for that.
When the server is operational, you don’t want to kill active connections, especially on a busy website, and especially if you’re going to run databases of some kind.
Early in this guide, you were instructed to use the certonly mode in certbot, with certbot acting in standalone mode, rather than webroot mode. However, you should be able to add new TLS certificates while nginx is running, for new domain names that you wish to add.
Remember that server block that we moved? Observe that file now, specifically /etc/nginx/sites-available/default - and you see:
# default server, if no domain / unconfigured domain host:
server {
listen 80;
listen [::]:80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Observe this part of the file:
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
This specifies a default for the document root /, which applies to all subdirectories thereafter.
Subsequent location blocks can be specified, for subdirectories, over-riding this default behaviour.
Below that block, then, add the following new location block:
location ^~ /.well-known/acme-challenge {
default_type "text/plain";
root /usr/share/nginx/letsencrypt;
}
Thus, the entire file shall look like:
# default server, if no domain / unconfigured domain host:
server {
listen 80;
listen [::]:80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /.well-known/acme-challenge {
default_type "text/plain";
root /usr/share/nginx/letsencrypt;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Create this directory, as root:
mkdir -p /usr/share/nginx/letsencrypt/.well-known/acme-challenge
LetsEncrypt’s challenge response, in the setup that we’re using, only runs on HTTP. This is perfectly OK for us, because we can point A/AAAA records at the server without configuring hostnames under nginx, and then run certbot in certonly mode with a webroot specified, so that we don’t have to stop nginx.
You must now reload nginx:
systemctl reload nginx
Directory listing (indexing) is disabled by default in nginx, so the contents of your acme-challenge directory will not be publicly visible.
With this, and with the prior configuration change (as shown above), you have now enabled certbot to operate with nginx running.
If a given website does not yet have a configuration in your nginx server, but it points to your server, then the default site at /etc/nginx/sites-available/default shall apply as a placeholder, and it specifies the location to use for LetsEncrypt’s ACME challenge, specifically when it completes the HTTP-01 challenge (more on this later).
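You can verify that the challenge path is reachable, with a throwaway test file (substituting your own domain):

echo test > /usr/share/nginx/letsencrypt/.well-known/acme-challenge/test.txt
curl http://example.com/.well-known/acme-challenge/test.txt
rm /usr/share/nginx/letsencrypt/.well-known/acme-challenge/test.txt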
We can add an HTTP to HTTPS redirect per website, creating a unique configuration for each of them, but this is a lot of copying and pasting that should probably be avoided.
Do this instead:
touch /etc/nginx/redirects-available/global
cd /etc/nginx/redirects-enabled/
ln -s ../redirects-available/global global
Now edit /etc/nginx/redirects-available/global, placing inside it the following contents:
server {
server_name
www.example.com example.com;
listen 80;
listen [::]:80;
location / {
return 301 https://$host$request_uri;
}
location ^~ /.well-known/acme-challenge {
default_type text/plain;
root /usr/share/nginx/letsencrypt;
# in this case, the 301 redirect rule does not apply, because
# this location block shall override that rule
}
}
Adapt the entries in server_name accordingly. You can add any host here.
For example, if you hosted the following domains:
example.com
foo.com
bar.com
Let’s say you also configured the following subdomains:
bar.foo.com
foo.bar.com
another.example.com
Then in the above file, you might have for server_name the following:
server_name
www.example.com example.com
another.example.com
www.foo.com foo.com
bar.foo.com
www.bar.com bar.com
foo.bar.com;
You’ll note the ACME challenge is also configured here. In general, we have an HTTP to HTTPS redirect, but certbot needs plain HTTP for certificate creation and/or renewal. That is why we define a document root for / but with the HTTP 301 redirect response, and we override it thusly.
This means that you can have this configuration apply for every domain, but you must remember to update the server_name list in this file, when adding new domains/hosts.
In so doing, your actual site configurations, which will be files provided per website, can just focus on the TLS certificate configuration, and any site-specific configuration that you desire. This is the cleanest way to do it, especially if you run a lot of sites.
Finally, we get to adding a website. The previous sections of this guide have already taught you everything you need to know. Commands (replace example.com with the domain name that you made TLS certificates for):
Your site will live in /usr/share/nginx/example.com. It could actually live at any location, so adapt according to your own requirements:
mkdir -p /usr/share/nginx/example.com
Create the file:
touch /etc/nginx/sites-available/example.com
(you’ll still need to actually configure the site)
cd /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/example.com example.com
This file shall configure your website. You could also configure subdomains here, if you wish; some people may prefer to have a separate file per subdomain; for example, an example.com file and an another.example.com file, but the choice is yours. The latter may be desirable, if you run a website that has a lot of subdomains.
In any case, the file contents should start off looking something like this, for a basic static HTML site (you may want to configure a reverse proxy, which will be covered in separate guides).
Observe the following example:
# HTTPS: redirect www.example.com to example.com
# NOTE: if you wish, you could put this in redirects-available instead
# and create the appropriate symlink in redirects-enabled
server {
server_name www.example.com;
listen 443 ssl;
listen [::]:443 ssl;
listen 443 quic reuseport;
listen [::]:443 quic reuseport;
http2 on;
http3 on;
ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/www.example.com/chain.pem;
disable_symlinks on;
return 301 https://example.com$request_uri;
}
# HTTPS: this is your actual website configuration
server {
server_name example.com;
listen 443 ssl;
listen [::]:443 ssl;
# NOTE: reuseport may only be specified once per address:port,
# and it was already set on the quic listeners in the server block above
listen 443 quic;
listen [::]:443 quic;
http2 on;
http3 on;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
root /usr/share/nginx/example.com;
index index.html;
disable_symlinks on;
# uncomment this to enable autoindexing, otherwise directories
# without html index files will return HTTP 403
# DO NOT turn on autoindex unless you're sure you need it, because
# it's a potential security threat under some circumstances
# autoindex on;
}
Now test your website!
Before you proceed, run this command:
nginx -t
This may warn you of any misconfiguration.
Start nginx, like so:
systemctl start nginx
Also make sure to enable it at boot time:
systemctl enable nginx
Now do some basic tests:
Now try this:
curl -I http://example.com/
You should see a 301 redirect. Ditto for http://www.example.com. Both should lead to https://example.com/ and https://www.example.com/ respectively.
Now try this:
curl -I https://www.example.com/
You should see a 301 redirect to https://example.com/
Now try:
curl -I https://example.com/
If you’ve not placed an index.html file in your document root, you should see an HTTP 403 response. You should see the HSTS and nosniff headers too.
Now try all of the above addresses in your browser.
SSL Labs host an excellent test suite, which can tell you many things, like whether CAA, HSTS, TLS 1.3 and other things are enabled.
See:
https://www.ssllabs.com/ssltest/
Mythic Beasts have an excellent IPv6 tester on their website. You should also test it yourself, on IPv4. See:
https://www.mythic-beasts.com/ipv6/health-check
This section pertains to the host config that we just enabled, bringing the target domain name online via the web.
In the above configuration, www.example.com automatically redirects (via an HTTP 301 response) to example.com. It is recommended that you either do this, or do it the other way round: example.com redirects to www.example.com. This is for search-engine optimisation (search engines also favour sites that are HTTPS-only, these days).
For non-www to www redirection, simply swap the HTTPS server blocks above, and adapt accordingly. For SEO purposes, it doesn’t matter whether you do www to non-www or non-www to www, but you should pick one and stick with it.
For my purposes, I typically prefer that the main website run on example.com instead of www.example.com, because I think that looks much cleaner. It’s the difference between Pepsi and Coca-Cola, so pick your poison.
You will note that HTTP/2 is enabled in the above config, but only for HTTPS. HTTP/2 was already covered earlier in this guide; it enables many speed and other improvements over the old HTTP/1.1 specification.
HTTP/3 is also enabled and working.
You’ll also note that symlinks are disabled. This means that symlinks inside the document root will not work, at all. This is for security purposes, and you are encouraged to do this for every domain name that you host.
However, there may be some situations where symlinks are desired, so this is done per host, rather than server-wide.
Run nginx -t to test your configuration. In most cases, it will tell you what you did wrong.
When renewing certificates with the command certbot renew, certbot expects to operate on port 80, so we configured port 80 plain HTTP access just for LetsEncrypt’s ACME challenges.
The only viable challenge method we use requires unencrypted HTTP, but our server otherwise does away with plain HTTP for websites. For anything other than the ACME challenge, URIs automatically redirect to the corresponding HTTPS link.
Basically just keep Arch Linux up to date. Check the latest home page news entries:
https://archlinux.org/
Here is some information about pacman, the package manager in Arch Linux:
https://wiki.archlinux.org/title/Pacman
Nginx is basically bullet-proof. You might otherwise try etckeeper, which is a nice tool for keeping track of changes you make to configs under /etc.
When you make a configuration change, you can do this:
systemctl reload nginx
Or this:
systemctl restart nginx
Nginx is very powerful, and highly configurable.
Always make sure to run the latest OpenSSL patches. Re-generate the dhparam.pem file from earlier in this guide every few months (you could do it, scripted, as part of automatic certificate renewal).
Before renewing for the first time, you should test that it will work. Certbot provides a test function:
certbot renew --dry-run --webroot -w /usr/share/nginx/letsencrypt
You should absolutely make sure nginx is running, for the purpose of this test. With this setup, the HTTP-01 challenge type (via LetsEncrypt) is used, and it happens while the server continues running.
Otherwise, if all is well, just do this:
certbot renew --webroot -w /usr/share/nginx/letsencrypt
systemctl reload nginx
The reload command is so that nginx makes use of any newly generated certificates. The reload command differs from restart in that existing connections stay open until complete, and new connections will also be made under the old rules until the new config is applied, per site. In this way, the reset happens without anybody noticing, and your site remains 100% online.
If all is well, it should Just Work. If it didn’t, you’ll need to intervene.
If there’s something Fedfree can do to improve this tutorial, please get in touch via the contact page.
Renewal is very different from creating a new certificate; the latter is covered in another section of this guide.
Firstly, test that your configuration works with a dry run:
certbot renew --dry-run --webroot -w /usr/share/nginx/letsencrypt
You should put certbot renewals on an automated crontab, though keep in mind: although the duration of certificates is 3 months (with LetsEncrypt), you may be generating multiple certificates at different times, so the renewal times may get out of sync for each of them.
Therefore, it is recommended to run certbot renew every week, just in case.
A more automated way to do it is like this:
#!/bin/bash
certbot renew --webroot -w /usr/share/nginx/letsencrypt
systemctl reload nginx
# if you also have mail for example, with certs e.g. mail.example.com
# systemctl restart postfix
# systemctl restart dovecot
^ Add the above to a new file at /sbin/reloadservers, and mark it executable:
chmod +x /sbin/reloadservers
OPTIONAL: add the command from earlier in this tutorial that generated the dhparam.pem file to the above script. That is, if you’re using this file, you could re-generate it before renewal.
Arch Linux no longer has crontab by default. Do this:
pacman -S cronie
systemctl enable cronie
systemctl start cronie
By default, crontab -e uses vi; if you don’t have vi, you can symlink it. In my case, I use vim, but you can adapt accordingly:
cd /usr/bin
ln -s vim vi
Then do:
crontab -e
Add the following to crontab:
0 0 * * 0 /sbin/reloadservers
This will check for renewals weekly, automatically applying them.
Systemd timers are great, but I use cron, and it still works, so I’ll keep using it. For my Invidious guide, I even have half a mind to restore /etc/rc.local for part of it.
See: https://letsencrypt.org/docs/challenge-types/
By default, certbot renew will use the HTTP-01 challenge type, which requires that certbot bind on port 80 when run standalone. This is a problem, because nginx is listening on port 80, so you would get an error.
Doing it on a webroot (using certbot certonly instead) will work perfectly, because that requires port 80 and http://, but your web server is configured such that ACME challenges (at /.well-known/acme-challenge) do not redirect.
The only thing that can make use of /.well-known/acme-challenge is certbot, and LetsEncrypt communicating with it. Everything else should continue to redirect.
The DNS-01 challenge type is not provided for, in any way, by this tutorial. It is mentioned here for reference, because it’s an interesting option anyway.
The DNS-01 challenge type is practical, if:
The DNS-01 challenge can be completed without killing nginx, meaning your site visitors will not lose their connection, even briefly; you would run systemctl reload nginx after all certificates are renewed. However, it must be done individually for each domain name: you need to actually be there, inserting responses to each challenge, in each DNS zone file, for each domain. This is why it's only practical if you're running your own DNS. You could probably do some kung-fu with sed and awk to make the job easy, operating directly on your zone files, either locally (if DNS runs on the same machine as nginx) or over ssh.
The TLS-ALPN-01 challenge type is what we would prefer but, according to that page, it does not yet work with nginx or certbot.
The benefit of this method is that it can be done purely at the TLS level, so we would not have to mess with redirect rules under nginx.
When this option becomes viable in the future, it may be documented on Fedfree.
You will already know how nginx is configured, at this point. In this new scenario, you're very happy with your current website, but now you want to host yet another one. You can host it on this machine quite easily; hosting multiple websites on the same machine is trivial.
This guide had you set up nginx with SNI for TLS purposes, so it's quite possible; most (if not all) modern browsers support SNI these days. SNI (Server Name Indication) is a TLS extension whereby the client states the desired hostname during the TLS handshake, so that the server can present the correct certificate for that name before any HTTP data (such as the Host header) is exchanged. See:
https://en.wikipedia.org/wiki/Server_Name_Indication
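In practice, SNI simply means you can declare several port 443 server blocks, each with its own server_name and its own certificate, and nginx will present the right certificate per connection. A rough sketch follows (a minimal illustration only; your real blocks will also carry the TLS settings from earlier in this guide):

server {
	listen 443 ssl;
	server_name example.com;
	ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}

# each additional site is just another block like this,
# with its own name and its own certificate
server {
	listen 443 ssl;
	server_name newdomain.com;
	ssl_certificate /etc/letsencrypt/live/newdomain.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/newdomain.com/privkey.pem;
}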
Remember the default site, at /usr/share/nginx/html?
If you want to point a new domain (www.newdomain.com and newdomain.com) to your server, it will work on port 80 via the localhost server name in nginx. This assumes you didn't already host the domain elsewhere with HSTS enabled; if you did, you can simply copy the existing keys/certificate over to your new installation.
You will note that we included the LetsEncrypt snippet enabling the webroot method to work, via the HTTP-01 challenge. In our setup, the HTTP-01 challenge will work perfectly, so long as the target domain is accessible on port 80, which it is in this situation.
If DNS is properly set up, just do this (for newdomain.com):
certbot certonly --webroot --agree-tos --no-eff-email --email you@example.com -w /usr/share/nginx/letsencrypt -d newdomain.com --rsa-key-size 4096
and for www.newdomain.com:
certbot certonly --webroot --agree-tos --no-eff-email --email you@example.com -w /usr/share/nginx/letsencrypt -d www.newdomain.com --rsa-key-size 4096
The LetsEncrypt challenge response is done over port 80. If all went well, you should have the new files under /etc/letsencrypt/live/newdomain.com and /etc/letsencrypt/live/www.newdomain.com.
From now on, your crontab will renew the new certificates automatically.
If all went well with certbot, and you have the new certificate, you can simply configure the new domain name, adapting the same procedures you already followed before on this page. When you’re sure it’s fine, you can then do:
nginx -t
If nginx reports no problems, you can then do this:
systemctl reload nginx
Again, this is only for adding a brand new certificate. For renewal, you will instead rely on certbot’s renew function.
If you believe the key is compromised, you should revoke it immediately.
Alternatively, you might have forgotten something in certbot, such as --rsa-key-size 4096 (if you wanted that). In these circumstances, it is best to revoke the key. Certbot will also ask whether you want to delete the key (say YES).
With the certificate revoked and deleted, you can then generate a new one.
You do not need to stop nginx, but TAKE NOTE: while the certificate is revoked, if you've also deleted it, nginx will fail to reload. Therefore, when you do this, you should either generate the new certificate immediately, before the next reload, or temporarily disable the affected TLS configuration (the server blocks for www.example.com and example.com on port 443) until the new certificate is in place.
Sample commands:
certbot revoke --webroot -w /usr/share/nginx/letsencrypt --cert-path /etc/letsencrypt/live/example.com/cert.pem --key-path /etc/letsencrypt/live/example.com/privkey.pem --reason unspecified
certbot revoke --webroot -w /usr/share/nginx/letsencrypt --cert-path /etc/letsencrypt/live/www.example.com/cert.pem --key-path /etc/letsencrypt/live/www.example.com/privkey.pem --reason unspecified
Other options available for --reason are as follows:
unspecified
keycompromise
affiliationchanged
superseded
cessationofoperation
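For example, if you believe the private key itself was leaked, you would pass keycompromise as the reason:
certbot revoke --webroot -w /usr/share/nginx/letsencrypt --cert-path /etc/letsencrypt/live/example.com/cert.pem --key-path /etc/letsencrypt/live/example.com/privkey.pem --reason keycompromise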
You can then generate a new certificate, and restart nginx.
Fun fact:
At the time of publishing this guide, Nginx’s own website did not enable HTTP to HTTPS redirects or HSTS, but it did have HTTPS available, site-wide; some links however would go back to unencrypted HTTP.
The following page shows you how to force use of HTTPS, in common web browsers:
https://www.eff.org/https-everywhere/set-https-default-your-browser
NOTE: Modern browsers usually have this functionality built in now, so you no longer need to install a browser plugin. Just enable the HTTPS-first (or HTTPS-only) mode, which in my opinion should be turned on by default.
This could be a separate guide at some point, but I did find this handy dandy reference that someone made:
https://github.com/denji/nginx-tuning/blob/f9f35f58433146c3af437d72ab6156b3eb8782c9/README.md
As stated by that author, the examples in the link are from a non-production server. You should not simply copy everything you see there. Adapt it for your setup. Nginx is extremely powerful. It runs some of the biggest websites on the internet.
The URL above points to a specific revision of the guide in that repository. You can clone the repository like so, to get the latest revision:
git clone https://github.com/denji/nginx-tuning
The purpose of the Fedfree guide is simply to get you up and running. You are highly encouraged to play around with your setup, until it performs exactly the way you want it to.
The HTTP ETag header is sent out by default in nginx, for static resources such as HTML pages. Since it isn't set anywhere in the default Arch configs, the default applies, which means it's enabled. You can learn more here:
https://en.wikipedia.org/wiki/HTTP_ETag
And here:
https://nginx.org/en/docs/http/ngx_http_core_module.html#etag
You might want to explicitly enable this, just in case nginx ever changes the default to off in the future. It is a useful performance optimisation, because it avoids re-sending the same unmodified page if a client has already seen and cached it.
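The directive for this is simply etag, which is valid in http, server and location contexts; pinning the default explicitly looks like this, inside your http block:
etag on;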
Clients that cache will store this ETag value and, when requesting a resource, include their stored value in the request; if nginx sees that the local version has the same ETag, it sends back an HTTP 304 Not Modified response, rather than the contents of the requested file.
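You can watch this mechanism with curl: the first request reveals the ETag, and repeating the request with that value in an If-None-Match header should return 304 Not Modified (example.com and the ETag value below are placeholders):
curl -I https://example.com/index.html
curl -I -H 'If-None-Match: "633ae8ac-2c"' https://example.com/index.html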
Use of ETags and Gzip compression, as enabled by this guide, will save you a lot of bandwidth. Have fun!
PS: You might read online that ETags are insecure, but they’re really not, and this article explains why:
https://www.pentestpartners.com/security-blog/vulnerabilities-that-arent-etag-headers/
The security issue with ETags only arises if you're also running an NFS share, on ridiculously old versions of NFS where file inodes were used as handles; if the inode were known, it could (on those older versions) enable access to a file without authorisation… that is, on NFS versions dating from around 1989.
Nginx does not use inodes when generating an ETag!
Nginx’s logic that handles ETag generation can be found here:
https://raw.githubusercontent.com/nginx/nginx/641368249c319a833a7d9c4256cd9fd1b3e29a39/src/http/ngx_http_core_module.c
Look in that file for the function with the following name:
ngx_int_t
ngx_http_set_etag(ngx_http_request_t *r)
You’ll see it all there. Fedfree recommends that you leave ETags enabled. Nginx’s implementation of ETags is perfectly safe, in the configuration that Fedfree has provided for you.
That is all.
Markdown file for this page: https://fedfree.org/docs/http/arch-nginx.md
This HTML page was generated by the Libreboot Static Site Generator.