
I’ve had NAS devices for almost 15 years and have grown our local network to 40+ devices for a family of 4, as the PCs, laptops, smart TVs, tablets, phones, voice assistants, and even light bulbs have piled up. I’ve been working remotely forever, but our kids are now stuck with us studying from home. Having stable internet access and reliable services has become more and more important, whether for work, education, or entertainment. And while it can be fun to tinker with technology, at some point you want to have a stable state where “it just works.”
If like me you’re facing these growing needs and expectations, you can’t be complacent, and you do need to invest in your own education to get the best out of your network devices. If you expect your Internet Service Provider (ISP) to take care of all of this for you, in most cases you’ll run into a wall of arbitrary limitations, incompetent tech support, and overall disregard for customers.
Meanwhile bad actors keep finding new ways to break into everyone’s infrastructure. Do not think you’re protected by the obscurity of being just a random home user: most attacks are massive brute force efforts against thousands if not millions of devices at a time. With even the biggest companies in the world now massively using remote work, this is only going to get worse. You don’t have to become a full-time cybersecurity expert, but you need to know how to “drive defensively” in the online world.
If you use more than the very basic features of your Network Attached Storage (NAS), be prepared to become a part-time system administrator in charge of:
- The NAS with its files and services, which you’ll likely run in Docker containers (why and how to do so will be the subject of a separate post).
- Your entire local area network (LAN), from your upstream broadband provider to your Wifi and Ethernet LAN, as well as all the connected devices.
- And in many cases, secure access to local resources from the outside.
This guide is meant to help you get back in charge. You don’t want to copy everything I’ve done verbatim as your mileage may vary, but the gist of it should be useful regardless of the exact equipment you’ve purchased. I won’t explain every technical term at length, but I will link to resources that do so. Read on, take what you can use, discard what doesn’t apply to your circumstances, and enjoy!
1. General Principles and Tips
To get a grip on your local infrastructure without opening huge security holes or turning your network into an unmanageable mess, at a minimum you need to:
- Make conscious decisions about which device on your network will handle firewall, port forwarding, and DHCP duties, which you may want to complement with custom DNS and DDNS services. I’ve chosen to handle all of these on my main router – a dual-WAN Synology RT2600ac with two 1 Gbps fiber optic providers – except for DDNS, which is handled by a container on the NAS.
- Avoid opening ports you’re not using and be as narrow as possible in your port forwarding rules (a quick external scan to double-check what’s exposed is sketched right after this list). If you’ve done a good job forwarding ports manually, you might as well disable UPnP: that’s one less security liability to worry about.
- Disable default admin and guest accounts on all routers and NASes.
- Use two-factor authentication on routers and NASes.
- Anything related to remote access has to be buttoned up with extra precautions. Disable SSH access if you’re not going to use it, and if you do use SSH, change its default port.
- Keep firmware up to date as new vulnerabilities get patched.
- Make your local devices easier to administer from one central place by assigning them IP addresses via DHCP reservation. I also keep track of devices in a spreadsheet with their brand, model, IP address, MAC address, and a few other items of interest.
- Set your ISP modem to bridge mode and disable as many of its extra features as possible so that it’s just a modem and not a crap router/AP. If you don’t do this, you’ll run into double NAT problems when accessing your LAN from the internet. ISPs tend to provide underpowered devices that you have to fight with to be able to administer. You might also need to contact your ISP to move away from CGNAT so you can have your own public IP address. Be warned, this can be an uphill battle with some ISPs. Some ISPs can also provide IPv6 addresses, but to be honest I haven’t looked at whether I could make use of them as an end user.
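To follow up on the port forwarding point above, a quick sanity check is to scan your own public IP from outside your LAN (a phone hotspot or a cheap VPS will do) and confirm that only the ports you intentionally forwarded answer. Here’s a minimal sketch with nmap, where the IP address and the port list are placeholders to adapt:
nmap -Pn -p 1-1024,5000,5001,32400 203.0.113.10
Anything reported as open that you don’t recognize deserves a second look at your router’s port forwarding and firewall rules.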
To make your life easier, use a single router for routing and set any other routers as dumb access points, i.e. just wifi with no routing or DHCP. If you’re in a large house, use either a mesh network (more convenient but more expensive) or a collection of traditional APs set with the same wifi name but on different channels (to avoid interference). This will let you move around the house with your mobile devices staying connected to the strongest signal, without having to switch manually to a different AP.
If you intend to stream 4K remux movies (i.e. perfect copies of Blu-rays that can take 60+GB for a single movie), you’ll want to run Category 6 cable around your house if possible. Wireless speeds are getting better and better, but wired is tried and true for stability and reliability, regardless of how thick your walls are or how much interference comes from the neighborhood. Ethernet over powerline or coaxial is a decent alternative if you can’t get Ethernet cables everywhere you’d like to.
There are some other, more advanced networking concepts that you may also want to learn, depending on your exact needs:
- Virtual Private Networks (VPN): whether you’re contracting a service provider such as Private Internet Access, or hosting your own home VPN, this is often in the mix for secure remote access.
- Virtual Local Area Networks (VLAN): these let you segment your LAN, for instance if you want certain devices (guests come to mind) not to see the rest of the LAN.
For more on this general topic, read:
- How to enhance the security of your Synology NAS – basics to start with.
- How I over-engineered my home network for privacy and security – some more advanced concepts.
2. DNS & DDNS for Secure and Convenient Access to Outside and Self-Hosted Domains
There’s a lot going on behind the scenes between you typing oliviertravers.com in your browser and this site actually loading. If you want to a) have a say in how domain names are resolved for requests coming from your network, and b) host some domain names locally, that can entail quite a few steps. While you could rely on your ISP’s DNS servers, which is the default behavior if you don’t do anything, these are often badly maintained, so it’s usually worth replacing them with the DNS service(s) of your choosing.
There are many ways to approach this; here’s what I do to resolve both external domain names and the domains I self-host:
- The DNS Server on my router resolves self-hosted domains. If you don’t set up your own DNS server, you’ll be able to access your self-hosted services from the outside, but you’ll have a loopback problem and they won’t resolve locally. You can also run this on your NAS, but in my opinion it’s better to run it on the router if you can.
- Handling of self-hosted domains is done with a dedicated zone where I created A records (even for subdomains) pointing to my NAS IP address, as explained here.
- The DNS server then forwards requests for all domains outside of my self-hosted domains to a public DNS resolver such as Cloudflare’s 1.1.1.2 / 1.0.0.2 to get fast resolution as well as a layer of protection from malware (an extra benefit above their standard 1.1.1.1 server).
- Cloudflare also provides DNS services for your own domain. I have an A record for each top-level self-hosted domain, and CNAME records for their subdomains. To do this you need to open a free account with them and create an API token.
- My two ISPs, like most residential broadband providers, don’t market static IPs to consumers, so I run a Docker container to handle DDNS with the aforementioned Cloudflare DNS account, meaning I don’t have to use a third-party service such as DuckDNS.
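For the DDNS container, here’s a minimal sketch of what that can look like. I’m using the oznu/cloudflare-ddns image as an example, so treat the image and environment variable names as assumptions to check against the documentation of whichever updater you pick; the token, zone, and subdomain values are placeholders:
docker run -d --name cloudflare-ddns --restart unless-stopped -e API_KEY=your-cloudflare-api-token -e ZONE=mydomain.top -e SUBDOMAIN=books -e PROXIED=true oznu/cloudflare-ddns
The container watches your public IP and pushes updates to the matching Cloudflare DNS record, which is what lets you do without a static IP or a DuckDNS-style service.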
Were Cloudflare to have a massive outage, I’d only need to change the forwarding DNS servers (e.g. to point to Google’s 8.8.8.8) in one place, and we’d again be able to resolve domains throughout the LAN in just a couple of clicks.
External DNS requests can also be sent over HTTPS (aka DoH), again via Cloudflare, but it doesn’t look like Synology’s DNS Server knows how to forward to anything but IPv4 addresses. This is tentative and needs further investigation, but for more advanced options like this I may need to use something like Pi-hole. I’ll probably revisit this at some point and update this entry accordingly.
A fancier alternative for DNS resolution that I’ve started testing is Cloudflare for Teams’ Gateway (free for up to 50 users), which lets you define policies to resolve, block, or override domain name requests based on rules of your choosing. This can be done via DoH too, but again I need a solution for DoH forwarding.
Using your own DNS server is vastly preferable to editing hosts files as 1) you don’t have to maintain the latter on each device, and 2) good luck even being able to do that on non-rooted mobile devices. Also make sure to export your zone settings to a file as backing up your general router or NAS settings won’t include them.
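Once the DNS server and zone are set up, it’s worth checking from a LAN client that your self-hosted names resolve to the NAS’s private address while everything else still goes through the forwarder. A quick sketch with dig, where the domain and the 192.168.1.x addresses are placeholders for your own:
dig @192.168.1.1 books.mydomain.top +short
dig @192.168.1.1 cloudflare.com +short
The first query should return your NAS’s LAN IP (e.g. 192.168.1.20); the second should return public addresses, confirming that forwarding to the upstream resolver works.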
3. Make Access to Your Apps Easier with Your Own Domain Name, SSL Certificates, A Reverse Proxy, And Redirect Rules
As you inevitably add more and more containers to your NAS, you’ll find that all these http://ipaddress:port URLs become harder and harder to remember. The ideal format instead would be https://subdomain.domain.tld. That involves a few steps:
- You can buy a cheap domain name, such as a .top domain for $2, then renew it for $4/year. Once you have your domain name, set up short, memorable, and easy-to-spell subdomains. How to handle the corresponding DNS records is explained above.
- You’ll want to secure the domain and all its subdomains in one fell swoop with a free wildcard SSL certificate from Let’s Encrypt. If you use Cloudflare, create an API token as described here.
- With a reverse proxy, you can then forego port numbers, so as promised you end up with something like https://books.mydomain.top or https://movies.mydomain.top that your friends and family can actually remember. In some cases (e.g. AirDC++) you might need to add custom headers for authentication to work properly.
- Add redirect rules from http to https, and your users will just need to type books.mydomain.top without specifying the protocol or port, which is a much better user experience on a phone.
To recap, all the above will load subdomain.domain.tld via https from the right container at its local IP address and port, whether you’re loading the URL from your LAN or from the internet. Pretty cool huh?
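A quick, low-tech way to confirm the whole chain from any machine is to request the plain http URL and check that it redirects, then request the https URL and check that it loads cleanly. The hostname below is a placeholder for one of your own subdomains:
curl -sI http://books.mydomain.top | head -n 5
curl -sI https://books.mydomain.top | head -n 5
The first command should show a 301 (or similar) redirect to https, and the second should return the app’s response headers without any certificate warning.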
This can be a bit technical, so take it one step at a time, but in my opinion it’s well worth it, and these three awesome tutorials by the extremely helpful Luka Manestar will hold your hand all the way:
- Let’s Encrypt + Docker = wildcard certs (automated creation and renewal)
- Synology Reverse Proxy (under the hood it’s a bunch of Nginx rules)
- http to https redirects
Once it’s all set up, switch your SSL/TLS encryption mode to “Full” in the Cloudflare settings (in “Flexible” mode I ran into the dreaded 522 timeout error), and switch the DNS proxy status to “Proxied” to avoid exposing your public IP. You can then test your SSL web server to check that it’s reachable from the public Internet.
If you mess up your redirect rules and are struggling to test the edited ones because of browser caching, test in incognito mode or do a hard refresh via the browser’s developer tools as explained here.
If you’re starting to be overwhelmed by the number of ports exposed by your containers, run this from the CLI to get a recap:
docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
4. Know What’s Going On with Your LAN & NAS with Grafana & Friends
Now we’re admittedly going beyond strictly 101 topics, but don’t be too intimidated, as there are many great blogs and videos that make new technical concepts and tools approachable. It all revolves around Grafana, a powerful querying and dashboarding solution that can be fed with both logs and metrics data streams. If you’re not a hardcore IT person this might sound daunting, but once you get the hang of using Docker containers you’ll find it’s actually a fairly quick setup process.
The stacks suggested below complement each other and are less involved and better suited to (power) home use than complex enterprise solutions such as the Elasticsearch/Logstash/Kibana (ELK) stack, Splunk, or Prometheus. But the beauty of containers is that you’re always only minutes away from testing something new, and if you don’t like it, you can get it off your NAS like it was never there in a matter of seconds.
4.1. Handle Logs with the Promtail / Loki / Grafana stack
Promtail collects logs generated by apps and containers, then Loki ingests them, making them available for Grafana to visualize. I recommend setting up these containers first and making them your default logging solution for all your containers, as this seems to work only with newly-created containers.
You end up with a mini Google-like search engine for all your logs behind a single search box, which is so much more convenient than accessing individual logs from Portainer or the command line. Since you’ll likely have some troubleshooting to do to get other containers working, you’ll get the most benefit by, again, starting here rather than rushing to install your media servers et al. If you’re just getting started with your system, set up PLG as a first order of business; you can always add Telegraf + InfluxDB later once your container stack starts to gel.
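To make the “default logging solution” part more concrete, here’s a minimal sketch using Grafana’s Loki Docker logging driver. The plugin name and the loki-url option come from the Loki documentation, but the NAS IP, port, and test container are assumptions to adjust, so double-check against the tutorials below:
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
docker run -d --name whoami --log-driver=loki --log-opt loki-url="http://192.168.1.20:3100/loki/api/v1/push" traefik/whoami
Existing containers have to be recreated to pick up the new driver, which is why doing this early saves time down the road.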
See:
- Loki configuration for Docker container logs – if you do one thing, do this!
- Loki tutorial for Synology
- Deploying Loki and Promtail together with the TIG stack
If this all sounds overkill, consider using Dozzle as a self-contained real-time log viewer.
4.2. Handle Data Streams with the Telegraf / InfluxDB / Grafana Stack
Where the PLG stack takes care of log files of discrete events, the TIG stack does the same for continuous streams: Telegraf collects, InfluxDB stores, then Grafana visualizes ongoing data streams generated by hardware and software. This can give you very detailed insights into the usage and health of your NAS or router, such as memory or CPU load, via Simple Network Management Protocol (SNMP).
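Before wiring SNMP into Telegraf, it can help to confirm that the device actually answers SNMP queries. A minimal sketch with net-snmp’s snmpwalk, assuming SNMP v2c is enabled with the default “public” community and that 192.168.1.1 is the router you want to monitor (both assumptions to adjust):
snmpwalk -v2c -c public 192.168.1.1 system
A healthy reply lists the standard system OIDs (description, uptime, name); if it times out, enable SNMP on the device first, then point Telegraf’s SNMP input at it.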
For details, see:
- Monitor ESXi, Synology, Docker, PiHole, Plex and Raspberry Pi and Windows using Grafana, InfluxDB and Telegraf (includes a demo)
- A beginner’s guide to SNMP
5. Other Entries of Interest
I started this blog in 2000 and have been writing about plenty of different topics, but if you were interested in this entry, I bet you’ll like these:
- Getting the Most Out of Your Synology Networked Attached Storage: Did You Know It Can Do That?
- How to Bend Plex to Your Will to Handle Complex Libraries Without Losing Your Mind
- Things I Found Out the Hard Way to Get the Most of the Nvidia Shield
- How I Save Time with the Right Shortcuts, Handpicked Apps, and Finetuned Hardware
This is the initial version of this entry, which I’ll revisit in the months to come as I explore some of the more intricate topics. While I’m fairly technical, I’m not a networking engineer and I’m learning all that stuff through google / trial / error. Constructive feedback welcome in the comments below.