
While the initial consumer-grade NASes sold during the late 2000s and early 2010s had fairly weak CPUs and little memory, newer models priced in the $200-$500 range from about 2019 forward are considerably more powerful. In parallel, Docker has emerged as a ubiquitous platform to deploy software easily and quickly regardless of the underlying operating system. It is thus time to see even cheap NASes as bona fide servers, not just storage.
This entry will run the gamut of services you can use on modern NASes. Its intended audience is savvy home users with a variety of professional and personal needs. The content here is probably too technical for complete newbies, and not advanced enough for people already at the forefront of homelabs and self-hosted services.
I believe there’s a large number of people who have a vague sense they could get more out of their NAS but don’t quite know where to start. You are the people I’m trying to help, as I’ve been going through that journey myself. This entry is very broad but relatively shallow, to give a sense of the massive possibilities within reach of even cheaper devices; I’ll point to other sites for in-depth details.
1. Wait, What’s the Point of a NAS?
To start, let’s frame what consumer-grade, prebuilt NAS devices such as those from Synology or QNAP are uniquely good at:
- Self-contained experience that’s not as overwhelming as a completely do-it-yourself approach where you build a computer or NAS from individual components and have to select, install, and maintain its operating system. This vastly simplifies questions about CPUs, motherboard chipsets, Linux flavors etc.
- Decent bang for the buck: not the very cheapest approach, but still affordable, especially once you factor in time spent as well as energy costs. PCs with their powerful GPUs typically consume much more power, require more cooling, and a bigger UPS.
- Ability to get support from vendors. With the DIY approach your only recourse is the user community.
- Small form factor makes it easier to store somewhere out of sight like in a ventilated basement cabinet or rack.
As we’ll see through the rest of this entry, all of this can be combined with a very deep range of software you can run thanks to the magic of containers and virtual machines. You get the benefit of a small, versatile server, without a lot of the ownership costs and hassle traditionally associated with assembling and maintaining your own server.
2. Hardware: DIY vs. Turnkey; Minimum Specs
Ready-to-go NASes from an established brand are definitely more expensive than assembling the same parts yourself, especially if you’re willing to buy used parts (and even more so if you live in the US, where there’s a vibrant used market for computer parts). The DIY approach should be the choice of people:
- With good technical skills, ranging from hardware to OS to software
- Able and willing to support themselves
- With a good tolerance for failure and setbacks
- Who want to tinker with every aspect of their build
- Aiming for more powerful builds
- Who have enough free time to see it through
In other words, if you want a turnkey experience and don’t mind spending a bit more to save time and get more peace of mind, brand-name NASes are the way to go. Among these, Synology stands out as a vendor that backs their products over the long haul (my DS213j from 2013 still gets upgrades nine years later) and provides a good range of native apps in a nice graphical UI.
For storage, the two main considerations are:
- The number of bays, and whether you can add an extension unit. My Synology DS920+ has 4 bays and can accommodate an optional 5-bay external box. This can take you into 100TB+ territory, depending on the size of your drives and whether you dedicate one or more of them to RAID redundancy. If you’ve been around the block for a while, the fact that we can casually talk about private individuals having 100+TB of storage at home sounds kind of insane. As of 2023, the very successful DS920+ has been replaced by the DS923+, and the DS423+ is also pretty close.
- Whether you use one or two SSDs as a cache drive or as extra storage. First, the bad news: everyone who’s benchmarked SSD caching on the DS920+ came to the conclusion that it’s barely useful at best. On the other hand, you can set up M.2 SSDs as a storage volume, even though that was not officially supported for the longest time. After two years of running my Docker containers off my mechanical hard drives, moving them to a RAID 1 array using two Samsung 970 EVO Plus 2TB drives has been an eye opener. Large media libraries became so much faster to use that I only regret not doing it sooner. DSM 7.2 finally introduced official support for SSD storage volumes, but it only recognizes a limited number of drive models, breaking NVMe volumes running on unsupported drives. Fear not: this script fixes the issue, which freaked me out when I ran into it.
To then turn the S in NAS from Storage to Server, you need two main resources:
CPU: As you’re about to see, you’ll want something that can run Docker. Officially, that excludes ARM models, though it seems there are workarounds. CPU-intensive processes include video transcoding and scanning large media libraries. Support for the AES-NI instruction set helps handle encrypted content.
RAM: when all is said and done, you might end up running 30 Docker containers or more, in addition to several native apps. This all adds up. More RAM also means that applications can maintain larger caches, usually making them feel more responsive.
Anticipating a bit on the software we’re about to discuss: media servers like Plex and Komga can consume 500+MB of RAM each, depending on the size of your libraries. The Docker UI will report high RAM use because it aggregates RAM and cache; for a better breakdown, Portainer tells apart actual memory use from cache.
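If you want a raw per-container snapshot from SSH rather than relying on the Docker UI’s numbers, `docker stats` can report it in one shot. A small sketch (it simply prints a fallback message on a machine where Docker isn’t available or the daemon isn’t running):

```shell
# One-shot snapshot of per-container memory use; prints a fallback
# message instead of failing where Docker isn't available.
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}' 2>/dev/null \
  || echo "docker not available on this machine"
```

The `--no-stream` flag makes it a single sample instead of a live, continuously refreshing view.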

While the DS920+ is officially limited to 8GB, if you buy the right 16GB DDR4 stick, you can actually get to 20GB, which is what I did with a Crucial CT16G4SFD8266. Be careful: with many other RAM models, your NAS will refuse to boot. Again, going beyond 8GB on that model is not supported by Synology, but plenty of people besides me have done it without issue. See this for details.
In summary, my DS920+ is maxed out with 20GB of RAM, 48TB of usable HDD storage, and 2TB in redundant SSD storage to run demanding containers and their caches.
Regardless of whether you add a lot of apps or focus primarily on storage, you must put your NAS behind a UPS. No ifs, no buts, no laters, do it! Crashes due to blackouts are one of the leading ways to brick any computer.
3. Native DSM Applications: Low Hanging Fruit Meeting a Decent Range of Core Needs
One of the main selling points in favor of Synology is its DiskStation Manager (DSM) operating system. While it’s Linux underneath, it comes with an easy-to-use interface and a range of useful applications: file sharing/syncing/backup, productivity, and multimedia apps that effectively let you build your own cloud. For many people, that’s all they need, and there’s nothing wrong with that.
I bought my first Synology NAS in 2013 (a modest DS213j), so I can vouch for Synology’s staying power. My first NAS ever was a QNAP TS-209 bought in 2007, so I’ve been using such devices for 15 years, which gives me some perspective on how this category has evolved.
4. Third-Party Applications: Staying Native, Going Containers, And What About VMs?
As good as they are, only a few apps come with DSM, but these days a NAS is really a small form factor PC, not just a glorified network hard drive. It’s thus legitimate to want to install more applications depending on what you’d like to run. How to go about that?
4.1. Synology Community: Keep It Simple Stupid
If you want to install apps aside from those that come with DSM but want to keep it simple, change your Trust Level in the Package Center to include trusted publishers. This will give you access to packages from SynoCommunity. Very little tinkering is needed, and you might just meet your needs with the available packages. There’s really not a lot more to say here, as this will look and feel like native DSM apps.
4.2. The Case for and Against Docker: A World of Possibilities, If You’re Able and Willing to Spend the Time to Learn
To open up a much bigger world beyond what’s provided by Synology and its user community, you really need to consider using Docker. You’ll go from a few dozen supported packages to thousands, with much broader support since Docker works on all major operating systems. Consider this: these days, some apps are only released as Docker containers.
It’s a fun platform to learn, but be ready to spend significantly more time getting on top of all its foibles than if you stuck to DSM apps. But fear not, there are tools to make Docker more approachable.
Aside from the extra learning curve, the main drawback of using Docker is that it adds a single point of failure to your stack. If the Docker daemon crashes, and it can, all your containers will obviously be down too. After a transient internet connectivity issue, I once had to reboot my Synology from the command line as Docker wouldn’t otherwise restart. It’s not a big deal on a day-to-day basis, but it’s something to keep in mind if you place availability above everything else. If, like me, you run 20+ containers, expect a full reboot to take 10 minutes or so as your entire stack restarts.
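When every container goes dark at once, a quick way to check whether the daemon itself is the culprit is to probe it from SSH. A minimal sketch:

```shell
# 'docker info' talks to the daemon socket, so it fails when the
# daemon is down even if the docker CLI itself is installed.
if docker info >/dev/null 2>&1; then
  echo "docker daemon is up"
else
  echo "docker daemon is down (or docker is not installed here)"
fi
```

If the daemon is down, restarting the Docker package (covered below) is usually faster than a full reboot.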
4.3. Bare Minimum Docker & Linux Basics to Know Your Way Around
We’ll see later that there are some graphical user interfaces that simplify Docker administration, but there are core concepts that you’ll still need to understand:
Volumes, and whether you want named volumes or bind mounts. If you care about making your containers seamlessly transportable from host to host, use named volumes. For now I just use Docker on my main NAS, so I bind, which is admittedly the lazier approach that I might revisit later. See this related discussion.
Networking, starting with port mapping between the host (aka local, i.e. the Synology OS) and container. You’ll want to make sure your host ports don’t overlap. To display ports used by your containers, run this from an SSH command line:
docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
Not all packages let you change their internal port, so stick to their default unless you can override it through an ENV setting.
Environment variables define details of how your container should be set up. Note that there’s a long-standing bug that prevents Portainer from updating the ENV variables of an existing container on Synology, so you’ll want to use the default Docker UI when you need to do that. The most common ENV variables are UID, GID, and TZ (time zone).
UID and GID values and file permissions. In my experience this is where there’s a lot of friction to get containers to fully work, especially if they need to write data. This happens on your Synology’s file system, outside of Docker/Portainer.
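These concepts — host-to-container port mapping, environment variables, bind-mounted volumes — map one-to-one onto a Compose definition, which you’ll also meet later in Portainer stacks. A minimal sketch, where the service name, image, ports, IDs, and paths are all placeholders to adapt (note that many popular images name the ID variables PUID/PGID rather than UID/GID):

```yaml
version: "3"
services:
  myapp:                         # hypothetical service name
    image: example/myapp:latest  # placeholder image
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    environment:
      - PUID=1026                # UID that owns the bind-mounted folder
      - PGID=100                 # GID, typically 'users' on Synology
      - TZ=America/New_York      # time zone
    volumes:
      - /volume1/docker/myapp:/config   # bind mount on the NAS file system
    restart: unless-stopped
```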
For more, read Learning Containers From The Bottom Up.
4.4. Remote Access via SSH, Guacamole
To find out UIDs/GIDs or edit file permissions, you’ll need to enable SSH on your NAS and know a few Linux commands, starting with sudo, cd, ls, chmod, and chown.
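For example, here’s how you’d look up the numeric IDs to feed a container’s UID/GID settings and then hand ownership of its data folder to that user (the folder path is a placeholder):

```shell
# Print the numeric UID and GID of the logged-in user; these are the
# values most containers expect in their UID/GID (or PUID/PGID) settings.
uid=$(id -u)
gid=$(id -g)
echo "UID=$uid GID=$gid"
# Then give that user ownership of the container's data folder, e.g.:
# sudo chown -R "$uid:$gid" /volume1/docker/myapp   # placeholder path
```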
Some instructions are specific to DSM, such as this one to stop/start docker:
synopkgctl stop Docker
synopkgctl start Docker
If your main PC runs Windows, I recommend MobaXterm, a much better terminal than PuTTY. You’re likely to get an annoying “Could not chdir to home directory” message when you log in; to address it, enable the User Home Service.
You can also access your NAS and PC via a web browser thanks to Apache Guacamole in a Docker container. In addition to SSH, this will also let you access your PC’s Windows UI via VNC and RDP. Read:
- How to enable Remote Desktop on Windows 10
- Installation of OpenSSH For Windows Server 2019 and Windows 10
This is the type of service that would be disastrous to let a nefarious actor gain access to, so pay extra attention to how you secure it. Do NOT leave these services on when you don’t need them and do NOT keep the default guacadmin account. That means closing the SSH port in your firewall, turning off RDP, and turning off the Guacamole container if you’re only going to use it infrequently, say when you’re travelling away from home. It wouldn’t be a bad idea to use an access control profile for the container’s reverse proxy too.

4.5. Make Docker More User Friendly with Portainer or Yacht + SSH/Linux Basics
While using the Docker CLI and Docker Compose gives you a lot of control over your setup, it’s more work than I necessarily want for what remains a hobby. DSM comes with a Docker GUI but it’s fairly limited, so in order to simplify the installation and maintenance of containers I use Portainer CE (this page includes a link to a live demo, see also this tutorial). There’s also a newer, less mature but good-looking alternative in Yacht.
You can install and configure containers manually by retrieving images from Docker Hub, but with application templates you’ll get up and running even more quickly. Read this thread explaining how to enable 80+ app templates with a couple of clicks. Speaking of Docker Hub, be careful who you get your images from; there are some malicious actors using it for cryptojacking.
To update an existing container in Portainer, simply select it, click on Recreate and make sure to pull the latest image as per the screenshot below. Do not update from auto-updaters within the apps themselves.

Portainer itself can’t be updated that way, you’ll have to use the command line with elevated permissions to stop, delete, and recreate your container (below creatively named ‘portainer’) from the latest image:
docker stop portainer
docker rm portainer
docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always --pull=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
You can group a collection of containers into a stack which makes sense if they share resources and you’d like a quick way to redeploy the stack elsewhere in the future.
4.6. Manual Container Configuration via Docker Compose Still Has Its Uses
While you’re bound to find up-to-date container images for popular packages, it’s less likely someone went through that trouble for less successful or newer ones. In that case, you’ll have to use Docker Compose, i.e. configure your container via a docker-compose.yml file. See this Idiot’s guide to getting this working. Yes, containers created manually will be visible in Portainer.
Be aware that, like Python, YAML relies on semantic whitespace. I personally find this infuriating, but some people, who no doubt also relish pineapple pizza, seem to love it. See 10 YAML tips for people who hate YAML.
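To illustrate why the whitespace matters: in a Compose file, shifting a key’s indentation changes which mapping it belongs to. Both documents below are syntactically valid YAML, but only the first means what you probably intended (service names are placeholders):

```yaml
# Correct: 'ports' is a property of the 'web' service (consistent indents)
services:
  web:
    image: nginx
    ports:
      - "8080:80"
---
# Wrong: 'ports' dedented one level is now a sibling of 'web', so
# Compose thinks you declared a second service literally named 'ports'
services:
  web:
    image: nginx
  ports:
    - "8080:80"
```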
On occasion you might also want to generate a docker-compose yaml definition from a running container, which can be done with docker-autocompose, giving you a Portainer-to-Compose path.
To get the best of both worlds, check out how stacks expose docker-compose through the Portainer UI.
4.7. Keep Containers Updated on Your Own Terms with Watchtower + Get Notified of New Releases
You can use Watchtower to get notified when containers have available updates without automatically updating them. Why not just auto update containers? The reason is that I’d like my setup to keep humming, and things might break with an update. I don’t want to have to perform maintenance in the middle of doing actual work, so I want to control if/when to update any given container. If it’s not broken, don’t fix it: it’s a good habit to check on GitHub what the latest version is about and whether you really need to install it. In most cases you can skip minor updates and stick to stable, relatively infrequent ones.
Watchtower keeps an eye on container updates; it doesn’t tell you what the updates are about. To keep abreast of many repositories, you can use RSS. Simply add .atom at the end of the releases path for any GitHub repo, as in:
https://github.com/grafana/loki/releases.atom
An alternative is to use Bandito.re to get an RSS feed of releases for all the projects you’ve starred. Of course, you could set up your own RSS reader (e.g. Tiny Tiny RSS) in its own container. You may want to follow both the original GitHub projects and their Docker container offshoots, just to make sure the latter are keeping pace.
Docker Hub doesn’t have notifications of new image releases, but there’s docker-hub-rss and Docker Image Update Notifier (DIUN).
If you don’t like RSS you can use another mechanism for release notifications such as github-releases-notifier to push to Slack.
4.8. What About Virtual Machines?
While Docker is a lightweight architecture, virtual machines let you run a different operating system within your NAS. See What’s the Diff: VMs vs Containers.
This is very powerful functionality, but it comes at the cost of consuming more resources. While you could run, say, a Windows 10 virtual machine on a $400 consumer NAS, poor performance will make it of limited use in practice. I’m thus going to limit the rest of this entry to containers, but know that if you’re willing to invest more in your hardware, the possibilities are truly endless.
Some feedback I got on this entry is that NASes have a bit of an “identity crisis” and that at some point you might hit a performance wall, leading some people to go for a full-fledged server instead. To this I’ll make two counterpoints: 1) at the very least a NAS gives you a set of training wheels at a reasonable price before committing to a home server, if you need one at all, and 2) smartphones have shown us that as price/performance keeps being revisited, devices should not be pigeonholed forever based on their initial functionality and performance.
5. With Great Power Comes Great Responsibility: Security & Network Admin 101
“Hey check out my cool Plex movie server” creates a promise to your friends and family, but there’s a lot to unpack to fulfill that promise over the long run. The same goes with home automation and other self-hosted services that somehow never quite want to keep humming unless you keep them on a tight leash.
This is the object of a separate entry, which you should definitely read before going too deep down the rabbit hole of self-hosting: Home LAN + NAS Administration & Security 101 and Beyond. You might also need to set expectations, as some people seem to believe free service from a friend should somehow be backed by concierge support.
6. Awesome Things You Can Do with Your NAS: Media Management Galore
Phew, that was a lot of prerequisites to get to the good stuff! If you’ve ignored everything above, you either know what you’re doing and you’re fine, or you don’t and you might regret later not having things in the more arduous but ultimately optimal order. Your call!
6.1. Media Servers: Different Media Types Require Different Tools
Handling all sorts of media libraries to share them 24/7 is one of the primary reasons why people buy a NAS. The main categories are audio/visual and written content. There’s no single piece of software that does it all, but you can consolidate around two or three core tools depending on your exact needs.
6.1.1. For Audiovisual Libraries, Plex Offers a Slick But Closed Experience, with a Variety of Competitors to Choose From
For movies, TV shows, documentaries, video tutorials, and to a lesser extent music and home videos/photos, Plex is a very strong contender, as you’ll see in my ultimate Plex guide. Plex offers a polished, all-encompassing experience that’s hard to beat, especially with a great media player such as the beloved Nvidia Shield.
The main directly comparable alternatives are Emby and Jellyfin, which are stronger in some areas and are respectively partly and entirely open source. That said, one size doesn’t always fit all and some people mix and match several audio/video servers. See Plex vs Emby vs Jellyfin vs Kodi.
There are some solutions dedicated to just music collections, such as Navidrome, which has a nice enough UI but only partial metadata support. Music library apps rely for the most part on having a cleanly organized, metatagged library with cover art, so I’m testing Bliss to get my music files back in shape. I’m just diving back into that world after years of neglect and will update this section in depth eventually. In the meantime, see this thread for options and considerations.
Likewise, if you have a massive photo collection or want features such as face recognition, the generalist media servers might not cut it for you. It’s not something I’m personally very vested in, so I’ll point you to the many threads discussing self-hosted options after the demise of unlimited Google Photos, starting with this megathread on /r/selfhosted.
6.1.2. Ebooks, Comics, Audiobooks Work Best in Dedicated Self-Hosted Servers
If you have both books and comic books, you could try handling all content types with one tool such as Calibre or Ubooquity. In practice Calibre is not great for comics, while Ubooquity has not been developed since 2018 and will struggle to handle bigger ebook libraries. Meanwhile audiobooks are obviously consumed very differently from written material. A much more pleasant experience that will handle even huge libraries can be obtained by combining:
- Calibre on your desktop computer as the backend to handle your ebook library (not comics, not audiobooks). You could run Calibre in a container but the user experience via the browser won’t feel as smooth as running the desktop app.
- Synology Drive Client handling an ongoing one-way upload from your PC to your NAS to keep an up-to-date backup of your library, as a) backups are essential, and b) Calibre doesn’t like to work off network drives. We’re using one-way upload and not backup because the latter doesn’t let you control the destination folder. This works well, with one foible: you have to reauthenticate in the client whenever you update your SSL certificate on the NAS. I ended up installing Active Backup for Business since it can generate a 10-year certificate for you (hat tip to Constantin Razvan).
- Calibre Web running on the NAS, using the library backup (not the PC parent library), as the web frontend to serve books to your users. It’s not recommended to host your primary Calibre library on a network drive, and you want to back it up anyway, so I like this architecture. Just to be clear, Calibre Web and Calibre are two completely separate pieces of software.
- Komga for comics, with files stored directly on the NAS. Komga looks very good and is under active development with a vibrant community.
- Audiobookshelf, which I found better than Plex for audiobooks.
- Dedicated ebook reader (e.g. Moon+ Reader), comic reader, and audiobook reader apps on your phone or tablet, as these require very different user interfaces. Note that Calibre Web and Komga come with in-browser streaming readers, though obviously they require you to be connected to their respective servers, whereas downloading book copies to mobile apps gives you offline reading.
One caveat of using Calibre Web in a container with a synchronized library is that it won’t automatically recognize and load database updates. You either have to click the “Reconnect to database” button manually or call the /reconnect endpoint. The latter opens the door for automation with an integration tool like Huginn or n8n. I did a quick and dirty job with the latter that executes a cron job once a day, calls the reconnect endpoint, and notifies me via PushBullet (or Gotify as a self-hosted alternative). I’ll investigate how I might instead trigger that any time the Calibre database timestamp is updated on my NAS as I plan to play with n8n more.
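My quick-and-dirty automation boils down to something like the following one-liner, which you could just as well drop into DSM’s Task Scheduler instead of n8n (the hostname and port are placeholders for wherever your Calibre Web container listens; the fallback echo keeps the task from erroring out when the server is unreachable):

```shell
# Ask Calibre Web to reload its database after the library sync runs.
CALIBRE_WEB="http://nas.local:8083"   # placeholder address of the container
curl -fsS --max-time 10 "$CALIBRE_WEB/reconnect" \
  || echo "Calibre Web not reachable from this machine"
```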
To monitor a folder for new books and add them to Calibre automatically, go to Preferences > Adding books > Automatic Adding, but be aware that this will move books from their original location. If you don’t want that behavior, for instance because you want to keep seeding these files, either set up some automation to copy your torrented books to the monitored folder or use the calibredb command line interface (probably via Task Scheduler on Windows, or the same automation container we just discussed if you’re running Calibre on your NAS). Make sure to check for duplicates otherwise you might wreck your library real fast.
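If you go the calibredb route, a scheduled script along these lines can sweep a drop folder into the library. This is a hedged sketch: the two paths are placeholders, the guard makes it a no-op on machines where calibredb or the folder is absent, and you should test the flags against your Calibre version first:

```shell
# Sweep new books from a drop folder into the Calibre library.
DROP=/volume1/books/incoming         # placeholder drop folder
LIB=/volume1/books/calibre-library   # placeholder library path
if command -v calibredb >/dev/null 2>&1 && [ -d "$DROP" ]; then
  calibredb add --recurse "$DROP" --library-path "$LIB"
else
  echo "calibredb or drop folder not present; nothing to do"
fi
```

Unlike Automatic Adding, this copies books in via the CLI, so the originals stay where they are for seeding.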

Other items to be aware of with Calibre Web:
- It only handles a single Calibre library at a time, though you can run multiple instances on different ports.
- It doesn’t support Calibre virtual libraries either. You can create “shelves” based on search criteria, though I’m not sure new books matching these criteria are automatically added to an existing shelf.
- It’s not an actual Calibre content server, so it won’t be recognized as such by Readarr for integration purposes.
- Like with Plex libraries, the quality of your metadata in the backend (i.e. Calibre) will determine your user experience in the Calibre Web frontend. If you import ebooks in bulk, you’ll want to remove duplicates, extract ISBNs, and download metadata/covers for the best results.
While combining Plex, Komga, Calibre Web, and Audiobookshelf may look like a lot of moving parts, they’re relatively fast to set up on the backend, and their user interfaces look very similar, making for a smooth learning curve for your users across all your libraries and content types. Hi, I’m Olivier, and I’m a data hoarder (“Hi Olivier”), but at least I’m a tidy one. Wait, that trait is part of the disorder!

6.2. Automating & Managing Content Downloads with *ARR Apps and Downloaders
There’s a whole range of *arr applications meant to automate the retrieval of content from sources such as (private) torrent trackers and Usenet. The main ones are Sonarr for TV series, Radarr for movies, Lidarr for music, Readarr for ebooks, and Mylar for comic books. This is worth pursuing if you want to keep abreast of new releases – an especially valuable thing to automate for serial content – fill in the gaps in your existing collections, or let your users send content requests.
It takes some effort to work out the kinks though, what with the multiple layers of integration, but the end result is very satisfying and makes official commercial offerings laughable in comparison.
The typical *arr software stack goes like this:
- Optionally, you can put a request and discovery service such as Ombi or the newer and quite slick Overseerr in front of your *arr app(s). Integration with Trakt is also very nice to have; see my Plex entry for details.
- The *arr application scans your existing library and manages your wish list. New releases matching your criteria get picked up. This is especially useful for serial content, which is why if you need to use one *arr to start, it should probably be Sonarr. Once you master Sonarr you’ll feel right at home in the other apps.
- The *arr app connects to a downloader, either directly or via Jackett for more options. Depending on where you live, you might want to do that via a VPN. Download requests are sent to either your torrent client – e.g. Transmission – or Usenet client (e.g. SABnzbd).
- Complete downloads are then moved or copied to your library folders depending on whether you do long-term seeding and other personal preferences. If you’re torrenting, you’ll likely want to use hardlinks to keep seeding without duplicating storage space.
- Your media manager(s) of choice pick up the new content via scheduled or manual scanning.
- Along the way you might send notifications through the medium and to the users of your choosing.
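The hardlink point above is worth a quick demonstration: a hardlink is a second name for the same data on disk, so the seeding copy and the library copy don’t double your storage use. The temp paths below are just for illustration:

```shell
# Create a file, hardlink it, and confirm both names share one inode,
# i.e. the data exists on disk only once.
tmp=$(mktemp -d)
echo "episode data" > "$tmp/downloads.mkv"
ln "$tmp/downloads.mkv" "$tmp/library.mkv"      # hardlink, not a copy
if [ "$tmp/downloads.mkv" -ef "$tmp/library.mkv" ]; then
  echo "same inode: storage counted once"
fi
rm -r "$tmp"
```

Note that hardlinks only work within a single file system, which is why the *arr setup guides insist on keeping your download and library folders on the same volume.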

For help on how to set up these tools, read TRaSH-Guides.
6.3. Where to Get Content to Feed to Your *Arr Stack?
If all you’re interested in is watching the latest Marvel movie, public torrent trackers are fine and you don’t need to chase private ones. For older, hard-to-find content, nothing beats the best private torrent trackers thanks to their wide selection, deep retention, selective curation, and manicured organization. But if you’re not already in them, prepare to wait months or even years before you can get in, if you can get in at all. I use Transmission for torrenting, which is OK but not great at scale; I might prefer qBittorrent if I had to start from the ground up again. Transmission Remote GUI gives you more options than the web UI (e.g. you can rename files within torrents), but it has a tendency to freeze for long periods, so I eventually moved to tremotesf2, which is more responsive and actively maintained.
Unlike private trackers, Usenet doesn’t require you to jump through as many hoops, but indexers and servers are hit and miss if you’re looking for long-term retention of rare content.
In some categories such as comics and ebooks there are some decent direct download websites, but they tend to be a hassle to use with captchas, throttled free downloads, and fairly aggressive advertising. That makes them much less suited for automation. There are also some massive content dumps out there such as Libgen and The-Eye, if you know where to look.
For some content types a couple old-school options remain very relevant, though again not very automatable:
- DirectConnect via a client such as AirDC++. You’ll need to figure your way into the right private hubs but that’s still a great source for things such as comics and niche music.
- I haven’t used it in years, but I bet there are also some good IRC channels still alive and kicking.
- Soulseek remains invaluable for serious music collectors. Nicotine+ is a sleeker and faster UI on top of Soulseek but I couldn’t resolve a couple of vexing issues.
7. More Awesome Stuff: Get Down to Business with Your Own Cloud
Media management is probably the gateway drug that gets most people into NASes, but there’s so much more you can do. I cannot possibly make an exhaustive list, but I’ll canvass some of the main categories and how to approach them as a beginner.
7.1. Backup & Sync: Just Do It!
This is one area where you might find there’s more than enough to meet your needs within the packages built in DSM:
- Synology Drive – File sync and time machine functionality for files across your PCs and NASes.
- Cloud Sync – “seamlessly connect your local Synology NAS to public cloud services or on-premises storage.”
- Hyper Backup – Comprehensive solution with a bunch of settings for scheduling, versioning, rotation, integrity checks and more. Because of all these features, backup folders are not a direct copy of your source files as is.
- Active Backup for Business (ABB) – “Consolidate backup tasks for physical and virtual environments, and rapidly restore files, entire machines, or VMs when necessary – completely free of licenses on compatible NAS models.”
I like to have several layers of backup so I use Drive to sync from my PC to my NAS, then Cloud Sync to save again to Microsoft and Google clouds. I can easily check that my cloud backups work from mounted drives on my PC via the OneDrive and Google Drive desktop clients. There’s of course a long list of self-hosted services you might use instead, provided you have off-site storage to create your own cloud.
7.2. Home Automation
You’ve been warned, this category opens as big a can of worms as media management. I’m just getting started in this area so can’t offer much in the way of expertise, so for now I’ll point you in the direction of what seems to be the de facto hub for most people: Home Assistant. Like with media management, this category will involve your family at the very least as passive end users, so you need to figure out how to make it “just work.” Good luck with that!
With my nascent setup – Ring cameras and alarm system, Alexa Echos, and some smart lights – I’m still very much in the steep part of the learning curve trying to get to reliable results.
7.3. Almost Everything Else: There’s A Self-Hosted SaaS for That
The list here is almost limitless, as open-source alternatives to commercial SaaS are emerging in many categories, such as:
- Task management and Kanban (think Trello, Asana)
- Automation & Integration (think Zapier, IFTTT)
- Forums/IRC
- Document & Note management
And on and on, the idea is that in many cases you don’t have to share private data or pay ongoing fees to third parties if you’re willing to manage your own stack. For way more details, see:
- Awesome-Selfhosted
- awesome sysadmin
- What do you use your NAS for?
- /r/selfhosted
- SelfHosted.show
- Funky Penguin’s “Geek Cookbook”
7.4. Publishing Websites, Blogs, Portals & Providing a One-Stop-Shopping Option to Your Users
A large subset of the previous section could be dedicated to publishing websites of various flavors, from simple static sites to blogs to wikis to portals, starting with popular options like WordPress and Ghost, to many other niche solutions. I’m not going to cover this much here as you can explore that category from one of the links above.
Regardless of your desire to self-publish, if you’re going to host a variety of services for your friends and family, you might want to put them all under one roof with an application dashboard/portal such as Organizr, Heimdall, or Muximux. I’m only halfway into implementing Organizr and will update this section once I’ve taken the time to integrate tabs more fully, with a reverse proxy and hopefully consolidated authentication, but I thought application portals would be a good way to wrap up this entry.
Some apps require you to fine-tune how the reverse proxy is set up for them to load in an iframe within Organizr; others, such as Calibre Web, refuse to load in an iframe at all.
The screenshot opening this post shows Portainer within Organizr, which has that Inception vibe that’s typical of modern software integration.
8. Related Entries of Interest
If you liked this post, make sure to check out:
- Home LAN + NAS Administration & Security 101 and Beyond for Remote Work and Media Management
- How to Bend Plex to Your Will to Handle Complex Libraries Without Losing Your Mind
- Things I Found Out the Hard Way to Get the Most of the Nvidia Shield
Links to products on Amazon include my affiliate tag. If you make purchases through them it helps pay for my hosting costs (no, this blog is not self-hosted!). Thanks for your support!