I believe ZFS works best when it has direct access to the disks, so putting an md array underneath it is not best practice. I'm not sure how well ZFS handles external disks, but that's something to consider. As for the drive sizes and redundancy, each size should get its own vdev, so you'd be looking at one mirror vdev of the 2x6TB and one mirror vdev of the 2x12TB for maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need more space, you can create new vdevs and add them to the pool.
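Roughly what the pool creation would look like, as a sketch; the device paths below are placeholders, so substitute your own /dev/disk/by-id names:

```
# Two mirror vdevs in one pool
zpool create tank \
  mirror /dev/disk/by-id/ata-6TB_DISK_A /dev/disk/by-id/ata-6TB_DISK_B \
  mirror /dev/disk/by-id/ata-12TB_DISK_A /dev/disk/by-id/ata-12TB_DISK_B

# Later, grow the pool by adding another mirror vdev:
# zpool add tank mirror /dev/disk/by-id/ata-NEW_DISK_A /dev/disk/by-id/ata-NEW_DISK_B
```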
If you're not worried about redundancy, then you could bypass ZFS and just set up RAID-0 through mdadm, or add the disks to an LVM VG, to use all the capacity. Just remember that you could lose the whole volume if a single disk dies, and keep in mind that includes accidentally unplugging an external disk.
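If you do go that route, it would look roughly like one of these; the device names are placeholders and either approach will wipe the disks:

```
# Option 1: RAID-0 stripe across all four disks with mdadm
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Option 2: pool the disks into a single LVM volume group/logical volume instead
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate data_vg /dev/sda /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n data_lv data_vg
```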
Based on your edit about getting the public IP: most firewalls/routers are not configured to do this by default (it's called hairpinning, or hairpin NAT). If you request your firewall/router's external IP address from the internal network, you won't get a response unless hairpinning is enabled, and some devices don't support it at all. If you have an internal DNS server, override the record so it returns the private IP address; that way the traffic goes to your nginx reverse proxy instead of the firewall/router.
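If your internal DNS happens to be dnsmasq or Pi-hole based, the override is a one-liner; the hostname, IP, and file path here are just placeholders:

```
# Point the public hostname at the reverse proxy's LAN address instead of the WAN IP
echo 'address=/myservice.example.com/192.168.0.10' | sudo tee /etc/dnsmasq.d/02-local-override.conf
sudo systemctl restart dnsmasq
```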
There's a tutorial that leads you towards the first dungeon boss of the game, but after that it looks like you make your own challenges. There are a few bosses around my level that I'll be taking on next, then I'm probably going to explore to see if I can find more dungeons.
Honestly I would describe it as Ark-lite. It has base building/taming and that's pretty fun, and you can also get random encounters at your base. The leveling system is a bit grindy. There are dungeons and bosses in the world to go find and explore. The map is huge, I think I've hardly explored a tenth of it.
Been playing about 15 hours or so and enjoyed it, but the game is definitely early access. I've had a number of crashes, fallen through the world a few times, etc. I'd give it a month or two if that bothers you.
Based on your update, you may need to bring the containers down and back up to fix the database.
Sometimes when opening LinguaCafe for the first time there is an error message about the users database table. If this happens, just stop and start your containers again; that should fix the problem.
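Assuming it was started with docker compose, that's just the following, run from the directory with your compose file:

```
docker compose down
docker compose up -d
```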
Since you've probably been using the SMB protocol to access the NAS, there are a few things to understand about the NFS protocol, which works differently. An NFS mount acts like a mapping of the entire filesystem rather than a login as a specific user, so if the user IDs differ between the systems you can get access errors. For example, the default user on a Synology has a uid of 1024, while most client systems default to 1000. That means your user may not have access to the share or its files even though it's mounted on the client.
One thing to check is what your shared folder's NFS squash setting is. You'll find it in Control Panel > Shared Folder, under the NFS Permissions tab. If it's set to "No mapping" then the uids must match. The easiest setup is "Map all users to admin", but you may run into issues later if you switch back to SMB, since new files will be owned by admin.
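A quick way to see the uid situation from the client side; the hostname and export path are placeholders:

```
id                                              # most Linux clients: uid=1000(youruser)
sudo mount -t nfs nas.local:/volume1/share /mnt/share
ls -ln /mnt/share                               # numeric uid 1024 here = Synology's default admin user
```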
Sounds about right; just be aware that your LAN and WAN networks need to be different subnets, so you'll likely need to change your old router's DHCP subnet, e.g. 192.168.1.0/24 on the WAN and 192.168.0.0/24 on the LAN.
Easiest is to select the green Code button and click Download ZIP. You should be able to open the index.html page in your browser and use it like normal. YMMV if this uses content/data from other sources/sites.
Synology's support is also crazy good. I'm still using my 8-bay NAS that I bought in 2015; it's been replaced twice through RMA. Just upgraded it to DSM 7.0 a few months ago. Almost unheard of in this era of planned obsolescence.
A torrent is broken into pieces, which are further broken into blocks. The torrent file contains hashes of all the pieces that make up the full torrent. The client validates each piece as it's downloaded and will re-download it from another peer if an invalid piece is encountered. The spec goes into more depth if you're interested: https://wiki.theory.org/BitTorrentSpecification
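Conceptually, the per-piece check is just a SHA-1 over a fixed-size slice of the data. Something like this illustrates the idea; the piece length, index, and filename are made up, and the real values come from the .torrent metadata:

```
PIECE_SIZE=262144   # 256 KiB, a common piece length
PIECE_INDEX=5
dd if=downloaded.iso bs=$PIECE_SIZE skip=$PIECE_INDEX count=1 2>/dev/null | sha1sum
# Compare the result against the corresponding 20-byte hash in the torrent's "pieces" field
```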
I've been watching this guy's back catalog on building a kernel and bootloader from scratch. A bit monotone, but amazing technical knowledge. https://youtube.com/@nanobyte-dev