• 0 Posts
  • 41 Comments
Joined 2 years ago
Cake day: November 5th, 2023

  • Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

    Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

    This also makes cleanup easier. The app’s documentation can just say “docker compose down -v” and users are done, instead of listing a bunch of directories that need to be cleaned up.

    Those lingering directories can also cause problems for users who wanted a clean start after their app broke; with a bind mount, that broken database won’t have been deleted for them when they bring the services back up.

    All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts for a couple of reasons.

    • When running production applications I don’t want the volumes to be able to be cleaned up so easily. A little extra protection from accidental deletion is handy.

    • The default location for named volumes doesn’t work well with any advanced partitioning strategies, e.g. if you want your database volume on a different partition than your static web content.

    • This is an older reason and maybe more personal preference at this point, but back before Docker’s overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break badly enough that we had to wipe out the entire /var/lib/docker btrfs filesystem. Ever since, I personally want to keep anything persistent out of that directory.

    So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

    Systems administrators running those applications should know and understand the docker compose file well enough to change those settings and make them production ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
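
    To make the difference concrete, here is a minimal sketch using MariaDB (the container names, the volume name, and the /srv/mariadb/data path are just placeholders):

        # Named volume: Docker manages the data under /var/lib/docker/volumes,
        # and "docker compose down -v" (or "docker volume rm") cleans it up.
        docker run -d --name mariadb-named \
          -e MARIADB_ROOT_PASSWORD=example \
          -v mariadb_data:/var/lib/mysql mariadb:11

        # Bind mount: the data lives at a path you choose, survives "down -v",
        # and can sit on whatever partition you like.
        docker run -d --name mariadb-bind \
          -e MARIADB_ROOT_PASSWORD=example \
          -v /srv/mariadb/data:/var/lib/mysql mariadb:11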


  • For shared lines like cable and wireless it is often asymmetrical so that everyone gets better speeds, not so they can hold you back.

    For wireless service providers, for instance, let’s say you have 20 customers on a single access point. Like a walkie-talkie, a radio can’t transmit and receive at the same time, and no two customers can be transmitting at the same time either.

    So to get around this problem TDMA (time division multiple access) is used. Basically time is split into slices and each user is given a certain percentage of those slices.

    Since the AP is transmitting to everyone it usually gets the bulk of the slices like 60+%. This is the shared download speed for everyone in the network.

    Most users don’t really upload much, so giving the user radios slices equal to the AP’s would be a massive waste of air time. And since there are 20 customers on this theoretical AP, every 1mbit cut from each user’s upload speed is 20mbit added to the total download capacity for anyone downloading on that AP.

    So let’s say we have an AP and clients capable of 1000mbit. With 20 users and one AP, symmetrical speeds would need 40 equal slots: 20 slots for the AP (one for each user’s downloads) and one slot per user for uploads. Every user gets 25mbit download and 25mbit upload.

    Contrast that with asymmetrical. Let’s say we do an 80/20 AP/client airtime split. We end up with 800mbit of download shared amongst everyone and 10mbit of upload per user.

    In the worst case scenario every user is downloading at the same time, meaning each gets about 40mbit of that 800. That is still quite an improvement over 25mbit, and if some of those people aren’t home or aren’t active at the time, there is that much more for those who are.
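
    Just to spell out the arithmetic from the example above (a quick shell calculation, nothing more):

        # Numbers from the example: 20 clients, radios capable of 1000mbit
        capacity=1000
        users=20

        # Symmetrical: 40 equal slots -> 25mbit down and 25mbit up per user
        echo "symmetric, per user:  $(( capacity / (users * 2) )) mbit each way"

        # Asymmetrical 80/20 AP/client airtime split
        echo "shared download:      $(( capacity * 80 / 100 )) mbit"
        echo "upload per user:      $(( capacity * 20 / 100 / users )) mbit"
        echo "worst-case download:  $(( capacity * 80 / 100 / users )) mbit per user"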

    I think the size of the slices is a little more dynamic on modern systems, where the AP adjusts the user radios’ slices on the fly so that idle clients don’t leave a bunch of dead air. They still need a little time allocated to them for when data does start to flow, though.

    A quick Google seems to show that DOCSIS cable modems use TDMA too, so this all likely applies to cable users as well.





  • Can you run more cat6? There are plenty of HDMI over cat6 adapters that work well over some fairly long distances.

    There are also plenty of extended-length HDMI cables of 50+ feet if you can fish the HDMI end through. They get a bit expensive at that length because they are hybrid fiber optic, but there are no noise concerns.

    USB also has adapters to run over cat6. They are usually limited to USB2.0 but that should be plenty to plug a small hub in for mouse and keyboard.


  • Good catch on the redundancy, at the time posting this I didn’t realize I needed the physical space/drives to set up that safety net. 8 should be plenty for the time being. Say if I wanted to add another drive or two down the road, what sort of complications would that introduce here?

    With TrueNAS your underlying filesystem is ZFS. When you add drives to a pool you can add them:

    • individually (RAID0 - no redundancy, bad idea)
    • in a mirror (RAID1 - usually two drives, a single drive failure is fine)
    • raidz1 (RAID5 - any single drive in the set can fail, one drive’s worth of data goes to parity). Generally a max of about 5 drives in a raidz1; if you make the stripe too wide, then when a drive fails and you start a rebuild, the chances of one of the remaining drives you are reading from failing, or at least failing to read some data, increase quickly.
    • raidz2 (RAID6 - any two drives can fail, two drives’ worth of data goes to parity). I’ve run raidz2 vdevs up to about 12 drives with no problems. The extra parity drive means the chances of data corruption, or of another drive failing while you are rebuilding, are much lower.
    • raidz3 (triple parity - any three drives can fail, three drives worth of data goes to parity). I’ve run raidz3 with 24 drive wide stripes without issues. Though this was usually for backup purposes.
    • draid (any parity level and stripe width you want). This is generally for really large arrays, like 60+ disks in a pool.

    Each of these sets is called a vdev. Each pool can have multiple vdevs and there is essentially a RAID0 across all of the vdevs in the pool. ZFS tends to scale performance per vdev so if you want it to be really fast, more smaller vdevs is better than fewer larger vdevs.

    If you created a mirror vdev with two drives, you could add a second mirror vdev later. Vdevs can be of different sizes, so it is okay if the second pair of drives is a different size. So if you buy two 10TB drives later, they can be added to your original pool for 18TB usable.

    What you can’t do is change a vdev from one type to another. So if you start with a mirror you can’t change to a raidz1 later.

    You can mix different vdev types in a pool though. So you could have two drives in a mirror today, and add an additional 5 drives in a raidz1 later.

    Drives in a vdev can be different sizes but the vdev gets sized based on the smallest drive. Any drives that are larger will be wasting space until you replace that smaller drive with a similar sized one.

    A rather recent feature lets you expand raidz1/2/3 vdevs. So you could start with two drives today in a raidz1 (8TB usable), and add additional 8TB or higher drives later adding 8TB of usable space each time.
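
    For reference, a rough sketch of what that looks like with raw zpool commands (the pool name “tank” and the device names are placeholders; on TrueNAS you would do the same thing through the UI):

        # Two 8TB disks in a mirror vdev (~8TB usable)
        zpool create tank mirror /dev/sda /dev/sdb

        # Later: add a second mirror vdev; the pool stripes across both vdevs
        zpool add tank mirror /dev/sdc /dev/sdd

        # Raidz expansion (needs a recent OpenZFS, 2.3 or newer I believe):
        # attach one more disk to an existing raidz1 vdev named raidz1-0
        zpool attach tank raidz1-0 /dev/sde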

    If you have a bunch of mismatched drives of different sizes you might want to look at UnRAID. It isn’t free but it is reasonably priced. Performance isn’t nearly as good but it has its own parity system that allows for mixing drives of many sizes and only your single largest drive needs to be used for parity. It also has options to add additional parity drives later so you can start at RAID5 and move to RAID6 or higher later when you get enough drives to warrant the extra parity.


  • My server itself is a little HP mini PC. i7, 2 TB SSD, solid little machine so far. Running Proxmox with a single debian VM which houses all my docker containers - I know I’m not using proxmox to its full advantage, but whatever it works for me. I mostly just use it for its backup system.

    Not sure how mini you mean but if it has spots for your two drives this should be plenty of hardware for both NAS and your VMs. TrueNAS can run VMs as well, but it might be a pain migrating from Proxmox.

    Think of Proxmox as a VM host that can do some NAS functions, and TrueNAS as a NAS that can do some VM functions. Play with them both, they will have their own strengths and weaknesses.

    I’ve been reading about external drive shucking, since apparently that’s a thing? Seems like my best bet here would be to crack both of these external drives open and slap them into a NAS. 16TB would be plenty for my use.

    It’s been a couple of years since I have shucked drives, but occasionally the drives are slightly different from normal internal drives. There were some Western Digital drives that had one pin that was different from normal; they worked in most computers, but with some power supplies that had that pin wired you had to mask the pin before the drive would fire up.

    I wouldn’t expect any major issues, I’m just saying you should research your particular model.

    You say 16TB with two 8TB drives so I assume you aren’t expecting any redundancy here? Make sure you have some sort of backup plan because those drives will fail eventually, it’s just a matter of time.

    You can build those as some sort of RAID0 to get you 16TB or you can just keep them as separate drives. Putting them in a RAID0 gives you some read and write performance boost, but in the event of a single drive failure you lose everything.

    If 8TB is enough, you want to put them in a mirror, which gives you 8TB of storage and allows a drive to fail without losing any data. There is still a read performance boost, but maybe a slight loss in write performance.

    Hardware: while I like the form factor of Synology/Terramaster/etc, seems like the better choice would be to just slap together my own mini-ITX build and throw TrueNAS on it. Easy enough, but what sort of specs should I look for? Since I already have 2 drives to slap in, I’d be looking to spend no more than $200. Alternatively, if I did want the convenience and form factor of a “traditional” NAS, is that reasonable within the budget? From what I’ve seen it’s mostly older models in that price range.

    If you are planning on running Plex/Jellyfin, an Intel CPU with UHD 600 series or newer integrated graphics is the simplest and cheapest option. The UHD 600 series iGPU was the first Intel generation with hardware decode for h265, so if you need to transcode, Plex/Jellyfin will be able to read almost any source content and re-encode it to h264 to stream. It won’t handle everything (e.g. AV1), but in that price range it is the best option.
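
    If Jellyfin ends up in Docker, passing the iGPU through for transcoding is usually just a matter of handing the container /dev/dri. A rough sketch (paths and names are placeholders):

        docker run -d --name jellyfin \
          --device /dev/dri:/dev/dri \
          -v /srv/jellyfin/config:/config \
          -v /mnt/external:/media:ro \
          -p 8096:8096 \
          jellyfin/jellyfin

    Hardware transcoding then gets switched on in Jellyfin’s dashboard under the playback/transcoding settings.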

    I assume I can essentially just mount the NAS like an external drive on both the server and my desktop, is that how it works? For example, Jellyfin on my server is pointed to /mnt/external, could I just mount a NAS to that same directory instead of the USB drive and not have to change a thing on the configuration side?

    Correct. Usually a NAS offers a couple of protocols. For Linux, NFS is the typical protocol for that; for Windows it would be a Samba (SMB) share. NFS isn’t the easiest to secure, so you will either end up with some IP ACLs or just allow access to any machine on your internal network.

    If you are keeping Proxmox in the mix, you can also mount your NFS share as storage for Proxmox to create the virtual hard drives on. There are occasionally reasons to do this, like having your NAS take snapshots of the VMs, or for security reasons, but the extra layers generally cut into performance, so mounting inside the VM is better.
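
    For example, a rough sketch of mounting an NFS export inside the Debian VM (the NAS address and export path are made up; adjust to whatever your NAS actually exposes):

        # One-time client tools on Debian/Ubuntu
        sudo apt install nfs-common

        # Mount the NAS export where Jellyfin already looks for media
        sudo mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/external

        # Or make it permanent via /etc/fstab
        echo '192.168.1.50:/mnt/tank/media /mnt/external nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab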

    Will adding a NAS into the mix introduce any buffering/latency issues with Jellyfin and Navidrome?

    Streaming apps will be reading ahead and so you shouldn’t notice any changes here. Library scans might take longer just because of the extra network latency and NAS filesystem layers, but that shouldn’t have any real effect on the end user experience.

    What about emulation? I’m going to set up RomM pretty soon along with the web interface for older games, easy enough. But is streaming roms over a NAS even an option I should consider for anything past the Gamecube era?

    Anything past the GameCube era is probably large ISO files. Any game from a disc is designed to load data from the disc with loading screens, and an 8TB hard drive over 1Gb Ethernet is faster than most console discs can be read. The PS4, for example, only reads discs at 24MB/s. Nintendo Switch cards aren’t exactly fast either, so I don’t think they should be a concern.

    It wouldn’t be enough for current gen consoles that expect NVMe storage, but it should be plenty fast for running roms right from your NAS.


  • greyfox@lemmy.world to Selfhosted@lemmy.world · Sharing Jellyfin · 1 month ago

    Depending on how you set up your reverse proxy, it can reduce random scanning/login attempts to basically zero. The point of a reverse proxy is to act as a sort of web router and to validate that the HTTP requests are correctly formatted.

    For the routing, depending on what DNS name/path the request comes in with, it can route to different backends. So you can say that app1.yourdomain.com is routed to the internal IP address of your app1, and app2.yourdomain.com goes to app2. You can also do this with paths if the applications can handle it, like yourdomain.com/app1.

    When your client makes a request the reverse proxy uses the “Host” header or the SNI string that is part of the TLS connection to determine what certificate to use and what application to route to.

    There is usually a “default” backend for any request that doesn’t match any of the names for your backend services (like a scanner blindly trying to access your IP). If you disable the default backend or redirect default requests to something that you know is secure any attacker scanning your IP for vulnerabilities would get their requests rejected. The only way they can even try to hit your service is to know the correct DNS name of your service.

    Some reverse proxies (Traefik, HAProxy) have options to reject the request before the TLS negotiation has even completed. If the SNI string doesn’t match, the connection is simply dropped; it doesn’t even bother to send a 404/5xx error. This can prevent an attacker from gathering information about the reverse proxy itself that might be helpful in attacking it.
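
    A quick way to see that behaviour from the outside (the IP and hostname below are placeholders) is to compare what a blind scanner gets with a request that carries the right name:

        # What a scanner sees: a request straight to the IP with no matching
        # Host/SNI hits the default backend, or is simply dropped
        curl -vk https://203.0.113.10/

        # A request with the correct name gets routed to the app behind the proxy
        curl -v https://app1.yourdomain.com/

        # Force the right SNI/Host at the bare IP to confirm it is name-based routing
        curl -v --resolve app1.yourdomain.com:443:203.0.113.10 https://app1.yourdomain.com/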

    This is security by obscurity which isn’t really security, but it does reduce your risk because it significantly reduces the chances of an attacker being able to find your applications.

    Reverse proxies also have a much narrower scope than most applications. Your services are running a web server with your application, but is Jellyfin’s built-in web server secure? Could an attacker send invalid data in headers/requests to trigger a buffer overflow? A reverse proxy often does a much better job of preventing those kinds of attacks, rejecting invalid requests before they ever get to your application.


  • Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.

    Take it another step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are just probing IPs), and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.

    The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence especially for anyone who isn’t a target of foreign government level of attacks.

    Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
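
    As a small sketch of what that looks like in practice (placeholder names, nginx-style htpasswd file):

        # Create a credentials file for the proxy to check
        # (htpasswd comes from the apache2-utils / httpd-tools package)
        sudo htpasswd -c /etc/nginx/.htpasswd alice

        # Without credentials every URI just returns 401 from the proxy
        curl -i https://app1.yourdomain.com/

        # With valid credentials the request is passed through to the app
        curl -i -u alice https://app1.yourdomain.com/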

    I always see posters on Lemmy setting up elaborate VPNs as the only way to access internal services, but it seems like awful overkill to me.

    A VPN is still needed for some things that are inherently insecure or that just should never be exposed to the outside, but if it is a web service with authentication required, a reverse proxy is plenty of security for a home lab.


  • You are paying for reasonably well polished software, which for non technical people makes them a very good choice.

    They have one-click module installs for a lot of the things that self-hosted people would want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc, just click a button and they handle installing and most of the maintenance of running that software for you. Obviously these are available on other open source NAS appliances now too, so this isn’t much of a differentiator for them anymore, but they were one of the first to do this.

    I use them for their NVR, which has open source alternatives, but those aren’t nearly as polished, user friendly, or feature rich.

    Their backup solution is also reasonably good for some home lab and small business use cases. If you have a VMware lab at home, for instance, it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well, so you can keep laptops/desktops backed up.

    For businesses there are backup options for Office365/Google Workspace where it can keep backups of your email/calendar/onedrive/SharePoint/etc. So there are a lot of capabilities there that aren’t really well covered with open source tools right now.

    I run my own home-built NAS for mass storage because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two-drive units were priced low enough to buy just for the software. If you want a set-and-forget NAS they were a pretty good solution.

    If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money so that is unlikely. There are alternatives like Qnap, but unless you specifically need one of their software components either build it yourself or grab one of the open source NAS distros.


    I’ve had one of these 3D printed keys in my wallet for 5 years now as a backup in case I get locked out. I certainly don’t use it often, but yeah, it holds up fine.

    The couple of times I have used it, it worked fine, but you certainly want to be a little extra careful with it. My locks are only 5ish years old so they all turn rather easily, and I avoid the door with the deadbolt when I use it because that would probably be too much for it.

    Mine is PETG but for how thin it is, it flexes a lot. I figured flexing is better than snapping off, but I think PLA or maybe a polycarbonate would function better. A nylon would probably be too flexible like the PETG.


  • If your NAS has enough resources the happy(ish) medium is to use your NAS as a hypervisor. The NAS can be on the bare hardware or its own VM, and the containers can have their own VMs as needed.

    Then you don’t have to take down your NAS when you need to reboot your container’s VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.

    Lots of performance trade offs there, but I tend to want to keep my NAS on more stable OS versions, and then the other workloads can be more bleeding edge/experimental as needed. It is a good mix if you have the resources, and having a hypervisor to test VMs is always useful.


  • If you are just using a self signed server certificate anyone can connect to your services. Many browsers/applications will fail to connect or give a warning but it can be easily bypassed.

    Unless you are talking about mutual TLS authentication (aka mTLS or two-way SSL). With mutual TLS, in addition to the server key+cert you also have a client key+cert for your client, and you set up your web server/reverse proxy to only allow connections from clients that can prove they have that client key.

    So in the context of this thread, mTLS is a great way to protect your externally exposed services. Mutual TLS should be just as strong a protection as a VPN, and in fact many VPNs use mutual TLS to authenticate clients (i.e. if you have an OpenVPN file with certs in it instead of a pre-shared key). So they are doing the exact same thing. Why not skip all of the extra VPN steps and set up mTLS directly to your services?

    mTLS prevents any web requests from getting through before the client has authenticated, but it can be a little complicated to set up. In reality, basic auth at the reverse proxy and a sufficiently strong password is just as good, and is much easier to set up and use.
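
    As a rough sketch of the client side, assuming a small private CA (all names and paths below are placeholders; the proxy-side config is covered in the links that follow):

        # A small private CA
        openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
          -subj "/CN=home-lab-ca" -keyout ca.key -out ca.crt

        # A client key and a cert signed by that CA
        openssl req -newkey rsa:4096 -nodes \
          -subj "/CN=my-laptop" -keyout client.key -out client.csr
        openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
          -CAcreateserial -days 825 -out client.crt

        # Once the reverse proxy is set to require certs signed by ca.crt,
        # only requests that present the client cert get through
        curl --cert client.crt --key client.key https://app1.yourdomain.com/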

    Here are a couple of relevant links for nginx. Traefik and many other reverse proxies can do the same.

    How To Implement Two Way SSL With Nginx

    Apply Mutual TLS over kubernetes/nginx ingress controller


  • The biggest question is, are you looking for Dolby Vision support?

    There is no open source implementation for Dolby Vision or HDR10+ so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, apart from sideloading APKs to replace the home screen or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi based so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally it is a lot easier to just grab the latest gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyways). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1 and more Dolby Vision profiles than the Shield does, at a much cheaper price. It also handles HDR10+, which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.


  • I’ve got about 30 zwave devices, and at first the idea of the 900mhz mesh network sounded like a really solid solution. After running them for a few years now if I were doing it again I would go with wifi devices instead.

    I can see some advantages to the mesh in a house lacking wifi coverage. However I would guess most people implementing zigbee/zwave probably have a pretty robust wifi setup. But if your phone doesn’t have great signal across the entire house a lightswitch inside of a metal box in the wall is going to be worse.

    Zwave is rather slow because it is designed for reliability, not speed. Not that it needs to be fast, but when rebooting the controller it can take a while for all of the devices to be discovered, and if a device goes missing things break down quickly; the entire network can become unresponsive even if there is another path in the mesh. Nothing is worse than hitting one of your automations and having everything hang, leaving you in the dark, because one outlet three rooms over is acting up.

    It does have some advantages, like devices can be tied to each other (e.g. a switch tied to a light) and they will work even without your hub being up and running (I think the zwave controller can even be down).

    Zwave/Zigbee also guarantee some level of compatibility/standardization. A lightswitch is a lightswitch it doesn’t matter which brand you get.

    On the security front, Zwave has encryption options, but they slow down the network considerably. Instead of just sending a message out onto the network, it has to negotiate the encrypted connection with a couple of back-and-forth packets each time it wants to send a message. You can turn it on per device, and because of the drawbacks the recommendation tends to be to only encrypt important things like locks and door controls, which isn’t great for security.

    For Zwave 900mhz is an advantage (sometimes). 900mhz can be pretty busy in densely populated areas, but so can 2.4 for zigbee/wifi. If you have an older house with metal boxes for switches/plaster walls the mesh and the 900mhz penetration range may be an advantage.

    In reality though, I couldn’t bridge reliably to my garage about thirty feet away, and doing so made me hit Zwave’s four-hop limit, so I couldn’t use that bridge to connect any additional devices further out. With wifi devices, connecting back to the house with a wifi bridge, a buried Ethernet cable, etc can extend the network much more reliably. I haven’t tried any of the latest generations of Zwave devices, which are supposed to have higher range.

    The main problem with wifi devices is that they are often tied to the cloud, though a good number of them can be controlled over just your LAN. Each brand tends to have its own APIs/protocols, so you need to verify compatibility with your smart hub before investing.

    So if you go the wifi route, make sure your devices are compatible and specifically check that they can be controlled without a cloud connection. It is especially good to look for devices like Shelly that allow flashing your own firmware or have standardized connection methods in their stock firmware (Shelly supports MQTT out of the box).
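
    For instance, Shelly devices can be driven entirely over the LAN (the addresses and device ID below are made up):

        # Gen1 HTTP API: toggle the first relay directly over the LAN
        curl "http://192.168.1.60/relay/0?turn=on"

        # Or via MQTT against your own broker
        mosquitto_pub -h 192.168.1.10 \
          -t "shellies/shelly1-AABBCC/relay/0/command" -m "on"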


  • Yeah like I said it only works if they don’t look too deep.

    My assumption would be that they aren’t going to bother to go to court if they can’t see any public records (vehicle registration, titles, etc) without liens that they would be able to get money out of.

    So if they don’t bother to check who the lien holder is they will likely just move on to the next person to squeeze money out of.

    And yeah, if they do go to court the scheme will quickly fail, but the whole reason these people think their magic incantations work is that the courts/creditors/etc often just ignore them, because it ends up costing more to fight them than to properly enforce the law/contracts.


  • If you owe money they can still sue you and the court can force you to turn over assets.

    There are some protections around the court taking away your primary residence, but I don’t think there is anything stopping them from taking away automobiles (likely varies state to state).

    So I wasn’t talking about the original lien holder repossessing the vehicle, but about other creditors that now see an asset open on the books and seek legal action to get their money back. It’s unlikely they will want to pay lawyers to do that for your car, but it is still possible.


  • Presumably saying that if the loan is discharged that means there is no longer a lien on it. Putting one on it yourself (if that is possible?) might prevent creditors from using the courts to repossess it to get their money back. In reality the best it might do is to make them think it isn’t an asset they can come after you for if they don’t look close enough at the lien holder.


  • I am not a SAN admin but work closely with them. So take this with a grain of salt.

    Best practice is always going to be to split things into as many failure domains as possible. The main argument is: how would you test switch firmware upgrades without potentially affecting production?

    But my personal experience says that assuming you have a typical A/B fabric that is probably enough to handle those sorts of problems, especially if you have director class switches where you have another supervisor to fail back to.

    I’ve personally seen shared dev/prod switches for reasonably large companies (several switches with ~150 ports lit on each switch), and there were never any issues.

    If you want to keep a little separation between dev and prod keep those on different VSANs which will force you to keep the zones separated.

    Depending on how strict change management is for your org, keep in mind that mixing dev and prod might make your life worse in other ways. E.g. you can probably do switch firmware updates/zoning changes/troubleshooting in dev during work hours, but as soon as you connect those environments together you may have to do all of that on nights and weekends.

