  • If you can afford one, I would strongly recommend going with a dual-conversion UPS. A line-interactive UPS like the one you posted essentially acts as a pass-through for your mains power until it detects a power loss or brownout. This works most of the time, but there’s a short delay during the switch from line to battery (just guessing, but most likely on the order of milliseconds). That might not sound like much, but you’re counting on the capacitors in your server’s power supply holding enough charge to ride through the gap until the UPS kicks in.
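
    To put rough numbers on that: the ride-through comes from the PSU’s bulk capacitor, and the hold-up time scales with capacitance and falls with load. A quick back-of-the-envelope sketch (every component value here is made up; check your PSU’s datasheet for its actual hold-up spec):

    ```python
    # Rough hold-up time for a PSU's bulk capacitor during the transfer gap
    # of a line-interactive UPS. All values below are hypothetical examples.
    C = 470e-6      # bulk capacitance, farads
    V_BUS = 400.0   # DC bus voltage after the PFC stage, volts
    V_MIN = 300.0   # lowest bus voltage the PSU can still regulate from, volts
    P_LOAD = 300.0  # power drawn by the server, watts

    # Usable energy between V_BUS and V_MIN: E = C * (V_bus^2 - V_min^2) / 2
    hold_up_s = C * (V_BUS**2 - V_MIN**2) / (2 * P_LOAD)
    print(f"hold-up time: {hold_up_s * 1000:.1f} ms")  # ~55 ms with these numbers
    ```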

    The other thing to consider is that a dual-conversion UPS also supplies “clean” power to your equipment. It essentially acts as a DC power supply connected to an inverter, so no matter how bad your input power is, you always get the correct voltage and frequency out. I connected my old line-interactive UPS to a cheap generator at one point; the generator’s voltage and frequency regulation was so bad that the UPS kept switching to and from battery several times per second, and the equipment attached to it immediately shut down.

    I can connect my dual-conversion UPS to the same generator, and it keeps humming along as if it were connected to mains power. According to the datasheet, with any input from 60VAC to 150VAC, it will still put out clean 120V/60Hz power.

    They’re much more expensive. Mine is 1000VA, and if I remember correctly, I paid something like 600 or 700 USD for the UPS. An add-on rackmount battery pack was another $300 or so. It was well worth the cost, though.





  • In the US at least, most equipment (unless you get into high-end datacenter gear) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V legs 180 degrees out of phase with each other. The main feed coming into your home is 240V, and your breaker panel splits the circuits evenly between the two legs. Running dual-phase power to a server rack is as simple as running two 120V circuits from the panel, one off each leg.
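
    If the 180-degrees part sounds abstract, here’s a small numerical sketch (nominal values, plain Python) showing why two 120V legs measure 240V between them:

    ```python
    import math

    # US split-phase service: two 120V legs, 180 degrees apart.
    FREQ = 60.0                    # Hz
    V_PEAK = 120.0 * math.sqrt(2)  # peak voltage of a 120V RMS leg

    def leg_a(t):
        return V_PEAK * math.sin(2 * math.pi * FREQ * t)

    def leg_b(t):
        # Identical waveform shifted 180 degrees, i.e. inverted
        return V_PEAK * math.sin(2 * math.pi * FREQ * t + math.pi)

    # A 240V appliance connects across the two legs and sees their difference.
    samples = [leg_a(i / 10000) - leg_b(i / 10000) for i in range(10000)]
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    print(f"leg-to-leg: {rms:.0f} V RMS")  # ~240 V
    ```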

    My rack only receives a single 120V circuit, but it’s backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.


  • Yes, a lot of my movies are 50GB or so. Not everything has a 4k repack available, though. I’d say the vast majority are around 20GB.

    1080p would just not be acceptable for me. There’s a clear difference between 1080p and 4k on a 4k screen, especially if the screen is large.

    If I’m ever in a situation where I don’t have connectivity to stream from my server, I can always start a HandBrake queue the night before and transcode a few videos to a smaller size, or just dump a few onto an external drive. I have never actually had to do this, though.
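
    If I ever did, the queue could be as simple as a loop over HandBrakeCLI. The -i/-o/--preset flags and the “Fast 1080p30” preset are real; the paths and file names below are just placeholders:

    ```python
    import pathlib
    import subprocess

    # Hypothetical overnight batch: shrink a few files for offline viewing.
    SOURCE = pathlib.Path("/mnt/media/movies")  # placeholder library path
    DEST = pathlib.Path("/mnt/travel")          # placeholder external drive

    queue = ["movie1.mkv", "movie2.mkv"]        # whatever you plan to watch

    for name in queue:
        subprocess.run(
            [
                "HandBrakeCLI",
                "-i", str(SOURCE / name),
                "-o", str((DEST / name).with_suffix(".mp4")),
                "--preset", "Fast 1080p30",
            ],
            check=True,
        )
    ```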


  • For any sort of media, including videos, I always go for the highest quality I can find. I do have a number of 4k displays, so it makes sense to a certain extent, but a lot of it has to do with future-proofing.

    Here’s a good example: when personal video cameras were first starting to support 1080, I purchased a 1080i video camera. At the time, the footage looked great on my 1920x1080 (maybe 1024x768, not sure) monitor. Fast forward more than 15 years, and the video I recorded back then looks like absolute garbage on my 4k TV.

    I remember watching a TV show when 1080p video first became available, and I was blown away by the quality of what was probably a less-than-1GB file. Watching the same file now, even on my phone, there’s a noticeable drop in quality. I’m not surprised you saw little difference between a 670MB and a 570MB file, especially if it was animation, which contains large areas of solid color and thus compresses more easily. The difference between two resolutions, though, can be staggering. At this point, I don’t think you can easily find a 1080p TV; everything is 4k. 8k isn’t widespread yet, but it will be one day. If you ever think you’ll buy a new TV, computer monitor, or mobile device, eventually you’ll want higher-quality video.

    My recommendation would be to fill your media library with the highest-quality video you can possibly find. If you’re going to re-encode the media to a lower resolution or bitrate, keep a backup of the original. You may find, though, that if you’re re-encoding enough video, it makes more sense to save the time and storage space, and spend a bit of money on a dedicated video card for on-the-fly transcoding.

    My solution was to install an RTX A1000 in my server and set it up with my Jellyfin instance. If I’m watching HDR content on a non-HDR screen, it will transcode and tone-map the video. If I’m taking a break at work and want to watch a video from home, it will transcode to a lower bitrate that I can stream over my (slow) home internet connection. Years from now, when I’m trying to stream 8k video over a 10Gb fiber link, I’ll still be able to use most of the media I saved back in 2024 rather than trying to track down a copy that meets modern standards, if one even exists.

    Edit: I wanted to point out that I realize not everyone has the time or financial resources to set up a huge NAS with enterprise-grade drives. An old motherboard and a stack of cheap consumer-grade drives can still give you a fair amount of backup storage, and it will be fairly robust as long as the drive array is set up with a sufficient level of redundancy.



  • When I use OpenSpeedTest to run a test against another VM, it doesn’t read or write from the HDD, and the traffic never leaves the Proxmox host’s NIC. It’s all direct from one VM to another, so the only limitations are CPU and perhaps RAM. Network cables wouldn’t have any effect on this.
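
    For anyone who wants to reproduce that kind of test without OpenSpeedTest, a crude stdlib-only sketch looks something like this (the address is hypothetical, and Python’s own per-call overhead makes the result a floor, not a ceiling):

    ```python
    import socket
    import time

    # Run receive() on one VM and send() on the other. The traffic crosses
    # the virtual bridge only, so it never touches a disk or a physical NIC.
    HOST, PORT = "10.0.0.2", 5201   # hypothetical receiver address
    CHUNK = 1024 * 1024             # 1 MiB per send

    def receive():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                total, start = 0, time.monotonic()
                while data := conn.recv(CHUNK):
                    total += len(data)
                secs = time.monotonic() - start
                print(f"{total * 8 / secs / 1e9:.2f} Gbit/s")

    def send(seconds=10):
        payload = b"\x00" * CHUNK
        with socket.create_connection((HOST, PORT)) as conn:
            deadline = time.monotonic() + seconds
            while time.monotonic() < deadline:
                conn.sendall(payload)
    ```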

    I’m using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?


  • It was a good suggestion. That’s one of the first things I checked, and I was honestly hoping it would be as easy as changing the NIC type. I know that the Intel E1000 and Realtek RTL8139 options would limit me to 1Gb, but I haven’t tried the VMware vmxnet3 option. I don’t imagine that would be an improvement over the VirtIO NIC, though.







  • I certainly don’t think it will be easier in a few years, but I also think that after 19 years of using gmail, a few more years aren’t going to make a huge difference. It’s really kind of sad to think about how far Google has fallen. I started with gmail in 2005. At the time, Google was becoming the “go-to” search engine; they had better results than Yahoo or AltaVista, and the “don’t be evil” slogan was also a great feel-good factor. I don’t think anyone back then expected how different things would be in 2024.

    I can host my own media on my own server. I use Nextcloud Talk for IMs (also hosted on my own server). Just about any online service can be self-hosted, except for email. I have certainly tried in the past, even hosting email on a VPS, but you run into so many issues: your server isn’t trusted, websites don’t recognize your domain, a whole litany of problems. Email is just one of those things that you really can’t self-host.

    Sure, I could switch to a new email provider, forward gmail, and slowly over time update my email address for everyone who’s sending to my gmail account. What happens then when my new email provider decides to start harvesting my data for profit? Email is one of those things where you can’t live without it, but you’re forced to use a service that isn’t your own and could fuck you at any time.



  • I did some research on this, and it turns out you’re absolutely correct. I was under the impression that ECC was a requirement for a ZFS cache. ECC does seem to be highly recommended for ZFS, though, due to the large amount of data it stores in memory. I’m not sure I’d feel comfortable using non-ECC memory for ZFS, but it is possible.

    Anecdotally, I did have one of my memory modules fail in my TrueNAS server. It detected this, corrected itself, and sent me a warning. I don’t know if this would have worked had I been using non-ECC memory.


  • One thing to keep in mind if you go with an i5 or i7 is that you won’t have the option to use ECC memory. If you’re running TrueNAS, you’ll need ECC memory for the ZFS cache. A Xeon E5 v2 server is old, but it still has more than enough power for your use case, and they’re not particularly expensive.

    If you need something more powerful, you can find some decent Xeon Gold systems on eBay, but they’ll be a bit more pricey. The new Xeon W chips are also an option, but at least for me, they’re prohibitively expensive.



  • I decided to give up on it. Looking through the docs, they recommend that due to “reasons,” it should be restarted at least daily, preferably hourly. I don’t know if they have a memory leak or some other issue, but that was reason enough for me not to use it.

    I installed TubeArchivist, and it suits my needs much better. Not only do I get an archive of my favorite channels, but when a new video is released, it gets automatically downloaded to my NAS and I can play it locally without worrying about buffering on my painfully slow internet connection.



  • Like most of us, I have plenty of pictures and such that I don’t want to lose. The most important to me, though, are some of the documents I’ve saved or scanned from years past. While a tax return from 2003 or the title for a car I owned 20 years ago isn’t exactly useful, it’s fun to occasionally look through the old stuff and see how far I’ve come in my life since then.