- Assembled: 1200 USD
- Kit: 950 USD
Understandably frustrating, especially if you’re new to investing. But it’s expected that the market will have both ups and downs.
The best advice I can give is to choose a good investment allocation and then stick to it. Contribute as much as you can each pay period or month and avoid looking at your balance as much as possible. You should figure out a rebalancing strategy, and you’ll probably need to look at your account to do that. Also, see The Best Order of Operations For Saving For Retirement.
Right now you have unrealized losses, but you haven’t actually lost any money (i.e., you have no “realized losses”) unless you sell. As it’s a retirement account and you just started it, I assume you aren’t planning to retire in the next decade, much less the next three years.
Is this your only retirement account? If so, why have you not been continuing to add money to it? If you wait to do that until the market recovers, you’ll lose out on all the gains between now and then.
I know you haven’t said you’re considering selling, but I recommend you check out the “Maintain Discipline” section of the Bogleheads investment philosophy, just in case that’s on your mind. I also recommend that you read up on dollar cost averaging (if you’re investing in a retirement plan every pay period, you’re already doing this).
You pointed out that the entire market has been impacted. I haven’t personally been paying attention in enough detail to confirm that (and my accounts that I just checked have gone up about 10% over the past three years, not down), but if so, that means you could change your asset allocation without selling low and buying high. I’m not saying you should change it, but if you take the time to learn about different investment strategies and decide a different one works for you, it’s nice to not have to sell your current investments while they’re underperforming relative to your new investments. (On the other hand, you can always change the allocation for your future investments without worrying about that.)
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix the issue by tuning your NextCloud instance - upping the memory limit, disabling debug mode and dropping log level back to warn if you ever changed it, enabling memory caching, etc…
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
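If you’d rather script those changes than edit config.php by hand, here’s a minimal sketch that applies a few of them through Nextcloud’s occ tool. It assumes a typical Linux setup where the web server user is www-data and occ lives at /var/www/nextcloud/occ - adjust both for your install - and note that PHP’s memory_limit is raised in php.ini, not through occ.

```python
# Hypothetical helper: apply a few of the tuning settings above via occ.
# The www-data user and the occ path are assumptions; adjust for your install.
import subprocess

OCC = ["sudo", "-u", "www-data", "php", "/var/www/nextcloud/occ"]

SETTINGS = [
    # Turn off debug mode and drop logging back to "warn" (loglevel 2).
    ["config:system:set", "debug", "--value=false", "--type=boolean"],
    ["config:system:set", "loglevel", "--value=2", "--type=integer"],
    # Enable local memory caching via APCu (requires the php-apcu module).
    ["config:system:set", "memcache.local", "--value=\\OC\\Memcache\\APCu"],
]

for args in SETTINGS:
    subprocess.run(OCC + args, check=True)
```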
I recommend checking out Friendly Social Browser.
Do you memorize all of your passwords? If so, I take that to mean that you don’t use a password manager. Password managers - really, any app with 2FA - have this problem, too. But if you use a password manager and store your 2FA methods in it, then you only need to be able to regain access to your password manager.
If you use a cross-platform password manager with Passkey support, like Bitwarden, you can use it on any of your devices. In the event that you lose all of your devices, if you don’t have an Emergency Contact set up, you will need your password and one of the following to gain access to your account:
If you use security keys for 2FA, then you should have at least two - one that you keep with you and a backup that you keep in a safe place, like at home in a lockbox.
If you use a TOTP app to log in, or if you use security keys and want another backup, then making sure you’ll have access to the Recovery Code should be your priority. You can write it down and keep it in a few different places - at home, in your car, in your locker at work, etc… You can share it with someone you trust in person or over an encrypted channel (like Signal). You can store it on a flash drive, encrypted by a second password (which can be much easier than your primary password) or even unencrypted, if you generally keep the drive somewhere safe, disconnected from your computer. As long as you remember your password and can access your recovery code, you’ll also be able to regain access to your account, including all of your passkeys.
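If you go the flash drive route with a second password, here’s a rough sketch of what that could look like using Python’s cryptography package (PBKDF2 to derive a key from the passphrase, then Fernet to encrypt). The function names and the 16-byte salt layout are just my choices for illustration, not any standard format:

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _key(passphrase: str, salt: bytes) -> bytes:
    # Derive a 32-byte Fernet key from the passphrase.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def encrypt_code(code: str, passphrase: str) -> bytes:
    salt = os.urandom(16)
    return salt + Fernet(_key(passphrase, salt)).encrypt(code.encode())

def decrypt_code(blob: bytes, passphrase: str) -> str:
    salt, token = blob[:16], blob[16:]
    return Fernet(_key(passphrase, salt)).decrypt(token).decode()

# e.g. write encrypt_code("your-recovery-code", "second password") to a file on the drive
```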
Emergency Access requires someone else to have access to their Bitwarden account, but assuming you don’t both lose access, it’s a pretty solid solution. When they request access, Bitwarden will send you an email allowing you to accept or reject their request. If you accept, or don’t respond within the allotted “Wait Time” (which you configure: 1 day minimum, 90 days maximum), then they’ll be granted access. You also get a choice (when setting this up) to let them take over the account (resetting your master password) or to just get read-only access.
Maybe you don’t like Bitwarden and want to use some other app, like 1Password, Dashlane, RoboForm, etc… Whatever your choice, familiarize yourself with how to restore access to your account in an emergency. Then you only need to worry about that and not about how to get access to your passkeys that are on your Windows laptop or only synced to your Apple devices.
But that is exactly what he recommends, using a password manager - with one time email authentication for the first login as an extra step, right?
Nope.
Using a cross-platform password manager with synced passkeys is different and much more secure than using a password manager with email TOTPs or sign-in links with emails that aren’t end-to-end encrypted.
And password manager adoption is much higher than PGP keyserver adoption; if you can’t discover someone’s public key, you can’t use it to encrypt a message to them, so sending end-to-end encrypted emails with TOTPs/sign-in links isn’t a practical option.
According to Statista, 34% of Americans used password managers in 2023 (a huge increase from 21% in 2022), so it’s not even like the best case scenario is rare.
The author mentions it: the QR code approach for cross-device sign-in. I don’t think it’s cumbersome; I think it’s actually a great and foolproof way to sign in. I have yet to find a website which implements it, though.
The site doesn’t need to implement this; the browser handles that part.
I confirmed this works earlier today: I logged into GitHub with Google Chrome on my work computer using a passkey stored in Bitwarden. I had to enable Bluetooth for Chrome, since I’d had it disabled, but then everything else was seamless.
You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link.
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
The Unicode standard has stated that U+2019 is the preferred character for apostrophes since at least the late 90s.
And it’s not like using a curved apostrophe in typesetting was novel even then.
as opposed to U+2019 being posthumously appropriated
U+0027 was also an ASCII character. The death of ASCII as a common format is the only one I can think of… what death are you referring to here?
From https://en.m.wikipedia.org/wiki/Right_single_quotation_mark
The Unicode character ’ (U+2019 right single quotation mark) is used for both a typographic apostrophe and a single right (closing) quotation mark.[1] This is due to the many fonts and character sets (such as CP1252) that unified the characters into a single code point, and the difficulty of software distinguishing which character is intended by a user’s typing.[2] There are arguments that the typographic apostrophe should be a different code point, U+02BC modifier letter apostrophe.[3]
In other words, U+2019 is the typographic apostrophe character. It’s also the right single quote character. There are people who think that the typographic apostrophe character should be something else (and having read their arguments, I agree), but in practice, it isn’t, and certainly wasn’t back in the 90s / early 2000s.
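If you want to check what Unicode itself calls these code points, a few lines of Python will show you:

```python
import unicodedata

# Prints: APOSTROPHE, RIGHT SINGLE QUOTATION MARK, MODIFIER LETTER APOSTROPHE
for ch in ("\u0027", "\u2019", "\u02BC"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
```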
Can you elaborate on why this is mildly infuriating?
Open WebUI also publishes a Docker image with Ollama bundled that you can use: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
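If you’d rather start it from a script than remember the docker run flags, here’s a rough sketch using the Docker SDK for Python (pip install docker). The image tag is the one above; the host port (3000), the container port (8080), and the two volume paths are my assumptions based on the linked docs, so double-check them against that page:

```python
import docker

client = docker.from_env()
client.containers.run(
    "ghcr.io/open-webui/open-webui:cuda",
    name="open-webui",
    detach=True,
    ports={"8080/tcp": 3000},  # Open WebUI listens on 8080 inside the container
    volumes={
        "ollama": {"bind": "/root/.ollama", "mode": "rw"},
        "open-webui": {"bind": "/app/backend/data", "mode": "rw"},
    },
    # Equivalent of --gpus=all from the docs, so the container can use your Nvidia GPU.
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    restart_policy={"Name": "always"},
)
```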
Expecting everything for free with no ads is just greedy.
In this case you’re not paying to not have ads. You’ll still get ads; they just won’t be personalized.
Personalized ads are more valuable to advertisers, so it makes sense that they’d charge a bit to turn personalization off, but it’s not a model I’ve seen before.
I’m guessing they charge a decent amount more than the difference, though - and probably even more than they make from personalized ads per person. On that note, I really wish ad-free subscriptions were closer to the revenue providers get from serving ads - if they were, I’d be more willing to pay for them than to just run an adblocker all the time. YouTube Premium, for example, costs 14 USD monthly, but annual ad revenue per non-Premium user was 1.21 USD.
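For a rough sense of that gap, using the two figures above (both of which I’m taking at face value):

```python
premium_per_year = 14 * 12   # YouTube Premium at ~14 USD/month
ad_revenue_per_year = 1.21   # quoted annual ad revenue per non-Premium user, in USD

ratio = premium_per_year / ad_revenue_per_year
print(f"Premium costs about {ratio:.0f}x the ad revenue per user per year")  # ~139x
```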
If you work for a nonprofit, you can be paid a salary; if you control a nonprofit, you can pay higher salaries to yourself and/or your friends.
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use, and likewise if you reuse a port that’s already forwarded on your router. But in my experience those conflicts will mostly just stop the new service from starting - you’d have to stop the existing services or restart your machine for the new service to have a chance to grab the ports while they’re unused. Otherwise I can’t think of any issues.
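If you want to see up front which ports on the server are already taken before pointing a reverse proxy at them, a quick check like this works (the port list is just an example):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # connect_ex returns 0 if something is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

for port in (80, 443, 8080):  # ports a reverse proxy commonly wants
    print(port, "in use" if port_in_use(port) else "free")
```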
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
Are you running this on your local network? If so, then unless you forward a port on your router to the port your reverse proxy is serving on, it’ll only be accessible from the local network. That means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirming it works as expected before forwarding the port.
Paired with allowing people who own the original to upgrade for $10 (and I’m assuming something similar in the UK) when they’re charging $50 for the remaster if you don’t have the original, that makes sense. They’re just closing a loophole.
I’d much rather they double the existing game’s price than charge $25-$30 for the upgrade, or not offer an upgrade at all.
It sucks for anyone who’d been planning to play the original and who just hadn’t bought it yet, but used prices for discs should still be low, so only the subset of those people who have disc-less machines are really impacted.
I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.
After replacing all of the drives, you’ll need to tell it to use their full capacity. From reading an answer to this post, it looks like you’ll need to select “Change RAID Mode,” keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacity.
upper capacity
There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.
procedure
You could also clone them yourself, but you’d want to put the NAS into read only mode or take it offline first.
I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.
Basically, what I’d do is:
In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at 60 MB/s, meaning it’d take about 9 hours per clone. StarTech makes a similar device ($96 on Amazon) that allegedly clones at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.
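To put rough numbers on that, here’s the back-of-the-envelope math; the drive capacities are just examples, since I don’t know what size you’re cloning:

```python
def clone_hours(capacity_tb: float, mb_per_s: float) -> float:
    # time = size / throughput; 1 TB = 1e6 MB
    return capacity_tb * 1e6 / mb_per_s / 3600

for capacity_tb in (2, 4, 8):
    print(f"{capacity_tb} TB drive: ~{clone_hours(capacity_tb, 60):.1f} h at 60 MB/s, "
          f"~{clone_hours(capacity_tb, 466):.1f} h at 466 MB/s")
```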
Also, if you bought two offline cloning devices, you could do steps 1-3 and 4-6 simultaneously, and do the same again with steps 7-8.
I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.
Which system(s) are you playing on?