Does it need to be accessible via API (e.g. SQL) or just a spreadsheet-style web interface?
You can use any port for SSH—or you can use something like Cockpit with a browser-based terminal instead of SSH.
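For example, changing the port is a one-line edit in OpenSSH’s standard config, followed by an sshd restart (2222 here is arbitrary):

```
# /etc/ssh/sshd_config: listen on a non-default port
Port 2222
```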
If you didn’t map a local config file into the container, it’s using the default version inside the container at /app/public/conf.yml (and any changes will get overwritten when you rebuild the container). If you want to make changes to the configuration for the widget, you’ll want to use the -v option with a local config file so the changes you make will persist.
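A minimal sketch of that, assuming a Dashy container (the /app/public/conf.yml path matches its layout; the image name and host port below are illustrative):

```sh
# Bind-mount a local conf.yml over the container's default copy,
# so edits persist across container rebuilds.
docker run -d \
  -p 4000:80 \
  -v "$(pwd)/conf.yml:/app/public/conf.yml" \
  lissy93/dashy
```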
That varies by subreddit, which might actually help in training LLMs to recognize the difference.
That will remove your account from public view, but will it remove it from the data they use for AI training?
If not, you’re just enhancing the value of their proprietary data.
At some point someone’s going to train an LLM on material from successful scams to autonomously generate new scams, then wire the money to server farms to run more copies of itself.
If the other services are exposed on local ports, you can have NPM forward to those.
But how will we automate our trolley problems?
Would that make it a type of sapphire?
For anyone confused by “Nextcloud” in the title, it’s just the blog attribution—Nextcloud isn’t involved in the acquisition.
As a casual self-hoster for twenty years, I ran into a consistent pattern: I’d install things to try them out and they’d work great at first, but after installing and uninstalling other services, updating libraries, etc., the conflicts would accumulate until I eventually gave up and reinstalled the whole system from scratch. By then I’d have lost track of how I installed everything the first time and would have to reconfigure it all by trial and error.
Docker has eliminated that cycle, and once you learn the basics, most software is easier to install as a container than directly on the host. Docker also makes it easier to keep track of which ports, local directories, and other local resources each service is using, and of what steps are needed to install or reinstall it.
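For example, a docker-compose.yml captures all of that in one file, so a reinstall is just `docker compose up -d` again (the service, image, ports, and paths below are illustrative):

```yaml
# docker-compose.yml: every port and host directory the service uses,
# documented in one reproducible file.
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud
    ports:
      - "8443:443"                   # host port : container port
    volumes:
      - ./nextcloud/config:/config   # persistent config on the host
      - ./nextcloud/data:/data       # user data on the host
    restart: unless-stopped
```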
Marching up to the next non-empty key would skew the distribution—pages preceded by more empty keys would show up more often under “random”.
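A quick sketch of the bias, using a made-up sparse key space (purely illustrative, not from the article):

```python
import random
from collections import Counter

# Hypothetical sparse key space 0..99; pages exist only at these keys.
pages = [5, 6, 7, 50, 99]

def random_page():
    """Pick a uniform random key, then march up to the next non-empty key."""
    k = random.randrange(100)
    return next((p for p in pages if p >= k), pages[0])  # wrap around

counts = Counter(random_page() for _ in range(100_000))
print(counts)
# Pages 50 and 99 dominate (each follows a long empty run),
# while pages 6 and 7 each show up only ~1% of the time.
```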
There are plenty of things that turned out to be useful to me despite my not recognizing their names or taglines when I first encountered them, so I don’t just assume that anything I’m not already familiar with isn’t “for” me. A brief explanation for non-insiders (or even a mention of what field it’s relevant to) would have helped establish that.
Skimming through the linked paper, I noticed this:
> Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output.
So it sounds like the model parameters needed to decompress the file are included in the file itself.
Seems awfully weird to me that Twitter, Facebook, and Reddit have all had similar issues recently, each resulting in dramatic user loss.
I think the “enshittification” theory is a more likely explanation.
Didn’t Intel stop making NUCs?
Was it RAID 0 (striped), or RAID 1 (mirrored)?
In general, a mirrored RAID is best for minimizing data loss and downtime from drive failure, while separate volumes with periodic backups are best for recovering from accidental file deletion or malware. (If the RAID is told to write bad data, it overwrites the good data on both drives at once.)
If you want the best of both worlds with just two drives, try zfs—you can mirror the drives to protect against drive failure, and make snapshots to protect against accidental data loss. (This still won’t protect against everything—for that you should have some kind of off-site backup as well.)
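Roughly, under zfs (the pool and device names are placeholders; check the docs for your platform):

```sh
# Create a two-disk mirror: survives a single drive failure.
zpool create tank mirror /dev/sda /dev/sdb

# Take a snapshot: protects against accidental deletion or bad writes.
zfs snapshot tank@before-upgrade

# Roll back to it if something goes wrong.
zfs rollback tank@before-upgrade
```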
I’ve been running two NC instances for over five years (linuxserver docker images)—one has been issue-free, and the other had sporadic issues like OP is describing… but not for the last year or so, so I assumed the issue had been fixed in an update. Or maybe the problem was the network configuration instead of NC.
It’s a set of plugins for standard MediaWiki. (It was originally intended to be part of Wikipedia, but there were performance issues on that scale. It’s used by many smaller organizations, though.)
A typical use case is to forward a single port to the proxy, then set the proxy to map different subdomains to different machines/ports on your internal network. Anything not explicitly mapped by the reverse proxy isn’t visible externally.
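In raw nginx terms, the result looks roughly like this (NPM manages the config through its UI; the hostnames and internal addresses below are made up):

```nginx
# Each subdomain maps to a different internal host/port.
# Anything without a server block simply isn't reachable from outside.
server {
    listen 80;
    server_name cloud.example.com;
    location / { proxy_pass http://192.168.1.10:8443; }
}
server {
    listen 80;
    server_name wiki.example.com;
    location / { proxy_pass http://192.168.1.11:8080; }
}
```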