• 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • Interestingly, whilst Wikipedia does say that, the language in RFC 1591 (Domain Name System Structure and Delegation) only says:

    There are a set of what are called “top-level domain names” (TLDs). These are the generic TLDs (EDU, COM, NET, ORG, GOV, MIL, and INT), and the two letter country codes from ISO-3166.

    Likewise, in ICANN’s PRINCIPLES FOR THE DELEGATION AND ADMINISTRATION OF COUNTRY CODE TOP LEVEL DOMAINS, they say:

    ‘Country code top level domain’ or ‘ccTLD’ means a domain in the top level of the global domain name system assigned according to the two-letter codes in the ISO 3166-1 standard

    In neither case do they actually limit two-letter TLDs to being country codes; they only state that all country codes in ISO 3166-1 are ccTLDs. In the RFC, the author does suggest it is unlikely that any other TLDs will be assigned, but this has obviously been superseded by the advent of gTLDs. Thus I still consider it likely that the .io TLD will simply transition to being a commercial one, rather than a country one.

    Having said all that, it’s entirely possible I’ve missed some more recent rule that tightens this up and only allows two-letter domains from ISO 3166-1. If I have, I’d be glad of a pointer to it.



  • It’ll get eliminated as a country code, yes, but that leaves it available as a generic TLD. Seeing as it will be available and is obviously lucrative, someone will register it and, presumably, allow domains to be registered under it. Off the top of my head, I think it costs $10,000 and you have to show you have the infrastructure to support the TLD you register, so an existing registrar is the most likely candidate. That figure is probably out of date, as it’s been many years since I checked, but the infrastructure requirement is the more costly part anyway.



  • It’s a non-starter for me because I sync my notes, and sometimes a subset of my notes, to multiple devices and multiple programs. For instance, I might use Obsidian, Vim and tasks.md to access the same repository, with all the documents synced between my desktop and server, and a subset synced to my phone. I also have various scripts to capture data from other sources and write it out as markdown files. Trying to sync all of this to a database that is then further synced around seems overly complicated to say the least, and would basically just be using Trillium as a file store, which I’ve already got.

    I’ve also been burnt by various export/import systems either losing information or storing it in an incompatible way.


  • I like it; this is clearly very enterprisey and solution-focused, but I would like to suggest a couple of amendments, if I may?

    • Namespaces: We should make full use of namespaces. Make the structural tags live in a language-specific namespace (to be referenced in every function spec, obviously), but change the in and out params to use the parameter name as the tag, namespaced to the function they’re for, with a type attribute.

    • In-memory message queues: Have all function invocations be marshalled as XML documents posted to an in-memory message queue. Said documents should use a schema that validates the structure and a function-specific schema to validate the types of arguments being passed. Namespace everything.

    I reckon we could power a medium-sized country if we could generate energy from the programmers’ despair. Something like the sketch below, perhaps.
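
    Purely for illustration (every name and namespace here is invented), a call to an add function might then travel the queue as something like:

      <fn:invocation xmlns:fn="urn:corp:function:math.add"
                     xmlns:struct="urn:corp:lang:java:structure"
                     xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <struct:call>
          <!-- parameter name as the tag, namespaced to the function, with a type attribute -->
          <fn:augend type="xs:integer">2</fn:augend>
          <fn:addend type="xs:integer">2</fn:addend>
          <fn:returnValue type="xs:integer"/>
        </struct:call>
      </fn:invocation>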



  • While I agree with most people here that finding a keyboard and screen would be the easiest option, you do have a couple of alternatives:

    • Use a preseed file: A preseed file lets the installer run completely unattended, without user intervention. Get it to install a basic system with SSH and take it from there (there’s a minimal sketch after this list). You’ll want to test the install in a VM, where you can see what’s going on, before letting it run on the real server. More information here: https://wiki.debian.org/DebianInstaller/Preseed

    • Boot from a live image with SSH: Take a look at https://wiki.debian.org/LiveCD, in particular ‘Debian Live’. It looks like SSH is included, but you’d want to check that the service comes up on boot. You can then SSH to the machine and install to the hard drive that way. Again, test in a VM until you know the image works and how to run the install, then write it to a USB key and boot the target server from that.

    This all assumes the target server has USB or CD at the top of its boot order. If it doesn’t, you’ll have to change that first, either with a keyboard and screen, or via a remote management interface such as IPMI.
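
    For the preseed route, the lines below are the part that gets SSH onto the box. This is from memory, so treat it as a sketch and check it against the example preseed file for your Debian release:

      # Install the standard system plus an SSH server
      tasksel tasksel/first multiselect standard, ssh-server
      d-i pkgsel/include string openssh-server
      # Placeholder root password; change it on first login
      d-i passwd/root-password password changeme
      d-i passwd/root-password-again password changeme

    For the live image route, once you’re in you can check the daemon with ‘systemctl status ssh’ and, if needed, turn it on with ‘systemctl enable --now ssh’.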



  • It’s the same problem with a drive like this, or any long-term archive: you either store the data unencrypted and rely on physical security, or you make sure you store the encryption key and algorithm for the same length of time, in which case you still need the physical security to protect that instead. In both cases you need to preserve a means of reading the data back, and details of the format it’s in, so you can actually use it later.

    Paper is actually a pretty good way of storing a moderate amount of data long term. Stored correctly, it’s unlikely to physically degrade, the data is unlikely to suffer bitrot, and it can be read back by anything that can make an image in the visible spectrum. That means you can read it yourself, or take a photo and use OCR to convert it into whatever format is current when the data is needed.
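
    As a sketch of that last step, assuming Python with the Tesseract binary plus the pytesseract and Pillow packages installed (filenames invented):

      # Recover the text from a photographed page of the archive
      from PIL import Image
      import pytesseract

      page = Image.open("archive_page_001.jpg")   # photo of the printed page
      text = pytesseract.image_to_string(page)    # OCR it back to plain text

      with open("archive_page_001.txt", "w") as out:
          out.write(text)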





  • Ah, ok. You’ll want to specify two AllowedIPs ranges on the clients: 192.168.178.0/24 for your network, and 10.0.0.0/24 for the other clients. Then you’re going to need to add a couple of routes:

    • On the phone, a route to 192.168.178.0/24 via the wireguard address of your home server
    • On your home network router, a route to 10.0.0.0/24 via the local address of the machine that is connected to the wireguard vpn. (Unless it’s your router/gateway that is connected)

    You’ll also need to ensure IP forwarding is enabled on both the VPS and your home machine.
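
    Putting that together, a rough sketch, assuming Linux with iproute2 on the machines involved and that 192.168.178.2 is the LAN address of the machine running WireGuard at home (adjust names and addresses to suit):

      # In the phone's config, route both ranges over the tunnel:
      #   AllowedIPs = 10.0.0.0/24, 192.168.178.0/24

      # On the home router, send VPN-bound traffic to the WireGuard machine:
      ip route add 10.0.0.0/24 via 192.168.178.2

      # On the VPS and the home machine, enable IP forwarding:
      sysctl -w net.ipv4.ip_forward=1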


  • Sort of. If you’re using wg-quick then it serves two purposes: one, as you say, is to indicate what is routed over the link; the second (and the only one, if you’re setting up the connection directly) is to limit what incoming packets are accepted.

    It definitely can be a bit confusing, as most people use the wg-quick script to manage their connections and so the terminology isn’t obvious, but it makes more sense if you’re configuring the connection directly with wg.
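
    For illustration, when driving things directly the filtering side is all AllowedIPs gives you, and any routing is your own problem (interface name and peer key here are placeholders):

      # Accept/send packets for 10.0.0.3/32 only from/to this peer:
      wg set wg0 peer 'PEER_PUBLIC_KEY_PLACEHOLDER=' allowed-ips 10.0.0.3/32

      # No route is created for you; add one yourself if you want to reach it:
      ip route add 10.0.0.3/32 dev wg0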


  • The allowed IP ranges on the server indicate what private addresses the clients can use, so you should have a separate one for each client. They can be /32 addresses as each client only needs one address and, I’m assuming, doesn’t route traffic for anything else.

    The allowed IP range on each client indicates what private addresses the server can use but, as the server is also routing traffic for other machines (the other client, for example), it should cover those too.

    Apologies that this isn’t better formatted, but I’m away from my machine. For example, on your setup you might use:

    On home server:
      AllowedIPs 192.168.178.0/24
      Address 192.168.178.2

    On phone:
      AllowedIPs 192.168.178.0/24
      Address 192.168.178.3

    On VPS:
      Address 192.168.178.1
      Home server peer: AllowedIPs 192.168.178.2/32
      Phone peer: AllowedIPs 192.168.178.3/32
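
    Spelled out as a wg-quick style config on the VPS, that would be roughly the following (keys are placeholders; each client’s file mirrors this with the VPS as its single peer):

      [Interface]
      Address = 192.168.178.1/24
      ListenPort = 51820
      PrivateKey = VPS_PRIVATE_KEY_PLACEHOLDER

      [Peer]
      # Home server
      PublicKey = HOME_SERVER_PUBLIC_KEY_PLACEHOLDER
      AllowedIPs = 192.168.178.2/32

      [Peer]
      # Phone
      PublicKey = PHONE_PUBLIC_KEY_PLACEHOLDER
      AllowedIPs = 192.168.178.3/32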