
  • It attracts passionate and clever people, but as a result comes with a (rightful) reputation of being hard/expensive to hire for.

    Worked at a Scala shop for a while. It was interesting as an outsider to see exactly that play out (I’m a diehard Unix hacker type, love Go etc.). There were some brilliant minds who really seemed to “get” the Scala thing. Then there were others who were more run-of-the-mill Java developers. Scala and the JVM make all that, and everything in between, possible. With so many Java projects around, the Java devs would come and go depending on team/company factors like job cushiness, salary, or number of days in the office. But the more Scala-leaning people hung around. They made a huge impact on how projects were run.

    The bosses would often talk with me about how hard it was to find those people. From a business perspective, they said it was absolutely worth the effort to find the Scala people despite the operational overhead of the revolving door for the armies of Java devs.




  • But, since this particular set of data is so well-defined, and unlikely to change, roll your own is maybe not crazy.

    I think that’s the trick here. A relational database lets you do a whole bunch of complex operations on all sorts of data. That flexibility doesn’t come for free - neither financially nor performance-wise! Given:

    • engineering chops
    • a firm idea of the type of data
    • a firm idea of the possible operations you may want to do with that data

    then there’s a whole range of different approaches to take. The “just use PostgreSQL” guideline makes sense for most CRUD web systems out there. And there are architecture astronauts who will push stuff because they can, not because they should.

    Every now and then it’s nice to think about what exactly is needed and be able to build that. That’s engineering after all!
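
    As a rough illustration (my own sketch, nothing from a real project): if the data is, say, append-only records keyed by ID and the only operations are “append” and “look up by ID”, a purpose-built store can be tiny. In Go it might look something like this - the record shape, file name and JSON-lines format are all hypothetical:

```go
// Minimal sketch of a purpose-built store: append-only JSON lines on disk,
// with an in-memory index for lookups by ID. Everything here is hypothetical.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type Record struct {
	ID    string `json:"id"`
	Value string `json:"value"`
}

type Store struct {
	file  *os.File
	index map[string]Record
}

// Open replays the append-only log to rebuild the in-memory index.
func Open(path string) (*Store, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	s := &Store{file: f, index: make(map[string]Record)}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var r Record
		if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
			return nil, err
		}
		s.index[r.ID] = r
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	return s, nil
}

// Put appends the record to the log and updates the index.
func (s *Store) Put(r Record) error {
	line, err := json.Marshal(r)
	if err != nil {
		return err
	}
	if _, err := s.file.Write(append(line, '\n')); err != nil {
		return err
	}
	s.index[r.ID] = r
	return nil
}

// Get looks a record up by ID, entirely in memory.
func (s *Store) Get(id string) (Record, bool) {
	r, ok := s.index[id]
	return r, ok
}

func main() {
	s, err := Open("records.jsonl")
	if err != nil {
		panic(err)
	}
	if err := s.Put(Record{ID: "42", Value: "hello"}); err != nil {
		panic(err)
	}
	r, _ := s.Get("42")
	fmt.Println(r.Value)
}
```

    No server, no cluster, no maintenance window - and also no ad-hoc queries, no concurrent writers, no constraints. Whether that trade-off is sane depends entirely on knowing the data and the operations up front, which is the point above.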








  • Great question. Short answer: yes!

    Long answer: I did this on a production system about 2 years ago.

    The system was using MySQL, served from 3 virtual machines. Nobody took responsibility for that MySQL cluster, so outages and crazy long maintenance windows were normal, especially as there was no DB admin expertise. The system had been hobbling along for 3 years regardless.

    One day the company contracting me asked for help migrating some applications to a new disaster recovery (DR) datacentre. One by one I patched codebases to make them more portable; I even had to remove hard-coded IP addresses and paths to NFS mounts! Finally I got to the system which used the MySQL cluster. After some digging I discovered:

    1. The system was only ever configured to connect to one DB host
    2. There were no other apps connecting to the DB cluster
    3. It all ran on “classic” Docker Swarm (not even the last released version)

    My ex-colleague, who I got along with really well, wrote 90% of the system. They used a SQL query builder and never relied on any DB engine-specific features. Thank you, ex-colleague! I realised I could scrap this insane not-actually-highly-available architecture and use SQLite instead, all on a single virtual machine with 512MB of memory and 1 vCPU. SQLite was perfect for the job: the system consisted of a single reader and a single writer, and the DB was only used for record-keeping of other long-running jobs.
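
    As a rough illustration of why that made the swap painless (my own sketch, not the actual codebase, and not necessarily even the same language): with Go’s database/sql and plain, portable SQL, the choice of engine mostly collapses into which driver you import and which DSN you pass. The drivers below are the usual community ones; the table, DSNs and values are made up:

```go
// Sketch: if the code sticks to database/sql and portable SQL, swapping
// engines is mostly a matter of driver + DSN. Table and DSNs are invented.
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
	_ "github.com/mattn/go-sqlite3"    // registers the "sqlite3" driver
)

// openDB is the only place that knows which engine we're on.
func openDB(engine string) (*sql.DB, error) {
	switch engine {
	case "mysql":
		return sql.Open("mysql", "app:secret@tcp(db-host:3306)/jobs")
	default:
		return sql.Open("sqlite3", "/var/lib/app/jobs.db")
	}
}

func main() {
	db, err := openDB("sqlite3")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Plain, portable SQL: no engine-specific types or features.
	// (Placeholder syntax can still differ between engines; both of these
	// drivers accept "?".)
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS jobs (
		id     INTEGER PRIMARY KEY,
		name   TEXT NOT NULL,
		status TEXT NOT NULL
	)`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(
		`INSERT INTO jobs (id, name, status) VALUES (?, ?, ?)`,
		1, "nightly-report", "running",
	); err != nil {
		log.Fatal(err)
	}
}
```

    The moment the code reaches for engine-specific types or features, that switch statement stops being the only thing you’d have to change.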

    Swapping it over took about 3 days, mostly testing. No more outages, no more working around shitty network administration, no more “how does the backup work again?” when DB dumps failed, no more complex config management to bring DB clusters up and down. The ability to migrate DB engines led to a significant simplification of the overall system.

    But is this general advice? Not really. Programming with portability in mind is super important, but overly-generic programs can also be a pain to work with. Hibernate, JDBC et al. don’t come for free; convenience comes at the cost of complexity. Honestly I’m a relational database noob (I’m more of an SRE), so my approach is to try to understand the specific system/project and go from there. For example:

    I recently saw a project that absolutely didn’t care in the slightest about this and used many vendor-specific features of MS SQL all over the place, which had many advantages in terms of performance optimizations.

    Things I would want to learn more about:

    • were those performance optimisations essential?
    • if so, is the database the best place to optimise? e.g. smarter queries versus fronting with a dumb cache (a minimal sketch follows this list)
    • are there database experts who can help out later? do we want to manage a cache?
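
    On the “dumb cache” idea, here’s a minimal cache-aside sketch in Go (hypothetical names; the load function stands in for the expensive query):

```go
// Sketch of a "dumb cache" fronting a query: cache-aside with a TTL map.
// The point is that the optimisation can live outside the database entirely.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value   string
	expires time.Time
}

type Cache struct {
	mu   sync.Mutex
	ttl  time.Duration
	data map[string]entry
}

func NewCache(ttl time.Duration) *Cache {
	return &Cache{ttl: ttl, data: make(map[string]entry)}
}

// GetOrLoad returns the cached value if it is still fresh, otherwise calls
// load (standing in for the expensive database query) and caches the result.
func (c *Cache) GetOrLoad(key string, load func(string) (string, error)) (string, error) {
	c.mu.Lock()
	if e, ok := c.data[key]; ok && time.Now().Before(e.expires) {
		c.mu.Unlock()
		return e.value, nil
	}
	c.mu.Unlock()

	v, err := load(key) // only hit the database on a miss or an expired entry
	if err != nil {
		return "", err
	}

	c.mu.Lock()
	c.data[key] = entry{value: v, expires: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return v, nil
}

func main() {
	c := NewCache(30 * time.Second)
	status, _ := c.GetOrLoad("job:42", func(key string) (string, error) {
		return "running", nil // stand-in for a slow SQL query
	})
	fmt.Println(status)
}
```

    Even this toy version raises the third bullet’s question: someone now owns invalidation, TTLs and memory, so “do we want to manage a cache?” is not rhetorical.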

    Basically everyone always advises you to write your backend so generically with technologies like ODBC, JDBC, Hibernate, … and never use anything vendor-specific like stored procedures, vendor-specific datatypes or meta queries, the argument being that you can later switch your DBMS without much hassle.

    Things I would want to learn more about:

    • how many stored procedures could we be managing? 1, 100, 1000? it may not be worth taking on a dependency to avoid writing like 3 stored procedures
    • is that tooling depended on by other projects already?
    • how much would the vendor-specific datatype be used? one column in one table? everywhere?
    • does using vendor-specific features make the code easier to understand? or just easier to write? big difference!

    My shitty conclusion: it depends.





  • I imagine part of the challenge going forward would be the hordes of programmers brought up on designing UIs using a DOM, and all the associated tooling.

    My prediction is that the situation could be similar to how today many text-only programs assume a terminal-like device. Terminals have been obsolete for years, but I personally feel it’s a ball and chain on text UI development. The web document model could persist long after web browsers have become just a kind of “terminal” for loading and rendering web documents.


    So-called “backend” I was OK with. HTTP is well-specified, but it’s too general a protocol for what it’s being used for, so you’re stuck implementing the same stuff over and over again. When using SMTP or NNTP you realise how much work the protocol does for you when building systems on top of it.

    But “frontend”… Jesus, talk about abusing something that was never designed to be used like it is. Total nightmare in my opinion! UIs which are totally inconsistent in appearance and behaviour have somehow become the norm!



  • As someone who never did much web development, I was… surprised… at the amount of tooling that existed to paper over this issue. The headaches which stood out for me were JavaScript bundling (then you need to choose which tool to use - WebPack, but that’s slow, so then you switch to esbuild) and minified code (but that’s hard to debug, so now you need source maps to reverse the situation).

    Of course the same kind of work needs to be done when developing programs in other languages. But something about developing in JS felt so noisy. Imagine if, to compile Java or Rust, you first needed to choose and configure your own compiler, each with its own website full of fancy logos and persuasive marketing copy.