  • yeah, “viable” is such a variable concept

    i guess like… i have a friend who is a lab tech, and he vibe coded a stand-alone HTML and JS page for their team to take CSVs and filter them… the excel process they used before was horrendous… in that case, i guess that’s a viable product: it works, and isn’t buggy (or at least the bugs will become well known and able to be manually avoided or worked around)… i’d say that’s an MVP, but it wouldn’t be if you wanted to productise it


  • vibe coding is trash for MVPs… it’ll get you there, but as always the achilles heel of vibe coding is maintenance and bugs

    vibe coding is great for a POC, but the defining difference between a POC and an MVP is that a POC is made to be thrown out and doesn’t have to work all the time (you can say “ah yup, just need to give it a kick” when you’re showing it off and manually intervene)

    vibe coding is good to show a basic, unmaintainable, non-production version of a feature or function, but then you need to take that and manually build it into your MVP - perhaps by copying some minor parts of the POC, but verifying every step





  • > Implementing a function isn’t for a “fancy autocomplete”, it’s for a brain to do. Unless all you do is reinvent the wheel, then yeah, it can generate a decent wheel for you.

    pretty much every line of code we write in modern software isn’t unique… we use so many orders of magnitude more lines of other people’s code than our own that we’re really just plumbing pipes together

    most functions we write that aren’t business logic specific to the problem domain of our software (and even sometimes then) have been written before… the novel part isn’t in the function body: the low level instructions… the novel part is how those instructions are structured… that may as well be pseudocode, and that pseudocode may as well take the form of function headers

    > Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.

    write tests, tests fail, write code, tests slowly start to pass until you’re done… this is how we’ve always done TDD because it ensures the tests fail when they should. this is a good idea with or without LLMs because humans fuck up unit tests all the time
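
    a minimal sketch of that red-green loop (the function and test cases are hypothetical, just for illustration):

    ```swift
    import XCTest

    // red: write the tests first, run them, watch them fail -
    // they won't even compile until slugify exists, which is the point
    final class SlugifyTests: XCTestCase {
        func testReplacesSpacesAndLowercases() {
            XCTAssertEqual(slugify("Hello World"), "hello-world")
        }

        func testStripsPunctuation() {
            XCTAssertEqual(slugify("C'mon, really?"), "cmon-really")
        }
    }

    // green: now write (or let the LLM fill in) the implementation
    // until the suite passes
    func slugify(_ input: String) -> String {
        input.lowercased()
            .filter { $0.isLetter || $0.isNumber || $0 == " " }
            .split(separator: " ")
            .joined(separator: "-")
    }
    ```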

    > I’m not sure what you mean by that.

    for example, you have an external API of some kind with an enum expressed via JSON as a string, and you want to implement that API including a proper Enum object… an LLM can generate that code more easily than i can, and the longer the list of values, the more cumbersome the task gets
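
    a hedged sketch of what that looks like in swift (the API and its values are made up for illustration):

    ```swift
    // hypothetical external API that sends eg {"status": "in_progress"}
    // as a plain string - we want a proper type-safe enum on our side
    enum JobStatus: String, Codable, CaseIterable {
        case queued     = "queued"
        case inProgress = "in_progress"
        case succeeded  = "succeeded"
        case failed     = "failed"
        case cancelled  = "cancelled"
        // a real API might have dozens of values - exactly the kind of
        // mechanical wire-format-to-language-convention mapping an LLM
        // can churn out straight from the docs
    }
    ```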

    especially effective for generating API wrappers because they basically amount to function some_name -> api client -> call /api/someName
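
    roughly this shape, sketched under the same assumptions (the endpoint and types are invented for illustration, reusing the hypothetical JobStatus enum above):

    ```swift
    import Foundation

    // hypothetical wrapper: function someName -> api client -> call /api/someName
    struct StatusResponse: Codable { let status: JobStatus }

    struct APIClient {
        let baseURL: URL

        // language-convention name on the outside, wire-format path on the inside
        func jobStatus(id: String) async throws -> JobStatus {
            let url = baseURL.appendingPathComponent("api/jobStatus/\(id)")
            let (data, _) = try await URLSession.shared.data(from: url)
            return try JSONDecoder().decode(StatusResponse.self, from: data).status
        }
    }
    ```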

    this is basically a data transformation problem: translate from some structure to a well-defined chunk of code that matches the semantics of your language of choice

    this is annoying for a human, and an LLM can smash out a whole type-safe library in seconds based on little more than plain english docs

    it might not be 100% right, but the price for failure is an error that you’ll see and can fix before the code hits production

    and of course it’s better to generate all this using swagger specs, but they’re not always available and tend not to follow language conventions quite so well

    for a concrete example, i wanted to interact with blackmagic pocket cinema cameras via bluetooth in swift on ios: something they don’t provide an SDK for… they do, however, document their bluetooth protocols

    https://documents.blackmagicdesign.com/UserManuals/BlackmagicPocketCinemaCameraManual.pdf?_v=1742540411000

    (page 157 if you’re interested)

    it’s incredibly cumbersome, and basically involves packing binary data into a packet that represents a different protocol called SDI… this would have been horrible to try and work out on my own, but with a general idea of how the protocol worked, i structured the functions, wrote some test cases using the examples they provided, handed chatgpt the pdf, and used it to help me with the bitbanging nonsense and with translating their commands and positionally packed binary values into actual function calls

    could i have done it? sure, but why would i? chatgpt did in 10 seconds what probably would have taken me at least a few hours of copying data from 7 pages of a table in a pdf - a task i don’t enjoy doing, in a language i don’t know very well
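
    to give a feel for the bitbanging side, here’s an illustrative sketch - the field names and offsets are made up, not blackmagic’s actual packet layout:

    ```swift
    import Foundation

    // illustrative only: shows the flavour of the task, not the real protocol
    struct CameraCommand {
        let destination: UInt8   // which camera on the bus
        let category: UInt8      // eg video vs lens commands
        let parameter: UInt8     // which setting within the category
        let payload: [UInt8]     // positionally-defined values from the docs

        func packed() -> Data {
            var bytes: [UInt8] = [destination, UInt8(payload.count), category, parameter]
            bytes.append(contentsOf: payload)
            // pad to a 4-byte boundary, as binary protocols often require
            while bytes.count % 4 != 0 { bytes.append(0) }
            return Data(bytes)
        }
    }
    ```

    translating 7 pages of tables into a pile of functions that each build one of these is exactly the mechanical grind an LLM is good at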


  • if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

    LLMs aren’t the brain: they’re exactly what they are… a fancy autocomplete…

    type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be), it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure
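
    eg hand it something like this (a hypothetical header; the body is a plausible completion, not gospel):

    ```swift
    import Foundation

    /// Returns the most recently modified file in `directory`,
    /// ignoring hidden files; nil if the directory is empty.
    func newestFile(in directory: URL) throws -> URL? {
        // with a descriptive name and doc comment, the autocomplete
        // usually fills in something close to this:
        let files = try FileManager.default.contentsOfDirectory(
            at: directory,
            includingPropertiesForKeys: [.contentModificationDateKey],
            options: .skipsHiddenFiles
        )
        return try files.max { a, b in
            let da = try a.resourceValues(forKeys: [.contentModificationDateKey]).contentModificationDate ?? .distantPast
            let db = try b.resourceValues(forKeys: [.contentModificationDateKey]).contentModificationDate ?? .distantPast
            return da < db
        }
    }
    ```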

    let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

    fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

    let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst that’s not knowledge - it does have a pretty good handle on synonyms and summarisation etc

    there’s loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: just the repetitive crap you probably hated to start with, because you could explain it to a non-programmer and they could carry out the task




  • many “unused” IP addresses are unused because they’re kinda like having spare parts: if you’re planning on extending your network in the future, your IP block kinda should reflect your end state (ie the parts you need over time to replace or “build” new hosts)

    or for blue/green deployments where it’s likely that at least half the IP range will be used in terms of process, but unused most of the time in terms of reachability

    and then there’s weird things with splitting up IP blocks into subnets with a division of 3 (the minimum needed for dealing with net splits etc) - eg across availability zones… there are always “waste” IPs because you can’t divide powers of 2 cleanly into 3
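
    a concrete (hypothetical) example of that waste: splitting a /16 across 3 AZs forces you to carve it into four /18s, and one of them just sits there:

    ```
    10.0.0.0/16 (65,536 addresses)
    ├── 10.0.0.0/18    → AZ a (16,384)
    ├── 10.0.64.0/18   → AZ b (16,384)
    ├── 10.0.128.0/18  → AZ c (16,384)
    └── 10.0.192.0/18  → unused “waste” (16,384)
    ```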







  • > in the real world we actually use distribution centers and loading docks

    because we can pass packages in bulk across large distances… in routing, it’s always delivery boys: a single packet is a single packet: there’s no bulk delivery, except where you have eg a VPN packing multiple packets into a jumbo frame or something…

    the comment you’re replying to is only providing an analogy: it’s used to explain a single property by abstraction, not the entire thing

    > we can have staff specialise in internal delivery

    but that’s not at all how NAT works: it’s not specialising in delivery to private hosts and making it more efficient… it’s a layer of bureaucracy (like TURN servers and paperwork - the lookup tables and mapping) that adds complexity, not because it’s ideally necessary but just because of limitations in the data format

    routers still route pretty much exactly the same whether it’s direct IPv6 or NAT; it’s just that at the NAT layer the public IP and port are remapped to internal addresses and ports: the routing is still exactly the same, but now your router has to do extra paperwork that’s only necessary because of the addressing scheme


  • NAT is not much different to a firewall though… just because the address space is publicly routable does not mean that the router has to provide a route to it, or a consistent route

    NAT works by assigning a public port for the outgoing stream different to the internal port, and it does that by inspecting packets as they go over the wire: a private machine initiates a connection, assigns an arbitrary free port, and sends that packet off to the router, which then reassigns a new port; when packets come in on that port, it looks up the internal IP and remapped port and substitutes them back

    that same process can easily apply in IPv6, but you don’t need to do any remapping: the private machine initiates a connection, and the router simply marks that IP and port combination as “routable” rather than having to do mappings as well
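
    a rough sketch of the difference (illustrative swift, not real router code):

    ```swift
    struct Flow: Hashable {
        let address: String   // eg "10.0.0.5" or "2001:db8::5"
        let port: UInt16
    }

    // IPv4 NAT: the router rewrites packets, so it must remember which
    // public port maps back to which internal host and port
    var natTable: [UInt16: Flow] = [:]

    // IPv6 stateful firewall: addresses are globally routable and nothing
    // is rewritten - the router only remembers which flows a host opened
    var allowedFlows: Set<Flow> = []

    func outboundIPv4(from host: Flow) -> UInt16 {
        let publicPort = UInt16.random(in: 49152...65535)  // simplified "pick a free port"
        natTable[publicPort] = host                        // the extra paperwork
        return publicPort
    }

    func inboundIPv4(on publicPort: UInt16) -> Flow? {
        natTable[publicPort]  // look up and substitute the internal addr:port
    }

    func outboundIPv6(from host: Flow) {
        allowedFlows.insert(host)  // just mark the flow as routable
    }

    func inboundIPv6(to host: Flow) -> Bool {
        allowedFlows.contains(host)  // forward only if the host initiated it
    }
    ```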