• Eager Eagle@lemmy.world · 7 days ago

    tldr

    1. Keep Commits Small: Ensure commits are small and manageable to facilitate easy reverts and debugging.
    2. Continuous Refactoring: Prioritize frequent, minor refactorings to simplify future changes.
    3. Deploy Continuously: Regularly deploy code to ensure it works in production, as working software is progress.
    4. Trust Frameworks: Avoid over-testing framework capabilities; focus on application-specific logic.
    5. Create Independent Modules: Place functions in new modules if they don’t fit existing ones, preserving code organization.
    6. Write Tests Early: Use tests to design APIs and guide development, but don’t be rigid about TDD.
    7. Avoid Duplication: Prevent repeated code by abstracting similar implementations after copying once.
    8. Embrace Change: Accept and adapt to evolving designs, understanding that change is intrinsic to software development.
    9. Manage Technical Debt: Focus on minimizing immediate blockers and plan for potential future issues.
    10. Prioritize Testability: Ensure that your code and design facilitate easy testing to maintain code quality and coverage.
    • xep@fedia.io · 7 days ago

      Manage Technical Debt: Focus on minimizing immediate blockers and plan for potential future issues.

      This one gets tougher the bigger the project gets. It might be the toughest one.

      • Technus@lemmy.zip · 7 days ago

        It’s an ideal that’s only achievable when you’re able to set your own priorities.

        Managers and executives generally don’t give two shits about yak shaving.

        • nous@programming.dev · 7 days ago

          Just factor it into your estimates and make it a requirement of the work. Don’t talk to managers as though it is some optional bit of work that can be done in isolation. If you do frequent refactoring before you start a feature, it does not add a load of time, as it saves a bunch of time when adding the feature. And it helps keep your code base cleaner over the longer term, leading to fewer occasions where you need to do larger refactors.

        • FourPacketsOfPeanuts@lemmy.world · 6 days ago

          In my experience, when showcasing at the end of a sprint it pays to leave the visuals very unpolished and focus on functionality, even if it’s trivial to use a UI library or other standard components. I deliberately make it look basic to help management / users accept “it’s working but needs polishing”. That polish might then be me spending 10% of my time neatening the UI and 90% refactoring / fixing tech debt.

    • JackbyDev@programming.dev · 6 days ago

      Re: trust frameworks

      I often find myself writing scratch work within tests because it’s just the easiest way to get stuff up and running. Sometimes I’ll leave these in as a way to show that a framework feature my team rarely uses works the way I believe it does. But it’s rare.
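
      For example, something like this (a made-up sketch, not code from any real project; the point is that the test pins down a standard-library behavior other code relies on, rather than testing our own logic):

          // scratch_test.go: an "assumption" test documenting how encoding/json's
          // omitempty behaves, since code elsewhere depends on it. Names are illustrative.
          package scratch

          import (
              "encoding/json"
              "testing"
          )

          type payload struct {
              Name  string `json:"name"`
              Count int    `json:"count,omitempty"`
          }

          func TestOmitemptyDropsZeroCount(t *testing.T) {
              out, err := json.Marshal(payload{Name: "x"})
              if err != nil {
                  t.Fatal(err)
              }
              // We rely on omitempty suppressing the zero-valued field entirely.
              if string(out) != `{"name":"x"}` {
                  t.Fatalf("unexpected encoding: %s", out)
              }
          }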

    • MonkderVierte@lemmy.ml · 6 days ago

      Embrace Change: Accept and adapt to evolving designs, understanding that change is intrinsic to software development.

      Sounds like bad management to me.

      And sure, start your project flexible and adaptable from the ground up, but that needs planning and time!

  • nous@programming.dev · 7 days ago

    Big refactorings are a bad idea.

    IMO what is worse than big refactors is when people refactor and change behavior at the same time. A refactor means you change the code without changing its behavior. When you mix the two, it becomes very hard to tell whether a behavioral change is intended or a new bug. When you keep the behavior the same, it is easier to spot an accidental change in behavior in what should otherwise be a no-op change.

    And if you can, try to keep your refactors to one type of change. I have done and seen refactors touching hundreds or even thousands of lines that stayed manageable because it was the same basic pattern changing across the whole code base. For instance, say you want to change logging libraries, or introduce one to replace simple print statements. It is best to add the new library first, maybe with a few example uses, in one PR. Then do a bulk edit of all the remaining instances (maybe per module or section of code for very large code bases), simply switching log calls over to the new library.
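
    The kind of single-pattern bulk edit I mean looks roughly like this at each call site (a made-up sketch; picking the standard library's log/slog here is just an assumption for illustration):

        // Every call site changes the same way, so even a large diff stays easy to review.
        package main

        import "log/slog"

        func handleLogin(userID string) {
            // Before the bulk edit, call sites looked like:
            //   fmt.Printf("user %s logged in\n", userID)
            // After, each one follows the same shape:
            slog.Info("user logged in", "user_id", userID)
        }

        func main() {
            handleLogin("alice")
        }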

    If you don’t know what an API should look like, write the tests first as it’ll force you to think of the “customer” which in this case is you.

    I think there is some nuance here that is rarely talked about. When I see people trying this for the first time, they often think they need to write a full test up front, then get frustrated because that is hard or they are not yet sure what they want to do. Quite often they fall back to just writing the code first, feeling that writing a test before they have a solid understanding of what they want is a waste.

    But you don’t need to start with a complete test. Start as simple as you can. I quite often start with the hello world of tests: create a new test and assert false. Then see if the test fails as expected. That tells me my environment is set up correctly, and it is a great place to start. Then, if I am unsure exactly what I want to write, I start inside the test and call a function with a name that I think I want. No need for parameters or return types yet; just give the function a name. That will cause the code to fail to compile, so write a stub method/class to get things working again. Then start thinking about how the user will want to call it, and refactor the test to add parameters or expect return types, flipping back to the implementation to get the code compiling again.

    You can use this to explore how you want the API to look, since you are writing the client side and the library side at the same time. You can just use the test as a way to see how the caller will call the code; no need to start asserting behavior at all yet. I will even sometimes just debug-print values in the test or implementation, or write code in the test that calls into a third-party library I am new to, just to see how it works, with no intention of that test ever being included in the final PR. I just use tests as a staging ground to try out ideas. Don’t feel like you need to keep every test you write.
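
    As a rough sketch of where that loop can end up (names made up; the comments mark what each pass added):

        package pricing

        import "testing"

        // Pass 1: this existed only as an empty stub so the test would compile.
        // Later passes added the parameters and return value as the caller's needs became clear.
        func DiscountedTotal(prices []float64, percentOff float64) float64 {
            total := 0.0
            for _, p := range prices {
                total += p
            }
            return total * (1 - percentOff/100)
        }

        // Started life as little more than t.Fatal("not implemented") to prove the test runs,
        // then grew into the call shape I actually wanted as a caller.
        func TestDiscountedTotal(t *testing.T) {
            got := DiscountedTotal([]float64{10, 20}, 50)
            if got != 15 {
                t.Fatalf("got %v, want 15", got)
            }
        }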

    Sometimes I skip the testing framework altogether and just test the main binary in a simple situation. I especially do this for simpler binaries that are meant to mostly do one thing and don’t really need a full testing framework. But I still follow the red/green/refactor loop of TDD; I am just very loose about what I consider a valid “test”.

    The second time you’re introducing duplication (i.e., three copies), don’t. You should have enough data points to create a good enough abstraction.

    This misses one big caveat: the size of the code being duplicated. If it is only a few lines then don’t worry so much. Five or even ten copies of a two-line snippet are not a big issue, and any attempt at de-duping them is quite often far harder to read and maintain than the copies. As the amount of code you need to copy/paste grows, though, it becomes more advantageous to abstract it into fewer copies.

    if you find yourself finding it difficult to mock

    I hate mocks. IMO they should be the last resort for testing things. They bake far too many assumptions about the code being mocked out, and they do it in every test you write. IMO just test as much real behavior as you can. As long as your tests are fast and repeatable, you don’t need to be mocking things out, especially internal behaviors. And when you do need to talk to an external service of some kind, I would start with a fake implementation of the service before reaching for a mock. A fake implementation is just a simple, likely in-memory, implementation of the given API/interface/endpoints or whatever.

    With a mock you bake assumptions about the behavior into every mock you write, which is generally every test you write. If your assumptions are off, you need to find and refactor every test that has that assumption baked in. With a fake implementation you just update the fake and should not need to touch your tests. And you can write a fake once and use it in all your tests, or better yet use a third-party one if one is available (for instance I quite often use gofakes3, a Go in-memory implementation of the AWS S3 service).
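
    A minimal sketch of what I mean by a fake (interface and names made up, not from the article): production code depends on a small interface, the real implementation talks to the actual service, and the fake just keeps everything in a map and gets reused by every test.

        package storage

        import "fmt"

        // BlobStore is the interface the production code depends on.
        type BlobStore interface {
            Put(key string, data []byte) error
            Get(key string) ([]byte, error)
        }

        // FakeBlobStore is written once and injected wherever the real (e.g. S3-backed)
        // implementation would go, instead of re-describing behavior with per-test mocks.
        type FakeBlobStore struct {
            blobs map[string][]byte
        }

        func NewFakeBlobStore() *FakeBlobStore {
            return &FakeBlobStore{blobs: make(map[string][]byte)}
        }

        func (f *FakeBlobStore) Put(key string, data []byte) error {
            f.blobs[key] = append([]byte(nil), data...) // copy so callers can reuse their buffer
            return nil
        }

        func (f *FakeBlobStore) Get(key string) ([]byte, error) {
            data, ok := f.blobs[key]
            if !ok {
                return nil, fmt.Errorf("no blob stored under key %q", key)
            }
            return data, nil
        }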

  • Technus@lemmy.zip · 7 days ago

    Tip #1: if you’re gonna post your programming blog to social media, make sure it can handle the traffic…?

  • heikkiket@programming.dev · 6 days ago

    I 98% agree. The only thing I’m not sure about is whether the third copy/paste is always a bad thing. There are things like configuration and templates where independence is more important than DRY. Sometimes you do those through code.

    • KellysNokia@lemmy.world · 6 days ago

      Yeah, in reality it’s often the fourth or fifth copy-paste that actually warrants generalizing; inevitably I find some tweak I need to make in two of the places that renders the original generalization a waste of time.

      But if you are going to copy and paste, keep each instance as similar as possible to the others, to make it easy to refactor if you revisit it in the future.