  • If the trademark is indeed on the wordpress.org foundation and not the wordpress.com company, I didn’t think that’s a fair argument.

    It is, but the trademark is licensed to Automattic, which handles all further commercial sub-licensing. And the CEO of Automattic sits on the board of the WordPress Foundation and is the creator of WordPress itself.

    I don’t think either is a cancer to the FOSS WordPress ecosystem. Both seem to give back.

    I believe this all started because the Automattic CEO did not think WPEngine was contributing enough back to the WordPress ecosystem, even after years of attempts to negotiate. It seems he gave up trying and went after them over trademark rules, as that was the only real lever he had to pull, since there is no obligation for WPEngine to contribute back to WordPress directly.

    WPEngine using the Wordpress trademark makes me think they’re using Wordpress

    Apparently this is contentious enough to be disputed in court. Not everyone thinks this, and enough people are confused over the matter that Automattic believes it can prove a trademark violation in court.

    Lots more details in this interview with the Automattic CEO.

    Don’t know who’s right here. Probably both sides are wrong to some degree. But it is worth hearing both sides of the argument before making a decision.


  • You should not be struggling most of the time when using the CLI. Basic use is just as easy as any GUI. Learning the commands might be a bit more involved and you need to be a bit more proactive about it, but anything you need to do 30+ times a day you should be over the learning curve of, and you can execute it just as quickly if not quicker than in a GUI. Especially when you factor in tab completion and reverse history search.

    But what using the CLI more often does teach you is how to shrink that initial learning curve, making you quicker at finding the new commands you need and understanding how they work, slowly building up your tool belt of knowledge from the commands you do look up.


  • I see nothing in this graphic that isn’t easy to do with a gui.

    I didn’t say the GUI was not easy for the common stuff. But I think the CLI is also easy for the common stuff, so there is not much advantage other than a bit of a learning curve with the CLI. The big thing GUIs make harder is automation of common things. For instance, when I want to create a PR I like to rebase onto the latest upstream. In a GUI that is a bunch of button clicks. With the CLI I just type <CTRL+R>pus and that autocompletes to git pull --rebase=interactive --autostash && git push && gh pr create --web, and I land in a web browser ready to review and submit my PR. Doing the same thing in a GUI takes a lot longer with a lot more clicking.

    And that is a very common command for me.

    Like logging and diffing is just so much easier when I can just scroll and click as opposed to having to do a log command, scroll, then remember the hashes, and then write the command.

    Never found that to be a big issue. Most of the time when you want a diff you want to diff local changes or staged changes, which is simply git diff or git diff --staged. Neither of those is hard or any easier in a GUI (especially with bash history). For diffing specific commits I don’t find it hard either: just git log --oneline and find the commits (and you can use grep to filter things down easily here too); it typically does not require scrolling at all. Then git diff <copy paste>..<copy paste>. In a GUI you are often scrolling through the commits you want to select at some point anyway, so I don’t see how that saves any real time. I would not say the CLI or the GUI is vastly easier in this case. And even then, it is rare to need to do this; far more often you are diffing branches, which on a decent shell can be tab completed for convenience.

    And sometimes I watch beginners use the gui and I have to bite my tongue because I know it would be faster in the cli.

    This is why I prefer the CLI for common stuff. It is just faster.

    But, especially for a beginner, i strongly recommend a gui.

    And that is where I disagree. I think beginners should spend some time learning the tools they will need to use. IMO the CLI is critical for developers to learn, and the sooner the better. So many things are vastly easier with the CLI than with GUIs, and a lot of stuff is near impossible with GUIs, automation being a big one. I have not seen a good CI system that is GUI focused where you never need to know the CLI. And when you have a repetitive task it is quicker to write a quick script and run it than to do the same thing over and over in the GUI. Repeating actions is also easier in the CLI. All of this applies to more than just git as well.

    I have seen so many beginners start with GUIs who don’t really understand what they are doing in git. They quite often break things and then just delete and recreate the repo and manually make their changes again. I find people that never bother with the CLI always hit a ceiling quite quickly in terms of their ability and productivity.

    The only real thing that makes the CLI worse is that it has a steeper learning curve. Once you are over that hill I find it to be vastly better in most situations, or at least not practically any worse than a GUI equivalent. So that hill is one well worth climbing.

    I can always use a GUI if I really need to. But those that only know the GUI will have a very hard time on the CLI when they need it, which happens far more often than the other way round.


  • The known unknowns and especially the unknown unknowns never get factored into an estimate. People only ever think about the happy path, where everything goes right. But that rarely ever happens, so estimates are always wildly off.

    The book How Big Things Get Done describes a much better way to factor in everything without knowing all the unknowns: just look at previous similar projects, see how long they took, take the average and the bounds, then adjust up or down if you have good reason to. Your project will very likely take a similar amount of time if your samples are similar in nature to your current task. The actual times already factor in all the issues and problems encountered, and even if you don’t hit all the same issues, your problems will likely take a similar amount of time. And the more previous examples you have, the better these estimates get.
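
    A minimal sketch of that idea (all names and numbers are made up for illustration):

        package main

        import "fmt"

        // referenceClassEstimate bases a forecast on how long similar past
        // projects actually took, rather than on an imagined happy path.
        // adjust lets you scale the result when you have a good reason to
        // (e.g. 1.2 if this project looks a bit harder than the samples).
        func referenceClassEstimate(pastWeeks []float64, adjust float64) (avg, low, high float64) {
            low, high = pastWeeks[0], pastWeeks[0]
            sum := 0.0
            for _, w := range pastWeeks {
                sum += w
                if w < low {
                    low = w
                }
                if w > high {
                    high = w
                }
            }
            avg = sum / float64(len(pastWeeks))
            return avg * adjust, low * adjust, high * adjust
        }

        func main() {
            // Durations, in weeks, of similar past projects.
            past := []float64{6, 9, 14, 8, 11}
            avg, low, high := referenceClassEstimate(past, 1.0)
            fmt.Printf("estimate: %.1f weeks (range %.1f-%.1f)\n", avg, low, high)
        }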

    But instead of that we just pluck numbers out of the air and wonder why we never hit them.



  • Just factor it into your estimates and make it a requirement of the work. Don’t talk to managers as though it is some optional bit of work that can be done in isolation. If you do frequent refactoring before you start a feature it does not add a load of time, as it saves a bunch of time when adding the feature. And it helps keep your code base cleaner over the long term, leading to fewer times you need to do larger refactors.


  • Big refactorings are a bad idea.

    IMO what is worse than big refactors is when people refactor and change behavior at the same time. A refactor means you change the code without changing its behavior. When you mix the two it becomes very hard to tell if a behavioral change is intended or a new bug. When you keep the behavior the same it is easier to spot an accidental change in behavior in what should otherwise be a no-op change.

    And if you can, try to keep your refactors to one type of change. I have done and seen refactors of hundreds or even thousands of lines that were kept manageable because it was the same basic pattern changing across the whole code base. For instance, say you want to change logging libraries, or introduce one to replace simple print statements. It is best to add the new library first, maybe with a few example uses, in one PR. Then do a bulk edit of all the instances (maybe per module or section of code for very large code bases), simply switching log statements to the new library.
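
    As a rough illustration of what that one-pattern bulk edit looks like, using Go’s log/slog as a stand-in for the new library:

        package main

        import "log/slog"

        func main() {
            user := "alice"
            // Before the refactor this line was:
            //     fmt.Printf("user %s logged in\n", user)
            // After: the same statement mechanically switched to the new
            // logger. Every hunk in the PR is this one pattern, which keeps
            // the review easy even across thousands of lines.
            slog.Info("user logged in", "user", user)
        }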

    If you don’t know what an API should look like, write the tests first as it’ll force you to think of the “customer” which in this case is you.

    I think there is some nuance here that is rarely ever talked about. When I see people trying this for the first time they often think they need to write a full test up front, then get frustrated because that is hard when they are not yet sure what they want to do. And they quite often fall back to just writing the code first, as they feel writing a test before they have a solid understanding of what they want is a waste.

    But - you don’t need to start with a complete test. Start as simple as you can. I quite often start with the hello world of tests - create a new test and assert false - then see if the test fails as expected. That tells me I have my environment set up correctly and is a great place to start. Then, if I am unsure exactly what I want to write, I start inside the test and call a function with a name I think I want. No need for parameters or return types yet - just give the function a name. That will cause the code to fail to compile, so write a stub method/class to get things working again. Then start thinking about how the user will want to call it, and refactor the test to add parameters or expect return types, flipping back to the implementation to keep the code compiling.

    You can use this to explore how you want the API to look, as you are writing the client side and library side at the same time. You can just use the test as a way to see how the caller will call the code - no need to start asserting behavior at all yet. I will even sometimes just debug print values in the test or the implementation, or write code in the test that calls into a third-party library I am new to just to see how it works, with no intention that the test will even be included in the final PR. I just use tests as a staging ground to try out ideas. Don’t feel like every test you write needs to be kept.
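
    To make that concrete, a tiny Go sketch of the progression (Age is an invented example):

        package ages

        import "testing"

        // Step 1: the hello world of tests - run it and watch it fail:
        //     func TestAge(t *testing.T) { t.Fatal("harness works") }
        // Step 2: call the function I wish existed, then stub it so the
        // code compiles again:
        func Age(birthYear, nowYear int) int { return nowYear - birthYear }

        // Step 3: grow the test into real expectations, reshaping the
        // signature from the caller's point of view as I go.
        func TestAge(t *testing.T) {
            if got := Age(2015, 2024); got != 9 {
                t.Fatalf("Age(2015, 2024) = %d, want 9", got)
            }
        }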

    Sometimes I skip the testing framework altogether and just test the main binary in a simple situation. I especially do this for simpler binaries that are meant to mostly do one thing and don’t really need a full testing framework. But I still follow the red/green/refactor loop of TDD - I am just very loose about what I consider a valid “test”.

    The second time you’re introducing duplication (i.e., three copies), don’t. You should have enough data points to create a good enough abstraction.

    This misses one big caveat: the size of the code being duplicated. If it is only a few lines then don’t worry so much. Five or even ten copies of a two-line change is not a big issue, and any attempt at de-duping it is quite often far harder to read and maintain than the copies. As the amount of code you need to copy/paste grows, though, it becomes more advantageous to abstract it into fewer copies.
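
    A made-up Go sketch of the size caveat (runStep is hypothetical):

        package main

        import "fmt"

        func importData() {}
        func exportData() {}

        func main() {
            // The same two-line pattern copied at each call site: still easy
            // to read, and each copy can drift independently if needs change.
            fmt.Println("begin: import")
            importData()

            fmt.Println("begin: export")
            exportData()

            // De-duping something this small means indirection like
            // runStep("import", importData), which is often harder to follow
            // than the copies - wait until the repeated block grows.
        }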

    if you find yourself finding it difficult to mock

    I hate mocks. IMO they should be the last resort for testing things. They bake far too many assumptions about the code being mocked into your tests, and they do it for every test you write. IMO just test as much real behavior as you can. As long as your tests are fast and repeatable you don’t need to mock things out - especially internal behaviors. And when you do need to talk to an external service of some kind, I would start with a fake implementation of the service before a mock. A fake implementation is just a simple, likely in-memory, implementation of the given API/interface/endpoints or whatever.

    With a mock you bake assumptions about the behavior into every mock you write - which is generally every test you write. If your assumptions are off then you need to find and refactor every test that has that assumption. With a fake implementation you just update the fake and you should not need to touch your tests. And you can write a fake once and use it in all your tests - or better yet use a third-party one if available (for instance I quite often use gofakes3, a golang in-memory implementation of the AWS S3 service).
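
    A minimal sketch of the difference (the KVStore interface is invented for illustration). The fake actually behaves like the real thing, so tests don’t encode call-by-call expectations:

        package main

        import "fmt"

        // KVStore stands in for some external service your code talks to.
        type KVStore interface {
            Put(key, value string)
            Get(key string) (string, bool)
        }

        // FakeKV is a real, in-memory implementation of the interface.
        // Unlike a mock it has actual behavior: whatever you Put, you can
        // Get. If the real service's semantics change, you fix FakeKV once
        // instead of every test that stubbed out individual calls.
        type FakeKV struct{ m map[string]string }

        func NewFakeKV() *FakeKV { return &FakeKV{m: map[string]string{}} }

        func (f *FakeKV) Put(key, value string) { f.m[key] = value }

        func (f *FakeKV) Get(key string) (string, bool) {
            v, ok := f.m[key]
            return v, ok
        }

        func main() {
            var store KVStore = NewFakeKV()
            store.Put("greeting", "hello")
            v, ok := store.Get("greeting")
            fmt.Println(v, ok) // hello true
        }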


  • Or just thing.age(), which is fine and fairly obviously returns the age of the thing. And that is what is done in most languages that don’t have computed properties. A get_ prefix on a method really adds no value nor clarity. The only reason foo() is ambiguous is because it is a bad name - really just a placeholder. Leaving out the brackets adds no value either; it just makes it harder to tell that you are calling a function instead of accessing a property.
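
    Go is one such language: no computed properties, and the convention drops the get_ prefix entirely (a small invented example):

        package main

        import (
            "fmt"
            "time"
        )

        type Thing struct {
            created time.Time
        }

        // Age is a plain method: the () at the call site says some work
        // happens, and the name needs no get_ prefix to be obvious.
        func (t Thing) Age() time.Duration {
            return time.Since(t.created)
        }

        func main() {
            thing := Thing{created: time.Now().Add(-48 * time.Hour)}
            fmt.Println(thing.Age().Round(time.Hour)) // 48h0m0s
        }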


  • Make its usage cleaner? I don’t see how a property does that at all. We are really talking about x.foo vs x.foo(), and IMO the latter tells you this is a function that needs to do some work, even if that work is very cheap. x.foo implies that you might be able to set the value as well - but with computed properties, maybe not. Which IMO makes the program a bit harder to read and understand, as you cannot simply assume it is a simple assignment or field access. It could be a full function call that does different things depending on other values, or even on whether you are setting vs getting the value. I prefer things being more explicit.


  • It is such a weak smell though that you might as well look at any bit of code you have and ask yourself if it is bad code. Lambdas are fine in a lot of places, and their existence is not an indication of good or bad code. They are just a tool that can be used in lots of situations.

    A better smell here is excessive inlining causing a loss of context. It does not matter if it is a lambda, a parameter to a function call, or a field in an object creation. None of those are signs of bad code, but if you cannot understand what something anonymous is doing, you might want to give it a name. That does not have to mean creating a named function - it might just mean assigning the lambda or parameter to a variable so it is named (see the sketch below).

    But on the flip side, I find if you are struggling to name something then that can also be a smell that maybe it should just be inlined. Giving everything a name can create code just as bad as inlining everything. There is a balance somewhere in the middle, and the presence of a lambda does not really hint which way you want to go. So its existence is a very poor marker for code quality.
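
    For instance, in an invented Go snippet - the fix for an unreadable inline closure is often just a variable name, not a separate top-level function:

        package main

        import "fmt"

        // sumIf is a stand-in helper that sums the values matching pred.
        func sumIf(vals []int, pred func(int) bool) int {
            s := 0
            for _, v := range vals {
                if pred(v) {
                    s += v
                }
            }
            return s
        }

        func main() {
            orders := []int{120, 45, 300, 80}

            // Inline: fine for something this small, but as the body grows
            // the anonymous function loses context at the call site.
            total := sumIf(orders, func(v int) bool { return v >= 100 })

            // Naming the lambda in a variable restores that context without
            // promoting it to a top-level function.
            isLargeOrder := func(v int) bool { return v >= 100 }
            fmt.Println(total, sumIf(orders, isLargeOrder)) // 420 420
        }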

    it’s probably worth taking a second to ask yourself

    You can extend this to all code, not just lambdas. With any code you can take a second to ask yourself if you could write it better or more readably. If that is the bar then all code has a very weak code smell, and singling out lambdas here seems arbitrary.


  • lambdas are actually a (very mild) code smell

    Lambdas alone are not a code smell. That is like saying objects or functions or even naming things are a code smell just because you can use them in bad ways. It is just too broad a statement to be useful. At best you might say that large/complex anonymous lambdas are a code smell - just like large/complex and badly named functions are. You need to be specific about code smells, otherwise you are basically saying code is a code smell.


  • Functions do something, while properties are something.

    This is my argument against them. Computed properties do something: they compute a value. That may or may not be cheap, and it adds surprising behavior to the property. IMO properties should just be cheap accessors to values. If a value needs to be computed, seeing a function call hints that the caller may want to cache it in a variable if they need it multiple times. With properties you have to look it up to know it is actually doing work instead of just giving you a value. That is surprising behavior, which IMO I dislike in programs.


  • Functions can do all of this. Computed properties are just syntactic sugar for methods - that is it. IMO they make things more confusing for the caller. Accessing a property should be cheap, but with computed properties you don’t know that it will be. Especially with caching, as in your example: the first access can be surprisingly slow with the next being much faster. I prefer things not to do surprising things when I use them.
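
    As a rough sketch of why I prefer the explicit method (types invented for illustration):

        package main

        import "fmt"

        type Report struct {
            values []float64
        }

        // Mean does real work on every call. Because it reads as a method,
        // the caller can see that and choose to cache the result.
        func (r Report) Mean() float64 {
            sum := 0.0
            for _, v := range r.values {
                sum += v
            }
            return sum / float64(len(r.values))
        }

        func main() {
            r := Report{values: []float64{1, 2, 3, 4}}

            // The () is the hint that this computes - cache it if you need
            // it repeatedly.
            mean := r.Mean()
            fmt.Println(mean, mean*2) // 2.5 5

            // A computed property would make the same work look like a
            // cheap field read (r.Mean with no brackets), hiding the cost.
        }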