• Logh@lemmy.ml · +139 · 4 months ago

    Funny how CrowdStrike already sounds like some malware’s name.

    • SkyNTP@lemmy.ml · +23/-1 · 4 months ago

      Not too surprising if the people making malware and the people making security software are basically the same people, just with slightly different business models.

      • Excrubulent@slrpnk.net · +11 · 4 months ago · edited

        Reminds me of the tyre store that spreads tacks on the road 100m away from their store in the oncoming lanes.

        People get a flat, and oh what do you know! A tyre store! What a lucky coincidence.

      • Eylrid@lemmy.world · +7 · 4 months ago

        Classic protection racket. “Those are some nice files you’ve got there. It’d be a shame if anything happened to them…”

  • Carighan Maconar@lemmy.world · +134 · 4 months ago

    This is, in a lot of ways, impressive. This is CrowdStrike going full “Hold my beer!” in response to people talking about the bad production deploy fuckups they’ve made.

    • KomfortablesKissen@discuss.tchncs.de · +26/-3 · 4 months ago

      I’m volunteering to hold their beer.

      Everyone remember to sue the services not able to provide their respective service. Teach them to take better care of their IT landscape.

      • ricecake@sh.itjust.works · +26 · 4 months ago

        Typically auto-applying updates to your security software is considered a good IT practice.

        Ideally you’d, like, stagger the updates and cancel the rollout when things stop coming back online, but who actually does it completely correctly?
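        A staged rollout with a halt condition, as described above, fits in a few lines. This is a generic sketch, not anyone’s real pipeline; the function names and the 5% wave size are illustrative:

```python
import random

def staged_rollout(hosts, push_update, is_healthy, batch_fraction=0.05):
    """Push an update in small waves, cancelling the rollout if a wave fails health checks."""
    remaining = list(hosts)
    random.shuffle(remaining)  # avoid correlated waves (same rack, region, or customer)
    batch_size = max(1, int(len(remaining) * batch_fraction))
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for host in batch:
            push_update(host)
        # If machines stop coming back online, halt before the rest of the fleet is hit
        if not all(is_healthy(host) for host in batch):
            return False
    return True
```

        With a health check that always fails, only the first wave ever receives the bad update; the other 95% of the fleet is untouched.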

        • KomfortablesKissen@discuss.tchncs.de · +22 · 4 months ago

          Applying updates is considered good practice. Auto-applying is the best you can do with the money provided. My critique here is the amount of money provided.

          Also, you cannot pull a Boeing and let people die just because you cannot 100% avoid accidents. There are steps in between these two states.

            • KomfortablesKissen@discuss.tchncs.de · +7/-1 · 4 months ago · edited

              I have. They are not mine. The dead people could be.

              Edit: I understand you were being sarcastic. This is a topic where I chose to ignore that.

              • ricecake@sh.itjust.works · +8 · 4 months ago

                That’s totally fair. :)

                I work at a different company in the same security space as CrowdStrike, and we spend a lot of time considering stuff like “if this goes sideways, we need to make sure the hospitals can still get patient information”.

                I’m a little more generous giving the downstream entities slack for trusting that their expensive upstream security vendor isn’t shipping them something entirely fucking broken.
                Like, I can’t even imagine the procedural fuck up that results in a BSOD getting shipped like that. Even if you have auto updates enabled for our stuff, we’re still slow rolling it and making sure we see things being normal before we make it available to more customers. That’s after our testing and internal deployments.

                I can’t put too much blame on our customers for trusting us when we spend a huge amount of energy convincing them we can be trusted to literally protect all their infrastructure and data.

                • KomfortablesKissen@discuss.tchncs.de · +3 · 4 months ago

                  I can put the blame on your customers. If I make a contract with a bank, they are responsible for my money. I don’t care about their choice of infrastructure. They are responsible for this. They have to be sued for this. Same for hospitals. Same for everyone else. Why should they be exempt from punishment for not providing the one service they were trusted to provide? Am I expected to feel for them because they made the “sensible choice” of employing the cheapest tools?

                  This was a business decision to trust someone external. It should not be tolerated that they point their fingers elsewhere.

                • bleistift2@sopuli.xyz · +3 · 4 months ago

                  You seem knowledgeable. I’m surprised that it’s even possible for a software vendor to inject code into the kernel. Why is that necessary?

                • deadbeef79000@lemmy.nz · +1 · 4 months ago

                  I’m actually willing to believe that CrowdStrike was compromised by a bad actor who realised how fragile CS was.

  • psycho_driver@lemmy.world · +50 · 4 months ago

    The answer is obviously to require all users to change their passwords and make them stronger: minimum 26 characters; two capitals, two numbers, two special characters; cannot include ‘_’, ‘b’ or the number ‘8’; and must include Pi to the 6th place.

    • ulterno@lemmy.kde.social · +9 · 4 months ago

      Great! Now when I brute force the login, I can tell my program to not waste time trying ‘_’, ‘b’ and ‘8’ and add Pi to the 6th place in every password, along with 2 capitals, 2 numbers and 2 other special characters.

      Furthermore, I don’t need to check passwords with less than 26 characters.
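      The shrinkage is easy to quantify. A rough back-of-the-envelope sketch, assuming 95 printable ASCII characters and treating “Pi to the 6th place” as the six known digits 314159 (the other composition rules are ignored for simplicity, so the real reduction is even bigger):

```python
# Unconstrained: 26 characters drawn freely from 95 printable ASCII symbols.
unconstrained = 95 ** 26

# With the rules above, an attacker knows: 3 symbols are banned (92 left),
# and 6 of the 26 characters are the known digits of Pi, leaving 20 unknown.
constrained = 92 ** 20 * 21  # 21 possible placements of the 6 known digits

print(f"rules shrink the search space by a factor of ~{unconstrained // constrained:.1e}")
```

      The “stronger” policy hands the brute-forcer a search space around ten billion times smaller.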

    • arendjr@programming.dev · +7 · 4 months ago

      Sorry, I don’t understand. Do you mean there have to be 6 digits of Pi in there, or the sixth character must be π? I’m down either way.

    • GreyBeard@lemmy.one · +3 · 4 months ago · edited

      The modern direction is actually going the other way: tying identity to hardware, preventing access on unapproved or non-compliant hardware. It has the advantage of allowing biometrics or things like simple PINs. In an ideal world, SSO would ensure that every single account, across the many vendors, has these protections, although we are far from a perfect world.

        • GreyBeard@lemmy.one · +1 · 4 months ago

          Effectively, the other option is passwords, and people are really, really bad at passwords. Password managers help, but then you just need to compromise the password manager. Strong SSO, backed by hardware, at least makes the attack need to be either physical, or running on hardware approved by the company. When you mix that with strong execution protections, an EDR, and general policy enforcement and compliance checking, you get protection that beats the pants off 30 different passwords to 30 different sites, or, more realistically, 3 passwords to 30 different sites.

          • Knock_Knock_Lemmy_In@lemmy.world · +1 · 4 months ago

            Yes, much better than 3-30 passwords.

            But I view SSO as an enterprise password manager with a nice UI. I don’t trust it for anything super important.

  • Solemarc@lemmy.world · +17 · 4 months ago

    Maybe this is a case of hindsight being 20/20 but wouldn’t they have caught this if they tried pushing the file to a test machine first?

      • JackbyDev@programming.dev · +3/-1 · 4 months ago

        Well, it is hindsight 20/20… But also, it’s a lesson many people have already learned. There’s a reason people use canary deployments lol. Learning from other people’s failures is important. So I agree, they should’ve seen the possibility.

    • Gsus4@programming.dev · +13/-1 · 4 months ago

      I saw one rumor where they uploaded a gibberish file for some reason. In another, there was a Windows update that shipped just before they uploaded their well-tested update. The first is easy to avoid with a checksum. The second…I’m not sure…maybe only allow the installation if the Windows update versions match (checksum again) :D
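      The checksum defence against the “gibberish file” scenario really is only a few lines. A generic sketch, not CrowdStrike’s actual update path:

```python
import hashlib

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Refuse to apply an update whose bytes don't match the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# The vendor publishes the digest of the file it meant to ship; the client
# checks the bytes it actually received against it before doing anything.
good = b"well-tested channel file"
digest = hashlib.sha256(good).hexdigest()
assert verify_update(good, digest)                   # intact file passes
assert not verify_update(b"\x00" * 40_000, digest)   # gibberish file is rejected
```

      A corrupted or zero-filled file then fails closed at the client instead of being parsed.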

    • undu@lemmy.world · +6 · 4 months ago

      It’s a sequence of problems that led to this:

      • The kernel driver should have parsed the update, or at a minimum validated a signature, before trying to load it.
      • There should not have been a mechanism to bypass Microsoft’s certification.
      • Microsoft should never have certified and signed a kernel driver that loads code without any kind of signature verification; arguably it should not have certified it at all.

      Many people say Microsoft is not at fault here, but I believe they share the blame: they are the ones who certify the kernel drivers that get shipped to customers.
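      The first bullet, validate before load, can be sketched in user-space Python. A real driver would verify an asymmetric vendor signature against an embedded public key; HMAC-SHA256 stands in here only to keep the sketch dependency-free, and all names are hypothetical:

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 output size

def load_channel_file(blob: bytes, key: bytes) -> bytes:
    """Validate structure and authenticity of an update file before acting on it."""
    if len(blob) < TAG_LEN:
        raise ValueError("truncated update file; refusing to load")
    payload, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature mismatch; refusing to load")
    return payload  # only a verified payload ever reaches the parser
```

      The point is the ordering: the file is rejected before any of its contents influence the driver, so a gibberish or tampered update fails safely instead of crashing the kernel.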

  • slazer2au@lemmy.world · +15/-1 · 4 months ago

    Now threat actors know what EDR they are running and can craft malware to sneak past it. yay(!)

    • marcos@lemmy.world · +8/-1 · 4 months ago

      Smart threat actors use the EDR for distribution. Seems to be working very well for whoever owned SolarWinds.

  • LeFantome@programming.dev · +2 · 4 months ago

    Who says it was accidental?

    Netflix knew they were going to move from DVD rentals to streaming over the Internet. It is right in their name.

    CrowdStrike knew they were eventually going to _________. It is right in their name.

    • Richard@lemmy.world · +47 · 4 months ago

      I genuinely can’t tell at whom you are addressing this. Those claiming it is a Windows problem or those that say otherwise?

    • jet@hackertalks.com · +33/-1 · 4 months ago · edited

      You can not like Windows and also recognize that CrowdStrike isn’t from Microsoft - so a problem that CrowdStrike caused isn’t the fault of Windows.

      If that makes me an idiot for holding two different ideas in my head, so be it, but you are spending time with us, so thank you for elevating us!

      • Azzu@lemm.ee · +21 · 4 months ago

        I’m sorry, but distinguishing between different concepts is forbidden here. You go straight to jail.

      • gnutrino@programming.dev · +5/-4 · 4 months ago

        I’m waiting for the post mortem before declaring this to not be anything to do with MS tbh. It’s only affecting windows systems and it wouldn’t be the first time dumb architectural decisions on their part have caused issues (why not run the whole GUI in kernel space? What’s the worst that could happen?)

        • jet@hackertalks.com · +6 · 4 months ago · edited

          I agree it’s possible. But if you’re a software as a service vendor, it is your responsibility to be in the alpha and beta release channels, so if there is a show stopping error coming down the pipeline you can get in front of it.

          But more tellingly, we have not seen Windows boot loop today from other vendors, only this vendor. Right now the balance of probabilities points at CrowdStrike.

          • gnutrino@programming.dev · +3/-3 · 4 months ago

            I’m not sure how to break this to you but this is just an internet forum, not a court of law

            • jorp@lemmy.world · +3/-1 · 4 months ago

              The reason courts use it is because they value having true opinions. But you’re welcome to not value that indeed

              • psud@aussie.zone · +3/-1 · 4 months ago

                The reason courts have rules about how convinced one must be to declare guilt is that they dread punishing an innocent person more than they dread letting a guilty one go free.

                We aren’t in a position to hurt the probably guilty party, so it doesn’t matter a bit if we jump to conclusions unfairly.

        • ricecake@sh.itjust.works · +51/-1 · 4 months ago · edited

          Sure, but they weren’t patching a Windows vulnerability, Windows software, or a security issue; they were updating their own software.

          I’m all for blaming Microsoft for shit, but “third party software update causes boot problem” isn’t exactly anything they caused or did.

          You also missed that the same software is deployed on Mac and Linux hosts.

          Hell, they specifically call out their redhat partnership: https://www.crowdstrike.com/partners/falcon-for-red-hat/

          • xtr0n@sh.itjust.works · +15/-2 · 4 months ago

            Crowdstrike completely screwed the pooch with this deploy, but ideally Windows wouldn’t get crashed by a bad 3rd party software update. Although, the crashes may be by design in a way: if you don’t want your machine running without the security software, and the security software is buggy and won’t start up, maybe the safest thing is to not start up?

            • MangoPenguin@lemmy.blahaj.zone · +22/-2 · 4 months ago

              Are we acting like Linux couldn’t have the same thing happen to it? There are plenty of things that can break boot.

              • [email protected]@sh.itjust.works · +21 · 4 months ago

                CrowdStrike also supports Linux, and if they fucked up a Windows patch, they could very well fuck up a Linux one too. If they ever pushed a broken update to Linux endpoints, it could very well cause a kernel panic.

            • ricecake@sh.itjust.works · +14 · 4 months ago

              Yeah, it’s a crowd strike issue. The software is essentially a kernel module, and a borked kernel module will have a lot of opportunities to ruin stuff, regardless of the OS.

              Ideally, you want your failure mode to be configurable, since places like hospitals would often rather a security-system failure keep medical record access available. :/ If an attacker is to the point of touching system files, you’re pretty close to “game over” for most security contexts anyway, unfortunately. There are some fun things you can do with hardware encryption modules in some cases, but at that point you’re limiting damage more than preventing a breach.
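              That configurable failure mode boils down to a fail-open vs. fail-closed policy choice. A hypothetical sketch of the knob being described (names and examples are illustrative, not any vendor’s API):

```python
from enum import Enum

class FailureMode(Enum):
    FAIL_CLOSED = "closed"  # agent down => block access (classified networks, payment systems)
    FAIL_OPEN = "open"      # agent down => keep access up (hospitals, emergency dispatch)

def resource_reachable_after_agent_crash(mode: FailureMode) -> bool:
    """Whether the protected resource stays available when the security agent dies."""
    return mode is FailureMode.FAIL_OPEN
```

              The deployment, not the vendor, is best placed to pick which failure is worse: an outage or an unmonitored window.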

              Architecture wise, the Windows hybrid kernel model is potentially more stable in the face of the “bad kernel module” sort of thing, since a driver or module can fail without taking out the rest of the system. In practice… not usually, since your video card shitting the bed is gonna ruin your day regardless.

          • Kusimulkku@lemm.ee · +5/-3 · 4 months ago

            Are the Mac and Linux machines having BSOD (-style) issues and trouble booting?

    • GBU_28@lemm.ee · +22 · 4 months ago

      “even on Lemmy”

      Like this is some highbrow collection of geniuses here?

    • Cornelius_Wangenheim@lemmy.world · +21 · 4 months ago · edited

      Because it isn’t. Their Linux sensor also uses a kernel driver, which means they could have just as easily caused a looping kernel panic on every Linux device it’s installed on.

      • YTG123@sopuli.xyz · +2/-10 · 4 months ago

        There’s no way of knowing that, though. Perhaps their Linux and Darwin drivers wouldn’t have panicked the system?

        Regardless, doing almost anything at the kernel level is never a good idea

        • ricecake@sh.itjust.works · +6 · 4 months ago

          Also, it’s less about “their” drivers and more about what a kernel module can do.
          Saying “there’s no way to know” doesn’t fit, because we do know that a malformed kernel module can destabilize a Linux or Mac system.

          “Malformed file” isn’t a programming defect or something you can fix by having a better API.

          • deadbeef79000@lemmy.nz · +1/-1 · 4 months ago

            Having the data exposed to userspace via an API would avoid needing a kernel module at all; then a malformed file couldn’t compromise the kernel.

            • ricecake@sh.itjust.works · +4 · 4 months ago

              I mean, sure. But typically operating systems don’t expose that type of information to user space, instead providing a kernel interface with user mode configuration.

              It’s why they use the same basic approach on Mac and Linux.

        • ricecake@sh.itjust.works · +5 · 4 months ago

          Security operations are one of the things often best done at the kernel level, because of the need to monitor network and file operations in a way you can’t from user mode.