This is a rant about how so many apps on so many platforms (TVs, mobile devices, computers, etc.) have decided not to show detailed errors anymore. Instead, we get something along the lines of:

Oops, something went wrong. Please try again later.

… and then, well, we get to figure out what just happened and what in the world we need to do about it. And good luck with that, since we have no idea what just failed.

Why, software developers?! Why have you forsaken us?

EDIT 24 hours later: I feel like I need to clarify a few things:

I’ve worked for 8 software companies over 30+ years. I know why putting a DB error into the message users see is a bad idea. I know that makes me uncommon, but I still want more info from these messages.

You all are answering as if there are only two ways this can work: (a) what we have now (which is useless), and (b) a detailed error listing with a full stack trace. I think developers could meet me halfway.

What I want is either (a) “Something went wrong on the server, you can’t fix it, but we will” or (b) “Something on your end didn’t work. Check your network or restart the app or do something differently and then try the same thing again”. And if they’re blocking me because I’m using a VPN, fucking say so (but that’s a whole separate thing…)
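
To make it concrete, here’s a rough sketch of the split I mean, in Python. The exception classes and the exact wording are invented stand-ins, not any particular framework’s:

```python
# A minimal sketch of the "meet me halfway" idea: classify failures into
# the two buckets above and show the matching message. All class names
# and wording here are invented for illustration.

class ServerSideError(Exception): pass        # e.g. a 5xx from the backend
class ClientSideError(Exception): pass        # e.g. a network timeout
class VpnBlockedError(ClientSideError): pass  # blocked because of a VPN

def user_message(exc: Exception) -> str:
    if isinstance(exc, VpnBlockedError):
        return "We block VPN connections. Turn off your VPN and try again."
    if isinstance(exc, ClientSideError):
        return ("Something on your end didn't work. Check your network or "
                "restart the app, then try the same thing again.")
    if isinstance(exc, ServerSideError):
        return "Something went wrong on our servers. You can't fix it, but we will."
    return "Something went wrong."            # last-resort fallback

print(user_message(VpnBlockedError()))
```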

Some apps do provide enough info that I have a clue what to do next, and I appreciate the effort they put into helping me. What I’m really ranting about is that I want more developers to take the time to do this instead of reporting every error as “Oops, try again”. (If the error is on their server, why should I try again?) Give me a hint as to the problem, so I have something to go on.

Cheers y’all. Still love you my techy brothers and sisters.

  • Cryophilia@lemmy.world · 1 day ago

    So what you’re saying is that your code is garbage and you’re hiding it from users because it’s too much work to fix it.

    • hperrin@lemmy.ca · 1 day ago

      What I’m saying is that error messages can be helpful or harmful. Knowing that and how to tell the difference is what makes you an expert. Just firing off any information to the user without thinking about it is what makes you a novice, and will eventually get you fired. We’re talking about systems with millions of daily users. If you cause 2,000 unnecessary support tickets or forum posts every day because you don’t know when to send what information to the user, you won’t get very far in tech.

      • Cryophilia@lemmy.world · 2 hours ago

        If you have 2000 daily people getting error messages, your code is garbage rofl

        And if your company would rather you avoid those tickets by not giving out error codes, your company is also garbage. Which to be fair, is a lot of tech companies.

        • hperrin@lemmy.ca · 30 minutes ago

          I feel like you really don’t understand how big tech works. There’s not some single server running every service perfectly. There are tons of different layers and services running on thousands or hundreds of thousands of hosts.

          Let’s say you make a request to something like Facebook. Say you’re liking a post. Here’s what happens:

          That request goes in through a PoP (point of presence). These are sometimes called edge servers or edge gateways, but at Facebook we called them PoPs. This is a server that’s physically close to you that’s used to terminate the TLS connection. It doesn’t have any user data. Its job is to take your encrypted request, decrypt it, then pass it on to Facebook’s regional data center on their internal network.
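
          A TLS-terminating proxy is conceptually tiny, by the way. Here’s a toy Python version that only shows the shape; the upstream address and cert file names are made up, and a real PoP does far more than this:

          ```python
          import ssl
          from http.server import BaseHTTPRequestHandler, HTTPServer
          from urllib.request import Request, urlopen

          # Hypothetical regional data center on the internal network.
          UPSTREAM = "http://dc.internal.example:8080"

          class EdgeProxy(BaseHTTPRequestHandler):
              def do_GET(self):
                  # TLS was already stripped by the wrapped socket below; from
                  # here the request travels to the data center as plain HTTP.
                  with urlopen(Request(UPSTREAM + self.path)) as upstream:
                      body = upstream.read()
                      self.send_response(upstream.status)
                  self.send_header("Content-Length", str(len(body)))
                  self.end_headers()
                  self.wfile.write(body)

          server = HTTPServer(("", 443), EdgeProxy)
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
          ctx.load_cert_chain("edge-cert.pem", "edge-key.pem")  # made-up cert files
          server.socket = ctx.wrap_socket(server.socket, server_side=True)
          server.serve_forever()
          ```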

          The request enters a webby. These are usually called frontend servers, but again, at Facebook we called them webbies. This is a server that runs the monolithic Facebook web app. Again, it doesn’t have any user data. Its job is to take your request and orchestrate actions on deeper services to fulfill that request.

          First it’s going to check a local memory cache server for sitevars. These control system-level switches, like A/B tests and whether certain services are brought down. That server returns the sitevars and the webby proceeds, now knowing which logic paths to take.
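
          That step is basically this (the sitevar names are invented; the real thing is an RPC to a cache host, not a local dict):

          ```python
          # Toy version of the sitevar check: consult system-level switches
          # before choosing a code path. All sitevar names here are made up.

          SITEVAR_CACHE = {                      # stands in for the local memory cache
              "likes_write_enabled": True,       # kill switch for the whole feature
              "ab_like_animation": "variant_b",  # an A/B test bucket
          }

          def check_sitevars() -> dict:
              sitevars = SITEVAR_CACHE           # in reality: one call to the cache host
              if not sitevars["likes_write_enabled"]:
                  raise RuntimeError("likes are switched off right now")
              return sitevars

          print(check_sitevars())
          ```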

          For a like, which is a write request between your user account and a post, it will create two DB entries (you likes post, post liked by you). It needs to first get the data from the caching layer, so it will make two requests to TOA (Facebook’s caching layer), one for your account, and one for the post.

          TOA runs in the same regional data center, and if it doesn’t have the two data objects cached, it will request them from the regional db shards.

          These regional db shards also run in the same data center, and they’ll return the data.

          TOA returns the data back to the webby.
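
          The read path in those steps is a classic read-through cache. Stripped to a skeleton (the Shard class stands in for the real regional storage):

          ```python
          # Bare-bones read-through cache in the spirit of the TOA read path:
          # on a miss, fetch from the regional DB shard and remember the result.

          class Shard:
              def __init__(self, rows): self.rows = rows
              def get(self, key): return self.rows[key]

          class ReadThroughCache:
              def __init__(self, shard):
                  self.shard, self.store = shard, {}

              def get(self, key):
                  if key not in self.store:                  # cache miss
                      self.store[key] = self.shard.get(key)  # fall through to the shard
                  return self.store[key]

          shard = Shard({"user:42": {"name": "you"}, "post:1001": {"author": "them"}})
          toa = ReadThroughCache(shard)
          print(toa.get("user:42"), toa.get("post:1001"))    # two misses, both now cached
          ```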

          The webby (after doing some permission checks, which probably hit TOA again) now creates the two relationships, likes and liked by, referencing the two data objects, you and the post. TOA is a write-through cache, so the webby sends the writes to TOA.

          TOA now needs to send the requests to the db primary shards, since they are the only ones that can handle writes. Your primary shard and the post’s primary shard are probably in different data centers, so TOA now passes the writes to the regional data centers for each primary shard.

          A host running TOA in each regional data center for each primary shard now passes the write to each shard.

          Each primary shard now writes the data to the local disk, and waits for the binary log to be written to the local journal before returning a success message.
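
          As a sketch, that write path looks like this: the write-through cache only updates itself after the primary acknowledges, and the primary only acknowledges after its log record is actually on disk. The file-based “binary log” here is a big simplification:

          ```python
          import json, os

          class PrimaryShard:
              def __init__(self, log_path):
                  self.log_path, self.rows = log_path, {}

              def write(self, key, value):
                  self.rows[key] = value
                  with open(self.log_path, "a") as log:  # append to the binary log
                      log.write(json.dumps({"key": key, "value": value}) + "\n")
                      log.flush()
                      os.fsync(log.fileno())             # wait until it's on disk
                  return "ok"                            # ack only after durability

          class WriteThroughCache:
              def __init__(self, primary):
                  self.primary, self.store = primary, {}

              def set(self, key, value):
                  ack = self.primary.write(key, value)   # may cross data centers
                  if ack == "ok":
                      self.store[key] = value            # cache stays consistent
                  return ack

          toa = WriteThroughCache(PrimaryShard("/tmp/shard.binlog"))
          toa.set("edge:42-likes-1001", True)            # "you likes post"
          toa.set("edge:1001-liked_by-42", True)         # "post liked by you"
          ```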

          The success message is passed from the local TOA host back to the original region’s TOA host.

          When that TOA host gets both requests back successfully, it returns a success back to the webby handling your request.

          The webby then returns a success to the PoP you’re still connected to.

          The PoP then returns a success to the client running on your device.

          The client doesn’t notify you of anything, because it already showed you a filled in like button right after you pressed it.
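
          That last bit is “optimistic UI”: show the result immediately, reconcile quietly if the request later fails. Roughly (send_like_request is a made-up stand-in for the whole round trip above):

          ```python
          def send_like_request(post_id: int) -> bool:
              return True          # pretend the PoP/webby/TOA round trip succeeded

          def on_like_pressed(ui_state: dict, post_id: int) -> None:
              ui_state["liked"] = True                # fill in the button right away
              if not send_like_request(post_id):      # the round trip finishes later
                  ui_state["liked"] = False           # roll back on failure
                  ui_state["error"] = "Couldn't save your like. Check your connection."

          state = {"liked": False}
          on_like_pressed(state, 1001)
          print(state)
          ```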

          This was how it worked back in 2013 when I worked there. It probably hasn’t changed a whole lot, but this is also an extremely simplified overview (I didn’t even touch on any load balancing systems). That request will probably hit hundreds of services. Some of them can fail and the request could still succeed. But some are required to succeed for your request to be considered successful, like the db write operations. Something like a hardware failure on your primary db shard’s disk can’t be overcome with better code. Nor can a lightning strike taking out the cable connecting your PoP be overcome with better code.
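
          That required-versus-optional distinction looks something like this in code (function names invented):

          ```python
          def fetch_like_count(post_id):           # best-effort: the UI can live without it
              raise TimeoutError("counting service is slow today")

          def write_like_edges(user_id, post_id):  # required: this IS the like
              return "ok"

          def like_post(user_id, post_id):
              try:
                  count = fetch_like_count(post_id)
              except Exception:
                  count = None                     # degrade gracefully; request still succeeds
              if write_like_edges(user_id, post_id) != "ok":
                  raise RuntimeError("like failed")  # required write: whole request fails
              return {"liked": True, "count": count}

          print(like_post(42, 1001))
          ```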

          These systems are absolutely massive, and there are failures you wouldn’t even think of. When I worked at FB, we had an entire data center go down because the humidity got just high enough that the capacitors in each host’s power supply all failed in a matter of minutes. Thousands of users probably got error messages that day, but the automatic failover systems moved all the traffic to a new region and promoted new primary db shards within about ten minutes. The fact that losing an entire data center was mitigated in about ten minutes is actually really impressive. You might think it’s still garbage code, since users got error messages, but I know enough about these systems to be very impressed by that.
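
          The failover decision itself is conceptually simple, which is part of why it can happen in minutes; the hard parts are everything around it (consensus, replication lag, draining traffic). A cartoon version, with made-up region names:

          ```python
          class Region:
              def __init__(self, name, healthy=True):
                  self.name, self.healthy = name, healthy

          def promote_primary(regions):
              for region in regions:
                  if region.healthy:
                      return region         # first healthy region becomes primary
              raise RuntimeError("no healthy region available")

          regions = [Region("region-a", healthy=False), Region("region-b")]
          print("primary is now", promote_primary(regions).name)  # fails over to region-b
          ```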

          If you know a better way to make a system like this that works for billions of users across the planet, you should write a paper and submit it to a local conference. If they approve you for a talk, you can present your designs to an audience there. If the audience is really receptive, your designs could make a big impact in the tech sector. That’s basically what the highest level engineers at these big tech companies do when they design these multi-billion user systems, so it’s definitely possible for you to do it too.

          • Cryophilia@lemmy.world · 47 minutes ago

            All I’m saying is that the vast majority of “oops” issues happen before step one. Client-side issues. For those, give an error code. All the stuff you talked about, there’s little to nothing users can do. And yeah, it could definitely be done better, but it would require abandoning the “ooh shiny new thing” mentality of tech companies. Updates just to boost resumes, deprecation of anything user-friendly. It’s an endemic cultural problem.

            • hperrin@lemmy.ca · 37 minutes ago

              Why do you think the vast majority of these messages come from client side issues? I worked as a Site Reliability Engineer at Facebook. We had data on client side errors too. Crash logs are sent to the servers when a client side error happens. There’s not really one source that constitutes a “vast majority” of these error messages, but I can tell you that the plurality of them come from the caching layer.
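
              Client-side crash reporting is roughly this shape, if you haven’t seen it: wrap the entry point and POST a report when something unhandled escapes. The endpoint and payload here are invented:

              ```python
              import json, traceback
              from urllib import request

              CRASH_ENDPOINT = "https://crashes.example.com/report"  # hypothetical

              def report_crash(exc: BaseException) -> None:
                  payload = json.dumps({
                      "type": type(exc).__name__,
                      "trace": traceback.format_exc(),
                  }).encode()
                  req = request.Request(CRASH_ENDPOINT, data=payload,
                                        headers={"Content-Type": "application/json"})
                  request.urlopen(req)   # real clients queue and retry; simplified here

              if __name__ == "__main__":
                  try:
                      raise RuntimeError("simulated client-side crash")
                  except Exception as exc:
                      report_crash(exc)
                      raise
              ```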