SmegmaScript is just too close
I’m just waiting for someone to lecture me on how the wheelchair sprint speed record beats feet’s ass…
They do. Reality is not going to change though. You can enable a handicapped developer to code with LLMs, but you can’t win a foot race by using a wheelchair.
Reddit is free. Other people paying for your free service is a very weak argument to bring up. If Lemmy dies today, nobody but hobbyists and amateurs will care. Just like with LE.
I’ve been there. Not every CA is equal. Those kinds of CAs were shit. LE is convenient. There are more options though.
I actually agree. For the majority of sites and/or use cases, it probably is sufficient.
Properly explaining why LE is generally problematic takes considerable depth of information that I’m just not able to relay easily right now. But consider this:
LE is mostly a convenience. It saves an operator $1 per month per certificate. For anyone with hosting costs beyond $1000, that’s laughable savings. People who take TLS seriously often have more demands than “padlock in the browser UI”. If a free service decides it no longer wants to use OCSP, that’s an annoying disruption that was entirely not worth the $1: https://www.abetterinternet.org/post/replacing-ocsp-with-crls/
LE has no SLA. You have no guarantee you’ll ever be able to renew your certificate again. Not a risk everyone should take.
Who is paying for LE? If you’re not paying, how can you rely on the service to exist tomorrow?
It wasn’t too long ago that people said “only some sites need HTTPS, HTTP is fine for most”. It never was, and people shouldn’t build anything relevant on “free” security today either.
People with actually relevant use cases and the need for a reliable partner would never use LE. It’s a gimmick for hobbyists and people who suck at their job.
If you have never revoked a certificate, you don’t really know what you’re doing. If you have never run into rate-limiting issues with LE that block a rollout, you don’t know what you’re doing.
LE works until it doesn’t, and then it’s like every other free service on the internet: no guarantees. If your setup relies on the goodwill of a single entity handing out shit for free, it’s not a robust setup. If you rely on that entity to keep an OCSP responder alive for free so all your consumers can verify the validity of your certificate, that’s not great. And people do this to save their company $1 a month for the real thing? Even running the shitty certbot in compute has a larger cost. People are so blindly in love with this “free” garbage. The fanboys will never die off.
Following along with the style of your own post: YAML doesn’t suck, because I feel so.
Thanks for asking.
https://discord.com/terms#5 is pretty permissive
Your content is yours, but you give us a license to it when you use Discord. Your content may be protected by certain intellectual property rights. We don’t own those. But by using our services, you grant us a license—which is a form of permission—to do the following with your content, in accordance with applicable legal requirements, in connection with operating, developing, and improving our services:
Use, copy, store, distribute, and communicate your content in manners consistent with your use of the services. (For example, so we can store and display your content.)
Publish, publicly perform, or publicly display your content if you’ve chosen to make it visible to others. (For example, so we can display your messages if you post them in certain servers or recommend that content to others.)
Monitor, modify, translate, and reformat your content. (For example, so we can resize an image you post to fit on a mobile device.)
Sublicense your content, to allow our services to work as intended. (For example, so we can store your content with our cloud service providers.)
They increased the limit to 25MB to encourage media uploads to train their own models on. They have now collected enough metrics to realize that most valuable content is below 10MB. Now they’re optimizing. They won’t lose anything valuable to them, and the users who are impacted might even buy Nitro now. Win-win for them.
I wasn’t actively aware of this for most of my life until I recently visited a client’s office. Buying someone a cup of coffee is an entire thing. There’s no free coffee. You have to purchase every single cup. And you first have to walk several minutes to the place where they sell the coffee. It blew my mind. I’m used to drinking one cup after the other without even giving it any thought. Coffee machine right next to me or around the corner. There, coffee incurs friction and cost.
So when you invite someone for a cup of free coffee, this can open doors for you. I’m not kidding. People get all excited when you offer them a coffee break on your dime. And there’s levels to it too. There’s the regular coffee, and there’s the premium one. For the premium you have to walk longer and wait in line until the barista serves you.
It’s a key component in office politics when coffee access is regulated.
Why anyone would restrict access to legal stimulants in the office is unclear to me though. Put espresso machines on every desk!
Depends on the product. It’s just something to think about when signaling errors. There is information for the API client developer, there is information for the client code, and there’s information for the user of the client. Remembering these distinct concerns, and providing distinct solutions, helps. I don’t think there is a single approach that is always correct.
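To make that concrete, here’s a rough sketch of an error payload that keeps the three audiences separate. The field names are entirely made up for illustration, not from any standard:

```typescript
// Sketch only; all field names are invented for this example.
interface ApiError {
  // For the client *code*: stable, machine-readable, safe to branch on.
  code: string;
  // For the *developer* integrating the API: free-form, may change at any time.
  debugMessage: string;
  // For the *end user*: something the frontend can show directly,
  // or map to its own i18n keys instead.
  userMessage?: string;
}

const example: ApiError = {
  code: "ORDER_NOT_FOUND",
  debugMessage: "order 42 not found in region eu-1",
  userMessage: "We couldn't find that order.",
};
```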
I don’t necessarily disagree, but I have spent considerable time on this subject and can see merit in decoupling your own error signaling from the HTTP layer.
No matter how you design your API, if you’re passing through additional layers, like load balancers and CDNs, you no longer have full control over all responses your clients receive. At this point it may be viable to always signal a successful backend connection with a 200, even if the process resulted in a failure.
Going further, your API may include partial-success scenarios (think batch processing), where the result is a mix of success and failure that doesn’t translate to a single HTTP status.
You could even argue that there is really no reason to couple your API so tightly with a concept of the transport layer it uses.
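A hedged sketch of what that could look like (the envelope shape and names are my own, not a standard): the HTTP status only says “the backend answered”, and the body carries the actual outcome, including per-item results for a batch:

```typescript
// Illustrative envelope; field names are assumptions, not a spec.
type ItemResult<T> =
  | { id: string; ok: true; data: T }
  | { id: string; ok: false; error: { code: string; message: string } };

interface BatchEnvelope<T> {
  // Overall outcome of the *processing*, independent of the HTTP status,
  // which stays 200 as long as the backend itself was reached.
  status: "ok" | "partial" | "failed";
  results: ItemResult<T>[];
}

// Example: 2 of 3 items succeeded -> HTTP 200, body says "partial".
const response: BatchEnvelope<{ name: string }> = {
  status: "partial",
  results: [
    { id: "1", ok: true, data: { name: "alpha" } },
    { id: "2", ok: true, data: { name: "beta" } },
    { id: "3", ok: false, error: { code: "APP_VALIDATION", message: "name missing" } },
  ],
};
```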
Respect the client’s Accept header. If they ask for JSON, send JSON; otherwise don’t.
Repeating an HTTP status code in the body is redundant and error prone. Never do it.
Error codes are great. Make sure to prefix yours and keep them unique.
Error messages can be helpful, but often lead developers to just display them in the frontend, breaking i18n. Some people supply error messages in multiple languages, depending on the Accept-Language header.
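For what it’s worth, a rough sketch of how I’d combine those points; the prefix, code names, and helper are invented for the example, and the Accept-Language matching is deliberately naive (no q-weights, no region subtags):

```typescript
// Invented example: one prefixed, unique code per failure, no HTTP status
// echoed in the body, and a message aimed at developers rather than at users.
const ERROR_CODES = {
  PAY_CARD_DECLINED: "PAY_CARD_DECLINED",
  PAY_CURRENCY_UNSUPPORTED: "PAY_CURRENCY_UNSUPPORTED",
} as const;

type ErrorCode = keyof typeof ERROR_CODES;

interface ErrorBody {
  code: ErrorCode;  // what client code branches on
  message: string;  // developer-facing; frontends should map `code` to their own i18n
}

// If you do localize server-side, pick the language from Accept-Language.
// Naive matcher: takes the listed languages in order and returns the first
// exact match against the supported list, falling back to the default.
function pickLanguage(acceptLanguage: string | undefined, supported: string[]): string {
  if (!acceptLanguage) return supported[0];
  const preferred = acceptLanguage
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase());
  return preferred.find((lang) => supported.includes(lang)) ?? supported[0];
}

const body: ErrorBody = {
  code: "PAY_CARD_DECLINED",
  message: "issuer rejected the authorization",
};
const lang = pickLanguage("de,en;q=0.8", ["en", "de"]); // -> "de"
```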
Everyone who has ever heard of Deno has read this irrelevant blog post. It was even stupid at the time he wrote it. People had long been containerizing their Node payloads to solve most of his concerns, and building ts-node into your JS engine as a preprocessor was also beyond redundant. Everything is such a gimmick, and people actually followed the marketing and went through years of unstable development for nothing. And now the Bun people are recycling the same hype approach to gain relevance.
That’s not a standard library for JS. Those are built-in modules. A standard library should be available for inclusion in various consumers.
Now that is ancient js style
Kinda my point. And this is another garbage bag on the pile.
https://stdlib.io/ is just the most obvious thing that comes to mind. Jeez, jQuery even sat in this chair.
Deno people are trying so fucking hard to be relevant. It’s embarrassing. Bringing nothing to the table has been their MO from day 1.
Node comes with a JS library? This thread is full of surprises.
Especially because TypeScript compiles down to JavaScript.