What do you mean building our own systems?
The TSA press office said in a statement that this vulnerability could not be used to access a KCM checkpoint, because the TSA initiates a vetting process before issuing a KCM barcode to a new member. However, a KCM barcode is not required to use KCM checkpoints, as the TSO can enter an airline employee ID manually. After we informed the TSA of this, they deleted the section of their website that mentioned manually entering an employee ID, and did not respond to our correction. We have confirmed that the interface used by TSOs still allows manual input of employee IDs.
TSA: lalala i can’t hear you, everything is fine, no issue here
These are probably comments on this Veritasium video, which is actually pretty informative.
That being said, I also live in a “first past the post” country that forces a two-party system and penalizes voting your conscience unless it aligns with one of those parties. While there may be flaws in Ranked Choice Voting that could emerge in fringe cases, it is so obviously superior to our current system that it is hard for me to worry too much about the nuance of how it might not be 100% perfect 100% of the time. Any (democratic) system is better than what we have now.
https://gohugo.io/hosting-and-deployment/hosting-on-github/ https://www.roshanadhikary.com.np/2020/11/forward-your-freedns-custom-domain-to.html
Hope these guides are useful for you.
FWIW, if all you have is a truly static website (html, css, and js), then GitHub Pages is free and you can point a custom domain there from your registrar, and don’t have to worry about backups or server uptime.
please tell me you created a dick object for the project which is exactly the same as a dict object
Have you tried turning it off and back on again?
I manage a stack like this: we have dedicated hardware running a steady state of backend processing, but we scale into AWS if there’s a surge in realtime processing needed and we don’t have the hardware. We also had an outage in our on-prem datacenter once, which was expensive for us (I assume an insurance claim was made), but scaling to AWS was almost automatic, and the impact was minimal for a full datacenter outage.
If we wanted to optimize even more, I’m sure we could scale into Azure depending on server costs when spot pricing is higher in AWS. The moral of the story is to not get too locked into any one provider and utilize some of the abstraction layers so that AWS, Azure, etc are just targets that you can shop around for by default, without having to scramble.
It feels like this needs to be managed on an instance by instance level and not post to post.
Seriously, now that this is more widely known, it’ll for sure be taken advantage of a lot, to the point that AWS will begrudgingly protect their customers once the damage is done.
No need to guess, it’s all outlined in the bill:
So basically, the law will not require ISPs to block access to TikTok domains and IP addresses. Google search results are also explicitly excluded from the term data broker, and exempt from the restrictions. The only requirement is for app stores to stop hosting the application, so existing installations of the app (after January 2025 assuming ByteDance doesn’t sell) will presumably persist and can be used, even if TikTok is banned.
If your services are not stateless, work to make them so, which you can learn to do even with VM-based cloud services. You’ll quickly see how much more agility using the cloud vs a DC gives you.
This can’t be overstated. Embracing elastic ideology to remove single points of failure and decoupling the stateful aspects of applications has been the biggest takeaway from being part of several migrations of services to AWS. Implementing these practices as you grow is a huge benefit that is worth the cost.
Over time, if the scale you’re operating at grows, using experience/knowledge from AWS and applying it to running services in a datacenter could be beneficial. In my experience, if you have a large, consistent, asynchronous workload which you’ve maxed out on reserved instances or savings plans, it is likely cheaper to operate on your own hardware than in the cloud (or get credits from GCP or Azure to migrate services to reduce costs). This is where avoiding vendor lock-in is key.
have y’all factored in all the time/money spent on maintaining the server hardware, power, DC cooling, etc. too?
For sure, this isn’t 2007, where you need to purchase servers and network equipment to start a website. For most startups and small businesses, operating in the cloud will be less expensive upfront and likely over the first 3 years. This isn’t a one-size-fits-all approach though, and it’d be prudent to evaluate the cloud spend periodically and compare it with what it’d cost to manage everything yourselves. Obviously you’d need a team competent enough to manage this, without it going to shit.
This 100%. I hate getting added to a PR for review with testing commits in the history, and I’m expected to clean those up before merging into main.
Just when I thought Facebook couldn’t go any lower.
<Incredibles Meme> Off means Off </meme>
Right, for junior devs or trivial changes, that’s fine. My take is that if I’m going to make someone take the time to review my work, I should take the time to make sure it’s cleaned up and is something I would merge if I were reviewing it. Most of this comes from working on some larger open source projects which still require patches to be submitted via email, which I know is a real “back in my day” moment, but it did instill good practices which I try to carry on.
If you’re using the CLI and cleaning up a branch for a PR, the interactive rebase is a godsend. Just run git rebase -i origin/main (or whatever your target branch is) and you can reorder/squash/reword commits.
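As a sketch of what that cleanup looks like in practice (the repo, branch names, and commit messages here are all made up for the demo), you can even script the rebase todo list with GIT_SEQUENCE_EDITOR, which is a handy way to see what fixup does without an editor popping open:

```shell
# Demo: collapse two WIP commits into one before opening a PR.
# GIT_SEQUENCE_EDITOR scripts the todo list (GNU sed assumed);
# day to day you'd just run `git rebase -i origin/main` and edit by hand.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "base"
git checkout -q -b feature
echo one > f.txt && git add f.txt
git -c user.email=a@b -c user.name=demo commit -q -m "wip: try thing"
echo two > f.txt && git add f.txt
git -c user.email=a@b -c user.name=demo commit -q -m "wip: actually works"
# Turn the second "pick" into "fixup" so both commits melt into one.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' \
  git -c user.email=a@b -c user.name=demo rebase -q -i main
git log --oneline main..feature   # now shows a single commit
```

fixup keeps the first commit’s message and discards the second’s; use squash instead if you want to merge the messages too.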
In my experience, I prefer to review or contribute commits which are logical changes, compartmentalized enough that if needed, they could be reverted without impacting something completely different. This doesn’t mean 1 commit is always the right number of commits in a PR.
For example, if you have a feature addition which requires you to update the version of a dependency in the project, and this dependency update breaks existing code, I would have two commits, being:
1. the dependency version bump, along with the fixes to the existing code it breaks, and
2. the feature addition itself.
When stepping through the commits in the PR or looking at a git blame, it’s clear which changes were needed because of the new dependency, and which were feature additions.
Obviously this isn’t a one size fits all, but if someone submitted a PR with 12 commits of trial and error, and the overall changes are like +2 lines -3 lines, I’d ask them to clean that up before it gets merged.
Wow TIL about the use of underscore in an interactive session.
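For anyone who also hadn’t seen it: in an interactive python3 session, _ holds the result of the last echoed expression. A contrived way to show this from a script (using the stdlib code module to drive a throwaway REPL, since _ only gets set in interactive mode):

```shell
# _ is set by the display hook in interactive sessions only, so we
# drive a small REPL via python's `code` module to demonstrate it.
python3 - <<'EOF'
import code, io, contextlib
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    repl = code.InteractiveConsole()
    repl.push("2 + 3")     # REPL echoes 5 and stores it in _
    repl.push("_ * 10")    # _ holds the last result (5), so this echoes 50
print(buf.getvalue().split())  # -> ['5', '50']
EOF
```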