Never used the Roku stick, but I used their high-end wired model for a long time and it was great for what it was. I haven't tried any of the stick devices because I've always questioned the performance they would have.
That's a good takeaway. AWS is the ultimate Swiss army knife, but it is easy to misconfigure. Personally, when you are first learning AWS, I wouldn't put more data in than you are willing to pay for on the most expensive tier. AWS also gives you options to set billing alerts, so if you do start playing with it, spend the time to set cost alerts so you know when something is going awry.
Have a great day!
So you just asked about the most confusing thing in AWS service naming, because of how the names have changed over time.
Before S3 had an archival tier, there was a separate service that AWS named AWS Glacier Storage and later renamed to AWS S3 Glacier.
Around 2012 AWS started adding tiers to S3, which made the standalone service redundant. I recommend you look at S3 proper unless you have something like a Synology that can directly integrate with the older job-based API used by the original Glacier service.
So, let's say I have a 1 TB archival file, a single tarball, and I upload it to a brand new S3 bucket, without versioning, special features, etc., except it has a lifecycle policy to move objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. So effectively, I upload the file and it moves to Glacier class storage.
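If you want to see what that kind of lifecycle rule looks like, here's a minimal boto3 sketch. The bucket name and rule ID are placeholders, not anything real of mine:

```python
# Minimal sketch of the lifecycle rule described above: transition every
# object from S3 Standard to Glacier Instant Retrieval after 0 days.
# Bucket name and rule ID are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "everything-to-glacier-ir",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 0, "StorageClass": "GLACIER_IR"},
                ],
            }
        ]
    },
)
```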
S3 Standard is ~$24/TB/month, and let's say, worst case, our data sits on Standard for one whole day before moving.
$0.77 + $0.005 (API cost of the PUT)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object each way. Since we only have one object, the cost is:
$0.004 out of Standard
$0.02 into Glacier
The Glacier Instant Retrieval tier costs ~$4.10/TB/month. Since we would be there for all but one day, the cost on the first bill would be:
$3.95
The second month onwards you would pay just the $4.10/month unless you are constantly adding or removing.
Let's say six months later you download your 1 TB archive file. That would incur a cost of up to $30.
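If it helps, here is the same math as a back-of-the-envelope Python sketch. The constants are the rounded figures from this example, not exact AWS list prices (those vary by region):

```python
# Back-of-the-envelope version of the numbers above, using the rounded
# figures from this comment rather than exact AWS list prices.
day_on_standard = 0.77   # one day of 1 TB on S3 Standard (~$24/TB/month)
put_request = 0.005      # PUT of the single object
lifecycle_out = 0.004    # transition request out of Standard
lifecycle_in = 0.02      # transition request into Glacier Instant Retrieval
glacier_month = 4.10     # 1 TB on Glacier Instant Retrieval per month
retrieval = 30.00        # worst-case retrieval of the whole archive later

first_month = (
    day_on_standard + put_request + lifecycle_out + lifecycle_in
    + glacier_month * 29 / 30  # ~29 of 30 days on Glacier, roughly the $3.95 above
)

print(f"first month: ~${first_month:.2f}")
print(f"after that:  ~${glacier_month:.2f}/month")
print(f"retrieval:   up to ~${retrieval:.2f} one time")
```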
Now I know that seems complicated and expensive. It is, because it is built for someone like me in my former role as a director of engineering, with complex needs and a budget to pay for it. It doesn't make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally moving data around in a performant way. I've literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache contents built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private Git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it is nice to have all of that stuff just click together.
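As a rough illustration of that cache pattern (the names here are placeholders, not my actual setup), the Lambda side looks something like this:

```python
# Minimal sketch of the "Lambda builds a cache object in S3" pattern.
# Bucket and key names are placeholders.
import json
import boto3

s3 = boto3.client("s3")
CACHE_BUCKET = "my-cache-bucket"  # hypothetical bucket name

def handler(event, context):
    # Build whatever expensive result you want to cache.
    payload = {"message": "hello", "items": [1, 2, 3]}

    # Write it to S3 so later requests can be served straight from the bucket.
    s3.put_object(
        Bucket=CACHE_BUCKET,
        Key="cache/latest.json",
        Body=json.dumps(payload).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 200, "body": json.dumps(payload)}
```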
For backing up my personal computer, I use iDrive Personal and OneDrive, where I don't have to worry about the cost per object, etc. iDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.
Thanks for posting. I just deployed to my container host in AWS ECS and it's working well in my testing. Very easy deployment with Docker.
It's complicated. I gave the most expensive pricing, which is their fastest tier and includes striping across three availability zones and guarantees 11 nines of data durability. Additionally, the easy integration with all other AWS services and the feature richness of S3 buckets make it hard to do a fair apples-to-apples comparison unless you really have well-defined needs. So I gave the highest price to keep it simple, and for someone who says they just have a few GB, any cost should be trivial.
AWS S3 has a free tier that covers the first 5 GB. I recommend it because the AWS CLI is excellent and gives you lots of options for how to sync your data. The pricing is $0.023/GB/month after the free tier. It can be overwhelming to get into AWS, but it is worth it to have access to the ultimate IT service Swiss army knife.
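The CLI command I lean on is `aws s3 sync`. If you'd rather script it, here's a minimal boto3 sketch of the same idea; the bucket name and folder are placeholders, and unlike `sync` this naively re-uploads everything instead of only changed files:

```python
# Minimal sketch of pushing a local folder to S3 with boto3.
# CLI equivalent: aws s3 sync ./backup s3://my-backup-bucket
# Bucket name and paths are placeholders.
from pathlib import Path
import boto3

s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"  # hypothetical bucket
LOCAL_DIR = Path("./backup")

for path in LOCAL_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(LOCAL_DIR).as_posix()
        s3.upload_file(str(path), BUCKET, key)  # upload each file under its relative key
        print(f"uploaded {key}")
```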
I run a lot of tech: containerized workloads in AWS, home firewalls running on Protectli boxes for my family around the country, and wireless controllers to run their APs. But as I got older, one thing I stopped rolling my own solution for was data backups. My data backs up to OneDrive and iDrive, so two copies of my data. My wife has access to both via shared credentials in a 1Password folder that she knows how to access and uses regularly.
As I got older and had a family, the pictures of our kids, wills, financial records, and insurance documents all became just too important. Every service that holds my data is paid annually, for less than $200/year total, and auto-renews. She could call either company and prove ownership if she ever did need help getting access. Also, I can easily share folders with her.
It's funny how getting older makes you think of the sorts of issues enterprise teams have: don't implement solutions where you will be one deep, have a succession plan, and remember that complexity is the enemy. All the tech I run now is fun and helpful, but could be replaced with a trip to Best Buy. The data and pictures, however, must be easy for her to retrieve.
So I don’t have a good self hosted solution for you other than to say that at some point it’s ok to change your strategy. And if you are worried about privacy, you can encrypt subsets of your data locally before it is backed up.
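For the encrypt-locally part, here's a minimal sketch with Python's cryptography package. File names are placeholders, and the key has to be stored somewhere your family can also find it:

```python
# Minimal sketch of encrypting a file locally before it gets picked up by backup.
# Requires: pip install cryptography. File names are placeholders.
from cryptography.fernet import Fernet

# Generate once and store the key somewhere safe (password manager, printed copy);
# without it the backup is unrecoverable.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("taxes-2023.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("taxes-2023.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later: fernet.decrypt(ciphertext) with the same key returns the original bytes.
```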
Because they are referring to engineering disciplines that predate all of the stuff you mention. When mechanical, structural, civil, etc., engineers sign off on a design (stamp it), they incur personal liability if there is a defect in the design that kills someone or causes damage. There are certifications for telecom design and processes that require them to stamp designs, but otherwise most of what is lumped together as technology doesn't constitute engineering from a legal or historical perspective. However, the titles sort of took off and created two sets of meanings.
If software engineering were treated as engineering in the way that mechanical or other forms are, you would get a degree, get an entry-level job at a firm as a junior, and after a few years, study and get certified to stamp designs/code systems, etc.
Now, outside of places like code for flight systems, medical devices, power plants, etc., there isn't a need for that kind of rigor, but those are the areas that would require licensing if it were available.
Good example. There are some domains that do carry some liability and weight to the title: flight systems, medical devices, etc. Domains where failure can kill people and can't easily be rectified.
As of 2013 I believe, but it was discontinued in 2019. Fairly rare to see in the wild outside of specific domains like medical device coding or other areas where failure isn’t acceptable.
You do have stamping engineers for telecom design. As far as I know, that's the only real engineering title in the sense that signing off on the work carries well-defined legal liability. I was director of engineering for a large org, and the only stamping engineers in the org were telecom designers, not the security, software, systems, cloud, network, etc. folks. Nothing against them either, but historically engineer meant something very specific prior to the rise of information technology.
Edit: actually, in 2013 NCEES added a PE cert for software engineering, but it was discontinued in 2019.
When writing basic business code, structuring the code well and having good naming standards means you shouldn't need a ton of comments, but you should still have some. Plus, using structured function comment blocks gives you IntelliSense in some languages and IDEs, which is important for code reuse in teams.
However, when I was doing scientific programming I'd have comments for almost every line at times, where I put the mathematical formula and operations the line represents. Implementing a convolutional neural network with parameters to dynamically scale the layers, or MPI stochastic simulations, is much different than writing CRUD functions or basic business logic.
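To make the contrast concrete, here's a toy example of the kind of line-by-line, formula-level commenting I mean for numerical code (a plain 1-D convolution, not my actual models):

```python
# Toy example of formula-level comments in numerical code.
import numpy as np

def conv1d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Full 1-D discrete convolution: y[n] = sum_k x[k] * w[n - k]."""
    n_out = len(x) + len(w) - 1              # full output length N + M - 1
    y = np.zeros(n_out)
    for n in range(n_out):                   # for each output sample n
        for k in range(len(x)):              # sum over input samples k
            if 0 <= n - k < len(w):          # keep w's index in bounds
                y[n] += x[k] * w[n - k]      # accumulate x[k] * w[n-k]
    return y

# Sanity check against NumPy's built-in:
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 0.5])
assert np.allclose(conv1d(x, w), np.convolve(x, w))
```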
One thing I'll add is I often found it helpful to slide them in first, which helps straighten the wires, then pull them out and trim the ends to be even. Then put them back in the connector and make sure all the pins touch all the wire ends.
I agree it is people looking for reasons to criticize. However, I do think a VPN, or anything that modifies your route tables, should be subject to more scrutiny than other app features due to the potential for abuse. I wish browsers wouldn't bundle them at all or install them as part of their base package.
I do this, but one thing to note is that it can break some Wi-Fi captive portals and auth loops, so you might have to disable it, connect, and then re-enable it. Some Wi-Fi networks have private-view DNS records for their captive portal or auth server, like ClearPass. Additionally, if your phone switches from data to Wi-Fi but you need data to query or resolve your DNS provider and Android doesn't have it cached, then it can also fail.
I love that you can run bash on Windows 10 now.
I love Docker, but I swear Docker and CI/CD pipelines are like catnip to the let-perfect-get-in-the-way-of-good crowd.
That's interesting. Any chance your ISP could have been QoS'ing streaming video? Although Singapore would be about the one place where a VPN concentrator would help; it is pretty much the big fiber hub in that region for east, west, and north connectivity.
I've only ever used Oracle Cloud in an enterprise environment, so I don't know what features you have available, and I'm much more familiar with AWS. But you should be able to create a proxy endpoint in your present region and traverse the cloud provider's internal network. That would likely improve your streaming. You could also create a VPN endpoint in your current region and terminate your traffic inside your cloud provider's network, but that would add protocol overhead.
I would use a tool like iperf to measure your packet loss, because being further from your server will increase latency but shouldn't impact streaming unless you also have packet loss.
I need to start using old batteries in my bathroom scale.