

Of course compressing isn’t a good solution for this stuff. The point of the comment was to say how unremarkable the original claim was.
It all depends on the data's entropy. Formats like JSON compress very well anyway. If the data is also very repetitive, then 2000x is very possible.
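To put a rough number on it — here's a minimal sketch, with made-up records, of how hard gzip squeezes repetitive JSON. The data and ratio here are illustrative, not from the original article:

```python
import gzip
import json

# Hypothetical example: 10,000 near-identical records, the kind of
# repetitive JSON where extreme compression ratios become plausible.
records = [{"id": i, "status": "ok", "region": "eu-west-1"} for i in range(10_000)]
raw = json.dumps(records).encode()

compressed = gzip.compress(raw)
ratio = len(raw) / len(compressed)
print(f"{len(raw)} -> {len(compressed)} bytes ({ratio:.0f}x smaller)")
```

The more repetitive the records, the closer you get to extreme ratios; truly random data barely compresses at all.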
Rust was the important factor in this result. That’s why it’s in the headline. It wasn’t the hugely inefficient way of storing the data initially. Not at all.
FFS, you could just have run gzip on it probably.
as it’s spelled: im gur.
“I’m gur”?
Tony… Is that you?
Well the alternative was too heinous to consider.
Yes. Yes it is.
If they brought out new features and charged for those I think most would understand. However, since the V2 they've basically done nothing in R&D, and that was 6-7 years ago.
I’d rather they found a business model that made them stable, rather than exploiting their current customers. Fact is, if they go bust then there’s a bunch of people left high and dry.
Me too. It also handled some situations, like divergent lines in the same branch or obsolete changes, much better.
You can do exactly as you say, and you’re right - it makes code easier to reason about. However, it all comes down to efficiency. Copying a large data structure to modify one element in it is slow. So we deal with the ick of mutable data to preserve performance.
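A quick sketch of the trade-off, using a plain Python list as a stand-in for any large structure — the sizes are arbitrary, just big enough to make the point:

```python
import time

data = list(range(1_000_000))

# Immutable style: build a fresh list with one element changed -- O(n),
# every update pays for copying the whole structure.
start = time.perf_counter()
updated = data[:500_000] + [0] + data[500_001:]
copy_time = time.perf_counter() - start

# Mutable style: change the element in place -- O(1).
start = time.perf_counter()
data[500_000] = 0
mutate_time = time.perf_counter() - start

print(f"copy: {copy_time:.6f}s, in-place: {mutate_time:.6f}s")
```

Persistent data structures (as in Clojure or `pyrsistent`) soften this by sharing unchanged parts, but the baseline tension is exactly this.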
So much simpler than gitlab. An executable and a single config file. That’s all there is if you use sqlite as the database.
Gitlab was a farmyard of different things to worry about.
It’s a text editor you customise by programming it. Why do you think that’s appealing?
That’s obviously a cello.
“Receiving stolen goods” is prosecutable.
It’s a lesser crime than the original theft though.
ID seems to be quite ingrained into Spanish life already. It really surprised me when I needed an ID number to buy a metro ticket.
Python doesn’t have casts and is strongly typed.
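A small demonstration of what that means in practice — `int("3")` isn't a cast, it constructs a brand-new object, and mixing types without an explicit conversion fails loudly:

```python
# Strong typing: values never change type implicitly.
s = "3"
n = int(s)  # not a cast: builds a new int, s stays a str
print(type(s).__name__, type(n).__name__)

# No implicit coercion between str and int:
try:
    "1" + 1
except TypeError as e:
    print("TypeError:", e)
```

Contrast with a weakly typed language like JavaScript, where `"1" + 1` silently gives `"11"`.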
They are under development, and there is a small market for development machines. It also allows the manufacturer to understand the issues they’ll get once the high-performance processors are here.
Nobody has really made high-performance implementations yet. They’re all IoT-level or low-end mobile devices.
Tenstorrent are probably the closest to having something serious.
He doesn’t actually say that 60k overheated his drive. He says he did a run on 60k, and that he couldn’t do the whole database due to overheating. Two unrelated statements, except that 60k is the lower bound for what he could process.
Doesn’t mean he knows what he’s doing though, as pretty huge datasets are processable on quite modest hardware if you do it right.
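For instance, "doing it right" often just means streaming instead of loading everything at once. A hypothetical sketch (the column names and data are made up) of aggregating a file far larger than RAM while holding only one row in memory at a time:

```python
import csv
import io

def total_by_key(lines, key_col, value_col):
    # Stream row by row: memory use is one row plus the running totals,
    # regardless of how big the input file is.
    totals = {}
    for row in csv.DictReader(lines):
        totals[row[key_col]] = totals.get(row[key_col], 0) + float(row[value_col])
    return totals

# Stand-in for a multi-gigabyte file opened with open(path).
sample = io.StringIO("region,sales\neu,10\nus,5\neu,2\n")
print(total_by_key(sample, "region", "sales"))
```

Swap `io.StringIO` for `open(path)` and the same code chews through gigabytes on a laptop without breaking a sweat.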