

It doesn’t work with some projects, unfortunately.


Ugh, yes, I hate the import system too. I have to look it up every time and still don’t understand it, and it’s a hair away from messing up existing projects, to the point where sometimes it actually does.
I want to love uv, but:
It breaks some random compiled C packages. I ran into this the other day, and the associated issue on the package was basically “shrug, we see it’s breaking, this dev is doing some weird shit.”
I’d prefer to use the optimized/patched build of Python CachyOS provides (and the optimized Python compiled system packages), though this is largely OCD.
It’s not optimal for some PyTorch stuff, which is its own little ecosystem nightmare


It’s so ridiculous that many projects don’t even support pip+venv (much less system Python packages, shivers). They literally check if that’s what you’re trying and pre-emptively fail.
Some projects become impossible to run in any combination because some dependency (looking at you, sentencepiece) no longer works outside a narrow set of circumstances, unless you hand-build some obscure GitHub PR, disable all the dependency checks, and cross your fingers hoping the wheel builds.
And I’m sorry, but I don’t have 200GB of space, tons of spare RAM, and an intricate docker passthrough setup for every random python project. I just have like four different 3rd party managers (conda/mamba, uv, poetry… what’s the other one? Ugh)
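For reference, the “are you trying plain pip+venv?” check those projects do tends to look something like this (a hedged sketch; the function names are mine, not from any particular project):

```python
import os
import sys

def in_virtualenv() -> bool:
    # A standard venv sets sys.prefix to the env's path, while
    # sys.base_prefix keeps pointing at the base interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def in_conda_env() -> bool:
    # conda/mamba export CONDA_PREFIX inside an activated environment.
    return bool(os.environ.get("CONDA_PREFIX"))

# A project that insists on conda can then pre-emptively fail like:
# if not in_conda_env():
#     sys.exit("Please install via conda, not pip.")
```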


Yeah.
I mean, npm deserves some healthy fear/skepticism, but everything is relative.


It’s kind of crazy how problematic pip is, though. There are enormous ecosystems, like conda, poetry, and arguably Docker, all built around “pip not working right.”
I see so many people try to install vllm or something, with like a 95% crash-and-burn rate if they aren’t already proficient with Docker, complete with the spare disk space to basically ship a whole other machine.
Meanwhile, massively complex Rust or Go or whatever packages… just work. With the native tooling, for me.
To be clear, I like Python, and I believe many issues can be smoothed with time (like improving its JIT and maybe encouraging more typing in codebases). But pip and its ecosystem are forever cursed.


It’s OpenCL, so it should even run on integrated graphics.
Yeah, it’s great! Extremely fast and marginally smaller than ffmpeg, last I checked, though I have not messed with those newer ffmpeg options. And cuetools has some other nice utilities anyway.


Heh, joke’s on you; you’re using the wrong library for obsessive FLAC compression anyway:
0.00017x sounds like a bug though. Maybe it balloons RAM usage enough to trigger swapping?
I’ll make my own LLVM, with blackjack and hookers.
Yeah, see, that makes sense. A random app and an optional account number are not reliable notification systems. They can’t just assume everyone will opt into those.


They already have a huge stake, don’t they?
They’re too late anyway, OpenAI is already enshittifying themselves.
Mobile 5090 would be an underclocked, binned desktop 5080, AFAIK:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_50_series
In KCD2 (a fantastic CryEngine game, and a great benchmark IMO) at QHD, the APU is a hair less than half as fast: 39 FPS vs 84 FPS for the mobile 5090:
https://www.notebookcheck.net/Nvidia-GeForce-RTX-5090-Laptop-Benchmarks-and-Specs.934947.0.html
https://www.notebookcheck.net/AMD-Radeon-8060S-Benchmarks-and-Specs.942049.0.html
Those pages include synthetic benchmarks between the two as well.
But these are both presumably running at high TDP (150W for the 5090). Also, the mobile 5090 is catastrophically overpriced and inevitably tied to a weaker CPU, whereas the APU is a monster of a CPU. So make of that what you will.
through phone if you have a phone on your water account, through a system no one knew existed
I interpreted this as one system. So it’s:
A water-utility website you’d have to happen to stumble upon
An obscure opt-in phone system
If that’s the case, the complaint is reasonable, as the water service is basically assuming Facebook (and word of mouth) are the only active notifications folks need.
But yeah, if OP opted out of SMS warnings or something, that’s more on them.
Oh wow, that’s awesome! I didn’t know folks ran TDP tests like this, just that my old 3090 seems to have a minimum sweet spot around that same ~200W based on my own testing, but I figured the 4000 or 5000 series might go lower. Apparently not, at least for the big die.
I also figured the 395 would draw more than 55W! That’s also awesome! I suspect newer, smaller GPUs like the 9000 or 5000 series still make the value proposition questionable, but you still make an excellent point.
And for reference, I just checked, and my dGPU hovers around 30W idle with no display connected.
Eh, but you’d be way better off with an X3D CPU in that scenario, which is significantly faster in games, about as fast outside them (unless you’re DRAM bandwidth limited), and more power efficient (because X3D chips clock relatively low).
You’re right about the 395 being a fine HTPC machine by itself.
But I’m also saying even an older 7900, 4090 or whatever would be way lower power at the same performance as the 395’s IGP, and whisper quiet in comparison. Even if cost is no object. And if that’s the case, why keep a big IGP at all? It just doesn’t make sense to pair them without some weirdly specific use case that can use both at once, or that a discrete GPU literally can’t do because it doesn’t have enough VRAM like the 395 does.
Eh, actually that’s not what I had in mind:
Discrete desktop graphics idle hot. I think my 3090 uses at least 40W doing literally nothing.
It’s always better to run big dies slower than small dies at high clockspeeds. In other words, if you underclocked a big desktop GPU to half its peak clockspeed, it would use less than a fourth of the energy and run basically inaudibly… and still be faster than the iGPU. So why keep a big iGPU around?
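Back-of-envelope for that underclocking claim (illustrative numbers, not measurements): dynamic power scales roughly with clock × voltage², and a big underclock also allows a big undervolt, so power falls much faster than clockspeed.

```python
def relative_dynamic_power(clock_ratio: float, voltage_ratio: float) -> float:
    # Dynamic power ~ f * V^2, with capacitance and workload held constant.
    return clock_ratio * voltage_ratio ** 2

# Half the clock at, say, 70% of peak voltage (illustrative numbers):
print(relative_dynamic_power(0.5, 0.7))  # ~0.245, i.e. under a quarter of peak power
```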
My use case was multitasking and compute stuff. EG game/use the discrete GPU while your IGP churns away running something. Or combine them in some workloads.
Even the 395 by itself doesn’t make a ton of sense for an HTPC, because AMD slaps so much CPU on it, which makes it way too expensive and power thirsty. A single CCD (8 cores instead of 16) plus the full integrated GPU would be perfect and lower power, but AMD inexplicably does not offer that.
Also, I’ll add that my 3090 is basically inaudible next to a TV… key is to cap its clocks, and the fans barely even spin up.
Rumor is its successor is 384-bit, and after that their designs are even more modular:
https://www.techpowerup.com/340372/amds-next-gen-udna-four-die-sizes-one-potential-96-cu-flagship
Hybrid inference prompt processing actually is pretty sensitive to PCIe bandwidth, unfortunately, but again I don’t think many people intend on hanging an AMD GPU off these Strix Halo boards, lol.
It’s PCIe 4.0 :(
but these laptop chips are pretty constrained lanes wise
Indeed. I read Strix Halo only has 16 PCIe 4.0 lanes in addition to its USB4, which is reasonable given this isn’t supposed to be paired with discrete graphics. But I’d happily trade an NVMe slot (still leaving one) for an x8 slot.
One of the links to a CCD could theoretically be wired to a GPU, right? Kinda like how EPYC can switch its IO between infinity fabric for 2P servers, and extra PCIe in 1P configurations. But I doubt we’ll ever see such a product.
Nah, unfortunately it is only PCIe 4.0 x4. That’s a bit slim for a dGPU, especially in the future :(
If you can swing $2K, get one of the new mini PCs with an AMD 395 and 64GB+ RAM (ideally 128GB).
They’re tiny, low power, and the absolute best way to run the new MoEs like Qwen3 or GLM Air for coding. TBH they would blow a 5060 Ti out of the water, as having a ~100GB VRAM pool is a total game changer.
I would kill for one on an ITX mobo with an x8 slot.
And the setting, while they’re at it!
Imagine what they could do with Changeling crewmates, mixed with a Picard style Android and maybe the Gamma Quadrant. They’d be perfect vehicles to explore more modern issues (trans stuff/AI, for instance. Maybe neurodivergence?) in a Star Trek coat of paint.