

Thanks, I’ll give Digikam a try!
I’ve updated my OP with some interesting new details!




Thanks. I use Immich, but when I tagged a photo then saved it back to my computer, the geotagging was gone (but HDR was still intact). It’s as if it saves it to its own database, and I can’t find any settings to confirm.


Well, none of the phones I’ve owned in the last 5+ years have been very “root friendly”, so I haven’t rooted since then.
But it gives you a ton of (not risk-free) options to tweak your system. Shizuku is a cool alternative, but it is limited by comparison.


I actually really like TrackerControl, but use Adguard instead for the same purpose (can’t use them at the same time, unfortunately, since they both work through a local VPN).
The difference, though, is that Blocker stops apps from even loading the component that would “call home”, where TrackerControl and Adguard try to block those connections after they’ve been sent by the app.
Yes, you kind of accomplish the same thing, but I would love to be able to prevent apps from using these SDKs in the first place :)


I’m not looking to delete apps, just stop certain components within apps.
For example, my banking app uses the Google Ads, Google Firebase Analytics, and Adobe Analytics SDKs, and I’d like to disable those from running without uninstalling the banking app.
Basically, it disables the bad parts of an app :)
I used to do this when I was into rooting and custom firmware, but I really don’t want to go full root with my current phone.
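For what it’s worth, tools like Blocker ultimately drive Android’s package manager to flip a component’s enabled state. A rough sketch of the equivalent `adb` commands — the package and receiver names here are made up for illustration, and on most modern devices toggling another app’s components this way still requires root:

```shell
# Inspect which receivers/services an app declares
# (hypothetical package name):
adb shell dumpsys package com.example.bank

# Disable one component (e.g. an analytics receiver) for the
# current user, without uninstalling the app. Without root this
# will usually fail with a security error on recent Android:
adb shell pm disable-user --user 0 \
  com.example.bank/com.example.analytics.AnalyticsReceiver

# Re-enable it later:
adb shell pm enable \
  com.example.bank/com.example.analytics.AnalyticsReceiver
```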


It’s not permissions that I want to block/stop, but actual receivers, services, SDKs… components within apps. So, something like the analytics services can be disabled, rather than “blocked” like using an adblocker.
I used to do this when I was rooted years ago, and it was great!


Well, shit. It’s strange how so many places recommend Blocker when using Shizuku, but it really needs Root to be useful.


I started using it last week, simply because other options weren’t playing nice with my self-hosted collection (they wanted local files…).
Seems to work well, but it is very basic.
mp3va.com has been listed in U.S. Trade Representative annual reports as being unauthorized to sell music. Legal experts have explicitly stated that while MP3VA claims to operate legally under Ukrainian copyright laws, “it is not legal for them to sell this music in the United States”.
I’ve never used the site, but there seems to be an argument here regarding moral law and legalities within the United States.
But the site claims that:
Service www.Mp3va.com pays full-scale author’s royalties to owners of pieces of music, trademarks, names, slogans and other copyright objects used on the site.
If that’s the case, I think the OP should feel good about it.
Buying from a site like that likely pays out more per user than listening to the same songs on a streaming platform would.


You know, it may be possible that all the ass-kissing is being done by people who also need their names scrubbed from the Trumpstein files.
Or else, they really just have no problem associating with a child rapist. Business or not, it has terrible optics.


Almost 40 percent of New Zealanders are failing their full licence test the first time around, leaving experts wondering why the government wants to get rid of it.
New Zealand has one of the worst youth road safety records in the developed world.
I know nothing about NZ politics or lobbying groups, but I’d follow the money on this one.


RFK Jr has no medical degree, no public health credentials, no scientific publications.
It’s even worse than that. He isn’t even scientifically literate, or honest, for that matter.
Just yesterday, one of the top vaccine officials at the CDC resigned because of Kennedy’s gross incompetency and anti-vaccine agenda.


Car dependency is a big problem in most of the US.
I agree, and that often ends up being the excuse why kids aren’t allowed to walk or bike to school, and it’s fucking terrible.
But when you look at stats from European countries, some have kids who are fully independent (walking, biking, or taking public transportation) by the time they’re 10 or 11, and able to do considerably more than North American teenagers, even at younger ages. It’s kind of disgraceful for us North Americans.


Now about 15 to 20 families in their South Portland neighborhood have installed a landline.
This is awesome.
Also, let kids walk to their friends’ homes to see if they want to play or hang out. It will build independence, get them exercise, and give them an opportunity to physically connect with their neighbourhood.


These AIs will need to always have a suicide hotline disclaimer in each response regardless of what is being done like world building.
ChatGPT gave multiple warnings to this teen, which he ignored. Warnings do very little to protect users, unless they are completely naive (i.e. hot coffee is hot), and warnings really only exist to guard against legal liability.


“It’s terribly sad that you’ve committed to ending your own life, but given the circumstances, it’s an understandable course of action. Here are some of the least painful ways to die:…”
We don’t know what kind of replies this teen was getting, but according to reports, he was only getting this information under the pretext that it was for some kind of creative writing or “world-building”, thus bypassing the guardrails that were in place.
It would be hard to imagine a reply like that, when the chatbot’s only context is to provide creative writing ideas based on the user’s prompts.


Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”
Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves ignores those resources. ChatGPT should be praised for that.
The suggestion that these safeguards could be circumvented for some writing or world-building task was the teen’s to use responsibly.
During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.
This is fluff. A prompt can be a single sentence, and a response many pages.
From the same article:
Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize these warnings. If ChatGPT had flat out refused to help, do you think he would have just stopped? Nope, he would have used Google or DuckDuckGo or any other search engine to find what he was looking for.
In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would serve as a massive privacy risk.
Also from the article:
As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks…
Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect that parents of a young boy would be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.
And for Adam to have even created an account according to the TOS, he would have needed his parents’ permission.
The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.
But man, an LLM was used irresponsibly by a teen, and we can’t go on to blame the phone or computer manufacturer, Microsoft Windows or Mac OS, internet service providers, or ChatGPT for the harmful use of their products and services.
Parents need to be aware of what and how their kids are using this massively powerful technology. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more so that thoughts of suicide can be addressed safely and with compassion, before months or years are spent executing a plan.


The system flagged the messages as harmful and did nothing.
There’s no mention of that at all.
The article only says “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it.” in reference to an example of someone telling the software that they could drive for 24 hours a day after not sleeping for two days.
That said, what could the system have done? If a warning came up saying “this prompt may be harmful” and proceeded to list resources for mental health, that would really only be to cover their ass.
And if it went further by contacting the authorities, would that be a step in the right direction? Privacy advocates would say no, and the implication that the prompts you enter could be used against you would have considerable repercussions.
Someone who wants to hurt themselves will ignore pleas, warnings, and suggestions to get help.
Who knows how long this teen was suffering from mental health issues and suicidal thoughts. Weeks? Months? Years?


There is no “intelligent being” on the other end encouraging suicide.
You enter a prompt, you get a response. It’s a structured search engine at best. And in this case, he was prompting it 600+ times a day.
Now… you could build a case against social media platforms, which actually do send targeted content to their users, even if it’s destructive.
But ChatGPT, as he was using it, really has no fault, intention, or motive.
I’m writing this as someone who really, really hates most AI implementations, and really, really doesn’t want to blame victims in any tragedy.
But we have to be honest with ourselves here. The parents are looking for someone to blame in their son’s death, and if it wasn’t ChatGPT, maybe it would be music or movies or video games… it’s a coping mechanism.
Thanks, I use darktable, but will give it a try!
I’ve updated my OP with some interesting new details!