nymnympseudonym
“If man chooses oblivion, he can go right on leaving his fate to his political leaders. If he chooses Utopia, he must initiate an enormous education program - immediately, if not sooner.”
- R. Buckminster Fuller
- 1 Post
- 6 Comments
nymnympseudonym@piefed.social to Technology@beehaw.org • Anonymity is dead and we’re all content now • English · 2 · 1 month ago
I was being a bit loose. You can video-record people in public – just not audio. It’s capturing audio on your video recording device that makes it illegal.
There are 13 US states like this
nymnympseudonym@piefed.social to Technology@beehaw.org • Anonymity is dead and we’re all content now • English · 18 · 1 month ago
Where I live, recording someone without their consent – even in public – is a crime.
I have personally argued both sides on this. Over time I have come to favor it.
EDIT: some background
nymnympseudonym@piefed.social to Selfhosted@lemmy.world • Been seeing a lot of posts about replacing Spotify and such, so I wrote up a guide on how I did just that • English · 10 · 1 month ago
TBH I chose Funkwhale for my solution because it looked easy and out of the box: I just add a single Docker container and a subdomain to my existing site.
It wound up being more or less what you describe.
I may well follow OP’s guide and nuke my Funkwhale, despite the work I put into it and the fact that it basically works for its intended purpose.
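For anyone curious what that “single Docker and subdomain” setup looks like, here is a rough sketch. The image name, port, volume path, and hostname are all assumptions for illustration – check the current Funkwhale documentation before using any of this.

```shell
# Sketch only: run a Funkwhale all-in-one container behind an existing
# reverse proxy. Image tag, port, and paths are assumptions -- verify
# against the current Funkwhale docs.
docker run -d \
  --name funkwhale \
  -e FUNKWHALE_HOSTNAME=music.example.com \
  -p 127.0.0.1:5000:80 \
  -v /srv/funkwhale/data:/data \
  funkwhale/all-in-one:latest

# Then point a subdomain (e.g. music.example.com) at 127.0.0.1:5000
# in the existing reverse proxy config and enable TLS there.
```

That is the appeal: one container, one subdomain entry, and the existing web server handles everything else.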
nymnympseudonym@piefed.social to Selfhosted@lemmy.world • Been seeing a lot of posts about replacing Spotify and such, so I wrote up a guide on how I did just that • English · 7 · 1 month ago
Question: did you consider Funkwhale, and if so, why did you choose this other stack instead?
EDIT: fix link sry
nymnympseudonym@piefed.social to blueteamsec@infosec.pub • AI-powered ransomware named PromptLock - uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the flyEnglish · 11 · 2 months ago
Perplexity prompt: “quotes from openai about safety testing for gpt-oss”
Reply below:
Safety Testing Measures
- OpenAI’s safety process for open-weight release included technical audits (robustness, bias detection), structured adversarial testing, and evaluation of prompt-injection and jailbreak vulnerabilities.
- The organization also employed both automated and human review to test for outputs that could cause harm, leveraging cross-disciplinary teams including outside researchers, ethicists, and cybersecurity experts.
- Feedback from these rounds led to incremental model improvements before the open weights went public.
Transparency and External Collaboration
- OpenAI has collaborated with third-party security and ethics researchers to validate its safety protocols and stress-test new models prior to release.
- The company acknowledges that “Releasing open weights is a significant responsibility due to risks of misuse. We want to be transparent about our process and invite the AI community to help report and address issues that may arise post-release.”
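For context on how trivially a local model can be driven this way: the headline says PromptLock talks to gpt-oss:20b through the Ollama API. A minimal, benign sketch of that call pattern against Ollama’s documented `/api/generate` endpoint (the model name comes from the headline; everything else is Ollama’s standard local interface):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: query a locally pulled gpt-oss:20b model.
req = build_request("gpt-oss:20b", "Summarize your safety guidelines.")
# With an Ollama server running, the generated text is in the
# "response" field of the returned JSON:
#   text = json.loads(urllib.request.urlopen(req).read())["response"]
```

No cloud account, no API key, no audit trail beyond the local box – which is exactly why local open-weight models change the threat calculus here.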
And it’s more of a video format than an image format; lots of juicy attack surface.
https://nvd.nist.gov/vuln/detail/CVE-2023-4863