• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • I already planned on my next computer running Linux Mint, but it’s looking more and more appealing as time goes on.

    I was playing Elden Ring when it started stuttering; it turned out Windows Defender was constantly reading the disk (I still have a hard drive). I finally turned off its maximum-priority (and seemingly random) scans in Task Scheduler, only for the stuttering to come back. This time Windows Compatibility Telemetry was eating 50% of the disk, until I finally found a way to turn that off too (rough sketch at the end of this comment).

    It’d be so nice to have an OS that doesn’t run random unnecessary things without your permission.
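
    For anyone hitting the same thing, this is the gist of the workaround. The task paths below are what I’d expect on a typical Windows 10/11 install, so treat them as assumptions and confirm them in Task Scheduler first; it also needs an elevated prompt.

    ```python
    # Rough sketch, not a polished tool: disable the two scheduled tasks behind
    # the disk thrashing. Task paths are assumptions based on a typical Windows
    # 10/11 install; confirm them in Task Scheduler and run this elevated.
    import subprocess

    TASKS = [
        # Windows Defender's scheduled scans
        r"\Microsoft\Windows\Windows Defender\Windows Defender Scheduled Scan",
        # Windows Compatibility Telemetry (CompatTelRunner.exe) runs under this task
        r"\Microsoft\Windows\Application Experience\Microsoft Compatibility Appraiser",
    ]

    for task in TASKS:
        # schtasks /Change /TN "<task>" /Disable flips the task to Disabled
        subprocess.run(["schtasks", "/Change", "/TN", task, "/Disable"], check=True)
    ```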




  • Unfortunately the spam arms race has destroyed any chance of search going back to the good ole days. SEO and AI content farms mean we’ll need a whole new system to categorize webpages, as well as to filter out human-sounding but low-effort spam.

    Point being, it’s no longer enough to find a page that’s relevant to the topic; it has to be relevant and actually deliver information, and right now the only feasible tech that can tell those apart is an LLM.
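
    To sketch what I mean (using the OpenAI Python client purely as a stand-in; the model name and prompt are made up for illustration, not a real ranking pipeline):

    ```python
    # Sketch: ask an LLM whether a page actually delivers information or is
    # SEO/content-farm filler. Model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def looks_informative(page_text: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer INFORMATIVE or FILLER: does this page actually "
                            "deliver substantive information, or is it padded "
                            "SEO/content-farm text?"},
                {"role": "user", "content": page_text[:8000]},  # keep the prompt small
            ],
        )
        answer = resp.choices[0].message.content or ""
        return answer.strip().upper().startswith("INFORMATIVE")
    ```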



  • I’m curious, are there actually that many 42s in the system? (More than 69 sounds unlikely; a quick way to check is sketched at the end of this comment.)

    What if the LLM is getting tripped up because 42 is always referred to as the answer to “the Ultimate Question of Life, the Universe, and Everything”?

    So when you ask it something like “give a number between 1 and 100”, it answers 42 because, according to its training data, that’s the answer to “Everything”.

    Something similar happened to Gemini. Google discouraged Gemini from giving unsafe advice because that’s unethical. Then Gemini refused to answer questions about C++, because C++ is often described as “unsafe” (referring to memory management). It conflated that with “unsafe” in the everyday sense, and concluded that answering would be unethical. It’s like those jailbreak tricks, but coming from its own training set.
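
    For the 42 question, a quick and dirty check is just to ask the same model the same question a couple hundred times and tally the answers; again the OpenAI client and model name below are stand-ins, and the sample count is arbitrary.

    ```python
    # Sketch: ask the model for a number 1-100 repeatedly and see whether
    # 42 (or 69) is over-represented compared to a uniform pick.
    import re
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    counts = Counter()

    for _ in range(200):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=1.0,  # let it sample rather than always pick the mode
            messages=[{"role": "user",
                       "content": "Give me a number between 1 and 100. Reply with the number only."}],
        )
        text = resp.choices[0].message.content or ""
        match = re.search(r"\d+", text)
        if match:
            counts[int(match.group())] += 1

    print(counts.most_common(10))  # a uniform pick would put ~2 in each bucket
    ```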