Daemon Silverstein

I’m just a spectre out of the nothingness, surviving inside a biological system.

  • 0 Posts
  • 14 Comments
Joined 1 month ago
Cake day: August 17th, 2024

  • I’m a dev with 10+ (cumulative) years of experience. While I’ve never used GitHub Copilot specifically, I’ve been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analyzing my human-written poetry in order to get insights for my own writing. I’ve done the same for code I wrote, asking LLMs to “analyze and comment” it, for the sake of insights. There were moments when I asked for code snippets, and almost every snippet generated either worked as-is or needed only a few fixes.

    They’ve been getting good at this, but not good enough to really replace my own coding and analysis. Instead, they’re getting noticeably better at poetry (maybe because their training data is mostly books and poetry) and sentiment analysis. I use many LLMs simultaneously in order to compare them:

    • The free version of Google Gemini is becoming lazy (short answers, superficial analysis, problems keeping context, drafts that aren’t as diverse as they were before, among other problems)
    • The free version of ChatGPT is a bit better (it can keep context and give detailed answers) but not enough (it does hallucinate sometimes: good for surrealist poetry, but bad for code and other technical matters where precision and coherence matter)
    • Claude is laughably hypersensitive and self-censoring about certain words regardless of context (got code or text that remotely mentions the word “explode”, as in PHP’s explode function? “Sorry, can’t comment on texts alluding to dangerous practices such as involving explosives”. I mean, WHAT?!?!)
    • Bing Copilot has web search, but it has a context limit of 5 messages, so it’s only usable for quick and short things.
    • The same goes for Perplexity
    • Mixtral is very hallucination-prone (i.e. it doesn’t cohere properly)
    • Llama has been the best of all (via DDG’s “AI Chat” feature), although it sometimes glitches (i.e. it starts outputting repeated strings ad æternum)

    As you can see, I’ve tried almost all of them. In summary, while it’s good to have such tools, they should never replace human intelligence… Or, at least, they shouldn’t…

    The problem is that dev companies generally focus on “efficiency” over “efficacy”, demanding the shortest deadlines while expecting near perfection. Very understandable demands, but humans are humans, not robots. We need time to deliver, and we need to carefully walk through all the steps needed to finally deploy something (especially big things), or it becomes XGH programming (Extreme Go Horse). And machines can’t do that well yet. For now, LLMs for development are XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).


  • While it offers a competing alternative to Google Translate, it still lacks some features; as @[email protected] mentioned, many languages are missing. In my case, I sometimes experiment with terms across various languages: sometimes Hindi (“O param Devi Kaali”), sometimes Latin (“Vita mortem manducat, Mors manducat vitam” is a Latin phrase I wrote myself, following Latin grammar rules), sometimes Hebrew (especially for gematria calculation, using the numerical values of the Hebrew letters (Aleph is 1, Bet is 2, Gimmel is 3, and so on) after translating/transliterating a word or name such as “לילית”). For these kinds of experimentation, DeepL can’t really be of use, so I need either Google Translate or Bing Translate (both support the aforementioned languages).
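    For what it’s worth, the gematria part is simple enough to script yourself. Here’s a minimal sketch (no official library, just the traditional Aleph=1 … Tav=400 values, with final-form letters mapped to their base letters):

    ```typescript
    // Traditional gematria values; final-form letters share their base letter's value.
    const gematriaValues: Record<string, number> = {
      "א": 1, "ב": 2, "ג": 3, "ד": 4, "ה": 5, "ו": 6, "ז": 7, "ח": 8, "ט": 9,
      "י": 10, "כ": 20, "ך": 20, "ל": 30, "מ": 40, "ם": 40, "נ": 50, "ן": 50,
      "ס": 60, "ע": 70, "פ": 80, "ף": 80, "צ": 90, "ץ": 90,
      "ק": 100, "ר": 200, "ש": 300, "ת": 400,
    };

    // Sum the value of every Hebrew letter in a word, ignoring anything else
    // (vowel points, spaces, punctuation).
    function gematria(word: string): number {
      return [...word].reduce((sum, ch) => sum + (gematriaValues[ch] ?? 0), 0);
    }

    // "לילית": Lamed (30) + Yod (10) + Lamed (30) + Yod (10) + Tav (400) = 480
    console.log(gematria("לילית")); // 480
    ```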









  • They can’t do that without the browser granting them permission. While they can indeed track the mouse, when they try to access mobile motion sensors (consider a CAPTCHA inside a webpage accessed through a mobile browser such as Firefox mobile or Chrome for Android), they need to use an HTML5 API that, in turn, asks the user for permission, something like “This site wants to use motion sensor data. Allow or block?”
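    Roughly, the permission dance looks like the sketch below (assuming the iOS-Safari-style DeviceMotionEvent.requestPermission() gate; Android browsers may surface the prompt differently):

    ```typescript
    // Sketch: a page cannot silently read motion sensors; it has to ask first.
    async function startMotionTracking(): Promise<void> {
      // requestPermission() only exists on some browsers (notably iOS Safari 13+),
      // so feature-detect it before calling.
      const MotionEvent = DeviceMotionEvent as unknown as {
        requestPermission?: () => Promise<"granted" | "denied">;
      };

      if (typeof MotionEvent.requestPermission === "function") {
        // This is the point where the "Allow or block?" prompt shows up.
        const result = await MotionEvent.requestPermission();
        if (result !== "granted") {
          console.log("Motion-sensor access denied; nothing to track.");
          return;
        }
      }

      // Only once permission is granted (or on browsers that don't gate it)
      // do devicemotion events carry real accelerometer data.
      window.addEventListener("devicemotion", (event) => {
        console.log("acceleration:", event.acceleration);
      });
    }
    ```

    On top of that, requestPermission() typically has to be called from a user gesture (e.g. a tap), which is exactly why a site can’t just hoover up sensor data in the background.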


  • Nowadays there are some really annoying CAPTCHAs out there, such as:

    • “Click on the figures that are facing up/down”, with various rotated bears
    • “Rotate the figure until it matches the given orientation”, with a finger pointing in some random direction, and rotation buttons that don’t work the way you would mathematically expect them to
    • “Select all the images with a bicycle until there are none left”, where the images take centuries to fade away after you click them
    • “Select all the squares containing a bus”, where there are squares with just the very corner of the bus, making you wonder whether they count as “squares containing a bus”
    • “Fit the puzzle piece”, although this is the least annoying one

    In summary, CAPTCHAs are seemingly becoming less of a “prove you’re not a robot” and more of a forced IQ test. I can see the day when CAPTCHAs will ask you to write down the Laplace transform of the solution f(x) to the differential equation governing the motion of a mass subject to air resistance and aerodynamics, or a detailed solution to the P versus NP problem.



  • I’m not sure about LinkedIn’s situation in other countries but, in the country I live in, except for the jobs that have “Easy Apply”, most job postings on LinkedIn redirect users to third-party platforms (such as “Gupy”, an HR platform used by many Brazilian corporations). For every single job one wishes to apply to, Gupy requires filling in all of one’s information again and again. It’s infuriating for those who are seeking a job…

    “Easy Apply” doesn’t really help either. Several postings aren’t real jobs at all, but are designed to fill HR “talent pools”. Not to mention the AI-based filtering that HR departments use to select candidates based on “secret keywords”, without ever interviewing the candidate. “Human” resources, as they call themselves. LinkedIn is just an echo chamber for such HR departments.

    Lastly, I also deactivated my LinkedIn profile (“Hibernated” it), yet I keep receiving those emails (“Your account is still hibernating”)…


  • There are officially 193 countries, according to the UN, each with its own laws, and some of them (the European ones) also share common EU law. How is it humanly possible for a site to keep track of every single law of every single country? Laws are not a worldwide consensus. Also, who and what exactly defines what “misinformation” is? For example: belief in the supernatural (such as the daemonic forces of the Goetia and Luciferianism) is not a scientifically provable thing, so if we consider “non-misinformation” to be only information that can be strictly proven, then should absolutely every piece of social-network content regarding someone’s beliefs be considered “misinformation”?