…well of course it’s beneficial to their product, and a detriment against everyone else, which is why I called it a conflict of interest.
Either that, or you used the wrong word. Care to elaborate?
I have peepee doodoo caca brains.
The logic is a little problematic, because it also means you mislead ordinary users, and honestly the infosec industry has a conflict of interest as well.
Sun Microsystems was once the great hope of the computing world, and technically the JVM was first to normalise the use of VMs, albeit from a containerised perspective. It was Docker before Docker, in some sense.
This, coupled with Solaris and the SPARC systems that were Java-native (whatever that means), enabled this type of containerisation at the hardware level, which again was a huge thing.
But Sun took a turn for the worse once the JVM hit browsers and server stacks. That’s when their SaaS model was envisioned, which was the precursor to the acquisition by Oracle.
So it started nicely, but hit enshittification velocity somewhere in the early 2000s.
haha java is terrible, mostly because of who owns it.
Think of those poor data brokers… I mean “poor” in the figurative sense…
“…daughter dearest.”
“Wow, this guy programs.”
The funny thing is that image-generation prompts also accept unwanted (or “negative”) terms, like “weird hands”. You could just as easily input “disadvantageous framing for police officers”.
This is why these parameters should be public knowledge, so no exceptions are made that clear cops of wrongdoing when they’ve committed a crime.
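To make the point concrete, here is a minimal sketch of how a negative prompt travels alongside the main prompt in a local image-generation request. The parameter names follow the Hugging Face diffusers convention (`prompt`, `negative_prompt`); the helper function and the example prompt strings are invented for illustration, not taken from any real deployment.

```python
# Sketch: bundling the positive and negative prompts into one request
# object, so both halves could be logged and publicly audited.

def build_generation_request(prompt, negative_prompt):
    """Bundle both prompts so what was suppressed is inspectable."""
    return {
        "prompt": prompt,
        # The model is steered AWAY from everything listed here.
        "negative_prompt": negative_prompt,
    }

request = build_generation_request(
    prompt="bodycam still, street scene",
    negative_prompt="weird hands, disadvantageous framing for police officers",
)

# If such requests were public record, anyone could check what was
# filtered out of the output:
print(request["negative_prompt"])
```

The point of the sketch: the negative prompt is just another input field, so publishing it is technically trivial; keeping it secret is a policy choice, not a limitation of the tooling.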
I promote running neural networks, LLMs, SLMs and Stable Diffusion locally. Why?
The way I see it, there’s a point where various forms of AI technology become so effective and so powerful that they pose a problem for society. People are afraid AI will take their jobs, and that’s a valid concern.
Why then do I promote the use of local AI? Because I think that human+AI will be what prevents centralisation of data, the centralisation of knowledge, the centralisation of power that big tech firms, venture capitalists and authoritarians would love to have.
It’s an uphill battle though, because much like the other boardroom buzzwords like “cloud”, crypto, blockchain, etc., AI is something that makes billionaires’ pants wet and something that people despise, which is fully understandable.
But I also fear that attitude is self-defeating. If we allow AI technology to be centralised instead of learning to liberate ourselves from the central tech cabals that wish to control it, then we set ourselves up for new forms of authoritarianism we never knew before.
If you see the cyberdystopia that is China, or the tech oligarchy of the US, and you are left-leaning, socialist, anarchist, etc., then it should be your prerogative to take that power away from central authorities.
Please reply with actual arguments and not cathartic putdowns, because I do want to see another way, but just being a troll on Lemmy will not sway me.
Again, I am open to reproach, just be objective.
The company had avoided certain destruction after firing the previous CEO and putting a new one in its place. The new CEO had managed to bring a newfound calm to the company and its ranks, and brought an air of meditative discipline to boardroom meetings.
Some said it was crazy, but making the LectoFan EVO the new CEO was the best decision the company board had ever made.
Telegram… a wretched hive of scum and villainy.
I’m betting countries that do qualify will see a massive uptick in phones sold, but also a massive uptick in phones being smuggled out of the country.
No, I’m sorry. Zionists gotta project everywhere, now that people in the mainstream are aware of what’s truly happening.