You’re absolutely correct. It would need to be done manually and I understand people not wanting to put that effort in, I just feel bad about the information being lost.
Indeed. I at least think they should repost the helpful information here to Lemmy so that users still have access to it, I understand driving traffic away from Reddit but we should keep that useful information open to the community.
Ah yeah, that’s the type of thing where you only find it from luck.
This is why a paper trail is so important. When shit hits the fan they will always try to blame you, so you need written or audio proof that they issued the order.
To be fair, in most Capitalist nations, literally any decision made will favor the rich because the system is automatically geared that way. I don’t think the solution is trying to come up with more jobs or prevent new technology from emerging in order to preserve existing jobs, but rather to retool our social structure so that people are able to survive while working less.
I think that copyright laws are fine in a vacuum, but that if nothing else we should review the amount of time before a copyright enters the public domain. Disney lobbied to have it set to something awful like 100 years, and I think it should almost certainly be shorter than that.
If the models were trained on pirated material, the companies here have stupidly opened themselves to legal liability and will likely lose money over this, though I think they’re more likely to settle out of court than lose at trial. As for AI plagiarism in general, I think that could be alleviated if an AI had a way to cite its sources, i.e. point back to where in its training data it obtained information. If AI cited its sources and did not copy them word for word, then I think it would fall under fair use. If someone then stripped the sources out and paraded the work as their own, that would be plagiarism again, with that user plagiarizing both the AI and the AI’s sources.
I think the key there is that ChatGPT isn’t able to run its own code, so all it can do is generate code which “looks” right, which in practice is close to functional but not quite. For the code it writes to work reliably, I think it would need a built-in interpreter/compiler to actually run the code, iterating with small modifications until the code runs, and then return the final result to the user.
Yeah, I largely agree. I know there are people that are concerned about Mastodon’s growth, but honestly the biggest thing preventing Mastodon from growing is that normies find federation to be confusing and stupid, and small Mastodon instances have bad discoverability.
I’m not really heartbroken for Threads, just identifying market trends. Right-wingers aren’t bad people inherently, but right-wing policies and political organizations are absolutely far worse than their centrist and left-wing equivalents.
Absolutely, though I’m a believer that what they are is in part defined by their actions. Any label applied to people is a mixture of identity and actions, and only the actions impact other people; I personally believe that someone’s moral worth is determined by the effects of their actions on others.
Twitter and Truth Social have already captured the right-wing market; kowtowing to their demands is not going to earn Threads new users and will discourage moderates from signing up.
I think a lot of us have been there before, it’s part of the learning process. Jimmy will find his way.