Yes siree, the excitement never stops!

  • 0 Posts
  • 29 Comments
Joined 8 months ago
Cake day: December 7th, 2023

  • No, I am not welcoming an artist apocalypse, that would obviously be bad.

    I am noting that I find it amusing, on a level I already acknowledged was petty and personal, that many, many mediocre artists who are absolutely awful to other people socially would have their little cults of fandom dampened by the fact that a machine can more or less do what they do, and that their cult leader status is utterly unwarranted.

    I do not have a nice and neat solution to the problem you bring up.

    I do believe you are being somewhat hyperbolic, but, so was I.

    Yep, being an artist in a capitalist hellscape with modern AI algorithms is not a very reliable way to earn a good living, and such a society is not likely to produce many artists who do not already have a lot of free time or money, or who do not get really lucky.

    At this point we are talking about completely reorganizing society in fairly large and comprehensive ways to achieve significant change on this front.

    Also, this problem applies to far, far more people than just artists. One friend of mine's dream job was running a little bakery! She had to set her prices too high, couldn't afford a good location, ran into supply chain problems and taxes, and it didn't work out.

    Maybe someone’s passion is teaching! Welp, that situation is all fucked too.

    My point here is: OK, does anyone have an actual plan that can transform the world into somewhere that allows the average person to be far more likely to live the life they want?

    Would that plan have more to do with the minutiae of regulating a specific kind of ever-advancing, ever-changing technology in some way that will be irrelevant when the next disruptive tech proliferates in a few years, or would it look more like a total overhaul of our entire society from the ground up?


  • Well, off the top of my head:

    Whole Brain Emulation, which attempts to model a human brain as physically accurately as possible inside a computer.

    Genetic iteration (not the correct term, which escapes me at the moment, but essentially evolutionary or genetic algorithms), where you set up a simulated environment for digital actors, simulate quasi-neurons and quasi-body parts dictated by quasi-DNA in a way that mimics actual biological natural selection and evolution, and then run the simulation millions of times until your digital creature develops a stable survival strategy (a rough sketch of this kind of loop follows at the end of this comment).

    Similar approaches have been used to do things like teach an AI humanoid how to develop its own winning martial arts style over many, many iterations, starting from not even being able to stand up, much less do anything to an opponent.

    Both of these approaches obviously have strengths and drawbacks, and could possibly achieve far more than they have to date, or maybe not, given known problems, but neither of them relies on a training set of essentially the entire content of the internet.
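
    To make the genetic iteration idea above concrete, here is a minimal sketch of that kind of evolutionary loop in Python. Everything in it, the bit-string genomes, the fitness function, and the parameters, is a made-up stand-in for illustration; a real neuroevolution or artificial-life setup simulates bodies and environments rather than bit strings.

        # Minimal evolutionary loop: random genomes, selection of the fitter
        # half, and mutation, repeated for many generations.
        # All names and parameters here are illustrative placeholders.
        import random

        def fitness(genome):
            # Stand-in for "how well the creature survives"; here, just count 1s.
            return sum(genome)

        def evolve(pop_size=50, genome_len=20, generations=200, mutation_rate=0.05):
            population = [[random.randint(0, 1) for _ in range(genome_len)]
                          for _ in range(pop_size)]
            for _ in range(generations):
                # Keep the fitter half of the population as parents.
                population.sort(key=fitness, reverse=True)
                parents = population[:pop_size // 2]
                # Refill the population with mutated copies of parents
                # (crossover omitted to keep the sketch short).
                children = []
                while len(parents) + len(children) < pop_size:
                    child = random.choice(parents)[:]
                    for i in range(genome_len):
                        if random.random() < mutation_rate:
                            child[i] = 1 - child[i]
                    children.append(child)
                population = parents + children
            return max(population, key=fitness)

        best = evolve()
        print(fitness(best))  # climbs toward genome_len after enough generations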


  • It meets almost none of the conceptions of intelligence at all.

    It is not capable of abstraction.

    It is capable of brute-forcing similarities between various images and text, and then presenting text and images containing elements that reasonably well match a wide array of descriptors.

    This convinces many people that it has a large knowledge set.

    But that is not abstraction.

    It is not capable of logic.

    It is only capable of, again, brute-force analyzing an astounding amount of content and then producing essentially the consensus answer to common logical problems.

    Ask it any complex logical question that has never been answered on the internet before and it will output irrelevant or inaccurate nonsense, likely just finding an answer to a similar but not identical question (a toy sketch of that 'closest match' behavior follows at the end of this comment).

    The same goes for reasoning, planning, critical thinking and problem solving.

    If you ask it to do any of these things in a highly specific situation, even giving it as much information as possible, and your situation is novel or simply too complex, it will again just spit out a nonsense answer that is inadequate and faulty, because it will just draw elements together from the closest things it has been trained on, and the result will almost certainly be contradictory or entirely dubious, since it cannot account for a particularly uncommon constraint, or for constraints that are rarely faced simultaneously.

    It is not creative, in the sense of being able to generate something novel or new.

    All it does is plagiarize elements of things that are popular and have many examples, and then attempt to mix them together, but it will never generate a new art style or a new genre of music.

    It does not even really infer things; it is not really capable of inference.

    It simply has a massive, astounding data set, and the ability to synthesize elements from this in a convincing way.

    In conclusion, you have no idea what you are talking about, and you yourself are literally one of the people who has failed the reverse Turing Test, likely because you are not very well versed in the technical details of how this stuff actually works, thus proving my point that you simply believe it is AI because of its branding, with no critical thought applied whatsoever.
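
    As a toy illustration of the 'closest match' behavior described above, here is a small sketch that answers a new question by returning the stored answer to the most similar known question. The tiny question set and the similarity measure are invented for illustration, and real language models are of course far more elaborate than a lookup table, but the failure mode shown (a similar-but-wrong answer) is the one being described.

        # Toy caricature: find the stored question most similar to the prompt
        # and return its answer, even when the prompt is a different question.
        # The mini "training set" below is invented for illustration.
        from difflib import SequenceMatcher

        known_answers = {
            "what is two plus two": "four",
            "who wrote hamlet": "shakespeare",
            "what colour is the sky": "blue",
        }

        def respond(question):
            q = question.lower().strip("?")
            # Pick the known question with the highest string similarity.
            best = max(known_answers,
                       key=lambda k: SequenceMatcher(None, k, q).ratio())
            return known_answers[best]

        # A similar but not identical question still gets the old answer.
        print(respond("What is two plus three?"))  # prints "four"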



    The flip side of this is that many artists who simply copy very popular art styles are now functionally irrelevant, as it has now been literally proven that this kind of basically-plagiarism AI is entirely capable of reproducing established styles with a high degree of fidelity.

    While many aspects of this whole situation are very bad for very many reasons, I am actually glad that many artists will be pressured to be more creative than an algorithm, though I admit this comes from the basically petty and personal standpoint of having known many, many, many mediocre artists who are treated like gods, by themselves and by their fans, because they can emulate some other established style.


  • Yep, completely agree.

    Case in point: Steam has recently clarified their policies on using AI-generated material that draws on essentially billions of copyrighted and non-copyrighted texts and images.

    To publish a game on Steam that uses AI gen content, you now have to verify that you as a developer are legally authorized to use all training material for the AI model for commercial purposes.

    This also applies to code and code snippets generated by AI tools that function similarly, such as CoPilot.

    So yeah, sorry, you either gotta use MIT-licensed open source code or write your own, and you gotta do your own art.

    I imagine this would also prevent you from using AI-generated voice lines where the model was trained on anyone who did not explicitly consent to this, but voice gen software that doesn't use the 'train the model on human speakers' approach would probably be fine, assuming you have the relevant legal rights to use such software commercially.

    Not 100% sure this is Steam's policy on voice gen stuff; they focused mainly on art, dialogue, and code in their latest policy update, but the logic seems to work out to this conclusion.


    Here is the bigger picture: the vast majority of tech-illiterate people think something is AI because, duh, it's called AI.

    It's literally just the power of branding and marketing on the minds of poorly informed humans.

    Unfortunately this is essentially a reverse Turing Test.

    The vast majority of humans do not know anything about AI, and a huge majority of them can, in some but not all cases, barely tell the difference between the output of what is basically brute-force, whole-internet plagiarism-and-synthesis software and actual human-created content.

    To me this basically just means that about 99% of the time, most humans are actually, literally NPCs, and they only do genuinely creative and unpredictable things very, very rarely.




    Detonate is actually more precise, implying an explosion whose blast front propagates at or faster than the speed of sound, often causing a visible blast wave in air that is humid and dense enough, as the pressure wave momentarily compresses the air and squeezes it into visible, semi-cloud-like formations.

    RUD is a general term that can cover any number of events that cause a craft to lose structural integrity in a small amount of time.

    For example, a craft could hit max q at a non-optimal angle, or with structural integrity flaws, and more or less violently tear itself apart.

    Or, a craft could enter the atmosphere at a non-optimal angle, or at too extreme a velocity, and be ripped apart, again, violently and quickly. This is generally referred to as 'Burning Up'.

    Or a craft could have a parachute or landing system related problem and impact the ground at such speeds it disassembles itself. Jokingly referred to as ‘lithobraking’.

    Or, a craft could have an accidental triggering of some kind of abort system that results in the craft tearing itself apart.

    Or, at any point while airborne, a problem with either the integrity of a fuel tank or with the fuel pumps and plumbing could cause a rupture, which could then cause the craft to crumple, deform, and then rip itself apart /without/ the loose fuel igniting, or perhaps /with/ the loose fuel igniting, which may merely deflagrate or may detonate, depending on other factors.

    While many of these more specific chains of events have more specific terms to describe them… they are /all/ Rapid Unplanned Disassemblies.

    All that term means is that, for some reason, your craft went from being more or less one piece to being a large number of pieces very quickly.

    For example the Challenger disaster was a RUD. But not a detonation. Detonation is more specific and I used the term for a reason.