• 0 Posts
  • 20 Comments
Joined 11 months ago
Cake day: August 11th, 2023




  • Yep, exactly.

    As a doctor who’s into tech: before we implemented something like AI-assisted diagnostics, we’d have to consider what the laziest/least educated/most tired/most rushed doctor would do with it. The tools would have to be implemented very carefully so that the doctor uses them to make good decisions, not harmful ones.

    The last thing you want is a doctor blindly approving an inappropriate order suggested by an AI without applying critical thinking, and harming a real person because the machine generated a factually incorrect output.





  • “I could be a severe bastard,” he writes. “My experiences at the Royal Shakespeare Company and the National Theatre had been intense and serious … On the TNG set, I grew angry with the conduct of my peers, and that’s when I called that meeting in which I lectured the cast for goofing off and responded to Denise Crosby’s ‘We’ve got to have some fun sometimes, Patrick’ comment by saying, ‘We are not here, Denise, to have fun.’”

    “In retrospect,” Stewart continues, “everyone, me included, finds this story hilarious. But in the moment, when the cast erupted in hysterics at my pompous declaration, I didn’t handle it well. I didn’t enjoy being laughed at. I stormed off the set and into my trailer, slamming the door.”

    Stewart then details how Frakes and Spiner came to his trailer for a heart-to-heart chat.

    “People respect you,” Spiner told him. “But I think you misjudged the situation here.”

    Recalls Stewart: “He and Jonathan acknowledged that yes, there was too much goofing around and that it needed to be dialed back. But they also made it clear how off-putting it was — and not a case study in good leadership — for me to try to resolve the matter by lecturing and scolding the cast. I had failed to read the room, imposing RSC behavior on people accustomed to the ways of episodic television — which was, after all, what we were shooting.”

    In short, he became angry because he was used to theater acting and tried to hold a TV production to theater standards.


  • At the crux of the authors’ lawsuit is the argument that OpenAI is ruthlessly mining their material to create “derivative works” that will “replace the very writings it copied.”

    The authors shoot down OpenAI’s excuse that “substantial similarity is a mandatory feature of all copyright-infringement claims,” calling it “flat wrong.”

    Goodbye Star Wars, Avatar, Tarantino’s entire filmography, every slasher film since 1974…


  • The tools would be integrated into things we already use.

    I’m a doctor, and our EMR is planning to start piloting generative text for replies to patient messages later this year. These replies would be fairly informal and wouldn’t need to be super medically rigorous; they’d just need a quick review to make sure the AI isn’t giving dangerous advice.

    However, at some point AI may be used in clinical decision support, where it may offer suggestions on diagnoses, tests, and/or medications. There we would need a much higher standard of evidence and reproducibility of results, since acting on a bad suggestion could lead to serious harm.

    These already live in two different sections of the medical chart (the inbox and the encounter, respectively), so they would likely be two separate tools with two separate contexts. I wouldn’t need to memorize two tools to use the software: in my inbox I’d have my inbox tools, and in my encounter I’d have my encounter tools, without worrying about exactly which AI implementation is running in the background.
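
    To make that last point concrete, here’s a rough, purely hypothetical sketch of what “separate tools keyed to separate chart contexts” could look like. None of the names below (ChartContext, AssistantConfig, the tool names) come from any real EMR; they’re made up for illustration.

    ```python
    from dataclasses import dataclass
    from enum import Enum, auto


    class ChartContext(Enum):
        """Chart sections where an AI assistant might surface (hypothetical)."""
        INBOX = auto()      # patient message replies
        ENCOUNTER = auto()  # clinical documentation / decision support


    @dataclass(frozen=True)
    class AssistantConfig:
        name: str
        requires_clinician_signoff: bool
        evidence_bar: str  # informal description of the validation standard


    # Hypothetical registry: each chart section gets its own tool and its own
    # review standard, so the clinician never has to think about which model
    # is running underneath.
    ASSISTANTS: dict[ChartContext, AssistantConfig] = {
        ChartContext.INBOX: AssistantConfig(
            name="draft-message-reply",
            requires_clinician_signoff=True,
            evidence_bar="quick human review for safety before sending",
        ),
        ChartContext.ENCOUNTER: AssistantConfig(
            name="clinical-decision-support",
            requires_clinician_signoff=True,
            evidence_bar="validated, reproducible results before deployment",
        ),
    }


    def assistant_for(context: ChartContext) -> AssistantConfig:
        """Pick the tool based on where in the chart the clinician is working."""
        return ASSISTANTS[context]


    if __name__ == "__main__":
        print(assistant_for(ChartContext.INBOX))
        print(assistant_for(ChartContext.ENCOUNTER))
    ```

    The point of keying the lookup on the chart section rather than on the model is the same as above: the clinician just works in the inbox or the encounter, and the software decides which assistant (and which review standard) applies.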