[personal profile] dorchadas
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
-Frank Herbert, Dune
Strange coincidence that on the same day I saw an article entitled "We must declare jihad against AI" and went to a work seminar about the use of AI in healthcare. The latter mentioned the recent JAMA article about physician vs AI empathy.

I've been an AI skeptic for most of the last decade, and for most of the last decade I've been right--I remember arguing eight years ago with someone who laughed at my insistence that we would absolutely not have direct brain interfaces within five years--but I think my skepticism is less likely to hold up going forward. Nothing is going to constrain AI, because there's too much money to be made from out-of-control AI, social consequences be damned. Look at how much damage has already been done by social media algorithms designed simply to keep people's eyes on the platform--the promotion of outrage engagement, Instagram Face, depression and anxiety in Gen Z, the risk of a single bad interaction going viral, etc--and imagine it with programs more sophisticated than the stuff on Amazon that says, "Ah, I see you bought a toaster, would you like to buy a dozen more?" That was the point of the Dune quote above: AI isn't going anywhere, so we'll need to manage its social effects.

People talk about AI destroying whole swaths of jobs, and that may happen eventually, but it's not going to happen right now, due to AI's tendency to confidently make up nonsense. It's like Laila--she has a lot of words she says, but she also makes sounds that I'm sure mean something to her but which aren't English (or Japanese, or Hebrew, or French, or any other language she's been exposed to). Lately she's switched to pronouncing banana as "bahlahlah." She says it consistently, she knows exactly what it means, but who knows where she got it from. AI is like that. I remember reading a rebbe.io answer where the AI confidently stated that eating poultry and milk together was allowed, which isn't true under any interpretation of halakhah. I just asked it about eating kitniyot during Pesaḥ, and it told me this was a matter of individual custom and to consult your rabbi, which I'm not sure is the answer an actual Chabad rabbi would give. But it gives those answers confidently, and it seems to provide reasoning to support them, and it reminds me of that scene in 2001 where Dave and Frank, sealed in a pod, decide that if HAL was wrong about the antenna, what else might it be wrong about--and if it's wrong about anything, then they need to shut it down to prevent misinformation from jeopardizing the mission. That's exactly the same reasoning HAL uses to shut them down, which is part of the point of the movie (who is more robotic, the literal computer or the humans?) and part of the point of the presentation I went to. AI needs to be guided in right thought by humans to really be useful.

To pick an example: if you look at the interactions that were rated in that JAMA article, you'll notice the most obvious unifying point is that each of the AI responses is twice as long as the human responses. No wonder they were voted more empathetic! A longer response creates the immediate perception that the AI is spending more time on the answer, and since an extremely common complaint among Americans is that they barely get to see their doctors, and when they do it's only for a few minutes, the AI seems more attentive. But what if the AI were used as an aid, listening to the doctor-patient conversation and taking notes, filling in the EMR in the background, with the doctor checking the results at the end but otherwise spending the time talking to the patient instead of looking at a computer? Wouldn't that make the doctor seem much more empathetic? And this could be an aid to doctors elsewhere too, since the American average of twenty minutes per visit is actually on the extreme high end globally. In Bangladesh, people see their doctors for an average of less than a minute!

Sure, AI will still sometimes produce nonsense. I remember a teacher saying they took psychic damage from reading an AI-written essay that the student had then gone at with a blind-idiot thesaurus, producing phrases like "Unused York City." But human medical error also kills at a higher rate than almost anything other than heart disease or cancer. If AI can assist humans in reducing that even a little...

I don't think we're in danger of getting paperclip-maximized any time soon. It's lazy thinking, or not thinking at all, that'll do us in.