NewsLab
Apr 29 13:00 UTC

Refuse to let your doctor record you (buttondown.com)

167 points|by speckx||223 comments|Read full story on buttondown.com

Comments (223)

  1. 1. josefritzishere||context
    This is a hard no.
  2. 2. pclowes||context
    I understand the concerns and I am not sure I would allow myself to be recorded until I knew more.

    However, I do think we are in a situation where everybody knows that healthcare costs need to come down and that doctors and medical professionals are spread too thin, forced to see ever more patients in the same number of hours, and yet for every attempt to improve efficiency there is a "no, not that way" response.

  3. 3. johndhi||context
    yes, this!
  4. 4. bluefirebrand||context
    > for every attempt to improve efficiency there is a "no, not that way" response

    They've tried everything except "train and hire more doctors" and they're just all out of ideas aside from "erode patients' rights and lower overall quality of care"

  5. 5. pclowes||context
    The economics of medical school cost, time, and capped residency spots (some would argue this is price-fixing with artificial scarcity) make it hard to just “make more doctors”. Combine this with a highly litigious society that always demands a full doctor (when for 90% of things an NP or PA would do) and an inverted population pyramid, and the problem gets worse still.

    We need more doctors now, but it takes 12 years to make a doctor, and by then the boomer cohort's aging and medical needs will have peaked.

    Finally, even if we could do that, the top-of-funnel candidate pool is substantially weaker, with lower test scores and a higher need for remedial classes. And for the good candidates, the ROI of medical school is not as good as it once was.

  6. 6. bluefirebrand||context
    Sure. All of that can be true and yet it's still something we need to do better about

    Just saying "it's really hard so we won't do it" isn't exactly an option when it comes to providing healthcare. :/

  7. 7. pj_mukh||context
    Yes, and also almost all of these issues could be ascribed to all digital medical record-keeping. The fact that AI transcribed it matters relatively little.
  8. 8. winrid||context
    "healthcare company lowers cost instead of absorbing new found profits" sounds like an Onion headline
  9. 9. awakeasleep||context
    news on inter-dimensional cable
  10. 10. reaperducer||context
    > news on inter-dimensional cable

    Is that channel available on Blippo+?

  11. 11. midtake||context
    Is that why healthcare costs are up, or is it because of the insurance mafia?
  12. 12. jimbokun||context
    It’s the doctors.

    Insurance company profit margins are capped by law and if anything their incentives are to pay the hospitals less.

    US physician salaries are astronomical compared to anywhere else in the world.

  13. 13. cpburns2009||context
    Profit margins are capped as a percentage. That creates the perverse incentive for insurance companies to pursue ever-increasing costs in order to increase profits.
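
    A minimal sketch of that arithmetic (numbers are hypothetical; the actual cap on overhead plus profit under the ACA's medical-loss-ratio rules is roughly 15-20% of premiums):

    ```python
    # Illustration only: if retained margin is capped at a fixed percentage of
    # premiums, the only way to grow absolute profit is to grow total spend.
    CAP = 0.15  # hypothetical cap on overhead + profit as a share of premiums

    def max_retained(total_premiums: float, cap: float = CAP) -> float:
        """Most dollars an insurer may keep under a percentage cap."""
        return total_premiums * cap

    print(max_retained(1_000_000_000))  # 150,000,000.0 on $1B of premiums
    print(max_retained(2_000_000_000))  # 300,000,000.0 -- higher costs raise the ceiling
    ```
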
  14. 14. ars||context
    You are forgetting about competition: increasing costs means directly increasing premiums, and higher premiums mean lost business.
  15. 15. cpburns2009||context
    What competition? When enrollment comes around or you switch jobs you get maybe two insurance providers to choose from.
  16. 16. jimbokun||context
    But they don’t do this.

    They fight tooth and nail to keep the claims paid to doctors and hospitals low.

  17. 17. gosub100||context
    How do you know it's not the other way around? Give consent to incorporate another technology that will keep wages the same but allow them to treat more patients and extract more profit for the shareholders?
  18. 18. hyperific||context
    I definitely agree that medical professionals are spread too thin and automation seems like it would be a boon, but, as the article points out, the introduction of automation likely won't translate to more doctor-patient time; it'll translate to doctors seeing more patients.

    The solution not only introduces a problem (decreased privacy) but could reinforce the existing problem it's trying to solve.

  19. 19. hamandcheese||context
    > it'll translate to doctors seeing more patients.

    This is also a good thing. Even in supposedly developed parts of the world like San Francisco it can be difficult to find a PCP that is taking new patients.

  20. 20. reptilian||context
    Where healthcare is concerned, America is not what anyone considers "first world". Your healthcare system is more backward than most third world nations. I would rather leave the US than receive medical treatment there. I have never even considered trusting the US healthcare system. When I lived there I would rather fly home and get treated (in a third world country) than lose all my savings getting inadequate care in the US. I know people who have been through large and expensive treatment plans in the global south, who paid less for the complete treatment than Americans pay for the ambulance getting you to the hospital.
  21. 21. mmonaghan||context
    I think it's two systems masquerading as one - employed-and-insured and everyone else.

    If you're the former, it works great. If you're the latter, it can be mediocre to BRUTAL. Medical debt is our #1 or 2 cause of bankruptcy iirc.

    Regardless of which class you are, if you can access the care, our outcomes are the best in the world for most things.

  22. 22. kelnos||context
    > If you're the former, it works great.

    I don't think that's true at all. "Insured" doesn't mean just one thing. There are many different kinds of insurance, levels of plans, etc. Most insurance companies will do their best to deny claims or push more responsibility onto the patient.

    My insurance is very good, but I see a therapist weekly and my insurance only covers about 40% of the cost. I'm fortunate that ~$500/mo isn't a problem for me, but many people in the US would find that impossible.

    A few months ago I went to the ER for what turned out to be gallstones, and was still on the hook for $200 of that visit. And I took a Lyft to the hospital; I don't want to think about what my out-of-pocket cost would have been if I'd needed an ambulance.

    Last summer I hurt my hand in a bicycle accident, and went to PT once a week for 6 weeks. I had to pay a $35 co-pay for each visit; that's $210 for a single injury.

    And this is with fairly good insurance. Many, many insured Americans just have so-so insurance. From what I hear of most healthcare systems in countries that do this right, most (if not all) of this stuff would have been completely free.

    > If you're the latter, it can be mediocre to BRUTAL

    Yup, and in a way that's an even worse indictment, that really puts us in worse-than-third-world territory.

  23. 23. reaperducer||context
    > Your healthcare system is more backward than most third world nations. I would rather leave the US than receive medical treatment there.

    And yet the wealthiest people in the world, who can have the best healthcare anywhere they want on the planet, even with private doctors, routinely choose to be treated in Rochester, Minnesota; Boston, Massachusetts; Houston, Texas; Baltimore, Maryland; and Los Angeles, California.

    The U.S. is by no means perfect, but there's a reason that there are entire medical facilities in the U.S. that cater exclusively to people from other countries. Just listen to local radio in Palm Springs and you'll hear commercials along the lines of "Tired of waiting, or simply can't get the medical care you need in Canada? Come to our hospital!"

    Meanwhile, if I wanted to have my recent surgery in Canada, I'd have to wait almost a year for a slot to open up. Here I waited all of two weeks. And the newspaper headlines in the UK are full of horror stories of patients dying in hospital hallways while doctors are on strike because everything is so great.

  24. 24. dheera||context
    How about this:

    1. I have health insurance

    2. The point of insurance is they're supposed to pay for shit

    3. You figure out how to get them to pay for shit, sign an agreement that relieves me of any patient responsibility for the balance bill, and assure me in writing that I will owe $0 no matter what

    Then you can record me.

  25. 25. gedy||context
    Insurance, like a lot of subsidies, turns into "we will take all of that, and still make sure your share is at the limits of your carrying capacity"
  26. 26. marricks||context
    > I do think we are in a situation where everybody knows that healthcare costs need to come down that doctors and medical professionals are spread too thin

    The problem is over optimization AND lack of people. As soon as there's an excuse for less staff because we have "digital record keeping" we're going to have less money and even less staff.

    Having in person or remote notetakers is a great entry level job to do before you become a doctor. It could be boring but at least the terms are familiar and you get to know the person you're working with.

    It's not like healthcare is an impossible problem to solve that needs more tech, we just refuse to spend money on people and (inexplicably) cannot help but dump tons of money into tech.

  27. 27. toast0||context
    > The problem is over optimization not lack of people or resources. As soon as there's an excuse for less staff because we have "digital record keeping" we're going to have less money and even less staff.

    At least in my area, lack of people does seem to be a problem. Sometimes it's lack of people because the pay is too low, but more of it is lack of people because the pool of qualified people is too small. And increasing pay increases healthcare costs, and healthcare costs are already very high. If digital tools allow the available staff to see more patients while delivering the same level of care (and without burning out the providers), then that means more capacity and fewer cases where people want to see a doctor but can't. Similar arguments apply for the same number of patients and a greater level of care. If it's more patients but a worse level of care, then it becomes tricky.

  28. 28. jimbokun||context
    The pool of people is too small because the organization tasked with accrediting new doctors has a financial incentive, on behalf of its current members, to keep the number of doctors low.
  29. 29. marricks||context
    Holy wow, I meant to say lack of people is the problem. Edited to reflect that.
  30. 30. pclowes||context
    I don’t necessarily disagree with you here. However, there is a timing concern. Training doctors takes too long and the boomers are aging now.
  31. 31. kelnos||context
    The best time to fix that was 20 years ago. The next-best time to fix that is today.

    But we're still not doing that, and that's a huge oversight. (Or is intentional, to protect the doctor-training to hospital-slot pipeline cartel.)

  32. 32. hansvm||context
    If I paid all my doctors $1200/hr and doubled how much time they spend with or on me, that'd still pale in comparison to healthcare expenditures attributed to me between actual insurance payments and actual money leaving my bank account. Doctors being spread too thin is very much a separate issue.
  33. 33. vlovich123||context
    One massive way to reduce healthcare costs is to remove caps from becoming a doctor; as long as you pass the tests and meet the requirements, why are we turning doctors away? So that existing doctors can be paid well above the market rate. There's a reason there's so many doctors in politics - it's very important for them to protect this business model.
  34. 34. bonsai_spool||context
    > There's so many doctors in politics - it's very important for them to protect this business model.

    Uh... politics is almost uniformly lawyers and business people.

    Also tests are the table-stakes to being a doctor (like leet code and programming).

  35. 35. vlovich123||context
    Tests are table stakes but quotas are how they ensure there’s fewer doctors than is needed to meet demand to ensure doctors get paid large salaries.

    While you’re not wrong, there are far more doctors in politics at all levels (including influential fundraising) than engineers and teachers.

  36. 36. bonsai_spool||context
    > While you’re not wrong, there are far more doctors in politics at all levels (including influential fundraising) than engineers and teachers.

    I don't think you're right about this (not that it matters) but what's your source of data?

    What is an 'engineer'? A PE? Someone who coded once?

    The 'quotas' for doctors don't exist, this is one of the stories people on the internet tell themselves.

  37. 37. kelnos||context
    The problem is that (as addressed by the article), any efficiency wins end up pushing more patients on the provider. So if you used to have a 15-minute appointment, and five minutes of that were spent with the doctor writing down notes, with AI transcription, now you'll have a 10-minute appointment, and the doctor will be forced to see two more patients per hour.
  38. 38. sys_64738||context
    > I understand the concerns and I am not sure I would allow myself to be recorded until I knew more.

    Which is your choice obviously. But your doctor can also drop you as a patient and that will happen eventually if you say No too many times.

  39. 39. scrawl||context
    > The false promise of efficiency [...] that is extremely unlikely to mean more time with each patient. Instead, it will mean more patients.

    nit: that is a real efficiency gain. seeing more patients sounds better on the face of it.

  40. 40. woopwoop||context
    That's not a nit. There's not much left of the article's point once you take this on board.
  41. 41. apparent||context
    Exactly. It means that if you've ever tried to get an appt and been told there's a 4 month waiting list, AI could help get you in sooner. That is a real win.
  42. 42. walrus01||context
    It's interesting how lots of service providers of all sorts will insist that you agree to their Terms of Service, Acceptable Use Policy, End User License Agreement (or whatever they want to call it) before engaging with you, but when the consumer insists on enforcing their own personal policy in the opposite direction such as refusing consent to recording or feeding your PII into some opaque AI system, suddenly it's a problem.
  43. 43. k2xl||context
    I think the post conflates two issues:

    1. AI-generated charting.

    2. The existence of a reliable record of the visit.

    I am skeptical of the first in some cases (i.e. bias), but strongly in favor of the second.

    My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.

    This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.

    That is a care quality problem, not just a convenience problem.

    The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.

    I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.

  44. 44. ranger_danger||context
    Have you tried recording the interactions with doctors for your own benefit?
  45. 45. k2xl||context
    Yes. It was great for when I had a major surgery last year and had a bazillion questions for the surgeon. But I don't always remember to. My parents definitely don't even think about it.
  46. 46. nubinetwork||context
    Dead link (or it was?)
  47. 47. adit_ya1||context
    Almost every point follows the same structure:

    > "Here is a real concern about implementation" → "Therefore you should refuse entirely"

    This skips the middle step of "therefore we should implement it well."

    I'm not convinced that we should be allowing doctors to record patient visits at this stage yet, but I'm really not convinced by these points, which largely don't hold up under closer examination.

    A few that stuck out:

    "Privacy" - Labs are routinely sent to third-party companies, and we don't do informed consent for that. The third-party argument isn't unique to recording.

    "False promise of efficiency" - This doesn't really have anything to do with patients at all. It's a criticism of medical office management, not of physician-patient interactions. Telling patients to refuse a tool because management might exploit the productivity gains is asking patients to fight a labor battle on the provider's behalf.

    "Consent can't be revoked mid-visit" - Consent typically can't be revoked in the middle of an appendectomy, or halfway through administering a vaccine either. Practical irrevocability is a normal feature of informed consent, not a special problem unique to recording. Proper consent processes in medical offices are a broader issue than consent about voice recordings specifically. Had the authors made the point that providers are being asked to obtain consent for tools whose technical implementation and privacy risks fall outside the provider's own domain knowledge — that would be a stronger argument. But that isn't quite the point they made, and their current framing doesn't wholly convince.

  48. 48. afandian||context
    I think the "therefore we should implement it well" is not forgotten, it's elided because we don't think it's likely to happen.

    Tech-naïve people think that we can build super duper encryption systems.

    The more jaded amongst us know that people can get sloppy or complacent, it's rare to see a regulatory system that truly incentivises good practice, data breaches will happen eventually, and no-one will be held accountable.

    This is a big one in recent memory: https://www.theguardian.com/uk-news/2020/jun/10/babylon-heal...

  49. 49. parliament32||context
    > Labs are routinely sent to third-party companies

    Labs are real businesses that do real things, and would have actual impact for a breach. Meanwhile any idiot can vibe-code a thin shim between a microphone and ChatGPT in a weekend, promise they're HIPAA-compliant, and start selling. Medical professionals have no obligation to do any diligence, and there's no reason for them to not just buy whoever-is-cheapest. They're not even close to the same thing.

  50. 50. kube-system||context
    There might be some real concern about the cognitive and patient-interaction impacts of speech recognition being used... but on the other hand, it's more likely that details are missed when information is captured manually.

    And the privacy/informed consent concerns here are silly, they apply to any of your charted data... and if you're going to any office that doesn't use the latest technology, your patient information is probably being sent between offices over fax anyway.

  51. 51. burnte||context
    I'm a healthcare CIO of 12 years, and I've evaluated 4 and deployed 2 of these tools, one of which is currently deployed at my current healthcare employer. I am very measured on AI, but the results I've seen from these virtual scribes are HUGE. In every case we have IMMEDIATELY seen improvements in patient NPS scores, provider satisfaction, and note quality. Notes are more standardized as well as more verbose and detailed, which makes it easier for future providers to understand the case. These better notes reduce our claim rejection rate.

    And what converted me was direct patient response. Across the board patient feedback is extremely positive, with the most common comment being along the lines of "I really felt like the doctor connected with me better and they were more present in the visit."

    These AI scribes really DO improve patient care, I've seen it with my own eyes.

  52. 52. t-kalinowski||context
    Counterpoint from a doctor: https://substack.com/inbox/post/189714240

    Scribes _feel_ good in the short-term, but it's not clear if they're actually good on longer time horizons.

  53. 53. bonsai_spool||context
    > Counterpoint from a doctor: https://substack.com/inbox/post/189714240

    That article is clearly LLM-assisted if not vibe-written, which is the height of irony given the context.

    Note that the CIO is talking about patient satisfaction, which is a distinct target. I agree about the long-run benefit being unclear.

  54. 54. sigmar||context
    "I am not saying ambient scribes are bad technology."

    is this a counterpoint? he just seems to be wary of the risk, without a firm position and decided to personally stop using it. people often overestimate their own skills and think their own charting is better than that of others, that doesn't mean the tech doesn't work.

  55. 55. jimbokun||context
    In an article critiquing over-use of AI assistants, the author confesses at the end this article itself was authored partly by Claude that introduced errors in the citations, lol.

    Nonetheless, I come away from this article with the sense that the ambient devices automating documentation of an encounter are still a net win, with caveats about the need for the doctor to polish the note to reflect his or her own narrative voice.

  56. 56. razingeden||context
    the two places they come in handy:

    1) in the event you find yourself partially or totally disabled but the records don’t really make a good case for it and your provider has a dismissive attitude about filling out additional documentation to substantiate what they failed to in your records.

    You’re not necessarily going to get approved for FMLA, STD, LTD, SS etc based on a diagnosis or test results alone. They will nitpick over say, heart failure, as if that’s magically and spontaneously going to go away. If you’re telling your provider that you’re limited by things like oh I don’t know, “I’m only awake for 2-4 hours before I need to sleep again” or “some days I just can’t do it and sleep 20 hours” but it’s not in your chart… expect denials and clarifications and a huge burden on you to prove why it’s limiting.

    2) continuity of care, so you don’t end up explaining everything from the top to a specialist or having them run all these tests and procedures from square one — when there’s months long backlogs , and we already did all this and you need treatment - but - there wasn’t much to work with in your referring chart.

    You might not appreciate the “intrusion” if you’re healthy and just worried about your privacy.

    If/When things go south and you find yourself fighting these entities for a year or two or three while they nitpick and delay and deny and drag their feet , you’ll be glad an “AI” kept up meticulous records because this is phenomenally stressful and an endless burden on you when they don’t.

    So, their AI slop can vomit out all this extra info on why insurance companies should pay them or why your condition is in fact disabling, and now their AI slop can comb through it looking for all that. Because they will try to avoid paying or approving any kind of leave or benefits if it’s not there

    And god forbid you hand them a form where they’re being asked to explain themselves. 50/50 on them being eager to help out or rolling their eyes and saying something really nasty about the imposition. And then even when they do that, they almost never file a copy in your chart so your chart STILL doesn’t substantiate your claims. I’m all for an “ai” doing the progress notes in a case where the facility or provider can’t be fucked to do so.

    Happily that’s not true of my current provider, who just, does that anyway (?) But I’ve been around enough to know they’re an exception. Even when providers are on your side and mean well, and want to bend over backwards to help you in any way they can — and I want to just acknowledge that’s the situation I’m in today — honestly , sometimes they just forget some of the details when they do their notes.

    That’s why some places make the provider do it in real time while they’re talking to you, so they didn’t forget something relevant thirty minutes later. The other side of the coin here may be that some providers find that distracting or off putting to be typing away like a stenographer while they’re examining you…

    I think it would be fair to say this can all be tedious and a burden for both patients and providers. There's just a world of difference between a provider who does this because they want to provide excellence in care, and a provider who does it resentfully because they think it's beneath them.

  57. 57. burnte||context
    I think every single provider should evaluate them for themselves. Some providers are absolutely better off without them and we don't make anyone use them.
  58. 58. mmooss||context
    If I allow it, is the data from my meeting sent offsite at any stage, for example to an LLM service (e.g., Anthropic, OpenAI, etc.)? Or do the LLM vendors (or any others) have access to the internal data at any stage?
  59. 59. burnte||context
    > If I allow it,

    Which is your right, every patient can ask the provider to not use it.

    > is the data from my meeting sent offsite at any stage

    Yes, no one stores medical records on-prem any more. EMR systems are not like Quickbooks running on an 8 year old terminal server.

    > for example to an LLM service

    Yes, that's literally what an AI transcriber is, an LLM.

    > (e.g., Anthropic, OpenAI, etc.)?

    No. The recording goes (in realtime) to our vendor's infra where it is live transcribed, then summarized and returned. When complete only the finished note is saved, never the recording or transcript.

    > Or do the LLM vendors (or any others) have access to the internal data at any stage?

    Obviously, you can't process data you can't access, but the contractual and regulatory environment means that data can't be used for additional training without lots of consents. We do not participate in training activities at all. I won't allow it.
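
    A minimal sketch of the flow described above; transcribe_stream() and summarize() are hypothetical stand-ins for the vendor's service, not real API calls:

    ```python
    def transcribe_stream(audio_chunks) -> str:
        """Hypothetical live transcription of streamed audio."""
        return " ".join(str(chunk) for chunk in audio_chunks)

    def summarize(transcript: str) -> str:
        """Hypothetical LLM summarization into a structured visit note."""
        return "NOTE: " + transcript[:200]

    def document_visit(audio_chunks, chart: list) -> str:
        transcript = transcribe_stream(audio_chunks)  # exists only in memory, never written to disk
        note = summarize(transcript)
        chart.append(note)                            # only the finished note is persisted
        return note                                   # recording and transcript are discarded here
    ```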

  60. 60. mmooss||context
    Most of your responses are uncharitable readings of the questions - as if you are looking for targets for your contempt, which shows up in all but one answer. If you are contemptuous of questions and questioners, it looks like you don't take these issues seriously. I didn't think that before your response but I do now.
  61. 61. burnte||context
    I have no idea what you're talking about, I found your questions to be excellent and worthy of answering at every level of detail. I have no contempt at all, I assure you. I'm genuinely confused by your reply here.
  62. 62. mmooss||context
    OK. Sorry if I misunderstood. Here are examples of what I perceive:

    > EMR systems are not like Quickbooks running on an 8 year old terminal server.

    Obviously that's literally true, so why write it? It suggests to me that you think I have an antiquated, ignorant view of IT, and a sarcastic, exaggerated response is appropriate.

    > that's literally what an AI transcriber is, an LLM.

    That's an uncharitable reading of the question, as if I don't know the meaning of AI or LLM.

    > Obviously, you can't pricess data you can't access

    Again, it reads to me like you think the questions are simplistic.

  63. 63. yding||context
    When you evaluated the tools, what stood out between which ones were better or worse?
  64. 64. burnte||context
    A few things. I'm price sensitive, so pricing was huge for me. The worst company also had the worst prices. I tried to ask them questions about how their backend works and they refused to answer. I spoke with the CEO and he said he couldn't reveal their "secret sauce". I said, "if your secret sauce is what infra providers you use and not your proprietary code, then you don't HAVE secret sauce and you're just reselling [Cloud Provider's Product]." Turns out that's exactly what they were doing. They were using Google Cloud for recording capture, and AWS for speech to text and then summary generation. I told them we would not ever be working with them.

    For me the big things are price, ease of use, and data protection policies. I need to know the data never leaves the US, and I need to know what processors will touch it. Then if it meets those needs we'll do clinical demos and tests to get provider feedback. That's where we learn if it is clinically accurate. About half of them suck in the accuracy department.

    What stands out to me the most is that the best companies have tended to be the small guys who have a strong grasp on the entire stack and have somewhat simple apps. They focus on the tech and have a minimal UI that just focuses on the main tasks; they don't spend engineering time on fancy, pretty bells and whistles. If you see a simple UI, that's a good sign to me. Once you hit the big guys, the quality goes down. Dragon Medical One is great for straight speech to text, but Dragon with Copilot for medical is really bad.

  65. 65. cromka||context
    But WHY not do this on premises? WHY?
  66. 66. dsr_||context
    Money.
  67. 67. reaperducer||context
    It's strange to me that it's not already on-prem.

    I work in healthcare, and we spend oodles of time and money making sure every technology that can possibly be on-prem is.

    Maybe it's just not technically possible yet?

  68. 68. dsr_||context
    You had it 20 years ago: doctors spoke into recorders, transcriptionists turned that into notes, the docs reviewed them.

    The first study I cited replaces the "spoke into recorders" stage with non-AI voice recognition.

    The second study replaces the "spoke into recorders" stage with LLM voice recognition, and... crucially... also replaces the educated transcriptionist step with nothing.

    I imagine that the real problem is that the voice recognition can be classic or LLM and it just doesn't matter as much as having two humans in the loop instead of one. But that's not a story which gets you to replace cheap voicerec with expensive AI.

  69. 69. quantumwoke||context
    A pretty insightful viewpoint I heard recently from a doctor friend: doctors and hospitals believe that only a corporation could possibly implement this, so they fall into the SaaS trap and lose data sovereignty.

    Under the hood, a lot of the companies are Llama or Gemma wrappers connected to whisper.
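
    To give a sense of how thin such a wrapper can be, a rough sketch assuming the open-source whisper package and a local Ollama server (model names and the prompt are illustrative, not any vendor's actual stack):

    ```python
    import requests
    import whisper  # pip install openai-whisper

    def scribe(audio_path: str) -> str:
        # Speech-to-text with a local Whisper model.
        transcript = whisper.load_model("base").transcribe(audio_path)["text"]
        # Summarize with a local Llama model served by Ollama.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3",
                  "prompt": "Summarize this clinic visit as a SOAP note:\n" + transcript,
                  "stream": False},
            timeout=300,
        )
        return resp.json()["response"]
    ```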

  70. 70. 16bytes||context
    Why would you want to have anything on prem?

    Have you seen what that looks like in a hospital system?

  71. 71. burnte||context
    We're not prompt engineers or app developers. In a year or two when I can buy an on-prem hosted version I'll do that.
  72. 72. jubilanti||context
    I still don't want a fucking audio recorder in my doctor's office or a fucking AI that sits in between me and my doctor.

    I am intentionally cursing to express my anger at this casual betrayal of medical trust.

  73. 73. kube-system||context
    It is standard practice to ask patients whether or not they want the scribe used, and in many cases required by law.
  74. 74. jubilanti||context
    For now. It always begins as voluntary. But then doctors will start to treat people who opt out the way TSA treats me when I opt out: a hostile adversary.

    I already get glares and sighs when I dare to actually read every word of a multipage form I am expected to sign without reading. Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages because I could not be checked in until I signed. Other patients are waiting, your exercise of your human rights is inefficient.

    Then soon I'll have to pay a higher copay to opt out. Then I won't be able to opt out at all.

    All in the name of optimizing patient NPS scores and patient throughput.

  75. 75. ryandrake||context
    > Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages.

    I'd be finding a new doctor at that point. Ridiculous. I love how doctors can be 30 minutes late for their appointments because they're running late and all their appointment delays are cascading, but if the patient reads a document for 5 minutes, they're the problem!

  76. 76. kube-system||context
    I've never had this problem. IME every doctors office recommends showing up 15-20 minutes early to a new-patient appointment for the explicit reason of filling out paperwork.
  77. 77. jeffbee||context
    Right, doctors and CIOs get to use AI transcripts but you, a lowly patient, will write your name, address, and insurance policy number fifteen times with an exhausted Bic pen.
  78. 78. tclancy||context
    >For now. It always begins as voluntary. But then doctors will start to treat people who opt out the way TSA treats me when I opt out: a hostile adversary.

    You sure this is a privacy issue?

  79. 79. burnte||context
    There is no legal requirement to inform patients about the use of scribes, human or AI. If a telehealth session is recorded, many states are two-party and require telling the patient, but AI scribes are treated the same way other electronic tools are and are covered by your general informed consent policy. We inform patients in writing, their providers make the patient aware, and they are given the opportunity to opt out of the use. No recordings are kept, the session goes directly to transcription, and that transcript is deleted after the note is saved.
  80. 80. kube-system||context
    I'm referring to recording laws, as you allude to.
  81. 81. burnte||context
    Yes, but you SAID scribe, and so that statement was false.
  82. 82. oliwarner||context
    Notes need writing though.

    You can do that by recording and transcribing (many methods) or your doctor has to write on the fly, or worse, has their head in their computer while you talk in their general direction.

    Letting doctors talk and examine and not write is a wholly better experience.

    Offsite third parties are the problem here. If this was done automatically without data leaving the room, is there a problem? Do you have the same objections to how your digital notes are stored?

  83. 83. slumberlust||context
    We agree on the desired outcome, but couldn't we also give doctors more time to do that job without AI? Feels like the blame is in the wrong place.
  84. 84. alistairSH||context
    Maybe it's a regional thing, but in my last 3 appointments, 2 had an assistant doing the note-taking (as prompted by the treating physician or PA). The third was a virtual appointment, so no idea what notes were taken, if any.
  85. 85. oliwarner||context
    Sounds cushy, but not everywhere can afford 2:1 healthcare for every primary contact. It's not a thing here until you get to a ward or hospital-based clinic and you're seeing a team.

    I don't like off-site data vacuums. Palantir can get fucked. But good ML transcription tools don't have to be run off-site. Even to get you 90%, or serve as a backup. And as I've said in other threads here, it's hard to be angry about consented audio recording and AI transcription when my entire medical history is floating around in a database that could be hacked, or its data deliberately passed through (eg) a Palantir tool. I think audio of me complaining about lower back pain is the very least of our worries.

    Personally, I'd prefer AI and better doctor availability. To have that admin time back as consultation time, or more appointments, or just less overworked doctor.

    But also, there have to be weapons grade consequences for people that leak patient data. Loss of registration, never allowed to work with sensitive data again and jail.

  86. 86. tclancy||context
    This feels wild to me. I think I am pretty well privacy obsessed, but I don't see it here (fwiw, my wonderful doctor has been using these services for years; originally with overseas human labor, now with AI). First off it presupposes some level of privacy with one's GP that I would only want from a therapist. I don't want health information going beyond my doctor? What about him talking to specialists or getting another opinion in the break room?

    Ship's sailed on that level of privacy anyway the second you bill an insurance carrier in the US. I am willing to take this particular risk if something I said two years ago pops up to help explain what I am currently experiencing. I understand not everyone is me and I am lucky to be in relatively good health and not have anything going on that might put employment, etc at risk so I can understand where some people may want to refuse. But the knee-jerk "FUCK NO BECAUSE PRIVACY" is almost as bad as writing a post based on a side plot in The Pitt when said side plot was 110% heightening the stress between Dr. Robby and Dr. Al Hashimi, not a goddamn double-blind study of the effectiveness of AI transcripto-bots.

    And if you're going to take lessons from The Pitt about medical record transcription, why isn't it Dr. Santos repeatedly falling asleep while transcribing records?

  87. 87. EvanAnderson||context
    > I still don't want a fucking audio recorder in my doctor's office ...

    If I got a copy of the raw recording I might consider it. Maybe. Having that audio recording would be valuable to me.

    It's very irksome medical providers I visit have signs posted prohibiting audio and video recording by patients. My medical appointments aren't exceedingly complex, but a reference audio recording would be handy.

    I suppose I could exercise civil disobedience and just record anyway since it's not illegal in my state. Still, it irks me.

  88. 88. burnte||context
    > If I got a copy of the raw recording I might consider it. Maybe. Having that audio recording would be valuable to me.

    We wouldn't be able to provide it because it's never kept. It's transcribed directly, and then only the note summary is kept. This is to ensure the recording and transcript can't leak (because they don't exist). This was one of my first questions for all of these tools. Where does the data go, how is it processed, what happens. One company refused to talk about it, so I refused to talk to them.

  89. 89. OptionOfT||context
    So how can you verify correctness of transcription and summary in a way that is repeatable over time?
  90. 90. EvanAnderson||context
    Agreed. That sounds like a recipe for "we don't know how 'the algorithm' came up with what it did" kinds of excuses when, inevitably, inaccuracies are found. It also seems, conveniently, to make the processing system practically unimpeachable.
  91. 91. defrost||context
  92. 92. burnte||context
    You're not the first person to focus on the transcript, but you're forgetting that the person checking the note, the doctor, was also in the session and remembers what happened. This isn't an issue.
  93. 93. burnte||context
    That's the job of the provider. There's no other way to actually verify the accuracy of the note. You can't actually engineer humans out of the loop, the loop revolves around humans.
  94. 94. EvanAnderson||context
    How does the provider verify the accuracy if they don't have the transcript or the original recording?
  95. 95. burnte||context
    They read the note. They were in the session, if they can't remember what happened minutes before then we have bigger problems than a lack of transcript.
  96. 96. what||context
    You said you evaluate the error rate every month. How can you do that if you don’t have the recording or transcript?
  97. 97. kstrauser||context
    > I still don't want a fucking audio recorder in my doctor's office

    Why? Doctors have the strictest privacy regulations I know of. It's the one place where I'd be least uncomfortable with a recording, because there's nothing they can do with it other than use it to provide healthcare to me.

    > or a fucking AI that sits in between me and my doctor.

    The expected arrangement is that the AI would be alongside you and your doctor, so that your doctor can spend time interacting with you instead of playing transcriptionist and dictating your statement into your chart.

  98. 98. burnte||context
    > I still don't want a fucking audio recorder in my doctor's office

    Which would you prefer: your doctor remembering everything, or making verbal notes into a microcassette tape recorder that is transcribed by a human later (sometimes the doctor, sometimes someone else)? What if your doctor had a medical assistant in the room and spoke out loud and that medical assistant wrote down everything, is that ok?

    > or a fucking AI that sits in between me and my doctor.

    It sits next to the doctor helping them focus on you by transcribing the session, it doesn't do anything the doctor can't and definitely doesn't do anything the doctor SHOULD. No decision making is done, only transcription and summarization which is then checked by the doctor. We do not let AI make decisions.

  99. 99. defrost||context
    I'd prefer a doctor's brain being actively engaged in the second-pass summary-checking phase that follows the first-pass information-gathering phase.

    You know, keeping a skilled human actively in the oversight loop and not being encouraged by time pressures or apparent conveniences to slide further and further out of the active loop.

    ie. Always catching that passing jokes about Coke don't end up as cocaine usage notations etc.

    ---

    I'd seriously suggest trialling (with the doctor's knowledge) deliberately injecting some N +/- 2 significant (meaning-reversed) transcription errors into either each transcript or the run of transcripts for a shift.

    Now it's a game for the doctor to pick out the {N} known errors as they check the transcription, with penalties for missing known errors and a bonus for finding unknown, not deliberately made errors.

    Don't allow the doctors to easily fall into the trap of trusting the transcription, and don't fall into the trap of making easy-to-spot obvious errors that can be auto hind-brain ticked off.
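
    A rough sketch of the scoring for that audit game, under the assumptions above (error injection and the review UI are left abstract):

    ```python
    def score_review(planted: set, flagged: set,
                     miss_penalty: int = 2, find_bonus: int = 1, novel_bonus: int = 3) -> int:
        """Penalize missed planted errors, reward finds, and reward real
        (unplanted) errors the reviewer catches on top of them."""
        found_planted = planted & flagged
        missed = planted - flagged
        novel = flagged - planted  # possibly genuine errors not deliberately injected
        return (len(found_planted) * find_bonus
                - len(missed) * miss_penalty
                + len(novel) * novel_bonus)

    # Example: 3 planted errors, the doctor catches 2 of them plus one real error.
    print(score_review({"e1", "e2", "e3"}, {"e1", "e3", "real_error"}))  # 2 - 2 + 3 = 3
    ```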

  100. 100. what||context
    > It sits next to the doctor helping them focus on you by transcribing the session, it doesn't do anything the doctor can't and definitely doesn't do anything the doctor SHOULD

    You said the transcript isn't available, only the notes/summary. The notes are what the doctor should do; the AI should only transcribe for the doctor's review.

    https://news.ycombinator.com/item?id=47895868

  101. 101. Suppafly||context
    >making verbal notes into a microcassette tape recorder that is transcribed by a human later

    This one, along with them actually typing their notes as they work.

    >What if your doctor had a medical assistance in the room and spoke out loud and that medical assistant wrote down everything, is that ok?

    That would be ok too.

  102. 102. childintime||context
    I'm kind of the exact opposite: I don't want a doctor between me and my medical AI. Because he limits my agency to heal myself and his value add should be optional, when I need him, after I explored self-treatment options and realize I need a non-patronizing third party to step in.

    In my whole life I have experienced the mzungu paradox happening: (mzungu) professionals promise to do a good job, get well paid regardless of results, and in the end most often I end up having to solve everything myself.

    Mzungu is the word for white people, though here it is used in the sense of white collar people, which is appropriate as they are all exponents of the white collar financial tribe, the faith in professionalism now vying for world power. Note: power, not competence.

  103. 103. wl||context
    I got an erroneous Type II diabetes diagnosis dropped into the note by the AI scribe at my last appointment because my PCP discussed the A1C test he was ordering. Would not recommend. That isn't to say that manually typed notes or speech to text dictated notes are perfect (dot phrases have ended up "documenting" plenty of conversations that never happened), but a false diagnosis of a chronic disease seems like a really bad failure.
  104. 104. burnte||context
    > got an erroneous Type II diabetes diagnosis dropped into the note by the AI scribe at my last appointment because my PCP discussed the A1C test he was ordering.

    No, you got an inaccurate diagnosis because your doctor didn't do their job. It's the provider's job to check notes, and this would have gotten that provider a visit with their clinical director at my org.

  105. 105. stephenbez||context
    By that line of thinking, even if AI scribes are terrible, you can only blame the doctor because they didn’t check their notes.

    In this case as the patient, all you care is there was an inaccurate diagnosis in your notes. If the doctor were typing them up by hand, presumably that would not have happened.

    Similarly, if Tesla Self Driving cars got into collisions at 3x the rate of non-self-driving cars, would you defend Tesla because all issues are the driver's fault, since drivers are supposed to have their hands on the wheel and be paying attention?

  106. 106. burnte||context
    > By that line of thinking, even if AI scribes are terrible, you can only blame the doctor because they didn’t check their notes.

    Same for any profession. If you use bad tools expect bad outcomes. Yes, I work in a company that expects employees to do their work well, and there are consequences to bad performance.

    > In this case as the patient, all you care is there was an inaccurate diagnosis in your notes. If the doctor were typing them up by hand, presumably that would not have happened.

    Doctors can absolutely mischart by hand. Human error is one of many reasons we moved to electronic charting. We have providers who love the tool and we see benefits from them having it, and some providers don't want it, so they don't have to use it. I've seen people say they feel it's slower, didn't like the output, and one provider simply enjoys charting. They're all good providers too.

    > Similarly if Tesla Self Driving cars got into collisions at 3x the rate of non-self driving, would you defend Tesla because all issues are the drivers fault who are supposed to have their hands on the wheels and paying attention?

    Bad analogy, since FSD is supposed to work without the human in the loop. This tool EXPLICITLY is to be operated and checked by a human.

    That said, I wouldn't defend Tesla at all, however yes I would state that if you are supposed to monitor a "self driving" car and get into an accident, you failed. In that case I'd say it's safer to just manually drive, and many of our providers choose to chart manually.

  107. 107. dsr_||context
    Pre-AI voice recognition (2018), followed by 2 human reviews

    https://jamanetwork.com/journals/jamanetworkopen/fullarticle...

    => the error rate was 7.4% in the version generated by speech recognition software, 0.4% after transcriptionist review, and 0.3% in the final version signed by physicians. Among the errors at each stage, 15.8%, 26.9%, and 25.9% involved clinical information, and 5.7%, 8.9%, and 6.4% were clinically significant, respectively.

    AI "scribes" in a perfectly replicable best-of-all-worlds scenario (2025): https://bmjdigitalhealth.bmj.com/content/1/1/e000092

    => Omissions dominated error counts (83.8%, p<<0.001), with CAISs varying widely in error frequency and severity, and a median of 1–6 omissions per consultation (depending on CAIS). Although less frequent, hallucinations and factual inaccuracies were more often clinically serious. No tested CAIS produced error-free summaries.

    On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
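
    Back-of-the-envelope arithmetic on the 2018 figures above, combining each stage's error rate with the share of those errors that were clinically significant (my simplification, treating the two rates as multiplicative; this is not a figure reported by the study):

    ```python
    # (error rate, share of errors that were clinically significant) per stage
    stages = {
        "speech recognition draft":       (0.074, 0.057),
        "after transcriptionist review":  (0.004, 0.089),
        "final version signed by doctor": (0.003, 0.064),
    }
    for stage, (err, clin_sig) in stages.items():
        print(f"{stage}: ~{err * clin_sig:.3%} clinically significant errors")
    # ~0.422%, ~0.036%, ~0.019% -- most of the risk is removed by the human review steps
    ```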

  108. 108. joshstrange||context
    > On the gripping hand,

    It's been a year or so since I last read The Mote in God's Eye / The Gripping Hand, but I was randomly thinking of it this morning. Very funny that I would see a reference to it the same day.

  109. 109. kube-system||context
    Errors can be a significant problem in manual charting as well.

    I know a medical professional who does a similar evaluation process to what is outlined in your second link to human written charts. They then use that feedback to guide the department on how to improve their charting.

    So, don't presume that those error rates cited in those studies should be compared to a baseline rate of zero. If you review human-written charts, you will often also not have an error rate of zero.

  110. 110. fc417fc802||context
    Has anyone considered simply asking the patient to sign off on these things as well? I realize many wouldn't but at least some would.
  111. 111. kube-system||context
    In the US, HIPAA gives patients a right to access and have corrections added to their medical record.

    But in my conversations with a person I know who does this work -- I don't think that the typical problems with patient charts are anything that would be remotely noticeable to a patient -- they're often deficiencies of a technical and/or clinical significance.

  112. 112. Paul-Craft||context
    I don't think anyone mentioned comparing AI error rates to a base rate of zero. What has been mentioned is significant numbers of clinically significant omissions, and outright hallucinations. Blatant fabrications should never happen with a human scribe, and one would expect clinically significant omissions to be rarer, because a human has clinical judgement that an AI can't have.
  113. 113. justbees||context
    My dad likes to joke around and his doctor uses some kind of transcription service. Time for fun!

    His doctor asked him about using drugs and he made a joke that was something like "I only use coke" - meaning coca-cola. Of course his doctor knew he was kidding about drinking too much soda because he eats/drinks too much sugar. So they had a little laugh and moved on.

    BUT now it's in his medical transcripts. My mom said it "transcribed" it as something like "the patient responded he has used cocaine recently".

    I guess his doctor doesn't go in and actually fix things or even read over what the transcription says...

    Also both of my parents have accents and have reported really weird transcriptions that don't match what they actually said.

    So now my mom has told my dad he can't make jokes with the doctor anymore because even if the doctor knows he's joking it's going to get noted down as a "fact".

  114. 114. oliwarner||context
    This feels like a compelling reason to joke around more.

    If inaccuracies make it to your patient record, it's defamatory. Your doctor must sign off on the transcript and if they're letting through poor results, make it their problem to fix. That'll either force the tech to get better or to fall back on better note taking practices.

  115. 115. justbees||context
    Yeah my parents thought it was funny and I was like... yeah not actually. You need to get that fixed.
  116. 116. fc417fc802||context
    Might be immature but personally once I knew this was possible I'd go for the high score. Try to get every substance I can think of listed plus a supposed admission of murder and whatever other ridiculous stuff I can come up with.

    "Well you know me doc, I keep my drugs in the deep freezer with the bodies waiting for disposal so I'm quite confident in their shelf life." I wonder what an AI scribe would make of such a remark.

  117. 117. defrost||context
    Initially nothing, but then two weeks later you'll start getting more push ads for high end chest freezers.
  118. 118. biomcgary||context
    Your username is uncanny for this comment. Well played.
  119. 119. erentz||context
    Be warned though that life and disability insurance will absolutely use errors in your medical records to refuse your coverage or claims.
  120. 120. maxerickson||context
    How do we make those markets more competitive?