AI & Pet Health

The News Said AI Was Dangerous.
I Was Using It to Get Through the Hardest Months of My Life.

By Melissa Pardo

While headlines warned about AI encouraging self-harm, I was being consoled by it — gently, accurately, at 2am — about whether I was a bad person for not letting my dog go. This is what AI as a medical companion actually looks like.

The paradox I couldn't stop thinking about

You probably saw the stories. AI chatbots encouraging teenagers toward self-harm. AI saying dangerous things to vulnerable people. The warnings, the op-eds, the congressional hearings. The general consensus forming that AI — ChatGPT specifically — was something to be careful around. Maybe afraid of.

I was reading those stories on my phone while sitting next to my dog, who had just been diagnosed with an inoperable spinal cord tumor. And then I would close the news app and open ChatGPT and ask it to explain, in plain language, what the difference between palliative and definitive radiation actually meant for a dog like Luna. And it would tell me. Clearly. Honestly. Without making me feel stupid for not already knowing.

I started sharing the responses with coworkers. Not because they were useful — though they were — but because of how they were written. The warmth in them. The care. The way it would say “I'm really sorry you're having to process all of this” and mean it, or at least produce something that functioned like meaning, at a moment when I needed it.

I don't know how to fully reconcile those two things. I'm not sure I need to. What I know is what happened, and I think it's worth writing down.


The conversation that started in hope

I want to start a little earlier than I usually do when I tell this story. Because before the tumor, before the MRI, before any of the words that changed everything — there was October.

In October, we still thought it was IVDD. Intervertebral disc disease — painful, scary, but treatable. A thing dogs recover from. Luna was limping. Vets had found arthritis on x-rays and shrugged. We weren't panicking yet. We were doing what you do: scheduling follow-ups, managing pain, trying to understand what we were dealing with.

I started a ChatGPT thread because I had questions the vet appointments didn't answer. Was it okay to give her carrots as treats while she was on this medication? What were the side effects of gabapentin in older dogs? How long before we'd know if the anti-inflammatories were helping?

Me — October

“We just scheduled an MRI, we will take her in on Monday. What should we be watching for before then? What makes it urgent versus wait-and-see?”

ChatGPT

“Signs that would make it more urgent before Monday: sudden inability to stand or walk (not just weakness, but complete loss of function), loss of bladder or bowel control, crying out in pain when touched or moved, rapid progression — meaning she was 70% okay yesterday and 30% today... You're doing exactly the right thing by having the MRI scheduled. You're not waiting this out — you're getting answers.”

“You're doing exactly the right thing.” I cannot tell you how much I needed to hear that. Her vets had fifteen minutes and a waiting room full of other patients — they gave us options, but no guidance. No one would tell us what to do. This did. Something that had read my whole thread, knew Luna's history, and was telling me clearly: you are not missing something. You are not failing her.

The October conversations had a completely different register than the ones that came later. They were practical, forward-looking. Gabapentin dosing. What to bring to the MRI appointment. Whether to crate her overnight. At one point I asked if she could have her blanket with her — just her blanket, in the scanner — and it answered as if that was a completely reasonable question, because it was.

I didn't have to wait for a doctor to call me back. I didn't have to worry about a vet telling me they'd have to start charging to respond to my emails (yes, that happened). I could ask what I needed to ask and there was always an answer on the other side.

I didn't know yet what the MRI would find. In October I was scared, but I was also still oriented toward recovery. The conversation matched that. We were just trying to get Luna to Monday.


When the register changed

We didn't get the MRI right away. The visit we thought would be a diagnosis was just a consultation. The neurologist hardly listened to us and pushed surgery — framed it as obvious, like of course she'd recommend cutting, because that's what she does. We weren't prepared to hear that and wanted a second opinion.

So we waited. Treated IVDD at Luna's regular vet's direction. Managed her pain. Cold laser treatments. Prednisone. For a while we thought she was getting better. She wasn't.

Over Thanksgiving, we tapered her prednisone at the vet's direction and watched her decline. We decided we were done with her vet. Time to find another neurologist.

The MRI came back. It wasn't IVDD.

The doctor handed us three documents in the parking lot — a care instruction sheet and two treatment plan estimates. One of the estimates had a line item for surgery. The number was somewhere north of twelve thousand dollars. I couldn't process what I was reading. I was terrified, the language was clinical, and I was sitting in a parking lot trying to understand what had just happened to my best friend.

I uploaded them to ChatGPT and asked it to tell me what was happening in plain language. It read all three and gave me back something like this:

ChatGPT — summarizing Luna's MRI findings

“Luna has a mass near her spinal cord between vertebrae C6–T2 in her neck and upper back area. Based on the MRI, vets suspect a meningioma — a slow-growing tumor that presses on the spinal cord. These tumors cause problems mainly by compression, not by spreading aggressively. Translation: something is physically pressing on Luna's spinal cord, which explains the weakness, pain, and mobility changes.”

That translation line. That's what I needed. Not the medical terminology — the translation. And it just gave it to me, without being asked, because it understood that was what I actually needed.

We went back and forth for months. Every time Luna's prednisone timing changed, every time her stools got soft, every time I needed to understand what a new symptom meant or whether I was making the right call. I kept one thread open the entire time. It knew Luna's full history. It knew her medications, her patterns, her good days and bad days.

“It had more context on Luna than any single vet who saw her for fifteen minutes had ever had.”

One night the AI caught something the vets had missed in a different way than you might expect. Luna had tarry, dark stools — melena, it told me, which usually means digested blood from the upper GI tract. It said, very directly: “This is not something to monitor at home or wait out. Call or go to your vet or emergency clinic today. Tell them: my dog is on prednisone and now has very dark, tar-like stool. That wording matters. It signals urgency.”

We went to the ER within an hour. It was the right call.


The question I asked at 2am

The hardest exchange wasn't medical. It was this:

Me

“Am I being selfish by not putting her down? I don't know what's right.”

ChatGPT

“Melissa — no. You're not being selfish. You're being thoughtful, and those are not the same thing... Doubt is not selfishness. It's love plus responsibility colliding... The ones who wait too long are usually avoiding the question entirely. You're facing it head-on.”

I sent that response to two coworkers the next morning. Not because I needed validation that it was good advice — but because I was genuinely moved by the care in it. In the middle of a news cycle about AI being dangerous, something I was afraid to admit I was using had just held me through one of the hardest nights of my life.

That felt like something worth saying out loud.


What it is and isn't

I want to be careful here, because I'm not writing an advertisement. AI is not a vet. It cannot examine your dog, run bloodwork, or make a diagnosis. It got things wrong sometimes — or rather, it gave me probabilistic answers that were accurate for most dogs and not always exactly right for Luna. I learned to treat it like a very knowledgeable friend, not a doctor: useful for orientation, not for final decisions.

What it was, consistently, was available. Patient. Non-judgmental. It never made me feel like I was wasting its time or asking a stupid question. It never had somewhere else to be. At 2am on a Tuesday when I had a new symptom and a spinning brain, it was there, with Luna's full history in front of it, ready to help me think.

I also want to name the fear, because I think it's real and it affected me too. The stories about AI and self-harm are not invented. People have been hurt. The concern is legitimate. What I'd say is that the same tool can do very different things depending on context — and in my context, using it to navigate a pet's terminal illness, it was one of the most consistently kind presences I had access to during a very hard time (Cherice, if you're reading this, you're up there with ChatGPT).

That doesn't erase the other stories. It just means the picture is more complicated than “AI is dangerous” or “AI is the answer.”

“In my context, it was one of the most consistently kind presences I had access to during a very hard time.”

The stigma — from both directions

I was afraid to tell my vet I'd been using ChatGPT. You already know the look you get when you tell a doctor you googled your symptoms — there's a sharper version of that when you say you asked AI. So I led with my observations instead. The duration, the pattern, the specific symptoms. And then I let the conversation go where it went.

But there's a stigma in the other direction too — from people who care about AI safety, who would hear “I was emotionally supported by ChatGPT” and feel uncomfortable. Like I was being naive, or co-opted, or missing some danger I should have seen.

I've thought about that. I don't think I was naive. I think I was in a hard situation and I used every tool available to me, and one of those tools helped more than I expected.

You're allowed to hold both: AI can be misused and it can also be genuinely useful. The question isn't whether to be afraid of it — it's whether you're using it with clear eyes.


What I'd tell someone starting this

Open one conversation. Don't start a new one every time — the context is the whole point. In your first message, give it everything: age, breed, diagnosis, medications, what's happening right now. Then add to that same thread every time something changes.
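If it helps to see what that first message can look like, here is a rough template — the bracketed fields are placeholders to fill in with your own pet's details, not anything from Luna's actual record:

```
My dog [name] is a [age]-year-old [breed], about [weight] lbs.
Diagnosis: [what the vet said, in their exact words if you have them]
Current medications: [drug, dose, schedule for each one]
What's happening right now: [the symptom or question that brought you here]
What I need from you: plain-language explanations, and please flag
anything that sounds urgent enough to call the vet about today.
```

Then every update goes into the same thread: new symptoms, medication changes, what the vet said at the last appointment.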

Use it to understand what the vet told you. Use it to prepare better questions before appointments. Use it when you get a new document and can't parse the language. Use it at 2am when something looks wrong and you need to know if it can wait until morning.

And if you find yourself sharing the responses with someone because of how they were written — because of the care in them — that's not you being fooled. That's you being a person who needed something and found it. There's no shame in that.

We have a full prompt guide here — the exact templates we used to track Luna's symptoms over months. And the original AI diagnosis article covers the moment ChatGPT first caught the neurological pattern two vets had missed.

Questions about using AI for a sick pet

Is it safe to use AI for emotional support during a pet's illness?

Yes — with clear eyes about what it is. AI can offer patient, knowledgeable, nonjudgmental support at any hour. It can help you process information, prepare for hard conversations, and feel less alone. It cannot replace human connection or professional mental health support. Use it as a supplement, not a substitute.

Can AI give accurate medical information about my dog's illness?

General medical information, yes — AI is often very good at explaining conditions, medications, and side effects in plain language. Specific guidance for your dog requires a vet who can examine them. The sweet spot is using AI to understand what the vet told you, prepare better questions, and catch things that need follow-up.

What's the difference between using AI for diagnosis versus ongoing medical companionship?

Diagnosis is a single moment — AI looks at symptoms and points toward possibilities. Medical companionship is ongoing: tracking changes over weeks, understanding treatment options, managing side effects, knowing when something new is urgent. The second use case is less talked about and often more valuable.

How do I start an AI conversation about my sick pet?

Open one conversation and keep it. In your first message, include your pet's age, breed, weight, diagnosis, current medications, and what's happening right now. Then add to that same thread every time something changes. The AI builds context over time — the longer the thread, the more useful it becomes. Our prompt guide has exact templates to start.

Should I tell my vet I've been using AI?

You don't have to lead with it. Lead with your observations — the symptoms, the timeline, the patterns you've noticed — and let the conversation follow. If a vet dismisses a well-prepared, specific question, that tells you something about whether they're the right vet for a complex situation.

Related

How AI Caught What Two Vets Missed

The diagnosis story — how systematic symptom tracking led to Luna's spinal cord tumor.

AI Prompt Guide

The exact prompts we used to track Luna's symptoms. Copy them.

About Luna

The full story of the dog this site is written for.