Woman warns people not to blindly follow ChatGPT after its advice proved nearly fatal for her best friend

Published on Feb 01, 2026 at 6:06 PM (UTC+4)
by Henry Kelsall

Last updated on Jan 30, 2026 at 5:37 PM (UTC+4)
Edited by Amelia Jean Hershman-Jones

A woman has issued a warning about blindly following ChatGPT's advice, after the chatbot's answer to a simple question proved nearly fatal for her friend.

YouTuber Kristi encouraged others to check the information proffered by the AI service after her friend was nearly poisoned when ChatGPT identified a plant incorrectly.

Kristi took to Instagram (rawbeautykristi) to voice the concerns she had about AI, and to try to warn her followers that its advice isn’t always as reliable as it seems.

ChatGPT, along with other services like Google Gemini, has become incredibly popular recently – but as an emerging and evolving technology, its teething issues can have serious consequences.


What advice did ChatGPT give Kristi’s friend?

The advice given was related to a plant that Kristi’s friend had in her backyard.

She had sent the chatbot a photo of the plant, asking: “What is this?”

ChatGPT responded, telling her that it was carrot foliage.

Screenshots provided by Kristi even show the chatbot explaining why it thought it was correct.

This included the plant’s ‘finely divided and feathery leaves.’

ChatGPT described these as classic features of carrot tops.

It then listed lookalikes of the plant for her friend, one of which was poison hemlock.

Kristi’s friend asked ChatGPT if it was poison hemlock, but she was met with reassurance that it was just carrot foliage.


The AI chatbot got it badly wrong

Per the Cleveland Clinic, poison hemlock is a plant that can kill you if ingested, and merely touching it can cause a painful rash in some people.

The AI said it wasn't hemlock because the photo 'did not show hollow stems with purple blotching'.

Incredibly, the photo actually did show exactly that.

Kristi then ran the photo through Google's own tool, Google Lens, to see what it made of the plant.

Google came back by saying it was indeed poison hemlock.

Her friend then used a different phone to run the image through ChatGPT, which this time said it was poisonous.

Kristi was grateful that her friend had asked her for advice, rather than blindly following the AI service.

Had she not done so, the results could have been fatal.

In an Instagram video, Kristi said: “This is a warning to you that ChatGPT and other large language models and any other AI, they are not your friend, they are not to be trusted.”

On this occasion, ChatGPT certainly got things very wrong – users beware.


Henry joined the Supercar Blondie team in February 2025, and since then has covered a wide array of topics ranging from EVs, American barn finds, and the odd Cold War jet. He’s combined his passion for cars with his keen interest in motorsport and his side hustle as a volunteer steam locomotive fireman at a leading heritage steam railway in England.