
Here is what it costs to find out whether you need a travel visa: zero dollars, roughly ninety seconds, and one visit to a government website. Here is what it costs to skip that step and ask ChatGPT instead: one missed flight, one tearful TikTok viewed by millions, and the permanent, searchable knowledge that you outsourced your boarding pass to a chatbot.
Mery Caldass, a Spanish influencer, and her partner learned this the hard way. The couple were denied boarding on a flight from Spain to Puerto Rico because they didn’t have an ESTA—the Electronic System for Travel Authorization that Visa Waiver Program travelers, Spanish citizens included, need to enter the United States and its territories. It costs about 18 euros. It’s valid for two years. It’s listed on every government travel advisory page in the EU.
They didn’t check any of those pages. They asked ChatGPT.
The Blame
In a TikTok that promptly went viral, Caldass appeared in tears at the airport, accusing the AI of lying to her. According to her account, ChatGPT told her no specific documents were required. The chatbot, it seems, either misunderstood the question or hallucinated an answer—which is what large language models do, routinely and without remorse, because they are text-prediction engines, not consular officials.
The internet responded exactly the way you’d expect. Comment sections filled with variations of the same point: for matters involving international borders, customs enforcement, and the ability to physically enter another country, maybe check with the government instead of the autocomplete. One widely shared reply put it simply: for this kind of thing, ask official institutions first.
The Real Story
Let’s be clear about what happened here. Two adults planning international travel to a U.S. territory decided that their entire pre-departure research would consist of typing a question into a chatbot. Not the Spanish Ministry of Foreign Affairs website. Not the U.S. Customs and Border Protection page, which has the ESTA application form right on the homepage. Not even a basic search engine query, which would have surfaced the answer in the first three results.
ChatGPT gave a wrong answer. That’s a real problem—AI hallucination in high-stakes contexts is genuinely dangerous, and no one should pretend otherwise. But the wrong answer only mattered because two humans decided a chatbot was a substitute for a government database. That’s not an AI failure. That’s a judgment failure.
And then—and this is the part that earns them a spot on blamingAI—instead of saying “we should have checked,” they filmed themselves crying and told millions of followers that the AI lied to them. The machine became the villain. The humans who couldn’t be bothered to visit a .gov website became the victims.
The Aftermath
The couple eventually made it to Puerto Rico. Subsequent TikToks showed them at a Bad Bunny concert in San Juan, seemingly recovered from their ordeal. The ESTA, presumably, was obtained. The tears dried. The content machine rolled on.
The video, meanwhile, lives forever—a monument to the fastest-growing reflex of the 2020s: something went wrong, I used AI, therefore AI is to blame. Not “I used a tool incorrectly.” Not “I didn’t verify critical information.” Just: the robot lied.
The Verdict
WHO’S BLAMING AI: Mery Caldass (@merycaldass), Spanish influencer, and her partner. On camera, in tears, to millions of TikTok viewers.

WHAT ACTUALLY HAPPENED: Two travelers asked ChatGPT whether they needed documents to fly from Spain to Puerto Rico (a U.S. territory). ChatGPT gave an incorrect answer. They did not verify it with any official source. They were denied boarding for lacking an ESTA, an €18 authorization available on the CBP website.

WHO GOT AWAY WITH IT: The couple. They reframed personal negligence as AI deception, went viral, gained followers, and made it to Puerto Rico anyway. The chatbot, as usual, was unavailable for comment.

BLAME RATING: ✈️✈️✈️✈️ (4/5 planes). Classic deflection. Real AI error, but the human failure came first: who plans international travel without checking a single official source?

