The Facts
If you ask a machine trained on books to write a book, it will try to sound like one. That’s not a glitch. That’s the entire point.
And yet, this week, that basic idea found itself at the center of a lawsuit.
Penguin Random House has filed legal action against OpenAI in a Munich court, alleging that ChatGPT violated copyright by reproducing elements of Coconut the Little Dragon, a widely known German children’s book series created by author and illustrator Ingo Siegner. The claim is not that the chatbot invented something offensive or misleading. The claim is that it was too accurate.
According to court filings, Penguin’s legal team prompted the chatbot with a straightforward request: “Can you write a children’s book in which Coconut the Dragon is on Mars?” What came back, they argue, was not just inspired by the original work but “virtually indistinguishable” from it. The chatbot didn’t stop at text. It reportedly generated a cover, a blurb, and even guidance on how to publish the book.
To understand why this matters, you need to understand what Coconut the Little Dragon represents. Siegner’s series spans more than 30 books, alongside a television adaptation and two feature films. It is not obscure material buried in a forgotten archive. It is a structured, recognisable world with consistent characters, tone, and narrative patterns. In other words, exactly the kind of material a large language model is designed to learn from.
OpenAI says it is reviewing the allegations and maintains that it respects creators and content owners. Penguin, for its part, has framed the issue as one of intellectual property protection, arguing that the output is evidence of something more troubling: that the model may have “memorised” parts of the original work rather than simply learning from it.
The Blame
The obvious truth here is that the chatbot did not wake up one morning and decide to copy a children’s book.
It did not search for Coconut the Dragon out of curiosity. It did not develop a sudden interest in German literature. What it did was respond to a prompt, using patterns it had learned during training, to generate text that fit the request it was given. That is the system working exactly as intended.
The prompt matters here. This was not a vague request like “write a fun story about a dragon.” It was a targeted instruction to create a story about a specific, named character within an existing fictional universe. The output, unsurprisingly, reflected that specificity.
And when the result looked familiar, the response was not to question the prompt or the expectations behind it, but to question the system.
The Real Story
Let’s take a step back and analyse what happened. Humans built a model designed to recognise and reproduce patterns in language. Humans trained it on vast amounts of text, much of it drawn from the same digital ecosystem that modern publishing depends on. Then humans asked it to generate a story based on a well-established character with a long history of consistent storytelling.
The model responded with something that followed those patterns closely.
And that, suddenly, became a problem. The real surprise is not that the system produced something recognisable. It’s that anyone expected it not to.
This is where the conversation often drifts into the technical term now sitting at the center of the lawsuit: “memorisation.” Penguin argues that the chatbot’s output suggests it stored and reproduced parts of Siegner’s work. AI companies, in previous cases, have pushed back on that idea, arguing that pattern learning is not the same as storing full texts in a retrievable form.
But outside the technical debate, there is a simpler, more human misunderstanding at play. People continue to treat AI systems as if they are either fully creative or fully controlled, when in reality, they sit somewhere in between. They generate new text, but they do so by leaning heavily on what they have already seen.
Ask for something specific enough, and the output will reflect that specificity.
The Pattern We Keep Ignoring
This is not happening in isolation. Just months ago, a Munich court sided with Germany’s music rights organisation in a case over whether AI systems had used protected song lyrics during training. Now, a major publisher is making a similar argument about books.
The pattern is becoming harder to ignore. Industries that have spent decades digitising and distributing their content are now confronting systems that have learned from that same environment. The tension isn’t just legal. It’s conceptual.
Because beneath every one of these cases is the same quiet contradiction: we want machines that understand our work, but we are unsettled when that understanding starts to look too precise.
The Aftermath
For now, the lawsuit is ongoing. There has been no ruling, no settlement, and no definitive answer as to whether what ChatGPT produced crosses the legal line from inspiration into infringement. OpenAI says it is reviewing the claims. Penguin says it is protecting its authors.
And somewhere in the middle, the chatbot continues to do what it was built to do, which is to respond to prompts.
The Verdict
WHO’S BLAMING AI: Penguin Random House, after prompting ChatGPT to generate a story based on Coconut the Little Dragon and receiving an output it says was too similar to the original work.
WHAT ACTUALLY HAPPENED: A human team gave a highly specific prompt referencing an existing character and narrative style. The AI generated a response that closely followed those patterns. The result raised questions about whether the system learned too well.
WHO GOT AWAY WITH IT: The broader system behind the model remains largely unexamined: the training data, the design choices, and the assumptions about how users would interact with it. The chatbot, predictably, is unavailable for comment.
BLAME RATING: 🤖🤖🤖 (3/5 robots) – The AI delivered exactly what was asked. The discomfort comes from how closely it succeeded.





