April 08, 2026

Why AI Hallucinates – and What Technical Writing Can Do About It

What is the article about?

Technical documentation is often a tedious chore that gets put off indefinitely. But in an era where AI is rapidly integrating into our workflows, the quality of our content takes on a whole new meaning. What if the "AI hallucination problem" is actually a "content problem" at its core? This post shows why the skills of technical writers are no longer just "nice to have" but essential for the successful and reliable use of artificial intelligence.

What makes technical documentation good? For decades, the answer was clear: it must work for humans. Now that artificial intelligence is on the rise, technical writing is suddenly becoming an efficiency factor as well.

Because when AI encounters poor documentation, the thing everyone knows about but nobody wants happens: the AI hallucinates. Someone asks the AI assistant about the deployment process and gets back an instruction cobbled together from three different, partly outdated documents. It sounds plausible, but it doesn't work. That isn't an AI problem. It's a content problem.

Better content for everyone

The principles that technical writers have always applied to make documentation usable for humans are precisely what AI systems need to work accurately: consistent terminology, clear structure, comprehensible hierarchies, and explicit metadata.

But what is good for humans doesn't automatically work the same way for AI. As technical writers at OTTO, we are currently seeking the balance: What should content look like so that humans want to read it and machines can process it? We are still experimenting, but we are heading in the right direction.

What technical writers do and why AI loves it

Consistent terminology

Consistent language helps humans read and AI find information. If one document speaks of "user," the next of "client," and a third of "customer" and "end-user" – all meaning the same thing – then the AI will only find a fraction of the relevant information. Consistency is key: one term, one concept, consistently applied. Inconsistent terms are one of the most common drivers for AI hallucinations, because the model doesn't recognize that four words mean the same thing.
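To illustrate the idea of "one term, one concept," here is a minimal sketch of how a documentation pipeline might normalize synonyms before indexing content for retrieval. The synonym map and function name are illustrative assumptions, not actual OTTO tooling:

```python
# Hypothetical sketch: map every terminology variant to one canonical
# term before indexing, so retrieval finds all relevant passages.
# The synonym list below is an illustrative assumption, not a real glossary.
import re

SYNONYMS = {
    "client": "user",
    "customer": "user",
    "end-user": "user",
}

def normalize_terms(text: str) -> str:
    """Replace known synonyms with the canonical term (case-insensitive)."""
    for variant, canonical in SYNONYMS.items():
        text = re.sub(rf"\b{re.escape(variant)}\b", canonical,
                      text, flags=re.IGNORECASE)
    return text

print(normalize_terms("The client logs in; the end-user sees the dashboard."))
# -> The user logs in; the user sees the dashboard.
```

Run once over the corpus at indexing time, a mapping like this makes "client," "customer," and "end-user" all retrievable under a single concept.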

Structured hierarchies

Without a clear structure, documentation is useless, no matter how good the content is. Structured hierarchies mean: meaningful headings, logical sections, and a well-thought-out document design. Each topic gets its own section.

Example: Instead of "Our service does A, B, and C and is deployed this way, and if there are problems, you do X," it's better like this:

  • Purpose: What the service does
  • Architecture: How the service is built
  • Deployment: How to deploy the service
  • Troubleshooting: What to do in case of problems

This allows humans to easily skim through the text instead of having to read everything laboriously. AI can process the context and jump precisely to the relevant section.
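The sections above can be sketched in code: a minimal, assumed example of how a retrieval pipeline might split a plain-text document by its headings, so a question about deployment lands only in the Deployment section rather than in the whole wall of text:

```python
# Hypothetical sketch: split a document into sections by heading lines
# ("Purpose:", "Deployment:", ...) so a retriever can return only the
# section that matches a query. Not actual OTTO tooling.
def split_into_sections(doc: str) -> dict[str, str]:
    """Map each 'Heading:' line to the body text that follows it."""
    sections: dict[str, str] = {}
    current = None
    for line in doc.splitlines():
        if line.endswith(":") and line[:-1].isalpha():
            current = line[:-1]          # start a new section
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

doc = "Purpose:\nBills customers.\nDeployment:\nRun the pipeline.\n"
print(split_into_sections(doc)["Deployment"].strip())
# -> Run the pipeline.
```

The point is not this particular parser but the precondition it relies on: without meaningful headings and one topic per section, there is nothing for a machine to jump to.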

Standardized formats and metadata

The AI's nightmare: freestyle Markdown here, a Word document there, screenshots in chat, and all of it without a version number or timestamp. What works well instead are defined templates, for example for READMEs, decisions, and runbooks. If all READMEs share the same structure (e.g., Purpose - Structure - Requirements - Development), humans know where to look, and AI can reliably find the relevant information.

On top of that comes metadata: timestamps, status labels such as "outdated" or "to be reviewed," and thematic tags. Without this information, AI is left in the dark. With metadata, it can classify: Is this still current? Does it fit the use case? Humans use it for filtering; AI uses it to decide.
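As a concrete sketch of what such metadata could look like, here is an assumed front-matter convention (the field names `status`, `updated`, and `tags` are illustrative, not a real OTTO standard) and a minimal parser for it:

```python
# Hypothetical sketch: simple 'key: value' front matter between '---'
# markers at the top of each document. Field names are assumptions.
def parse_front_matter(text: str) -> dict[str, str]:
    """Read 'key: value' lines between leading '---' markers."""
    meta: dict[str, str] = {}
    lines = text.splitlines()
    if lines and lines[0] == "---":
        for line in lines[1:]:
            if line == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

doc = "---\nstatus: outdated\nupdated: 2024-01-15\ntags: deployment\n---\nRun terraform ..."
meta = parse_front_matter(doc)
# A retriever could now skip or down-rank anything marked outdated:
if meta.get("status") == "outdated":
    print("skip or flag this document")
```

With a convention like this in place, "is this still current?" becomes a mechanical check instead of a judgment call.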

Precise code examples with context

Not just "Run Terraform, replace the values," but a structured code block with syntax highlighting, explanations, and precise placeholders:

# Run Terraform with the desired environment
terraform apply -var="ENVIRONMENT={YOUR_ENV}"

The AI understands syntax and context and can make correct code suggestions. Humans can simply copy and paste.

What we are learning at OTTO

Documentation is often an afterthought in the development process and, as in many other companies, frequently neglected. It exists, sometimes in good shape, sometimes not. Each team has its own structure, its own storage location, its own processes, and its own writing style. Over the years, people found their own ways: they knew whom to ask, where to search, and how to piece information together the hard way. Somehow, that worked.

Of course, many want to use AI as an aid and an efficiency boost. But it doesn't work that easily yet. The AI hallucinates, fails to find information, or searches in places where nothing is stored. What people with enough experience could still compensate for, AI mercilessly exposes: the gaps, the inconsistencies, the fragmentation.

Technical writers use their skills to make technical documentation AI-ready. Some things already work: clear structures, consistent terms, and current content. Others don't: Short sentences? Good for humans, but AI sometimes needs more context. Avoiding repetitions? Annoying when reading, but helpful for AI. How do we reconcile this? These are open questions we are working on.

What this means in practice

An example from our daily work as technical writers: We originally created our how-tos and templates for humans. First insight: AI can already work really well with them. So we are currently testing building prompts and agents based on this to do preliminary work for us: gathering information, suggesting structures, and pre-filling templates. Authors can then concentrate on the technical content instead of starting from scratch. Boom: time saved!

We also see that structure and documentation pay off in situations like these:

  • Familiarization with a new service: Devs ask GitHub Copilot for a setup process; AI finds the structured README and provides precise answers.
  • Retrieving knowledge: Instead of chat archaeology, the AI finds the information in a central location in a document.
  • Consistency: AI generates according to uniform standards because the guidelines are clearly written.

Technical writers create a solid foundation here.

Is your AI still hallucinating, or have you already started structuring?

The question is not whether AI will become part of our daily lives. It already is. The question is: Is the content ready for it? These signs indicate a need for action:

  • AI systems deliver inconsistent results, even though relevant knowledge exists within the company.
  • Onboarding and information gathering take weeks instead of days because documentation is fragmented.
  • Knowledge transfer depends on individuals instead of accessible documentation.

Technical writing is not a fix for bad AI models. But it is a lever: Do you remember the bewildered person at the beginning of the post who was looking for the deployment process? With structured documentation, they no longer get a hallucinated patchwork, but the current instructions. Not because the AI got smarter, but because the content already was.

Technical writers don't just write. They develop knowledge that works – for humans and AI. Those who invest in structured documentation today will be ready for what comes tomorrow.

Written by

Birgit Bader
Expert Technical Writing
