What if tomorrow's SEO goes through llms.txt?

Search engines no longer simply send users to sources.
They generate answers.
And they decide on their own what deserves to be cited.

That changes everything.

The Problem

Since the arrival of Google's AI Mode, SEO no longer plays out only in the classic SERPs. It plays out in the generated answers, summaries, and extracts that AIs produce without the user ever clicking through.

robots.txt tells crawlers what not to read.
llms.txt does the opposite.

Here's what I want you to read, understand, and analyze.

llms.txt is a file placed at the root of the site that explicitly lists the content to highlight to language models. You can also add an llms-full.txt containing a complete Markdown version of each page, to give models even more context.
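For reference, the llms.txt proposal (llmstxt.org) defines the file as plain Markdown: an H1 with the site name, an optional blockquote summary, then H2 sections listing links. The sketch below is illustrative, not this site's actual file:

```markdown
# Example Site

> A short summary of what the site is about, written for machines as much as for humans.

## Blog

- [Post title](/posts/post-slug): one-line description of the page

## Optional

- [Changelog](/changelog): secondary content a model can skip
```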

What I Put in Place

On this site, I use both.

llms.txt lists the important pages.
llms-full.txt contains the full content of each page in Markdown.

llms.txt

```markdown
# Blog
- [Building a Composable Audio Player: My Journey in Web Audio Development](/posts/building-a-composable-audio-player)
- [What if tomorrow's SEO goes through llms.txt?](/posts/what-if-seo-ran-through-llms-txt)
```

No plugin, no dependency. Two static files. Total control over what you expose to AIs.
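Both files can be generated at build time from the Markdown sources. Here is a minimal sketch, assuming a hypothetical layout where posts live in `content/posts/` and the site is served from `public/` (both paths, and the title-extraction rule, are illustrative assumptions):

```python
from pathlib import Path

# Hypothetical layout: Markdown sources in content/posts/,
# generated files written to the site's public root.
CONTENT_DIR = Path("content/posts")
PUBLIC_DIR = Path("public")

def build_llms_files() -> None:
    posts = sorted(CONTENT_DIR.glob("*.md"))

    index_lines = ["# Blog"]   # llms.txt: an index of pages worth surfacing
    full_sections = []         # llms-full.txt: complete Markdown of each page

    for post in posts:
        text = post.read_text(encoding="utf-8")
        # Use the first level-1 heading as the title; fall back to the filename.
        title = next(
            (line[2:].strip() for line in text.splitlines() if line.startswith("# ")),
            post.stem,
        )
        index_lines.append(f"- [{title}](/posts/{post.stem})")
        full_sections.append(text.strip())

    PUBLIC_DIR.mkdir(exist_ok=True)
    (PUBLIC_DIR / "llms.txt").write_text(
        "\n".join(index_lines) + "\n", encoding="utf-8"
    )
    # Separate full pages with horizontal rules so models can tell them apart.
    (PUBLIC_DIR / "llms-full.txt").write_text(
        "\n\n---\n\n".join(full_sections) + "\n", encoding="utf-8"
    )

if __name__ == "__main__":
    build_llms_files()
```

Running this in the build step keeps both files in sync with the content, with no plugin or runtime dependency.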

Where Things Stand

There is no official recognition from Google yet. John Mueller has called it an "interesting experiment," comparing it to the keywords meta tag in its early days.

But Mintlify, FastHTML, Yoast, and LangChain are already paying attention.

llms.txt guarantees nothing in the short term.
But it is a good practice: explicit, simple, proactive.

If these files become standard in the coming months, we might as well master them now.