# AI Assistant Setup
This page explains how users and agent builders should expose `react-mnemonic` to coding assistants.
## Canonical Source
The canonical AI-oriented source is this docs section:
- `/docs/ai`
- `/docs/ai/invariants`
- `/docs/ai/decision-matrix`
- `/docs/ai/recipes`
- `/docs/ai/anti-patterns`
Everything else is a companion surface:
- `/llms.txt` for compact retrieval
- `/llms-full.txt` for long-form export
- `/ai-contract.json` for machine-readable summaries
- `context7.json` for Context7 library-owner indexing
- repository instruction packs for Codex, Claude Code, Cursor, and Copilot
- `.devin/wiki.json` for DeepWiki prioritization
## Generated Instruction Packs
The repo also ships first-party instruction packs generated from the canonical AI docs:
- `AGENTS.md`
- `CLAUDE.md`
- `.claude/rules/react-mnemonic-persistence.md`
- `.claude/rules/durable-state-checklist.md`
- `.cursor/rules/react-mnemonic-persistence.mdc`
- `.cursor/rules/react-mnemonic-docs.mdc`
- `.github/copilot-instructions.md`
- `.github/instructions/react-mnemonic-persistence.instructions.md`
- `.github/instructions/react-mnemonic-docs.instructions.md`
Those files are projections over the same persistence contract rather than independent sources of truth.
## Lowest-Friction Retrieval
When an assistant has only HTTP or search access, give it these URLs first:

- https://thirtytwobits.github.io/react-mnemonic/llms.txt
- https://thirtytwobits.github.io/react-mnemonic/llms-full.txt

That ordering keeps the first context window compact while still leaving a long-form export available when the task is more complex.
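The compact-first ordering can be sketched as a small TypeScript helper; the function name and the boolean flag are illustrative, not part of `react-mnemonic`:

```typescript
// Illustrative helper (not a react-mnemonic API): choose the retrieval
// surface that keeps the first context window compact.
type RetrievalSurface = "/llms.txt" | "/llms-full.txt";

function pickSurface(needsLongForm: boolean): RetrievalSurface {
  // Default to the compact export; escalate to the long-form export
  // only when the task is complex enough to need it.
  return needsLongForm ? "/llms-full.txt" : "/llms.txt";
}
```

An assistant harness would call `pickSurface(false)` on a first pass and retry with `pickSurface(true)` only if the compact export lacks the needed detail.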
## DeepWiki
The repo ships `.devin/wiki.json` to steer DeepWiki toward the files that matter most for correct persistence behavior:

- the canonical AI docs
- `README.md`
- the public export surface in `src/index.ts`
- the hook, provider, and type definitions that define runtime semantics
- the deeper guides for migrations, SSR, clearable values, and custom storage
If you re-index the repository, keep that file up to date with any future source-of-truth moves.
## Validated MCP-Friendly Path
The lowest-friction MCP setup is exposing this repository through a filesystem-capable MCP server or any equivalent local-docs MCP layer.
Mount these paths:
- `website/docs/ai/index.md`
- `website/docs/ai/invariants.md`
- `website/docs/ai/decision-matrix.md`
- `website/docs/ai/recipes.md`
- `website/docs/ai/anti-patterns.md`
- `website/static/llms.txt`
- `website/static/llms-full.txt`
- `website/static/ai-contract.json`
- `src/Mnemonic/use.ts`
- `src/Mnemonic/provider.tsx`
- `src/Mnemonic/types.ts`
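As an illustration, a filesystem-capable MCP server can expose the repository with a client config along these lines; the server package and the local clone path are assumptions based on the common `@modelcontextprotocol/server-filesystem` convention, not something this repo ships:

```json
{
  "mcpServers": {
    "react-mnemonic-docs": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/react-mnemonic"
      ]
    }
  }
}
```

Granting the repository root covers all of the mounted paths above; a narrower server could expose only the listed files.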
Why this path is validated:
- the website build regenerates `llms.txt`, `llms-full.txt`, and `ai-contract.json` before Docusaurus runs
- the generation step fails fast if any canonical AI source file is missing
- the docs build already fails on broken links, so the published AI entry points stay connected to the site navigation
This gives an MCP client both the compact docs surfaces and the source files that define the contract underneath them.
## Hosted Retrieval Path
If your assistant can fetch remote files, use the published GitHub Pages URLs:
- https://thirtytwobits.github.io/react-mnemonic/llms.txt
- https://thirtytwobits.github.io/react-mnemonic/llms-full.txt
- https://thirtytwobits.github.io/react-mnemonic/ai-contract.json
- https://thirtytwobits.github.io/react-mnemonic/docs/1.3.0
Use the hosted path when you want versioned, read-only retrieval without mounting the repository locally.
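As a sketch, the hosted surfaces can all be derived from one base URL; the helper below is hypothetical and simply concatenates paths:

```typescript
// Hypothetical helper: build hosted retrieval URLs from the GitHub Pages base.
const PAGES_BASE = "https://thirtytwobits.github.io/react-mnemonic";

type HostedSurface = "llms.txt" | "llms-full.txt" | "ai-contract.json";

function hostedUrl(surface: HostedSurface): string {
  return `${PAGES_BASE}/${surface}`;
}

// Example (assumes a fetch-capable runtime):
// const contract = await fetch(hostedUrl("ai-contract.json")).then((r) => r.json());
```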
## Versioned Docs Behavior
The site now uses Docusaurus versioned docs:

- the latest released docs live at `/docs`
- unreleased work from `main` lives at `/docs/next`
- released snapshots live at `/docs/<version>`
Example released path:
https://thirtytwobits.github.io/react-mnemonic/docs/1.2.0-beta1
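The routing above can be summarized in a small sketch; `docsPath` is a hypothetical helper, not something the site exports:

```typescript
// Hypothetical helper mirroring the Docusaurus routing described above.
function docsPath(version?: string): string {
  if (version === undefined) return "/docs";   // latest released docs
  if (version === "next") return "/docs/next"; // unreleased work from main
  return `/docs/${version}`;                   // released snapshot
}
```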
Current AI retrieval surfaces remain latest-only:
- `/llms.txt`
- `/llms-full.txt`
- `/ai-contract.json`
That means version-specific AI retrieval should currently use the versioned docs pages themselves rather than the root `llms` exports. Version-specific `llms` surfaces can be added later if they become necessary.
## Maintenance
Keep the surfaces aligned this way:
- edit the canonical prose in `website/docs/ai/*`
- regenerate companion assets and instruction packs via `npm run docs:ai`
- cut new released doc snapshots via `npm run docs:version -- <version>` when needed
- use `npm run ai:check` in CI or before commits to catch drift in generated AI artifacts and instruction packs
- keep `context7.json` aligned with the same high-signal docs and public API entry points
- keep the DeepWiki priorities in `.devin/wiki.json` aligned with the same sources
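The CI drift check can be wired up roughly like this; the workflow name, trigger, and Node version are illustrative, and only the `npm run ai:check` script comes from this page:

```yaml
# Illustrative GitHub Actions workflow; the repo's actual CI may differ.
name: ai-drift-check
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fails if generated AI artifacts or instruction packs have drifted
      # from the canonical sources in website/docs/ai/*.
      - run: npm run ai:check
```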
The goal is simple: agents should load one canonical contract and then choose the right persistence behavior without inventing missing semantics.