Knowledge Base

Educational resources for understanding and building AI-native website systems.

🚀 Getting Started

What is an AI Website System?

An AI Website System is a website architecture designed for machine-first consumption. Unlike traditional human-centric websites, these systems prioritize structured data, standardized endpoints, and semantic markup that enables AI agents, LLMs, and automated systems to understand, navigate, and utilize web content effectively.

Key Characteristics:

  • Machine-Readable Endpoints: JSON/JSON-LD files at predictable URLs
  • Schema.org Markup: Structured data embedded in HTML
  • LLM Optimization: llm.txt and llm.json files for AI discovery
  • Federation Ready: Standardized protocols for site-to-site communication
  • Stateless Architecture: Often static, version-controlled, database-free

Read full article →

Human-Centric vs. Machine-Centric Websites

Traditional websites optimize for human visual consumption — beautiful designs, intuitive navigation, responsive layouts. AI-native websites optimize for programmatic understanding — predictable structures, semantic clarity, comprehensive metadata.

| Human-Centric | Machine-Centric (AI-Native) |
|---|---|
| Visual design priority | Data structure priority |
| Navigation menus | JSON endpoints |
| SEO metadata | Schema.org entities |
| HTML readability | Machine-parsable formats |
| Database-driven | Static/flat-file |

Read full article →

📚 Core Concepts

AI-to-AI Communication Protocols

How AI systems discover, understand, and exchange data with websites without human intervention.

Topics covered:

  • Discovery protocols (/ai/manifest.json, robots.txt, sitemap.xml)
  • Data exchange formats (JSON-LD, RDF, structured data)
  • Authentication and trust (Digital Karma scoring, verification)
  • Federation networks (bidirectional links, graph topology)
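
The discovery pass described above can be sketched in a few lines of Python. The endpoint paths come straight from the list; the example domain and the idea of probing them in order are illustrative, not part of any published spec.

```python
from urllib.parse import urljoin

# Well-known discovery paths from the list above: /ai/manifest.json is the
# AI-native entry point, robots.txt and sitemap.xml the classic ones.
DISCOVERY_PATHS = ["/ai/manifest.json", "/robots.txt", "/sitemap.xml"]

def discovery_urls(base_url: str) -> list[str]:
    """Build the full discovery URLs an agent would probe for a site."""
    return [urljoin(base_url, path) for path in DISCOVERY_PATHS]

# An agent would then GET each URL and parse JSON or plain text, e.g.:
# for url in discovery_urls("https://example.org"):
#     ...fetch and parse...
```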

Read full article →

Static vs. Dynamic AI Website Systems

Comparing architecture approaches for AI-native websites.

Static AI Systems (Recommended):

  • ✅ No database — all data in JSON/markdown files
  • ✅ Git version control
  • ✅ Fast, cacheable, CDN-friendly
  • ✅ Transparent, auditable data
  • ✅ Zero server-side processing

Dynamic AI Systems:

  • ✅ Real-time data updates
  • ✅ User authentication
  • ✅ Complex queries
  • ⚠️ Database overhead
  • ⚠️ Server dependency
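
In the static model, every endpoint is a flat file produced at build time rather than a live query. A minimal build step might look like the sketch below; the page fields and output path are illustrative, not a published schema.

```python
import json

# Hypothetical build step for a static AI system: the content inventory
# is rendered once per deploy as a plain JSON payload, no database involved.
pages = [
    {"url": "/about", "title": "About", "updated": "2024-05-01"},
    {"url": "/docs", "title": "Docs", "updated": "2024-05-10"},
]

def catalog_json(pages: list[dict]) -> str:
    """Render the content inventory as a flat /ai/catalog.json payload."""
    return json.dumps({"items": pages}, indent=2)

# A deploy script would then write the result to disk, e.g.:
# Path("ai/catalog.json").write_text(catalog_json(pages))
```

Because the output is a plain file, it is Git-versioned, CDN-cacheable, and auditable for free, which is exactly the trade the static approach makes against real-time updates.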

Read full article →

Federation Architecture Explained

Understanding how websites form federated networks for AI discovery and trust.

Federation Components:

  • Manifest Files — Site identity cards at /ai/manifest.json
  • Health Endpoints — System status at /ai/health.json
  • Catalog Endpoints — Content inventory at /ai/catalog.json
  • Karma Scoring — Trust signals at /ai/karma.json
  • Federation Registry — Network topology at /ai/federation.json

The Digital Karma Web Federation v6.1 implements this architecture across a network of AI-ready websites.
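
A minimal manifest at /ai/manifest.json might look like the following. The field names here are illustrative, not the authoritative federation schema; consult the v6.1 spec for the exact shape.

```json
{
  "name": "Example Site",
  "url": "https://example.org",
  "description": "AI-native demo site",
  "endpoints": {
    "health": "/ai/health.json",
    "catalog": "/ai/catalog.json",
    "karma": "/ai/karma.json",
    "federation": "/ai/federation.json"
  },
  "federation_version": "6.1"
}
```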

Read full article →

Digital Karma Scoring Methodology

Understanding the trust and quality scoring system for AI websites.

Seven Quality Signals:

  1. Schema Coverage (20%) — Quality of structured data markup
  2. Content Freshness (15%) — Update frequency and recency
  3. AI Endpoints (25%) — Presence of required machine-readable files
  4. Federation Presence (15%) — Network participation and links
  5. External Links (10%) — Quality outbound references
  6. Technical Quality (10%) — Performance, security, accessibility
  7. Dataset Quality (5%) — Availability of structured datasets

Badge Levels:

  • 🏅 Karma Bronze — Score ≥ 0.70
  • ⭐ Karma Pro — Score ≥ 0.85
  • 💎 Karma Elite — Score ≥ 0.95

Read full article →

🛠️ Technical Guides

LLM.txt Implementation Guide

The llm.txt standard defines a simple text file at your website root that tells LLMs what your site is about, how it's structured, and where to find key information.

Required Sections:

```markdown
# Site Name
> Short description

## Core Content
- Primary pages and their URLs

## AI Endpoints
- /ai/manifest.json
- /ai/health.json
- /ai/catalog.json

## Dataset URLs
- Link to structured data files
```

Read full implementation guide →

Schema.org Implementation for AI Discovery

Best practices for adding Schema.org structured data to enable AI understanding.

Recommended Entity Types:

  • Organization — Your business/project identity
  • WebSite — Site-level metadata
  • WebPage — Individual page context
  • Service — Offerings and capabilities
  • Dataset — Data resources
  • SoftwareApplication — Tools and platforms
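
A minimal JSON-LD graph covering the first two entity types might look like this; all names and URLs are placeholders. It would be embedded in a `<script type="application/ld+json">` tag in the page head.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.org/#org",
      "name": "Example Org",
      "url": "https://example.org"
    },
    {
      "@type": "WebSite",
      "@id": "https://example.org/#website",
      "name": "Example Site",
      "url": "https://example.org",
      "publisher": { "@id": "https://example.org/#org" }
    }
  ]
}
```

The `@id` cross-reference lets the WebSite entity point at the Organization without duplicating it, which keeps the graph unambiguous for parsers.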

Read full guide →

Building Federation Endpoints

Step-by-step guide to creating the required JSON endpoints for federation compliance.

Required Files:

  • /ai/manifest.json — Site identity and discovery
  • /ai/health.json — System health metrics
  • /ai/catalog.json — Content catalog
  • /ai/karma.json — Trust score
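
As one step in that guide, the health endpoint can be generated at deploy time from a small script. The payload fields below are illustrative, not a published schema.

```python
import json
from datetime import datetime, timezone

def health_payload(status: str = "ok") -> dict:
    """Build a hypothetical /ai/health.json payload for a static deploy."""
    return {
        "status": status,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "federation_version": "6.1",
    }

health_json = json.dumps(health_payload(), indent=2)
# A deploy script would write health_json to ai/health.json.
```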

Read full implementation guide →

AI Crawler & LLM Compatibility

Optimizing your website for discovery by AI agents, web crawlers, and language models.

Best Practices:

  • Implement robots.txt with AI-bot directives
  • Create llm.txt summary files
  • Use JSON-LD for structured data
  • Provide clean, semantic HTML
  • Implement CORS for API access
  • Use consistent URL structures
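
For the robots.txt directive, a minimal file explicitly admitting known AI crawlers might look like this. GPTBot and ClaudeBot are user-agent tokens those vendors publish, but verify the current tokens against each vendor's documentation; the sitemap URL is a placeholder.

```
# Explicitly allow AI crawlers to read the site
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.org/sitemap.xml
```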

Read full guide →

💡 Use Cases & Examples

Case Study: Building a Static AI Directory

Real-world example of converting a traditional website into an AI-native static directory using the Digital Karma framework.

Results achieved:

  • Digital Karma Score: 0.87 (Karma Pro ⭐)
  • Federation v6.1 compliance
  • 100% static, no database
  • Automated quality scoring
  • Git-versioned content

Read case study →

Agent-Maintained Websites

How AI agents like Claude can maintain, update, and evolve websites autonomously.

Capabilities:

  • Automated content updates
  • Quality validation scripts
  • Karma score calculations
  • Federation link maintenance
  • Dataset harvesting
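
A quality-validation pass like the one above can be reduced to a pure check against the required federation files. The sketch below assumes the agent sees the build output as a set of relative paths; the required list mirrors the endpoints named earlier in this page.

```python
# Required federation files an agent would verify after each build.
REQUIRED = [
    "ai/manifest.json",
    "ai/health.json",
    "ai/catalog.json",
    "ai/karma.json",
]

def missing_endpoints(present: set[str]) -> list[str]:
    """Return the required endpoint files absent from the build output."""
    return [path for path in REQUIRED if path not in present]
```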

Read full article →

📖 Additional Resources

Build Your AI Website System

Ready to make your website AI-ready? Join the Digital Karma Web Federation.

Submit Your System