Dispatch · 2 min read

Building the Publishing Machine in Public

This website is not just a portfolio. It is an autonomous publishing operation — sitemaps, APIs, RSS feeds, and all.

Technology · Publishing

Not a Website. A Publishing Machine.

Most author websites are brochures. This one is an engine.

Behind the pages you see is an infrastructure designed to make the archive discoverable by humans, crawlable by search engines, and readable by AI:

For Humans

  • 68 full book readers with chapter navigation, reading progress, font controls, and dark mode
  • 1,194 book catalog pages with ISBN, word count, genre, and publication details
  • 204 daily pages publishing one passage per day
  • 15 revision comparisons showing how drafts evolve
  • 1,537-dot bibliography visualizing the entire book archive

For Search Engines

  • Advanced sitemap system with 5 sub-sitemaps covering 2,895 URLs
  • Schema markup on every page: Person, Book, Chapter, BreadcrumbList, FAQPage, WebSite with SearchAction
  • Canonical URLs, OG tags, and Twitter cards on every page
  • Programmatic SEO pages for every genre, year, and reading list
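As one illustration, the Book markup on a catalog page might look roughly like this JSON-LD sketch. The property values, title, and ISBN here are placeholders, not the site's real data:

```json
{
  "@context": "https://schema.org",
  "@type": "Book",
  "name": "Example Title",
  "author": { "@type": "Person", "name": "Inamdar" },
  "isbn": "978-0-000-00000-0",
  "genre": "Fiction",
  "datePublished": "2024"
}
```

A snippet like this goes in a `<script type="application/ld+json">` tag in the page head, alongside the canonical URL, OG tags, and Twitter cards.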

For AI & LLMs

  • /llms.txt — structured overview for language models
  • /ai.txt — crawling policy and content inventory
  • JSON API at /api/books.json, /api/author.json, /api/stats.json
  • 14+ data export formats: JSON, CSV, NDJSON, BibTeX, RIS, APA, MLA, Chicago, Harvard, CFF
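To give a feel for the export pipeline, here is a minimal sketch of turning one record from `/api/books.json` into a BibTeX entry. The field names (`title`, `author`, `year`, `isbn`) are assumptions for illustration, not the API's actual schema:

```javascript
// Sketch: format one book record as a BibTeX @book entry.
// NOTE: the record fields below are assumed, not the real /api/books.json schema.
function toBibtex(book) {
  // Derive a citation key from the title, e.g. "Example Title" -> "example-title"
  const key = book.title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return [
    `@book{${key},`,
    `  title  = {${book.title}},`,
    `  author = {${book.author}},`,
    `  year   = {${book.year}},`,
    `  isbn   = {${book.isbn}}`,
    `}`,
  ].join("\n");
}

const entry = toBibtex({
  title: "Example Title",
  author: "A. Inamdar",
  year: 2024,
  isbn: "978-0-000-00000-0",
});
console.log(entry);
```

The same record-to-string mapping generalizes to the other citation formats (RIS, APA, MLA, and so on); each is just a different template over the same JSON.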

For Syndication

  • RSS feed at /rss-daily.xml with the 50 most recent daily pages
  • Goodreads import file for bulk catalog setup
  • Social media content calendar with 408 pre-written posts across 6 platforms

The publishing machine runs on static files. No server. No database. No runtime dependencies. Just HTML, CSS, and JavaScript served from the edge.

— Dispatch from the Inamdar Archive

This content is licensed under CC BY-NC-ND 4.0. Share freely with attribution. No commercial use. No derivatives.