MadeWithStack
© 2026 MadeWithStack

Professional directory of reviewed agent-built and agent-native products, with programmatic submission, manual review, and public trust signals grounded in real evidence.

Documentation · Agents + developers

Agent and API Documentation

Canonical documentation hub for agents and developers integrating with the MadeWithStack professional directory: discovery manifests, public API endpoints, submission rules, review lifecycle, and trust signals.


MadeWithStack is a curated professional directory for agent-built products and agent-native tools. This page is the canonical entry point for agents, automations, and developers that need to discover the platform, submit products programmatically, inspect the approved catalog, and reason about review state without scraping the website.

The public API is agent-first by default: versioned JSON endpoints, deterministic validation, no API keys, no account setup, and manual editorial review before publication.

What this documentation covers

  • Machine-readable discovery entry points for agents
  • Public API contracts for tools, products, search, schema, and submission
  • Submission requirements and editorial review expectations
  • Review-state polling for pending submissions
  • Public trust signals returned with approved listings
  • Reference rules for errors, limits, and integration behavior

Fastest path for agents

If your agent needs the shortest valid integration path, use this sequence:

  1. Read /docs/getting-started.
  2. Fetch the current tool taxonomy from GET /api/v1/tools.
  3. Submit with POST /api/v1/submit.
  4. Poll the returned review_status_url or use GET /api/v1/products/:slug.
  5. Read approved catalog data from GET /api/v1/products or GET /api/v1/search.
  6. Refresh discovery files such as /.well-known/agents.json and /api/v1/openapi when syncing your local contract.

Canonical discovery entry points

Use these paths as the official discovery surface for integrations and crawlers.

  • /docs (HTML + MDX): canonical human-readable docs hub and link graph for the public platform surface.
  • /.well-known/agents.json (JSON): root discovery manifest for agent clients.
  • /api/v1/openapi (OpenAPI 3.1 JSON): full machine-readable contract for public endpoints, schemas, and response shapes.
  • /api/v1/schema (JSON): lightweight summary of the public API for faster discovery and sync workflows.
  • /llms.txt (plain text): compact agent manifest with primary entry points and workflow hints.
  • /llms-full.txt (plain text): expanded export of the docs surface for agents that prefer text ingestion over HTML.
  • /api/v1/tools (JSON): canonical source of valid tool slugs used in submissions and stack classification.

Public API surface

These are the endpoints agents should treat as the stable public contract.

  • GET /api/v1/tools (read): list valid tool slugs and categories. Fetch before building a submission payload.
  • POST /api/v1/submit (write): create a pending product submission. API submissions are free-tier only and always enter manual review.
  • GET /api/v1/products/:slug (read): poll pending status or read an approved listing. Pending checks require the submitter email; approved listings are public.
  • GET /api/v1/products (read): browse approved catalog inventory. Returns approved listings only, with filters and pagination.
  • GET /api/v1/search (read): search tools and approved products. Returns absolute URLs to simplify external agent routing.
  • GET /api/v1/openapi (read): read the full API contract. Use when generating typed clients or validating request and response shapes.
  • GET /api/v1/schema (read): read the compact API summary. Use for lightweight sync and discovery flows.

Submission contract at a glance

Minimum required fields

  • name
  • url
  • description
  • email
  • at least one of tool_slugs, tool_ids, or custom_tools

Important validation rules

  • description must be 240 characters or less
  • tool_slugs is preferred over tool_ids
  • tool_ids must be valid UUID strings
  • custom_tools supports up to 5 entries and each entry must be 40 characters or less
  • the submitted url should be the canonical product homepage, not a docs page, repository, or announcement post
  • duplicate normalized domains are rejected
  • API submissions do not support instant publishing
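The rules above can be checked client-side before sending. This is a pre-flight sketch only: the function name and error strings are mine, and the server remains the source of truth for validation.

```python
import uuid

MAX_DESCRIPTION = 240
MAX_CUSTOM_TOOLS = 5
MAX_CUSTOM_TOOL_LEN = 40
REQUIRED = ("name", "url", "description", "email")

def validate_submission(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks sendable."""
    problems = [f"missing required field: {f}" for f in REQUIRED if not payload.get(f)]

    if len(payload.get("description", "")) > MAX_DESCRIPTION:
        problems.append("description must be 240 characters or less")

    # At least one stack field is required; tool_slugs is preferred over tool_ids.
    if not (payload.get("tool_slugs") or payload.get("tool_ids") or payload.get("custom_tools")):
        problems.append("provide at least one of tool_slugs, tool_ids, or custom_tools")

    # tool_ids entries must parse as UUIDs.
    for tid in payload.get("tool_ids", []):
        try:
            uuid.UUID(str(tid))
        except ValueError:
            problems.append(f"tool_ids entry is not a valid UUID: {tid!r}")

    custom = payload.get("custom_tools", [])
    if len(custom) > MAX_CUSTOM_TOOLS:
        problems.append("custom_tools supports at most 5 entries")
    problems += [f"custom_tools entry over 40 characters: {c!r}"
                 for c in custom if len(c) > MAX_CUSTOM_TOOL_LEN]

    return problems
```

Note this sketch does not normalize domains; duplicate-domain rejection happens server-side.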

Agent Directory claim rules

Agent Directory claim fields are optional. If you send qualification_type, the claim must be complete and reviewable. That means the payload must also include:

  • agent_use_case
  • qualification_statement
  • workflow_summary
  • agent_tools_used
  • supporting_links

Those fields are used for editorial review, public trust signals, and claim verification outcomes.
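The all-or-nothing claim rule is easy to enforce before submitting. A minimal sketch, assuming only the field names documented above; the helper name is mine.

```python
CLAIM_FIELDS = (
    "agent_use_case",
    "qualification_statement",
    "workflow_summary",
    "agent_tools_used",
    "supporting_links",
)

def missing_claim_fields(payload: dict) -> list[str]:
    """If qualification_type is present, every claim field must be present too."""
    if "qualification_type" not in payload:
        return []  # No claim attempted; nothing to check.
    return [f for f in CLAIM_FIELDS if not payload.get(f)]
```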

Review lifecycle and status model

MadeWithStack treats a listing as a trust object, not just a record. Programmatic submission is intentionally lightweight, but publication is editorially controlled.

  1. A valid submission is accepted into the pending queue.
  2. Editors review the product, source quality, stack evidence, and any Agent Directory claim metadata.
  3. The listing is either approved into the Agent Directory, approved into the general catalog without an Agent Directory claim, or held for revision or rejection.

Successful submission and status responses include:

  • review_status_url for canonical polling
  • owner_claim for submitter-inbox verification state
  • claim_status for public claim progression
  • claim_eligibility for directory placement
  • next_action_code for machine-readable workflow handling

The current next_action_code values are:

  • UNDER_EDITORIAL_REVIEW
  • CLAIM_NEEDS_REVISION
  • CLAIM_NOT_VERIFIED
  • APPROVED_FOR_AGENT_DIRECTORY
  • APPROVED_FOR_ALL_PRODUCTS_ONLY
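An agent can branch on these codes mechanically. The mapping below is a sketch of one reasonable policy, not part of the contract: treating CLAIM_NOT_VERIFIED as terminal and the fallback action are my assumptions.

```python
TERMINAL_CODES = {
    "APPROVED_FOR_AGENT_DIRECTORY",
    "APPROVED_FOR_ALL_PRODUCTS_ONLY",
    "CLAIM_NOT_VERIFIED",
}

def next_step(code: str) -> str:
    """Map a next_action_code to a coarse agent action."""
    if code == "UNDER_EDITORIAL_REVIEW":
        return "keep_polling"
    if code == "CLAIM_NEEDS_REVISION":
        return "revise_claim"
    if code in TERMINAL_CODES:
        return "stop_polling"
    # Unknown code: the contract may have grown; re-sync from /api/v1/openapi.
    return "resync_contract"
```

The explicit unknown-code branch matters: new codes may appear, and falling back to a contract re-sync is safer than guessing.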

Trust and catalog guarantees

  • GET /api/v1/products and GET /api/v1/search expose approved listings only.
  • Approved product detail responses include a public-safe trust object with review model, source quality, source URL, verification timing, and evidence links when available.
  • Badge verification data is returned by GET /api/v1/products/:slug. Use that payload for public badge state instead of scraping product pages.
  • The directory favors catalog quality over intake volume. Absence from the public catalog does not imply technical failure; it may mean the listing is still under review or did not meet editorial requirements.

Crawling and synchronization guidance

  • Start with machine-readable files when available, especially /.well-known/agents.json, /api/v1/openapi, and /api/v1/schema.
  • Refresh GET /api/v1/tools before submitting or on a short cache window. Tool responses are documented with a 5 minute revalidation window.
  • Poll review_status_url no more than once every 300 seconds unless the response gives stricter guidance.
  • Honor Retry-After on 429 responses.
  • Use response URLs directly when they are provided. Search responses already return absolute URLs.
  • Do not infer approval from homepage presence, sitemap appearance, or search indexing. Use GET /api/v1/products/:slug as the approval source of truth.
  • Do not scrape HTML to reconstruct API state when the public JSON endpoint already exposes the contract you need.
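The polling rules above reduce to a small delay calculation. A sketch under the documented constraints (300-second floor, Retry-After on 429); handling of the HTTP-date form of Retry-After is deliberately omitted.

```python
MIN_POLL_SECONDS = 300  # Documented floor for review_status_url polling.

def next_poll_delay(status: int, headers: dict) -> int:
    """Seconds to wait before the next poll of review_status_url.

    Honors Retry-After on 429 responses and never drops below the
    documented 300-second floor.
    """
    delay = MIN_POLL_SECONDS
    if status == 429:
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            try:
                delay = max(delay, int(retry_after))
            except ValueError:
                pass  # HTTP-date form of Retry-After not handled in this sketch.
    return delay
```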

Minimal end-to-end example

curl https://www.madewithstack.com/api/v1/tools

curl -X POST https://www.madewithstack.com/api/v1/submit \
  -H "Content-Type: application/json" \
  -d '{
    "name": "AgentFlow",
    "url": "https://agentflow.dev",
    "description": "Workflow orchestration for multi-agent operations teams.",
    "email": "founder@agentflow.dev",
    "tool_slugs": ["claude", "supabase"],
    "qualification_type": "agent_native",
    "agent_use_case": "operations",
    "qualification_statement": "The product centers agent-led workflow execution.",
    "workflow_summary": "Users define workflows that agents execute and monitor.",
    "agent_tools_used": ["Claude", "Supabase"],
    "supporting_links": ["https://agentflow.dev/docs"]
  }'

curl "https://www.madewithstack.com/api/v1/products/agentflow?email=founder@agentflow.dev"

curl "https://www.madewithstack.com/api/v1/products?limit=10&sort=newest&is_agent=true"

Documentation map

  • Start with Getting started for the recommended integration order.
  • Read Submit a product if you want the workflow framing before the raw endpoint reference.
  • Use POST /api/v1/submit for the write contract.
  • Use GET /api/v1/products/:slug for review polling and approved product detail retrieval.
  • Use GET /api/v1/products, GET /api/v1/search, and GET /api/v1/tools for catalog and taxonomy reads.
  • Use Error codes and rate limits for operational safeguards.

What the public API is for

  • Discovering valid tool slugs before submission
  • Creating pending submissions programmatically
  • Checking whether a pending submission was approved
  • Browsing the approved public catalog
  • Searching tools and products by keyword

What the public API is not for

  • Instant publishing
  • Account-based workflows
  • API-key gated private access
  • Paid fast-track checkout
  • Admin operations

Quick links

Getting started · API schema · llms.txt · llms-full.txt

Related pages

Getting started

Why this exists

The public API is agent-first, versioned, and manually reviewed. These docs separate the acquisition path from the exact operational contract.