Machine-readable discovery

Use llms.txt and openapi.json for agent discovery and contract parsing.

Let your AI agent guide the implementation

Copy this prompt into ChatGPT, Claude, or another AI agent. The agent will read the ReelForger docs from llms.txt first, then ask you the right questions about what you want to create, which assets you have, and which constraints matter before helping you build the correct API request.

You are my ReelForger implementation assistant.

Start by reading the ReelForger machine-readable docs here:
https://www.reelforger.com/llms.txt

Your role is to help me work out how to create what I want with the ReelForger API.

Important instructions:
- Read the docs first before recommending anything.
- Do not assume I have already provided everything you need.
- Lead the conversation step by step.
- Ask me what I am trying to create.
- Ask me what assets I already have available.
- Ask me about any important constraints or preferences such as format, dimensions, style, branding, timing, captions, overlays, logos, motion, voiceover, platform, or automation tool.
- Help me identify what is missing.
- Recommend the best ReelForger endpoint or workflow based on the docs.
- Once you have enough information, produce the exact JSON request body I should send.
- Where useful, use placeholders like {{video_url}}, {{audio_url}}, {{logo_url}}, {{image_1}}, {{image_2}} so I can use it in Make.com or another automation tool.
- Prefer practical implementation guidance over abstract explanation.

How to respond:
1. First, briefly confirm you have read the ReelForger docs.
2. Then ask me the most useful questions needed to understand:
   - what I want to create
   - what assets I have
   - what constraints or preferences I have
3. After I answer, recommend the best endpoint or workflow.
4. Then tell me if anything is missing or needs transforming first.
5. Then give me the final JSON request body.
6. Then give me a short implementation note for Make.com if relevant.

Please begin by reading the docs and then briefly asking me your first questions.

Start Here for Agents

Page role: this page shows how to use ReelForger end-to-end, from path selection to async retrieval.

Use this page when you are an agent (or building one) and need a practical orientation before generating requests.

What ReelForger is for

ReelForger is an AI-agent-native media engine: a video API designed to be driven by AI agents.

It assembles short-form social video from structured inputs:

  • Media URLs (video, image, audio)
  • JSON payloads that define timing/layers/captions
  • Async retrieval through jobs or webhooks
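The three structured inputs above can be sketched as a single payload. This is an illustration only: apart from composition.auto_stitch, time.start_seconds, and webhook_url (which appear elsewhere on this page), the field names below are assumptions, not a documented schema.

```python
import json

# Minimal sketch of ReelForger's structured inputs. Layer shapes and any
# field not named on this page are illustrative assumptions.
payload = {
    "composition": {"auto_stitch": True},   # infer video/audio timing from media order
    "layers": [
        {"type": "video", "url": "https://example.com/clip.mp4"},
        {"type": "image", "url": "https://example.com/logo.png",
         "time": {"start_seconds": 0}},     # image layers need an explicit start
    ],
    "webhook_url": "https://example.com/hooks/reelforger",  # async retrieval
}

body = json.dumps(payload)
print(body)
```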

Supported output families

Common supported patterns include:

  • Captioned talking-head clips
  • Voiceover explainers
  • Photo slideshows
  • Ranked listicles
  • Stitched clip reels
  • Split-screen videos
  • Text-overlay videos

Use the Supported Output Patterns page to map broad user goals to the nearest supported path.

Choose your path

Use Recipes when

  • You want the fastest reliable path for common formats
  • You can express intent as recipe_id + variables
  • You prefer guided defaults over low-level layer math

Use: POST https://api.reelforger.com/v1/recipes/render with the canonical field recipe_id.
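A Recipes request reduces to recipe_id plus variables. In this sketch, only recipe_id is the canonical field named above; the "variables" wrapper, the recipe name, and the placeholder keys are hypothetical, chosen to match the {{...}} placeholder style suggested for Make.com.

```python
import json

def build_recipe_request(recipe_id, variables):
    # recipe_id is the canonical field named on this page; the "variables"
    # wrapper is an assumption for illustration.
    return {"recipe_id": recipe_id, "variables": variables}

# Hypothetical recipe_id and variable names, with {{...}} placeholders
# ready for mapping in Make.com or another automation tool.
body = build_recipe_request(
    "captioned_talking_head",
    {"video_url": "{{video_url}}", "logo_url": "{{logo_url}}"},
)
print(json.dumps(body, indent=2))
```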

Use Timeline API when

  • You need custom composition behavior not covered by recipe controls
  • You need explicit control over layers/timing/layout/media behavior
  • You need advanced sequencing or overlap logic

Use: POST https://api.reelforger.com/v1/videos/render with full manifest fields.
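A Timeline manifest spells out layers and timing explicitly. Only composition.auto_stitch, composition.duration_seconds, and the time.start_seconds / time.duration_seconds fields come from this page's timing note; the layer shapes and remaining field names below are illustrative assumptions.

```python
import json

# Sketch of a Timeline manifest for POST /v1/videos/render.
manifest = {
    "composition": {
        "auto_stitch": True,       # stitch video/audio in media order
        "duration_seconds": 30,    # explicit fallback when timing is ambiguous
    },
    "layers": [
        {"type": "video", "url": "{{video_url}}"},
        {"type": "audio", "url": "{{audio_url}}"},
        {"type": "image", "url": "{{logo_url}}",
         "time": {"start_seconds": 0, "duration_seconds": 30}},
    ],
}
print(json.dumps(manifest))
```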

End-to-end workflow

  1. Choose path (Recipes or Timeline)
  2. Validate payload with POST https://api.reelforger.com/v1/videos/validate
  3. Render (POST https://api.reelforger.com/v1/recipes/render or POST https://api.reelforger.com/v1/videos/render)
  4. Capture job_id from 202 Accepted
  5. Retrieve output:
    • Poll GET https://api.reelforger.com/v1/jobs/{jobId}, or
    • Receive webhook events via webhook_url
  6. Handle failures using structured errors and retry with idempotency keys
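The six steps above can be condensed into one flow. This sketch injects the HTTP transport as plain functions so it can be exercised offline; the endpoint paths, the 202 status, and job_id come from this page, while the response field names ("status", "output_url") and the Idempotency-Key header name are assumptions.

```python
import itertools
import uuid

def run_render(post, get, payload):
    # 1-2. Validate first: a dry run that queues no job and consumes no credits.
    status, _ = post("/v1/videos/validate", payload, {})
    if status != 200:
        raise ValueError("payload failed validation")

    # 3. Render with an idempotency key so a timed-out request can be retried safely.
    headers = {"Idempotency-Key": str(uuid.uuid4())}
    status, resp = post("/v1/videos/render", payload, headers)
    assert status == 202          # 4. render is async: 202 Accepted
    job_id = resp["job_id"]

    # 5. Poll the job until it reaches a terminal state.
    for _ in itertools.count():
        _, job = get(f"/v1/jobs/{job_id}")
        if job["status"] in ("completed", "failed"):
            return job

# Fake transport standing in for api.reelforger.com, for offline illustration.
def fake_post(path, payload, headers):
    return (200, {}) if path.endswith("validate") else (202, {"job_id": "job_1"})

_responses = iter([{"status": "processing"},
                   {"status": "completed", "output_url": "https://cdn.example/out.mp4"}])
fake_get = lambda path: (200, next(_responses))

job = run_render(fake_post, fake_get, {"composition": {"auto_stitch": True}})
print(job["status"])  # → completed
```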

Validation is a dry-run/schema-and-warning checkpoint; it does not queue a job or consume credits.

Timing note (agent default)

When building Timeline payloads, do not assume every layer must include an explicit duration:

  • With composition.auto_stitch: true, video/audio timing can be inferred from media order and probed duration.
  • Image layers require time.start_seconds; image time.duration_seconds can be inferred from composition timing context.
  • If timing context is still ambiguous, set composition.duration_seconds explicitly.
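The three timing rules above can be encoded as a small decision helper. The field names follow the timing note; the function itself and the notion of "timing context" it checks are illustrative assumptions, not part of the API.

```python
def needs_explicit_duration(layer, composition):
    # Encodes the timing note: when can time.duration_seconds be omitted?
    if layer["type"] in ("video", "audio"):
        # auto_stitch infers timing from media order and probed duration
        return not composition.get("auto_stitch", False)
    if layer["type"] == "image":
        # images always need time.start_seconds; duration can come from
        # composition timing context (e.g. composition.duration_seconds)
        has_context = ("duration_seconds" in composition
                       or bool(composition.get("auto_stitch")))
        return not has_context
    return True  # unknown layer types: be explicit

print(needs_explicit_duration({"type": "video"}, {"auto_stitch": True}))  # → False
print(needs_explicit_duration({"type": "image"}, {}))                     # → True
```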

If you are still unsure

  1. Start with Recipes.
  2. Use a nearest pattern from Supported Output Patterns.
  3. Validate first, then render.
  4. Escalate to Timeline only if recipe controls are insufficient.
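That fallback order reduces to one decision: prefer Recipes, escalate to Timeline only when recipe controls are insufficient. The helper below is a sketch of that rule; its signature and return values are illustrative, not part of the API.

```python
def choose_path(needs_custom_composition: bool) -> str:
    # Start with Recipes; escalate to Timeline only if recipe controls
    # (guided defaults, recipe_id + variables) cannot express the goal.
    if needs_custom_composition:
        return "POST /v1/videos/render"    # Timeline path
    return "POST /v1/recipes/render"       # Recipes path

print(choose_path(needs_custom_composition=False))  # → POST /v1/recipes/render
```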

Canonical examples (CI-backed)

These examples are maintained as executable golden flows in the docs E2E harness:

  • Talking head + captions + logo (Timeline path)
  • Talking head + captions + logo + background music (Timeline path)
  • Image slideshow + narration + music (Recipe path)