# Reel Forge API Documentation > Complete reference for the Reel Forge video processing API. Base URL: https://api.reelforger.com # Overview Page role: this page explains what ReelForge is and gives a high-level model before path selection. ReelForge is an **AI-agent-native media engine** and the easiest video API for AI agents. In plain English: your agent gives ReelForge structured media inputs (URLs + JSON), and ReelForge handles the annoying parts of video assembly - timing, layering, motion, captions, rendering, and async job delivery. ## What ReelForge is designed for - Programmatic short-form video generation from public media URLs - Agent-driven workflows where reliability and predictable request shapes matter - Fast iteration: validate payloads, render asynchronously, retrieve output URLs ## What you can build You can create almost anything from social explainers to listicles, slideshows, talking-head caption clips, and fully custom compositions. The docs are organized so you can choose the right level of control: - **Recipes**: higher-level, guided path for common outcomes - **Timeline API**: lower-level custom composition control If you are unsure, start with Recipes. ## Orientation boundaries ### Use this page when - You need a high-level model of ReelForge before choosing an API path. - You need quick navigation to canonical docs and machine-readable surfaces. ### Do not use this page when - You already know your target request shape and need field-level detail. - You need immediate implementation examples for a specific pattern. ### Switch paths from here - Go to [Start Here for Agents](/docs/quickstart-agents) for practical orientation flow. - Go to [Supported Output Patterns](/docs/supported-output-patterns) when the user request needs capability matching. - Go to [Recipes Overview](/docs/recipes) or [Timeline Overview](/docs/timeline-api) once you choose a path. 
## Recommended path Use [Start Here for Agents](/docs/quickstart-agents) as the canonical decision tree and execution flow owner: 1. choose path (Recipes or Timeline) 2. validate payload 3. render asynchronously 4. retrieve via jobs polling or webhooks ## Recipes vs Timeline (quick version) - Use **Recipes** when you want speed and a clear request shape (`POST https://api.reelforger.com/v1/recipes/render`, canonical field: `recipe_id`) - Use **Timeline API** when you need explicit, low-level control over layers, timing, layout, and media behavior Read more: - Recipes Overview: `/docs/recipes` - Timeline Overview: `/docs/timeline-api` ## Machine-readable discovery (quick links) Use `https://www.reelforger.com/llms.txt` and `https://api.reelforger.com/v1/openapi.json` for agent discovery and contract parsing. - [https://www.reelforger.com/llms.txt](https://www.reelforger.com/llms.txt) - agent-oriented discovery and docs context - [https://api.reelforger.com/v1/openapi.json](https://api.reelforger.com/v1/openapi.json) - machine-readable API contract For details, see the [Machine-readable Discovery page](https://www.reelforger.com/docs/machine-readable-discovery). --- # Start Here for Agents Page role: this page shows how to use ReelForge end-to-end, from path selection to async retrieval. Use this page when you are an agent (or building one) and need a practical orientation before generating requests. ## What ReelForge is for ReelForge is an AI-agent-native media engine and the easiest video API for AI agents. 
It assembles short-form social video from structured inputs: - Media URLs (video, image, audio) - JSON payloads that define timing/layers/captions - Async retrieval through jobs or webhooks ## Supported output families Common supported patterns include: - Captioned talking-head clips - Voiceover explainers - Photo slideshows - Ranked listicles - Stitched clip reels - Split-screen videos - Text-overlay videos Use the [Supported Output Patterns page](/docs/supported-output-patterns) to map broad user goals to the nearest supported path. ## Choose your path ### Use Recipes when - You want the fastest reliable path for common formats - You can express intent as `recipe_id` + `variables` - You prefer guided defaults over low-level layer math Use: `POST https://api.reelforger.com/v1/recipes/render` with canonical field `recipe_id`. ### Use Timeline API when - You need custom composition behavior not covered by recipe controls - You need explicit control over layers/timing/layout/media behavior - You need advanced sequencing or overlap logic Use: `POST https://api.reelforger.com/v1/videos/render` with full manifest fields. ## Execution flow (recommended) 1. Choose path (Recipes or Timeline) 2. Validate payload with `POST https://api.reelforger.com/v1/videos/validate` 3. Render (`POST https://api.reelforger.com/v1/recipes/render` or `POST https://api.reelforger.com/v1/videos/render`) 4. Capture `job_id` from `202 Accepted` 5. Retrieve output: - Poll `GET https://api.reelforger.com/v1/jobs/{jobId}`, or - Receive webhook events via `webhook_url` 6. Handle failures using structured errors and retry with idempotency keys Validation is a dry-run/schema-and-warning checkpoint; it does not queue a job or consume credits. ## Timing note (agent default) When building Timeline payloads, do not assume every layer must include explicit duration: - With `composition.auto_stitch: true`, video/audio timing can be inferred from media order and probed duration. 
- Image layers require `time.start_seconds`; image `time.duration_seconds` can be inferred from composition timing context. - If timing context is still ambiguous, set `composition.duration_seconds` explicitly. ## If you are still unsure 1. Start with Recipes. 2. Use the nearest pattern from [Supported Output Patterns](/docs/supported-output-patterns). 3. Validate first, then render. 4. Escalate to Timeline only if recipe controls are insufficient. ## Canonical examples (CI-backed) These examples are maintained as executable golden flows in the docs E2E harness: - Talking head + captions + logo (Timeline path) - Talking head + captions + logo + background music (Timeline path) - Image slideshow + narration + music (Recipe path) ## Next links - [Supported Output Patterns](/docs/supported-output-patterns) - [Recipes Overview](/docs/recipes) - [Timeline API Overview](/docs/timeline-api) - [Jobs & Webhooks](/docs/jobs-webhooks) - [OpenAPI Contract](https://api.reelforger.com/v1/openapi.json) - [llms.txt Discovery Surface](https://www.reelforger.com/llms.txt) --- # Supported Output Patterns Page role: this page helps map user intent to the right ReelForge path (Recipes first, Timeline when needed). Use this page as a capability map whether a user request is broad, specific, or unusual. This is not a rigid rules engine. It is a reasoning aid to match user intent to the nearest supported ReelForge pattern. ## How to use this page 1. Identify the nearest pattern by output intent. 2. Start on the recommended path (Recipes or Timeline). 3. Validate before render. 4. Switch paths if the request exceeds that path's control surface.
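The validate step in the loop above reuses the exact body you would later send to render. A minimal timeline-style body for `POST https://api.reelforger.com/v1/videos/validate` might look like this (the asset URL and IDs are placeholders):

```json
{
  "version": "v1",
  "output": { "width": 1080, "height": 1920, "fps": 30 },
  "assets": [
    { "id": "clip-1", "type": "video", "url": "https://example.com/clip1.mp4" }
  ],
  "composition": {
    "auto_stitch": true,
    "timeline": [
      { "id": "layer-1", "type": "video", "asset_id": "clip-1" }
    ]
  }
}
```

Validation reports schema results and warnings for this body without queueing a job or consuming credits.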
## Capability map | Pattern | Best starting path | Why this path fits | Minimum inputs (high level) | Switch paths when | |---|---|---|---|---| | Captioned talking-head clips | Recipes (`captioned_clip`) | Fast social-caption workflow with strong defaults | 1 video URL, transcript words, style preset | You need custom multi-layer layout/timing beyond recipe controls | | Photo slideshows | Recipes (`photo_slideshow`) | Built-in pacing from audio + image sequencing | Audio URL, 2+ image URLs, style preset | You need custom per-image overlap/stacking or advanced layer choreography | | Ranked listicles | Recipes (`listicle_ranked`) | Guided countdown structure with hook/item pacing | Music URL, items array, style preset | You need non-standard list transitions or custom mixed layer behavior | | Voiceover explainers | Recipes (`voiceover_explainer`) | Strong default path for narration + caption progression | Voiceover URL, background assets, transcript words, style preset | You need precise multi-track timeline authoring beyond recipe knobs | | Story-led b-roll | Recipes (`storytime_broll`) | Guided narration + b-roll pattern (status: Coming soon) | Voiceover URL, background video URL, transcript words, style preset | Recipe is not yet live, or you need custom timeline behavior now | | Stitched clip reels | Timeline API examples | Direct control over sequencing (`auto_stitch` or explicit timing) | Manifest with assets + timeline layers | A recipe already matches your intent with less payload complexity | | Split-screen videos | Timeline API examples | Explicit layout control per layer | Manifest with 2+ visual layers, layout coordinates, timing | A recipe can express the output without custom layout math | | Text-overlay videos | Timeline API examples | Fine-grained control over overlay timing/style/layout | Manifest with base media + text overlays | You only need standard recipe-level text behavior | ## Fallback when no pattern fits exactly - Choose the nearest 
supported pattern and start there. - Validate first with `POST https://api.reelforger.com/v1/videos/validate`. - Escalate to Timeline API if recipe/path controls are not enough. ## Pattern-level boundaries ### Use Recipes when - You want reliable speed and minimal request shape complexity. - A known pattern already matches the request closely. - You can express output intent as `recipe_id` + `variables`. ### Use Timeline API when - The request needs explicit control over layer timing, positioning, overlap, or media behavior. - You need composition behavior that is not represented by recipe controls. - You need custom manifests as the primary authoring model. ## Execution loop owner For the canonical step-by-step decision tree and execution flow, use [Start Here for Agents](/docs/quickstart-agents). ## Current status notes - Recipes are the higher-level guided path. - Timeline API is the lower-level custom composition path. - `storytime_broll` is intentionally marked Coming soon. - AI Influencer Workflow is intentionally marked Coming soon. ## Next links - [Start Here for Agents](/docs/quickstart-agents) - [Recipes Overview](/docs/recipes) - [Timeline API Overview](/docs/timeline-api) - [Jobs & Webhooks](/docs/jobs-webhooks) ## Canonical examples used for parity checks - Talking head + captions + logo (Timeline) - Talking head + captions + logo + music (Timeline) - Slideshow + narration + music (Recipe) --- # Recipes Overview Recipes are the higher-level guided path in ReelForge. Use them when you want to describe *what* you want to produce and let ReelForge handle most of the timeline assembly details. ## Use this path when / do not use this path ### Use Recipes when - A known output pattern matches the request (explainer, slideshow, listicle, caption clip). - You want high reliability with less payload complexity. - You want `recipe_id` + `variables` to be the primary authoring model. 
### Do not use Recipes when - You need explicit control over custom layer math, overlap, or advanced layout choreography. - You need composition behavior outside recipe controls. ### Switch paths if needed - Start here first, then move to [Timeline Overview](/docs/timeline-api) when recipe controls are insufficient. - Use [Supported Output Patterns](/docs/supported-output-patterns) when intent is broad and you need nearest-pattern matching. ## Why use Recipes first - Less payload complexity than hand-writing timeline manifests - Stable request shapes for AI agents - Built-in pacing and composition defaults for common formats - Faster time-to-first-output for most use cases ## Canonical route and field - Route: `POST https://api.reelforger.com/v1/recipes/render` - Canonical identifier field: `recipe_id` ## Recipes vs Timeline Use this decision rule: - Choose **Recipes** if the format matches a known pattern (explainer, slideshow, listicle, caption clip) - Choose **Timeline API** if you need custom layer math or behavior beyond recipe controls ### Side-by-side comparison | Path | Best for | Request style | Typical tradeoff | |---|---|---|---| | Recipes | Guided common outputs | `recipe_id` + `variables` | Less low-level control | | Timeline API | Fully custom compositions | full `version/output/assets/composition` manifest | More manual setup | ## Recommended workflow 1. Choose a recipe 2. Prepare `variables` payload 3. Validate the request body 4. Render 5. Poll `https://api.reelforger.com/v1/jobs/{jobId}` or use webhooks ## Canonical execution flow 1. Choose path (Recipes or Timeline) 2. Validate payload with `POST https://api.reelforger.com/v1/videos/validate` (recommended) 3. Render with `POST https://api.reelforger.com/v1/recipes/render` or `POST https://api.reelforger.com/v1/videos/render` 4. Capture `job_id` from `202 Accepted` 5. Retrieve via `GET https://api.reelforger.com/v1/jobs/{jobId}` or webhooks 6. 
Handle failures and retry safely with idempotency keys ## Recipe categories - Core recipe pages: `voiceover_explainer`, `photo_slideshow`, `listicle_ranked`, `captioned_clip` - Coming soon: `storytime_broll` - Workflows: implementation walkthroughs (AI Influencer Workflow is currently marked **Coming soon**) --- # Workflows Workflows show end-to-end implementation patterns that combine ReelForge requests with practical orchestration steps (validate, render, retrieve, retries, and integrations). ## Current status - **AI Influencer Workflow** - Coming soon ## Why this section exists Recipe pages explain each API shape in isolation. Workflows show how to run those shapes in realistic production sequences. --- # Timeline Overview Timeline API is the lower-level custom composition path in ReelForge. Use it when you need explicit control over assets, layers, timing, layout, and rendering behavior that goes beyond recipe-level controls. ## Use this path when / do not use this path ### Use Timeline API when - You need custom composition behavior beyond recipe controls. - You need explicit per-layer control over timing, placement, and overlap. - You are authoring full manifests as your primary model. ### Do not use Timeline API when - A recipe already matches the user request and speed is the priority. - You do not need low-level layout/timing control. ### Switch paths if needed - If a recipe can meet the request, prefer [Recipes Overview](/docs/recipes) first. - If a recipe attempt cannot express needed behavior, switch back to Timeline and keep validate-first flow. ## How Timeline relates to Recipes - **Recipes** are guided abstractions for common formats. - **Timeline API** is the underlying composition model for full customization. If a recipe can express your target output, start there first. Switch to Timeline when you need custom sequencing, overlap logic, split layouts, or layer-by-layer tuning. 
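To make the contrast concrete, the recipe path collapses the request to an identifier plus variables. A sketch for `photo_slideshow` (the `variables` key names here are illustrative assumptions, not the documented recipe schema):

```json
{
  "recipe_id": "photo_slideshow",
  "variables": {
    "audio_url": "https://example.com/narration.mp3",
    "image_urls": [
      "https://example.com/photo-1.jpg",
      "https://example.com/photo-2.jpg"
    ],
    "style_preset": "default"
  }
}
```

Send it to `POST https://api.reelforger.com/v1/recipes/render`; an equivalent Timeline request would instead spell out `version`, `output`, `assets`, and `composition` explicitly.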
## When to use Timeline - Custom compositions beyond available recipes - Precise multi-layer timing control - Advanced layout/overlap behavior - Detailed per-layer media and style control ## Core endpoints - Validate: `POST https://api.reelforger.com/v1/videos/validate` - Render: `POST https://api.reelforger.com/v1/videos/render` - Retrieve: `GET https://api.reelforger.com/v1/jobs/{jobId}` ## Timing inference model Timeline timing can be a mix of explicit and inferred values: - Explicit layer timing always wins when provided. - `video` / `audio` layer `time` can be omitted when `composition.auto_stitch: true`. - `image` layers require `time.start_seconds`; `time.duration_seconds` can be inferred. - Image duration inference order: 1. explicit image `time.duration_seconds` 2. `composition.duration_seconds` 3. max timed end across `composition.timeline` + `composition.text_overlays` 4. render-time media probing when `auto_stitch` untimed media is present ## Execution flow The canonical end-to-end decision tree and execution loop are owned by [Start Here for Agents](/docs/quickstart-agents). Use this page for Timeline-specific behavior and field guidance. ## Section map - **Examples**: copyable payload patterns to learn common composition shapes - **Reference**: field-level behavior, validation guidance, and jobs/webhooks details --- # Core Timeline Examples These are copyable building blocks for the Timeline API. Use them when recipe abstractions are not enough and you need explicit layer control. All examples are compatible with `POST https://api.reelforger.com/v1/videos/render`. ## Example groups ### Stitching and sequencing 1. Stitch Together Videos 2. Stitch Video and Audio 3. Stitch Together Images ### Layout and overlays 4. Split Screen 5. Text Overlays 6. Caption Examples ## Recommended execution flow 1. Draft your render manifest 2. Validate with `POST https://api.reelforger.com/v1/videos/validate` 3. 
Render with `POST https://api.reelforger.com/v1/videos/render` 4. Retrieve output using `GET https://api.reelforger.com/v1/jobs/{jobId}` or webhooks ## Need field-level detail? Use the **Timeline API reference** pages in the sidebar for: - request structure and required top-level keys - optional fields and behavior by layer type - jobs/webhooks and failure handling guidance --- # Stitch Together Videos Use `composition.auto_stitch: true` to append clips in timeline order without calculating start offsets manually. ## Input assets For this example, we use three 9:16 vertical clips of a Porsche, each approximately 5 seconds long: - **Clip 1:** [Porsche vid 1](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Porsche%20vid%201.mp4) - **Clip 2:** [Porsche vid 2](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20vid%202.mp4) - **Clip 3:** [Porsche vid 3](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20vid%203.mp4) ## Why this works When `auto_stitch: true` is enabled: 1. ReelForge probes the duration of each video asset. 2. It automatically calculates the `start_seconds` for each layer based on the cumulative duration of previous layers. 3. It handles the cropping of 9:16 vertical content to fit the output dimensions using standard "cover" logic. --- # Stitch Video and Audio Combine video and audio layers in the same timeline by defining shared or overlapping `time` windows. ## Input assets This example uses a stitched vertical video and a high-quality soundtrack: - **Source Video:** [Porsche clips stitched](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20clips%20stitched.mp4) - **Source Audio:** [Porsche sound clip](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsch%20sound%20clip.mp3) ## Why this works When combining multiple media types: 1.
**Layer synchronization:** Both the `video-layer` and `audio-layer` share a `start_seconds: 0`, causing them to begin playback simultaneously. 2. **Explicit duration:** By setting a matching `duration_seconds: 15.1`, both layers are pinned to the exact length of the base video. 3. **Volume control:** The `media_settings.volume: 1.0` sets the soundtrack at full gain. If your video has its own audio you want to keep, you can lower this (e.g., `0.3`) to create a background music effect. --- # Stitch Together Images Build image sequences by placing image layers with explicit timing. For sequence-style image stitching, each image should include full `time` values. ## Input assets This example uses four portrait house images in sequence: - **Image 1:** [House exterior](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/House%20exterior.jpeg) - **Image 2:** [House lounge](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/house%20lounge.jpeg) - **Image 3:** [House kitchen](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/house%20kitchen.jpeg) - **Image 4:** [House terrace](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/house%20terrace.jpeg) ## Why this works When stitching images: 1. **Explicit timing:** Each image layer needs `time` with `start_seconds` and `duration_seconds`. Images are placed back-to-back (0–4s, 4–8s, 8–12s, 12–16s) to form a continuous sequence. 2. **Cover scaling:** Images are scaled to fill the 9:16 output frame using cover logic; excess is cropped. 3. **No image auto-stitch sequencing:** `auto_stitch` sequences video/audio clips, not image sequences. For slideshow-style image stitching, define each image layer timing explicitly. --- # Split Screen Split-screen is user-authored layout math using two visual layers with explicit positions/sizes. Place both layers in the same `time` window so they play simultaneously. 
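A minimal composition fragment for a top/bottom split might look like this; the asset IDs and the 10-second window are placeholders, and a full manifest still needs `version`, `output`, and `assets`:

```json
{
  "composition": {
    "timeline": [
      {
        "id": "top-video",
        "type": "video",
        "asset_id": "asset-top",
        "time": { "start_seconds": 0, "duration_seconds": 10 },
        "layout": { "x": "0%", "y": "0%", "width": "100%", "height": "50%", "fit": "cover" }
      },
      {
        "id": "bottom-video",
        "type": "video",
        "asset_id": "asset-bottom",
        "time": { "start_seconds": 0, "duration_seconds": 10 },
        "media_settings": { "volume": 0 },
        "layout": { "x": "0%", "y": "50%", "width": "100%", "height": "50%", "fit": "cover" }
      }
    ]
  }
}
```

Both layers share the same `time` window so they stay in sync, and each fills its half of the frame with `fit: "cover"`.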
## Input assets This example uses two Porsche clips in a top/bottom split: - **Top video:** [Porsche split 1](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20split%201%20vid.mp4) - **Bottom video:** [Porsche split 2](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20split%202%20vid.mp4) ## Why this works When building a split-screen layout: 1. **Shared timing:** Both layers use the same `start_seconds` and `duration_seconds`, so they play in sync. 2. **Layout math:** Top half uses `y: "0%"`, `height: "50%"`; bottom half uses `y: "50%"`, `height: "50%"`. Each layer fills its region with `fit: "cover"`. 3. **Audio control:** Use `media_settings.volume: 0` on the bottom layer so only the top video's audio is heard. Omit `media_settings` on the top layer to keep its default volume. --- # Text Overlays Use `composition.text_overlays` for on-screen text with timing, style, and layout controls. Define shared defaults once via `global_styles.text` and `global_layouts.text`, then each overlay only needs `content` and `time`. ## Input assets This example uses the Porsche video with audio as the base layer and adds three timed text overlays: - **Source Video:** [Porsche stitched with audio](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/prsche%20stitched%20with%20audio.mp4) Overlay text (defined in payload): 1. "Porsche 911 Turbo S" 2. "Launch control energy" 3. "Built for standout reels" ## Why this works When adding text overlays: 1. **Global defaults:** `global_styles.text` and `global_layouts.text` apply to all overlays, so you avoid repeating font size, color, stroke, and position per overlay. 2. **Timed windows:** Each overlay has its own `time` (`start_seconds`, `duration_seconds`), so text appears and disappears at the right moments. 3. **Local overrides:** Per-overlay `style` or `layout` keys override the global defaults when you need a one-off change. 
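Putting those three points together, a composition fragment for the overlays above might look like this; the timing values and the exact style/layout key set are illustrative assumptions:

```json
{
  "composition": {
    "global_styles": {
      "text": { "font_size": 64, "color": "#FFFFFF" }
    },
    "global_layouts": {
      "text": { "x": "8%", "y": "77%", "width": "84%", "height": "18%" }
    },
    "text_overlays": [
      { "content": "Porsche 911 Turbo S", "time": { "start_seconds": 0, "duration_seconds": 3 } },
      { "content": "Launch control energy", "time": { "start_seconds": 3, "duration_seconds": 3 } },
      { "content": "Built for standout reels", "time": { "start_seconds": 6, "duration_seconds": 3 } }
    ]
  }
}
```

A per-overlay `style` or `layout` object would override these global defaults for that overlay only.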
--- # Caption Examples Use `composition.captions` for programmatic caption rendering with preset + mode behavior. ## 1) One clear request example This featured request shows the complete shape for a high-readability talking-head caption render. ### Source asset - **Talking-head video:** [talking head runner](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20runner.mp4) ### Words sample The `words` array is the timing source (`start` / `end` in milliseconds). Keep this provider-native whenever possible. ```json [ { "text": "Not", "start": 880, "end": 1000, "speaker": "A" }, { "text": "going", "start": 1000, "end": 1160, "speaker": "A" }, { "text": "to", "start": 1160, "end": 1320, "speaker": "A" }, { "text": "pretend", "start": 1320, "end": 1640, "speaker": "A" }, { "text": "I", "start": 1640, "end": 1760, "speaker": "A" }, { "text": "want", "start": 1760, "end": 1880, "speaker": "A" }, { "text": "to", "start": 1880, "end": 2040, "speaker": "A" }, { "text": "do", "start": 2040, "end": 2200, "speaker": "A" } ] ``` ### Featured payload (Karaoke Yellow) ```json { "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [ { "id": "talking-head", "type": "video", "url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20runner.mp4" } ], "composition": { "timeline": [ { "id": "video-layer", "type": "video", "asset_id": "talking-head", "time": { "start_seconds": 0, "duration_seconds": 13.8 } } ], "captions": { "provider": "assemblyai", "preset": "karaoke_yellow", "mode": "phrase_karaoke", "max_chars_per_segment": 20, "correct_text": "Not going to pretend I want to do this. I don't. But I also know I'll feel better after and hate myself if I don't. 
So - off we go.", "layout": { "x": "8%", "y": "77%", "width": "84%", "height": "18%" }, "style": { "font_size": 64, "highlight_color": "#FFEA00" }, "words": [ { "text": "Not", "start": 880, "end": 1000 }, { "text": "going", "start": 1000, "end": 1160 }, { "text": "to", "start": 1160, "end": 1320 } ] } } } ``` ### Why this request works - Uses a single base video layer (no split-screen/no extra visual clutter) - Applies `correct_text` to improve punctuation/casing alignment - Uses `phrase_karaoke` for natural phrase grouping with active-word progression - Positions captions at a safe lower-third region for social readability ## 2) Caption presets comparison Use the same source clip and timing input, then switch `captions.preset` to compare output style. ### Quick preset picks - **Fast default:** `tiktok_classic` - **Most social punch:** `bold_outline` - **Best active-word emphasis:** `karaoke_yellow` - **Editorial/luxury styles:** `typewriter`, `luxury_serif` - **Stylized/experimental:** `neon_glow`, `handwriting` - **High-contrast bubble look:** `soft_pill` | Preset | Preview | Best for | |---|---|---| | TikTok Classic | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20tiktok%20classic.mp4) | clean social default readability | | Bold Outline | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20bold%20outline.mp4) | high-impact, thick-stroke emphasis | | Karaoke Yellow | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20karaoke%20yellow.mp4) | active-word karaoke progression | | Neon Glow | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20neon%20glow.mp4) | stylized glow look for energetic edits | | Pill Captions | [Preview 
video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20with%20pill%20captions.mp4) | dark rounded bubble with strong contrast | | Typewriter | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20typewriter.mp4) | editorial mono-text presentation | | Handwriting | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20handwriting.mp4) | informal creator voice style | | Luxury Serif | [Preview video](https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20luxury%20serif.mp4) | premium editorial aesthetic | --- # Validate API Validation is the recommended pre-flight step before render. Endpoint: `POST https://api.reelforger.com/v1/videos/validate` ## What validation checks - request shape/schema - semantic warnings (timing/layout risks) - estimated duration and credit cost - both payload families: recipe payloads (`recipe_id` + `variables`) and timeline manifest payloads ## What validation does not do - it does not queue a render job - it does not consume credits - it does not simulate full rendering execution ## How to use it in production Use validate before render for both recipe-first and timeline workflows when reliability matters. 1. validate payload 2. review warnings 3. render if acceptable --- # Timeline Request Structure Timeline requests use a full render manifest shape. > Need the short mental model first? See [Timeline Overview timing inference model](/docs/timeline-api#timing-inference-model). 
## Endpoint - Render: `POST https://api.reelforger.com/v1/videos/render` - Validate: `POST https://api.reelforger.com/v1/videos/validate` ## Top-level structure ```json { "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [], "composition": {} } ``` ## Required fields - `version` - `output.width`, `output.height`, `output.fps` - `composition` object At least one visual/audio path must be represented through `composition.timeline` and corresponding `assets`. ## Optional but common fields - `idempotency_key` (safe retry dedupe) - `composition.auto_stitch` (derive sequencing by layer order) - `composition.text_overlays` - `composition.captions` - `webhook_url`, `webhook_headers`, `webhook_secret` - `metadata` ## Assets and layer linkage - Each timeline layer references an asset via `asset_id`. - Every `asset_id` must exist in `assets[]`. - Layer `type` and asset `type` should match intended usage. ## Time rules - `image` layers require `time.start_seconds`. - `image.time.duration_seconds` can be omitted when composition duration is inferable: - from explicit `composition.duration_seconds`, - from max timed end across timeline/text overlays, - or at render-time when `composition.auto_stitch` is enabled and media durations are probed. - `video`/`audio` layers require `time` unless `composition.auto_stitch` is `true`. - `trim.start_seconds` is optional for audio/video. ## Captions and alignment - Use `composition.captions.words` as raw timing source. - Add `composition.captions.correct_text` when you need improved punctuation/casing alignment. - Keep caption placement in safe lower-third regions for social readability. ## Validate first Use `https://api.reelforger.com/v1/videos/validate` with the exact same body before rendering in production. Warnings highlight common readability/layout/timing risks before credits are spent. --- ## Working with Video Video layers are the foundation of most Reel Forge compositions. 
They allow you to place video assets onto the timeline, optionally trim them, and control how they scale within your composition. ### Basic Video Object A basic video layer requires an `id`, `type`, `asset_id`, and `time` positioning. **When is `time` required?** For video (and audio) layers, `time` is required unless `composition.auto_stitch` is `true` - in that case, timing is derived automatically from clip order. ```json { "id": "main-video", "type": "video", "asset_id": "asset-video-1", "time": { "start_seconds": 0, "duration_seconds": 5 } } ``` ### Auto-Stitching & Arrays If you don't want to calculate exact timeline math (`start_seconds` offsets) yourself, Reel Forge supports `auto_stitch`. When enabled on the root `composition` object, you can omit the `time` property from video layers. Reel Forge will probe durations and play clips in timeline order. This also works for a **single** base video clip (for example: talking-head + logo overlay) when you do not want to precompute duration. ```json { "composition": { "auto_stitch": true, "timeline": [ { "id": "clip-1", "type": "video", "asset_id": "asset-video-1" }, { "id": "clip-2", "type": "video", "asset_id": "asset-video-2" } ] } } ``` ### Single Talking-Head + Logo (No Manual Duration) You can omit the base video duration and let ReelForge infer it with `auto_stitch`. ```json { "composition": { "auto_stitch": true, "timeline": [ { "id": "base-video", "type": "video", "asset_id": "asset-video-1" }, { "id": "logo-overlay", "type": "image", "asset_id": "asset-logo-1", "time": { "start_seconds": 0 }, "background_mode": "transparent", "layout": { "x": "82%", "y": "88%", "width": "12%", "height": "8%", "fit": "contain", "z_index": 20 } } ] } } ``` ### Z-Index Stacking By default, layers are drawn back-to-front based on their order in the `timeline` array. The first layer is the bottom-most background, and the last layer is on top. You can explicitly override this by providing a `z_index` inside the `layout` object.
```json { "id": "foreground-video", "type": "video", "asset_id": "asset-video-1", "layout": { "z_index": 100 } } ``` ### Background Mode & Letterboxing When you place a video with a different aspect ratio than your composition (e.g., a 16:9 landscape video into a 9:16 portrait composition), Reel Forge must pad the empty space. You can control this using the `background_mode` property: * `"blurred"` (Default): Creates a cinematic blurred and zoomed version of your video to fill the background. * `"solid"`: Fills the empty space with a solid black background. * `"transparent"`: Leaves the empty space transparent, allowing layers beneath it (lower z-index) to show through. ```json { "id": "landscape-clip", "type": "video", "asset_id": "asset-landscape-video", "background_mode": "blurred", "layout": { "fit": "contain" } } ``` ### The Full Video Request Example Here is a complete payload placing two videos sequentially using explicit timings, with the first video using a solid background mode. ```bash curl -X POST https://api.reelforger.com/v1/videos/render \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [ { "id": "asset-1", "type": "video", "url": "https://example.com/video1.mp4" }, { "id": "asset-2", "type": "video", "url": "https://example.com/video2.mp4" } ], "composition": { "timeline": [ { "id": "layer-1", "type": "video", "asset_id": "asset-1", "background_mode": "solid", "time": { "start_seconds": 0, "duration_seconds": 5 } }, { "id": "layer-2", "type": "video", "asset_id": "asset-2", "time": { "start_seconds": 5, "duration_seconds": 5 } } ] } }' ``` --- ## Working with Audio Audio layers function similarly to video layers but never appear visually on the canvas. They are entirely dedicated to the auditory mix. ### Basic Audio Object A basic audio layer requires an `asset_id` and a `time` definition.
If the requested duration is longer than the source audio, the audio will simply end early unless you enable looping. ```json { "id": "bg-music", "type": "audio", "asset_id": "asset-music-1", "time": { "start_seconds": 0, "duration_seconds": 15 } } ``` ### Volume Control You can control the mix of your audio layer using the `volume` property inside `media_settings`. Volume is a multiplier: `1.0` is original volume, `0.5` is 50% volume, and `0.0` is muted. ```json { "id": "voiceover", "type": "audio", "asset_id": "asset-vo-1", "media_settings": { "volume": 0.8 } } ``` ### Looping Audio If you have a short music track (e.g., a 10-second loop) but your video composition is 60 seconds long, you can enable `loop` in the `media_settings`. The audio will repeat seamlessly until it reaches the layer's `duration_seconds`. ```json { "id": "looping-beat", "type": "audio", "asset_id": "asset-beat-1", "time": { "start_seconds": 0, "duration_seconds": 60 }, "media_settings": { "loop": true, "volume": 0.2 } } ``` ### Fades and Crossfades You can add explicit fade-in and fade-out ramps to audio layers using `fade_in_seconds` and `fade_out_seconds`. ReelForge will automatically ramp the volume smoothly. ```json { "id": "fading-music", "type": "audio", "asset_id": "asset-music-1", "time": { "start_seconds": 0, "duration_seconds": 10 }, "media_settings": { "fade_in_seconds": 2.0, "fade_out_seconds": 3.0 } } ``` ### The Full Audio Request Example This example demonstrates how to add a quiet, looping background track to a 10-second composition.
```bash curl -X POST https://api.reelforger.com/v1/videos/render \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [ { "id": "asset-music", "type": "audio", "url": "https://example.com/looping-beat.mp3" } ], "composition": { "timeline": [ { "id": "bgm-layer", "type": "audio", "asset_id": "asset-music", "time": { "start_seconds": 0, "duration_seconds": 10 }, "media_settings": { "volume": 0.1, "loop": true, "fade_in_seconds": 1.0, "fade_out_seconds": 1.0 } } ] } }' ``` --- ## Working with Images Image layers allow you to render static `.jpg`, `.png`, or `.webp` files over time. They are especially useful when combined with the `layout` object for Picture-in-Picture, logos, or watermarks. ### Basic Image Object An image layer needs an `asset_id` and a `time` definition. `time.start_seconds` is required. `time.duration_seconds` is optional when ReelForge can infer overall composition duration from timed media layers (or explicit `composition.duration_seconds`). ### Duration Inference Precedence ReelForge resolves an image layer's duration in this order: 1. If image `time.duration_seconds` is provided, that explicit value is used. 2. Otherwise, if `composition.duration_seconds` is provided, image duration is inferred as: - `composition.duration_seconds - image.time.start_seconds` 3. Otherwise, composition duration is inferred from the maximum timed end across `composition.timeline` and `composition.text_overlays`, then image duration is inferred from that result. If none of the above can establish a valid duration, validation fails with a clear error. If `composition.auto_stitch` is enabled and media layers are untimed, ReelForge can still infer composition duration at render time by probing media duration, then apply the same image-duration inference.
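The inference rules above can be sketched as a small resolver. This is an illustrative model of the documented precedence, not ReelForge's implementation; the function name and dict shapes are assumptions based on the manifest fields.

```python
def resolve_image_duration(image, composition):
    """Sketch of the documented image-duration precedence (illustrative only).

    `image` and `composition` are plain dicts shaped like manifest fragments:
      image       -> {"time": {"start_seconds": 0, "duration_seconds": 10}}
      composition -> {"duration_seconds": ..., "timeline": [...], "text_overlays": [...]}
    """
    start = image.get("time", {}).get("start_seconds", 0)

    # 1. An explicit image duration always wins.
    explicit = image.get("time", {}).get("duration_seconds")
    if explicit is not None:
        return explicit

    # 2. Otherwise fall back to composition.duration_seconds minus the image start.
    comp_duration = composition.get("duration_seconds")
    if comp_duration is None:
        # 3. Otherwise infer composition duration from the maximum timed end
        #    across timeline layers and text overlays.
        timed = composition.get("timeline", []) + composition.get("text_overlays", [])
        ends = [
            layer["time"].get("start_seconds", 0) + layer["time"]["duration_seconds"]
            for layer in timed
            if layer is not image and "duration_seconds" in layer.get("time", {})
        ]
        if not ends:
            raise ValueError("cannot establish a valid image duration")
        comp_duration = max(ends)

    duration = comp_duration - start
    if duration <= 0:
        raise ValueError("cannot establish a valid image duration")
    return duration
```

For example, an image starting at 2 s in a composition whose longest timed layer ends at 10 s resolves to an 8-second duration.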
```json { "id": "watermark-logo", "type": "image", "asset_id": "asset-logo-1", "time": { "start_seconds": 0, "duration_seconds": 60 } } ``` ### Spatial Coordinates (`layout`) To position an image, supply a `layout` object. The `layout` allows you to define standard CSS-like spatial bounds. * `x`: Horizontal position (e.g., `"10%"`, `"20px"`). Maps to the CSS `left` property. * `y`: Vertical position (e.g., `"10%"`, `"20px"`). Maps to the CSS `top` property. * `width`: The width of the bounding box. * `height`: The height of the bounding box. * `fit`: How the image fits inside its bounding box (`"cover"` or `"contain"`). ```json { "id": "logo-top-right", "type": "image", "asset_id": "asset-logo-1", "layout": { "x": "80%", "y": "5%", "width": "15%", "height": "10%", "fit": "contain" } } ``` ### Background Mode (`background_mode`) Similar to Videos, images support `background_mode`. If your image does not perfectly match its layout bounding box, you can control the empty space. For things like transparent `.png` logos, you should always explicitly set `"background_mode": "transparent"`, otherwise they will render with a blurred background copy of themselves. ```json { "id": "transparent-logo", "type": "image", "asset_id": "asset-logo-1", "background_mode": "transparent", "layout": { "x": "10px", "y": "10px", "width": "100px", "height": "100px", "fit": "contain" } } ``` ### Styling Images The `style` object uses snake_case keys (consistent with the rest of the manifest): - `opacity` (number, 0–1) - `border_radius` (string, e.g. `"24px"`) - `box_shadow` (string, e.g. `"0 10px 30px rgba(0,0,0,0.5)"`) ```json { "id": "styled-image", "type": "image", "asset_id": "asset-pic", "style": { "opacity": 0.8, "border_radius": "24px", "box_shadow": "0 10px 30px rgba(0,0,0,0.5)" } } ``` ### The Full Image Request Example This example places a transparent PNG logo in the top right corner of the video for the entire 10-second duration. 
```bash curl -X POST https://api.reelforger.com/v1/videos/render \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [ { "id": "asset-bg-video", "type": "video", "url": "https://example.com/background.mp4" }, { "id": "asset-logo", "type": "image", "url": "https://example.com/logo.png" } ], "composition": { "timeline": [ { "id": "base-video", "type": "video", "asset_id": "asset-bg-video", "time": { "start_seconds": 0, "duration_seconds": 10 } }, { "id": "overlay-logo", "type": "image", "asset_id": "asset-logo", "background_mode": "transparent", "time": { "start_seconds": 0, "duration_seconds": 10 }, "layout": { "x": "80%", "y": "5%", "width": "15%", "height": "10%", "fit": "contain", "z_index": 10 } } ] } }' ``` --- ## Working with Text Text is defined in `composition.text_overlays`, not in `composition.timeline`. This keeps media timing (video/audio/image) separate from text instructions for easier no-code payload generation. ### Basic Text Overlay Object A text overlay does not require an `asset_id` (there is no external media file to download). It requires `content` and `time` (`start_seconds`, `duration_seconds`). `id` is optional. ```json { "id": "intro-text", "content": "Welcome to Reel Forge", "time": { "start_seconds": 2, "duration_seconds": 3 } } ``` ### Layout & Bounding Boxes Use `layout` to control where text renders. If omitted, text defaults to full frame (`x: "0%"`, `y: "0%"`, `width: "100%"`, `height: "100%"`). ```json { "content": "Lower Third Caption", "layout": { "x": "10%", "y": "70%", "width": "80%", "height": "20%" }, "time": { "start_seconds": 0, "duration_seconds": 4 } } ``` ### Typography and Styling (`style`) Text overlays use strict snake_case for `style` keys (consistent with image/video layer styles).
Accepted keys: - `font_size` (number) - `font_family` (string) - `font_weight` (number) - `color` (string) - `text_align` (`left` | `center` | `right` | `justify`) - `letter_spacing` (number, rendered as em) - `line_height` (number) - `stroke_color` (string) - `stroke_width` (number) - `shadow_color` (string) - `shadow_blur` (number) - `shadow_offset_x` (number) - `shadow_offset_y` (number) `text_align` only controls horizontal alignment inside the overlay's bounding box. Use `layout` (or `global_layouts.text`) to control vertical placement and overlay region. ```json { "content": "LOUD AND CLEAR", "time": { "start_seconds": 1, "duration_seconds": 3 }, "style": { "font_family": "Montserrat", "font_size": 120, "font_weight": 900, "color": "#FFD700", "text_align": "center", "stroke_color": "black", "stroke_width": 4, "shadow_color": "rgba(0,0,0,0.8)", "shadow_offset_x": 0, "shadow_offset_y": 8, "shadow_blur": 16 } } ``` ### Global Text Defaults You can define shared text defaults once and override per overlay: 1. `composition.global_styles.text` then `text_overlay.style` 2. `composition.global_layouts.text` then `text_overlay.layout` Local keys always win. 
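The override order can be sketched as a shallow merge in which per-overlay keys replace global defaults. This is an illustrative helper; the function name is hypothetical, while the manifest keys come from this page.

```python
def effective_text_settings(composition, overlay):
    """Resolve a text overlay's style and layout: start from the global
    text defaults, then let local (per-overlay) keys override them."""
    style = dict(composition.get("global_styles", {}).get("text", {}))
    style.update(overlay.get("style", {}))

    layout = dict(composition.get("global_layouts", {}).get("text", {}))
    layout.update(overlay.get("layout", {}))
    return style, layout
```

An overlay that only sets `"color": "#FFD700"` keeps every other global key unchanged.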
```json { "composition": { "global_styles": { "text": { "color": "white", "font_family": "Inter", "font_weight": 700, "shadow_color": "rgba(0,0,0,0.6)", "shadow_offset_x": 0, "shadow_offset_y": 4, "shadow_blur": 12 } }, "global_layouts": { "text": { "x": "0%", "y": "30%", "width": "100%", "height": "20%" } }, "text_overlays": [ { "id": "text-default", "content": "Uses global defaults", "time": { "start_seconds": 0, "duration_seconds": 2 } }, { "id": "text-override", "content": "Overrides color locally", "time": { "start_seconds": 2, "duration_seconds": 2 }, "style": { "color": "#FFD700" } } ] } } ``` ### Full Text Request Example ```bash curl -X POST https://api.reelforger.com/v1/videos/render \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "version": "v1", "output": { "width": 1080, "height": 1920, "fps": 30 }, "assets": [ { "id": "asset-bg-video", "type": "video", "url": "https://example.com/background.mp4" } ], "composition": { "timeline": [ { "id": "base-video", "type": "video", "asset_id": "asset-bg-video", "time": { "start_seconds": 0, "duration_seconds": 10 } } ], "text_overlays": [ { "id": "title-text", "content": "THE TRUTH REVEALED", "time": { "start_seconds": 1, "duration_seconds": 4 }, "layout": { "x": "0%", "y": "30%", "width": "100%", "height": "20%", "z_index": 10 }, "style": { "color": "white", "font_size": 80, "font_weight": 700, "font_family": "Inter", "text_align": "center", "stroke_color": "black", "stroke_width": 4 } } ] } }' ``` --- # Jobs & Webhooks ReelForge rendering is asynchronous. ## Async lifecycle 1. Submit request 2. Receive `job_id` 3. Retrieve status via polling or webhooks 4. Download `output_url` on completion ## Polling Use `GET https://api.reelforger.com/v1/jobs/{jobId}` until terminal state: - `completed` - `failed` ## Webhooks Provide `webhook_url` at request time to receive completion/failure events.
- success event: `video.render.completed` - failure event: `video.render.failed` Webhooks support retries and signature verification (`ReelForge-Signature`). Treat this page as the canonical webhook/job retrieval reference. --- # Errors & Recovery ReelForge exposes structured failures so agents/builders can recover predictably. ## Error model - synchronous API validation/auth/billing errors - asynchronous worker errors on failed jobs - webhook delivery retries/failures (without mutating core render status) ## Common recovery pattern 1. inspect error code and message 2. correct payload/data issue 3. retry safely using idempotency key semantics ## Typical failure classes - invalid asset URL / unreachable media - impossible trim math - out-of-bounds timeline placement - caption alignment failure Use validation before render to catch many issues earlier. --- # Limits & Constraints Centralized operational constraints for predictable planning. ## Core limits - Maximum output duration: **60 seconds** - Output format: short-form social video constraints apply - Captions and text should respect readable layout bounds ## Input constraints - Assets must be direct public media URLs (not HTML pages) - Validate and render payloads must use top-level manifest fields - `snake_case` style keys are required where style objects are used ## Practical guidance - Keep layout coordinates in sane ranges to avoid clipping - Use validate-first in reliable production workflows - Prefer recipe paths unless custom timeline control is required ## Retention and retrieval - Output URL availability follows current storage retention policy - Use job/webhook events for robust downstream automation --- # Machine-readable Discovery ReelForge exposes canonical machine-readable surfaces for agent discovery and contract validation.
Use these together so agents can orient quickly, then validate strictly: - `https://www.reelforger.com/llms.txt` - `https://api.reelforger.com/v1/openapi.json` - `https://www.reelforger.com/llms-full.txt` (extended context) ## Discovery surfaces - [https://www.reelforger.com/llms.txt](https://www.reelforger.com/llms.txt) Use this as the routing/orientation layer (what ReelForge is, main paths, where to go next). - [https://api.reelforger.com/v1/openapi.json](https://api.reelforger.com/v1/openapi.json) Use this as the contract layer (strict endpoint/request/response schemas). - [https://www.reelforger.com/llms-full.txt](https://www.reelforger.com/llms-full.txt) Use this as the extended context layer when an agent needs broad docs recall in one file. ## Surface responsibilities - **`llms.txt`**: first stop for routing and path selection. - **OpenAPI (`https://api.reelforger.com/v1/openapi.json`)**: source of truth for contract validation. - **`llms-full.txt`**: optional deep context for broader reasoning and fallback lookup. ## How to use them together 1. Start with `llms.txt` for orientation. 2. If request intent is broad, route to docs: - [Start Here for Agents](https://www.reelforger.com/docs/quickstart-agents) - [Supported Output Patterns](https://www.reelforger.com/docs/supported-output-patterns) 3. Use OpenAPI to validate exact payload shape and endpoint behavior. 4. Use `llms-full.txt` when a single-file, full-context fallback is needed. 5. Prefer canonical recipe route/field when working with recipes: - `POST https://api.reelforger.com/v1/recipes/render` - `recipe_id` ## Practical guidance for agents - Use Recipes first for common outcomes. - Fall back to Timeline API when recipe controls are not enough. - Validate payloads before render in production workflows. 
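As a concrete example of "validate payloads before render", a client-side pre-check can catch the most common payload mistakes before any API call. The checks below are assumptions distilled from these docs (canonical `recipe_id` field, `variables` object, direct media URLs); they complement, not replace, server-side validation and the OpenAPI contract.

```python
def precheck_recipe_payload(payload):
    """Cheap local sanity checks for a recipe render payload (illustrative)."""
    problems = []

    # The canonical recipe field is `recipe_id`.
    if not payload.get("recipe_id"):
        problems.append("missing recipe_id")

    variables = payload.get("variables")
    if not isinstance(variables, dict):
        problems.append("variables must be an object")
        return problems

    # Assets must be direct public media URLs, not HTML pages.
    for key, value in variables.items():
        if key.endswith("_url") and not str(value).startswith(("http://", "https://")):
            problems.append(f"{key} does not look like an absolute media URL")
    return problems
```

An empty list means the payload is worth sending to the API; anything else should be fixed locally first.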
## Next links - [Start Here for Agents](/docs/quickstart-agents) - [Supported Output Patterns](/docs/supported-output-patterns) - [Recipes Overview](/docs/recipes) - [Timeline API Overview](/docs/timeline-api) --- # Recipes (Typed Source) --- ## Recipe: voiceover_explainer Narration-led explainer recipe with rotating backgrounds, hook, CTA, and transcript-driven captions. ### When to use it - You have a voiceover and transcript words - You want recipe-first generation with minimal timeline math - You need social-ready motion defaults and caption styling ### Required inputs - voiceover_audio_url - transcript_words - background_assets - style_preset ### Optional overrides - correct_text (punctuated reference for caption alignment) - hook_text / cta_text - hook_style / cta_style / captions_style - hook_layout / cta_layout / captions_layout - per-asset motion via object form ### Payload example ```json { "recipe_id": "voiceover_explainer", "style_preset": "bold_outline", "idempotency_key": "recipe-voiceover-001", "variables": { "voiceover_audio_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/football%20commentary%20speech.mp3", "background_assets": [ "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/kai%20ren%201.jpeg", { "url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/kai%20ren%202.jpeg", "motion": "pan_right" }, { "url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/kai%20ren%203.jpeg", "motion": "zoom_out" }, { "url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/kai%20ren%204.jpeg", "motion": "pan_left" }, { "url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/kai%20ren%205.jpeg", "motion": "zoom_in" } ], "transcript_words": [ { "text": "This", "start": 160, "end": 320, "confidence": 0.92733216, "speaker": "A" }, { "text": "goal", "start": 320, "end": 760, 
"confidence": 0.9995247, "speaker": "A" }, { "text": "changed", "start": 760, "end": 1240, "confidence": 0.9977877, "speaker": "A" }, { "text": "the", "start": 1240, "end": 1400, "confidence": 0.99970955, "speaker": "A" }, { "text": "whole", "start": 1400, "end": 1720, "confidence": 0.999252, "speaker": "A" }, { "text": "match.", "start": 1720, "end": 2080, "confidence": 0.57080585, "speaker": "A" }, { "text": "90", "start": 2720, "end": 3080, "confidence": 0.9999782, "speaker": "A" }, { "text": "seconds", "start": 3080, "end": 3680, "confidence": 0.99942905, "speaker": "A" }, { "text": "left,", "start": 3680, "end": 4000, "confidence": 0.6299604, "speaker": "A" }, { "text": "one", "start": 4240, "end": 4600, "confidence": 0.85895026, "speaker": "A" }, { "text": "chance,", "start": 4600, "end": 5040, "confidence": 0.9754993, "speaker": "A" }, { "text": "one", "start": 5360, "end": 5680, "confidence": 0.99934345, "speaker": "A" }, { "text": "touch", "start": 5680, "end": 6040, "confidence": 0.9998425, "speaker": "A" }, { "text": "to", "start": 6040, "end": 6240, "confidence": 0.99973553, "speaker": "A" }, { "text": "control", "start": 6240, "end": 6520, "confidence": 0.9999193, "speaker": "A" }, { "text": "it,", "start": 6520, "end": 6880, "confidence": 0.98364824, "speaker": "A" }, { "text": "one", "start": 7040, "end": 7360, "confidence": 0.99861884, "speaker": "A" }, { "text": "defender", "start": 7360, "end": 8040, "confidence": 0.99974185, "speaker": "A" }, { "text": "beaten,", "start": 8040, "end": 8640, "confidence": 0.9053415, "speaker": "A" }, { "text": "and", "start": 8720, "end": 9040, "confidence": 0.9994054, "speaker": "A" }, { "text": "then—", "start": 9040, "end": 9360, "confidence": 0.6485888, "speaker": "A" }, { "text": "top", "start": 9920, "end": 10320, "confidence": 0.7475579, "speaker": "A" }, { "text": "corner!", "start": 10400, "end": 10880, "confidence": 0.92240775, "speaker": "A" }, { "text": "From", "start": 11440, "end": 11800, 
"confidence": 0.9978428, "speaker": "A" }, { "text": "doubt", "start": 11800, "end": 12280, "confidence": 0.99835914, "speaker": "A" }, { "text": "to", "start": 12280, "end": 12560, "confidence": 0.9951866, "speaker": "A" }, { "text": "eruption", "start": 12560, "end": 13240, "confidence": 0.9988387, "speaker": "A" }, { "text": "in", "start": 13240, "end": 13520, "confidence": 0.9958352, "speaker": "A" }, { "text": "a", "start": 13520, "end": 13760, "confidence": 0.99887687, "speaker": "A" }, { "text": "single", "start": 13760, "end": 14240, "confidence": 0.99969864, "speaker": "A" }, { "text": "strike.", "start": 14240, "end": 14640, "confidence": 0.77905774, "speaker": "A" } ], "correct_text": "This goal changed the whole match. 90 seconds left, one chance, one touch to control it, one defender beaten, and then - top corner! From doubt to eruption in a single strike.", "hook_text": "How this goal changed everything", "cta_text": "Built in ReelForge", "captions_layout": { "y": "72%" } } } ``` ### Constraints - hook_text max length is 60 - transcript_words must contain at least one token - Output duration follows transcript timing ### Common mistakes and errors - Sending an empty transcript_words array - Using webpage URLs instead of direct media URLs - Overloading hook_text with long paragraphs --- ## Recipe: photo_slideshow Image-first slideshow paced from soundtrack duration with built-in Ken Burns motion cycle. 
### When to use it - You have multiple images and one backing audio track - You need deterministic per-slide pacing from audio duration ### Required inputs - audio_url - image_urls (min 2) - style_preset ### Optional overrides - audio_start_seconds (trim start of audio) - audio_duration_seconds (cap slideshow duration, max 60s) - title_card_text - title_card_style (default: big bold white 96px) - title_card_layout ### Payload example ```json { "recipe_id": "photo_slideshow", "style_preset": "luxury_serif", "variables": { "audio_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/coastal%20escape%20music.mp3", "image_urls": [ "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%201.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%202.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%203.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%204.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%205.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%206.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%207.jpeg", "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Coastal%20Escape%208.jpeg" ], "audio_start_seconds": 19, "audio_duration_seconds": 25, "title_card_text": "Coastal Escape" } } ``` ### Constraints - image_urls must include at least two images - Final pacing derives from probed audio duration (or audio_duration_seconds if set) - Total duration capped at 60 seconds - title_card_text uses big bold white (96px, 800 weight) by default; override with title_card_style ### Common mistakes and errors - Using inaccessible image URLs - Supplying only one image 
URL - Omitting audio_duration_seconds when audio exceeds 60s --- ## Recipe: listicle_ranked Ranked countdown recipe for Top-N clips with optional intro title, timed item labels, and optional CTA. ### When to use it - You are producing countdown/ranked social content - You need hook/items/CTA pacing from one soundtrack ### Required inputs - background_audio_url - hook_title - items (min 2) - style_preset ### Optional overrides - background_audio_start_seconds (trim start of soundtrack) - background_audio_duration_seconds (cap total duration, max 60s) - hook_duration_seconds (set to 0 to skip hook) - title_text / title_duration_seconds (separate intro title overlay) - countdown_mode (render labels as #N -> #1) - cta_text - hook_style/layout - item_title_style/layout - cta_style/layout ### Payload example ```json { "recipe_id": "listicle_ranked", "style_preset": "bold_outline", "variables": { "background_audio_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20listicle%20music.mp3", "background_audio_start_seconds": 48, "background_audio_duration_seconds": 9, "hook_title": "Top 3 Porsche Details", "hook_duration_seconds": 0, "title_text": "Top 3 Porsche Details", "title_duration_seconds": 1.4, "countdown_mode": true, "item_title_style": { "color": "#FFFFFF", "font_size": 78, "font_weight": 900, "text_align": "center", "stroke_color": "rgba(0,0,0,0.92)", "stroke_width": 5 }, "items": [ { "title": "Driver cockpit", "media_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20vid%202.mp4" }, { "title": "Rear light signature", "media_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/porsche%20rear%20lights%20clip.mp4" }, { "title": "Turbo S stance", "media_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/Porsche%20vid%201.mp4" } ] } } ``` ### Constraints - items must contain at least two entries - Total duration is 
capped at 60 seconds and derived from background audio timing - If hook_duration_seconds is > 0, hook and optional CTA consume item timing budget - countdown_mode labels follow item order as #N down to #1 ### Common mistakes and errors - Including rank prefixes inside item titles, which causes duplicate numbering - Forgetting to trim/cap long music with background_audio_start_seconds/background_audio_duration_seconds - Leaving hook_duration_seconds at default when you want items to begin immediately --- ## Recipe: captioned_clip Transcript-driven clip recipe for podcast/interview formats with full-frame or split-screen layout. ### When to use it - You have spoken-video clips and transcript words - You need caption-forward social clipping quickly ### Required inputs - primary_video_url - transcript_words - layout_variant - style_preset ### Optional overrides - correct_text (punctuated reference for caption alignment) - secondary_video_url (required for split_screen) - cta_text - cta_style/layout - captions_mode (phrase_karaoke default; supports phrase and word_only) - captions_style/layout ### Payload example ```json { "recipe_id": "captioned_clip", "style_preset": "karaoke_yellow", "variables": { "primary_video_url": "https://pub-2ad5592bc4ca44abb609acfc0b7c5ceb.r2.dev/reel-forge-website-assets/talking%20head%20runner.mp4", "layout_variant": "full_frame", "transcript_words": [ { "text": "Not", "start": 880, "end": 1000, "speaker": "A" }, { "text": "going", "start": 1000, "end": 1160, "speaker": "A" }, { "text": "to", "start": 1160, "end": 1320, "speaker": "A" }, { "text": "pretend", "start": 1320, "end": 1640, "speaker": "A" }, { "text": "I", "start": 1640, "end": 1760, "speaker": "A" }, { "text": "want", "start": 1760, "end": 1880, "speaker": "A" }, { "text": "to", "start": 1880, "end": 2040, "speaker": "A" }, { "text": "do", "start": 2040, "end": 2200, "speaker": "A" }, { "text": "this.", "start": 2200, "end": 2480, "speaker": "A" }, { "text": "I", "start": 3120, 
"end": 3440, "speaker": "A" }, { "text": "don't.", "start": 3440, "end": 3840, "speaker": "A" }, { "text": "But", "start": 5040, "end": 5320, "speaker": "A" }, { "text": "I", "start": 5320, "end": 5480, "speaker": "A" }, { "text": "also", "start": 5480, "end": 5720, "speaker": "A" }, { "text": "know", "start": 5720, "end": 5960, "speaker": "A" }, { "text": "I'll", "start": 5960, "end": 6160, "speaker": "A" }, { "text": "feel", "start": 6160, "end": 6320, "speaker": "A" }, { "text": "better", "start": 6320, "end": 6560, "speaker": "A" }, { "text": "after", "start": 6560, "end": 6840, "speaker": "A" }, { "text": "and", "start": 6840, "end": 7160, "speaker": "A" }, { "text": "hate", "start": 7160, "end": 7480, "speaker": "A" }, { "text": "myself", "start": 7480, "end": 7800, "speaker": "A" }, { "text": "if", "start": 7800, "end": 8000, "speaker": "A" }, { "text": "I", "start": 8000, "end": 8240, "speaker": "A" }, { "text": "don't.", "start": 8240, "end": 8640, "speaker": "A" }, { "text": "So—", "start": 10080, "end": 10480, "speaker": "A" }, { "text": "off", "start": 12720, "end": 13080, "speaker": "A" }, { "text": "we", "start": 13080, "end": 13360, "speaker": "A" }, { "text": "go.", "start": 13360, "end": 13680, "speaker": "A" } ], "correct_text": "Not going to pretend I want to do this. I don't. But I also know I'll feel better after and hate myself if I don't. So - off we go.", "captions_mode": "phrase", "captions_layout": { "y": "72%" } } } ``` ### Constraints - secondary_video_url is required when layout_variant=split_screen - Duration derives from transcript_words ### Common mistakes and errors - Choosing split_screen without secondary_video_url - Supplying unsorted transcript tokens - Omitting correct_text when raw transcript has poor punctuation --- ## Recipe: storytime_broll Story-driven narration recipe with continuous B-roll, top-third hook, chapter beats, and captions. 
### When to use it - Narrative videos with chapter beat overlays - Continuous B-roll under voiceover ### Required inputs - voiceover_audio_url - background_video_url - transcript_words - hook_text - style_preset ### Optional overrides - chapter_beats - hook_style/layout - chapter_beat_style/layout - captions_style/layout ### Payload example ```json { "recipe_id": "storytime_broll", "style_preset": "karaoke_yellow", "variables": { "voiceover_audio_url": "https://samplelib.com/lib/preview/mp3/sample-15s.mp3", "background_video_url": "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4", "hook_text": "The greatest comeback ever.", "transcript_words": [{ "text": "Hello", "start": 0, "end": 900 }], "chapter_beats": [{ "label": "The Setup", "timecode_seconds": 4 }] } } ``` ### Constraints - hook_text max length is 60 - chapter_beats max length is 10 - out-of-bounds beats are silently dropped ### Common mistakes and errors - Using long chapter labels - Assuming chapter beats outside duration will throw hard errors --- # Workflows (Typed Source) --- ## Workflow: AI Influencer Workflow End-to-end recipe-first workflow to generate a short social reel from clips + audio, with validate-before-render and async retrieval. ### User request Create a 15-second social reel from my highlight clips, add narration, and return a final output URL I can post. 
### Required assets - 1 voiceover audio URL - 2-4 background media URLs (video or image) - transcript_words array (AssemblyAI-style text/start/end ms) - optional webhook URL for completion delivery ### Chosen path - Choose recipe: voiceover_explainer - Validate payload first (recommended for production reliability) - Render via recipe endpoint - Retrieve via jobs polling or webhook callback ### Render request payload ```json { "recipe_id": "voiceover_explainer", "style_preset": "karaoke_yellow", "idempotency_key": "influencer-flow-001", "variables": { "voiceover_audio_url": "https://samplelib.com/lib/preview/mp3/sample-15s.mp3", "background_assets": [ "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4", "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerEscapes.mp4", "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4" ], "transcript_words": [ { "text": "The", "start": 0, "end": 900 }, { "text": "striker", "start": 900, "end": 1900 }, { "text": "gets", "start": 1900, "end": 2800 }, { "text": "past", "start": 2800, "end": 3600 }, { "text": "the", "start": 3600, "end": 4300 }, { "text": "defender,", "start": 4300, "end": 5600 }, { "text": "takes", "start": 5600, "end": 6600 }, { "text": "the", "start": 6600, "end": 7300 }, { "text": "shot,", "start": 7300, "end": 8700 }, { "text": "and", "start": 8700, "end": 9500 }, { "text": "it", "start": 9500, "end": 10200 }, { "text": "is", "start": 10200, "end": 10900 }, { "text": "a", "start": 10900, "end": 11600 }, { "text": "beautiful", "start": 11600, "end": 13400 }, { "text": "goal!", "start": 13400, "end": 15000 } ], "hook_text": "Watch this crazy play!", "cta_text": "Follow for more highlights" }, "webhook_url": "https://example.com/reelforge/webhooks" } ``` ### Validate response example ```json { "success": true, "data": { "valid": true, "estimated_total_duration_seconds": 15, "estimated_credit_cost": 15, "warnings": [] }, "request_id": "req_01hxyz" } ``` 
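The chosen path above, render then retrieve, can be sketched as a small client loop. HTTP is injected as plain functions so the sketch is testable without network access; the endpoint paths and response shapes follow the documented examples, while `render_and_wait` itself and its parameters are hypothetical.

```python
import time

def render_and_wait(post, get, payload, poll_interval_seconds=2.0, max_polls=150):
    """Submit a recipe render, then poll the jobs endpoint to a terminal state.

    `post(path, body)` and `get(path)` are caller-supplied helpers returning
    parsed-JSON dicts (e.g. thin wrappers over an HTTP client).
    """
    submitted = post("/v1/recipes/render", payload)
    job_id = submitted["job_id"]

    for _ in range(max_polls):
        job = get(f"/v1/jobs/{job_id}")["data"]
        if job["status"] == "completed":
            return job["output_url"]
        if job["status"] == "failed":
            raise RuntimeError(job.get("error_message") or "render failed")
        time.sleep(poll_interval_seconds)

    raise TimeoutError(f"job {job_id} did not reach a terminal state")
```

In production, prefer the `webhook_url` callback over tight polling, and keep `idempotency_key` stable across retries so a resubmitted request cannot double-render.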
### Render response example ```json { "success": true, "job_id": "8f1fd0fe-63a5-4fef-8f2c-8f1225f6d309", "request_id": "req_01hxyz" } ``` ### Job retrieval response example ```json { "success": true, "data": { "id": "8f1fd0fe-63a5-4fef-8f2c-8f1225f6d309", "status": "completed", "output_url": "https://signed-url.example/video.mp4", "error_message": null, "created_at": "2026-03-10T12:00:00.000Z", "updated_at": "2026-03-10T12:00:21.000Z", "total_duration_seconds": 15, "credit_cost": 15 }, "request_id": "req_01hxyz" } ``` --- ## OpenAPI Schema Machine-readable JSON schema: https://api.reelforger.com/v1/openapi.json