Frame Intelligence
AI-powered commercial film intelligence platform—six-layer video analysis and a seven-stage production pipeline for professional filmmakers and creative directors.
Executive Summary
Frame Intelligence (ATANDA Studio) is a three-product AI platform built for commercial filmmakers, creative directors, and brand strategists. ATANDA Lens analyses any TikTok or Instagram commercial across six professional frameworks—Brand Strategy, Story Structure, Cinematography, Sound, Post Production, and Performance Prediction—using the Claude API with structured JSON output enforced via forced tool_choice. ATANDA Forge is a seven-stage AI production pipeline that takes a brief or Lens analysis from concept to post-production brief, powered by the Meridian Engine (seven specialist Claude system prompts). ATANDA Vault is the engine marketplace where filmmakers publish and monetise their own specialist system prompts.
- Analysis Layers: Brand, Story, Cinematography, Sound, Post, Performance
- Forge Stages: Brief → Concepts → Script → Visual → Generate → Guide → Post
- Frames Sampled: up to 60 per video (≈90K input tokens per analysis)
- Platforms Supported: TikTok and Instagram (fully working)
Problem Statement
Professional commercial filmmakers and creative directors have no structured tool to systematically analyse what makes a commercial work. Feedback is subjective, inconsistent, and locked in the heads of senior creatives. Junior directors and brand strategists waste days trying to reverse-engineer successful campaigns with no framework to guide them.
On the production side, the process from brief to shooting script involves dozens of discrete creative decisions—audience insight, proposition, concept routes, shot-by-shot visual direction, AI generation prompts, and post-production briefs. Each decision currently requires a different specialist: brand consultant, scriptwriter, director of photography, producer. For independent filmmakers and small agencies, that chain is inaccessible.
Existing AI tools for video either describe what is on screen (generic captions) or analyse sentiment at a surface level. None apply the professional frameworks used in the commercial film industry—Bruce Block's visual components, SB7 story structure, Cialdini persuasion principles, Byron Sharp brand distinctiveness—to give structured, actionable intelligence that a working director can actually use.
User Personas
Kolade
Junior Commercial Director
Goals:
- Build a repeatable, defensible system for visual direction he can present to clients with confidence
- Reduce the gap between what he feels intuitively about a reference and what he can explain technically
- Diagnose why pitches fail using structured framework analysis
Pain Points:
- Client feedback is subjective with no framework to push back
- Shot lists built from memory and instinct rather than structured analysis
- Research takes 3–4 hours per project with no reusable output
"I upload a Nike reference, get a structured breakdown of why it works — technically and emotionally — and walk into my client pitch with a visual rationale I can actually defend."
Amara
Brand Strategist at a Creative Agency
Goals:
- Close the translation gap between a brand brief and a shooting script
- Audit competitor brand films systematically
- Move from brief to first-draft creative direction in days
Pain Points:
- Directors interpret briefs differently every time — no shared visual language between strategy and production
- Campaign timelines are shrinking but the briefing process has not
- Alignment meetings frequently require multiple rounds of revision
"I write the brief once. The tool translates it into a shooting script structure the director can work from immediately. We go into pre-production aligned."
Ola
Independent Filmmaker and Content Creator
Goals:
- Build a full production pack — concept, shot list, and AI-generated visuals — that clients can sign off on without an agency
- Use Midjourney and Higgsfield effectively without hours of trial and error
- Deliver 2–3 commercial projects per month solo
Pain Points:
- AI generation tools require precise technical prompts she has to reverse-engineer each time
- No end-to-end workflow from concept to client deliverable for solo operators
- Clients expect agency-grade output at freelance rates
"I describe the look I want and get back the exact prompts, the shot breakdown, and a client-ready pack. In hours, not days."
Solution Overview
Frame Intelligence is built as three interconnected products on a single platform. ATANDA Lens performs deep six-layer analysis of commercial videos using Claude API with professional frameworks baked into structured system prompts. ATANDA Forge chains seven specialist AI modules (the Meridian Engine) to take any brief or Lens analysis through a complete commercial production pipeline. ATANDA Vault is the engine marketplace where filmmakers can publish and monetise their own specialist system prompts.
Six-Layer Lens Analysis
Brand Strategy (Cialdini ×7 including Unity + Byron Sharp + Ries & Trout positioning), Story Structure (SB7 full 7-field + McKee dramatic structure + Save the Cat beats), Cinematography (Bruce Block ×7 visual components + Blain Brown lens theory + John Alton lighting language), Sound, Post Production (Walter Murch ×6 criteria + Karen Pearlman rhythm theory), and Performance Prediction — each triggered individually by the user, never auto-loaded.
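The structured-output guarantee behind each layer can be sketched as follows. This is a minimal illustration of forcing a tool call with the Anthropic Messages API; the tool name, schema fields, and model id here are assumptions, not the production definitions.

```python
# Sketch: enforcing structured JSON output for one Lens layer by forcing a
# tool call. Tool name, schema, and model id are illustrative assumptions.

def build_layer_request(layer: str, frames_summary: str) -> dict:
    """Build Claude API kwargs that force a structured-output tool call."""
    tool = {
        "name": f"record_{layer}_analysis",
        "description": f"Record the structured {layer} analysis.",
        "input_schema": {
            "type": "object",
            "properties": {
                "score": {"type": "number"},
                "findings": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["score", "findings"],
        },
    }
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model id
        "max_tokens": 4096,
        "tools": [tool],
        # Forcing tool_choice guarantees the reply is a tool_use block that
        # conforms to input_schema, never free-form prose.
        "tool_choice": {"type": "tool", "name": tool["name"]},
        "messages": [{"role": "user", "content": frames_summary}],
    }

kwargs = build_layer_request("brand", "frame captions + audio transcript ...")
# response = anthropic.Anthropic().messages.create(**kwargs)
# analysis = response.content[0].input  # parsed dict matching the schema
```

Because the tool schema is the contract, every layer's output is immediately renderable without defensive parsing.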
Seven-Stage Forge Pipeline
Brief → Concepts (5+ routes with approach tags) → Script (30s shooting script + 6s cutdown) → Visual Direction (per-scene, per-shot) → Generate (Midjourney + Higgsfield prompts) → Production Guide → Post Briefs. Every stage is explicitly user-triggered.
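The chaining pattern above can be sketched as a context dict threaded through the stages. The `run_stage` stub stands in for the real Claude call; stage names mirror the pipeline, and the stopping point models the last stage the user clicked.

```python
# Sketch: chaining Forge stages so each stage receives the accumulated
# outputs of earlier stages as context.

STAGES = ["brief", "concepts", "script", "visual", "generate", "guide", "post"]

def run_stage(stage: str, context: dict) -> str:
    """Stand-in for one Meridian module call; returns that stage's output."""
    return f"{stage} output (given {len(context)} prior stage(s))"

def run_pipeline(brief: str, up_to: str = "post") -> dict:
    """Run stages in order, threading prior outputs forward as context.

    Each stage is user-triggered in the product; `up_to` models stopping
    at whatever stage the user last clicked.
    """
    context = {"brief": brief}
    for stage in STAGES[1:]:
        context[stage] = run_stage(stage, context)
        if stage == up_to:
            break
    return context

pack = run_pipeline("30s spot for a running-shoe launch", up_to="script")
# pack now holds the brief plus concepts and script outputs
```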
The Meridian Engine
Seven specialist Claude system prompts (Brand Consultant, Idea Machine, Scriptwriter, DOP, AI Producer, Line Producer, Post Supervisor) loaded from a structured JSON engine file. Swappable—any filmmaker can publish their own engine to the Vault.
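A loader for such an engine file might look like the sketch below. The JSON shape (a `modules` object keyed by role, each with a `system_prompt`) is an assumption about the Meridian format, not its actual schema.

```python
# Sketch: loading a swappable engine file and validating that all seven
# specialist modules are present. Field names are assumed, not confirmed.
import json
from pathlib import Path

ROLES = [
    "brand_consultant", "idea_machine", "scriptwriter",
    "dop", "ai_producer", "line_producer", "post_supervisor",
]

def load_engine(path: Path) -> dict:
    """Load an engine JSON and verify all seven module prompts exist."""
    engine = json.loads(path.read_text(encoding="utf-8"))
    missing = [r for r in ROLES if r not in engine.get("modules", {})]
    if missing:
        raise ValueError(f"engine missing modules: {missing}")
    return engine

def system_prompt(engine: dict, role: str) -> str:
    """Return the specialist system prompt for one pipeline stage."""
    return engine["modules"][role]["system_prompt"]
```

Validating at load time is what makes engines safely swappable: a Vault engine that omits a module fails fast instead of failing mid-pipeline.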
Ethics & Responsible AI
User-Triggered Only
Every Claude API call in the platform is triggered by an explicit user click. No automatic loading, no background processing, no prefetching. Token spend equals user intent.
Data Minimisation
Videos are not stored permanently. Extracted frames are deleted after analysis via a GDPR-compliant DELETE /job/{job_id} endpoint. Only the analysis JSON is saved to Supabase.
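The cleanup behind that endpoint can be sketched as a plain function. In production it would sit behind DELETE /job/{job_id}; the on-disk layout (frames/<job_id>/) and the function name are assumptions made for illustration.

```python
# Sketch: GDPR-style frame deletion for one analysis job, assuming extracted
# frames live under frames_root/<job_id>/.
import shutil
from pathlib import Path

def delete_job(job_id: str, frames_root: Path) -> bool:
    """Remove every extracted frame for a job; return True if frames existed."""
    job_dir = (frames_root / job_id).resolve()
    # Refuse path-traversal job ids like "../other"
    if frames_root.resolve() not in job_dir.parents:
        raise ValueError(f"invalid job_id: {job_id!r}")
    if not job_dir.is_dir():
        return False
    shutil.rmtree(job_dir)  # frames gone; only the analysis JSON stays in Supabase
    return True
```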
Transparency of Output
Every analysis output shows which framework was applied (SB7, Bruce Block, Cialdini) and how scores were derived—users understand the reasoning, not just the result.
Platform Limitations Disclosed
YouTube is explicitly flagged as unsupported due to bot detection on cloud servers. The product never silently fails—every limitation is visible in the UI.
Guardrails & Safeguards
| Rule | Threshold | Rationale |
|---|---|---|
| No auto-load | 0 unprompted API calls | Token spend must equal explicit user intent |
| Frame cap | 60 frames maximum | Cost control and predictable token usage |
| Structured output | 100% tool_choice enforcement | Prevents hallucinated or malformed analysis |
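The frame cap can be implemented as even sampling across the clip, which is one plausible reading of the strategy; the function below is a sketch under that assumption, not the production sampler.

```python
# Sketch: even frame sampling under the 60-frame cap, so short and long
# clips both stay within a predictable token budget.

FRAME_CAP = 60

def sample_indices(total_frames: int, cap: int = FRAME_CAP) -> list[int]:
    """Pick at most `cap` evenly spaced frame indices from a video."""
    if total_frames <= 0:
        return []
    if total_frames <= cap:
        return list(range(total_frames))  # short clips keep every frame
    step = total_frames / cap
    return [int(i * step) for i in range(cap)]
```

Spreading indices uniformly avoids the two failure modes noted later in this document: too few frames missing key scenes, and too many frames blowing past token limits.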
Bias Audit & Fairness Assessment
The analysis frameworks (SB7, Bruce Block, Cialdini) are applied uniformly regardless of video content, platform, or brand. The Lens analysis does not score cultural appropriateness—it scores craft and persuasion techniques. Future phases will include an explicit content flag for ASA/Clearcast compliance review in the Forge Script stage.
OKRs & Success Metrics
Objective
Build an end-to-end AI commercial film intelligence platform applying professional craft frameworks across analysis and production
Key Results
- Ship ATANDA Lens with 6 analysis layers using professional frameworks: 100% (target: 6 layers complete)
- Ship auth, Supabase library, and all 7 Forge route shells (Phase 2): 100% (target: Phase 2 complete)
- Wire all 7 Forge stages to Claude API via Meridian Engine (Phase 3): 100% (target: 7 stages live)
- Deploy production-grade platform (Vercel + Render + Supabase): 100% (target: live in production)
Success Metrics
| Metric | Target | Achieved | Status |
|---|---|---|---|
| Lens layers shipped | 6 | 6 | Achieved |
| Auth system | Magic link + RLS | Complete | Achieved |
| Forge stages wired | 7 | 7 of 7 — complete and deployed | Achieved |
| Engine JSON architecture | Extensible | Meridian v1 — all 7 modules live | Achieved |
Roadmap & Future Vision
Now (Completed)
- ATANDA Lens — 6-layer video analysis (TikTok + Instagram)
- Magic link auth + Supabase library with session persistence
- Forge dashboard + all 7 stage route shells
- Vault UI with Meridian Engine card
- Meridian Engine JSON — all 7 module system prompts
- backend/forge_pipeline.py + engine_loader.py + POST /forge/stage
- All 7 Forge stages wired to Claude API
- Lens → Forge pipeline (analysis as Stage 1 context)
- Forge project save per stage + resume from Supabase
- Production pack PDF export
Next (In Progress)
- Phase 2 audio analysis via Whisper (Sound layer fully populated)
- Vault engine upload for private engines
- Usage analytics — layer click-through rate, export rate
Later (Planned)
- Vault engine marketplace + publishing workflow
- Stripe subscriptions (Free / Creator / Studio / Agency)
- Revenue share for engine creators (Stripe Connect)
- Team collaboration (Agency tier)
Learnings & Reflections
What Went Well
- Forced tool_choice on every Claude API call was the right call from day one—structured JSON output made every layer immediately renderable and eliminated all parsing edge cases
- Design token architecture (single CSS variables file → Tailwind config) made the entire UI themeable from one file—one change cascades everywhere instantly
- Separating Lens analysis storage (Supabase) from session state (sessionStorage) gave the right UX: fast resume from cached state, with persistent backup in the database
Challenges Faced
- YouTube is blocked by bot detection on cloud servers—a known limitation that required explicit product communication rather than a workaround
- Frame sampling strategy required careful tuning: too few frames missed key scenes, too many hit token limits and increased cost unpredictably
- Building 7 Forge stage components with consistent layout and state management required a clear shared architecture (StageLayout, StageSidebar, useForge hook) before building individual stages
What I'd Do Differently
- Build the Meridian Engine JSON before the UI shells—having the system prompts locked in would have let me validate the pipeline output quality much earlier in development
- Define the per-stage Supabase schema more precisely before building Phase 2—some JSONB column structures were revised after Phase 2 was complete
- Prototype the Forge pipeline end-to-end in a single script before building the full UI—faster signal on whether the chained stage context actually produces coherent commercial briefs
"From The AI Product Manager's Handbook: 'The best AI products apply domain expertise to constrain AI outputs, not general intelligence to replace domain expertise.' Every framework in Frame Intelligence—SB7, Bruce Block, Cialdini—is a constraint that makes the AI output more useful, not less."
PM Artefacts
Written before any code. Every project ships with a full PM artefact set.
Let's Connect
I'm actively seeking Junior AI PM / Technical PM roles at creative tech, media, and AI-first companies. Let's connect if you're building tools for creators or applying AI to professional workflows.
© 2025 Ogbebor Osaheni. Built with Next.js, React, and Tailwind CSS.
