Ojuit
A two-product AI platform for indie filmmakers: AI colour intelligence with Delta E measurement, XGBoost correction, and LUT export — plus an AI story engine that guides writers from raw idea to full beat sheet. Built end-to-end with FastAPI, Next.js, Supabase, OpenCV, and Claude Opus 4.5 with extended thinking.
Executive Summary
Ojuit is a two-product AI platform for solo indie filmmakers. The Colour product uses OpenCV, XGBoost, and CIE Lab Delta E to solve colour drift across the full shoot pipeline — including scene-to-reference LUT generation for DaVinci Resolve and Premiere Pro. The Story product is a full-stack AI story engine that guides writers from a raw idea through interrogation, logline, character psychology, and a full beat board using structured prompt chains, real-time state persistence, and Claude as the AI backbone. Both products share a single deployed platform, a unified CSS design system, and one product philosophy: AI that suggests, humans that decide.
AI Products
Colour intelligence + Story engine
Story Stages
Idea → Interrogation → Logline → Character → Beats
Built End-to-End
FastAPI · Next.js · Supabase · Render · Vercel
State Persisted
Every action saved — resume from exact last step
Problem Statement
Solo indie filmmakers face two distinct but related problems. The first is technical: colour drift. Shooting run-and-gun means ISO rises in low light, white balance shifts between environments, and exposure floats with changing conditions. The result is footage where scene 1 and scene 12 look like they were filmed on different days — costing 5–8 hours of manual correction per project.
The second problem is creative: most filmmakers have story ideas they never develop because the path from raw idea to structured screenplay is opaque and usually requires either expensive development support or years of craft knowledge. Existing writing tools — Final Draft, Celtx, generic AI chat — either assume you already have a fully formed story or generate one wholesale, removing the writer from their own creative process.
Ojuit addresses both. The Colour product eliminates the technical barrier with CIE Lab Delta E measurement, XGBoost correction predictions, and downloadable LUT files for DaVinci Resolve and Premiere Pro. The Story product eliminates the structural barrier — guiding writers through story discovery rather than doing it for them, using AI as a collaborator that asks better questions rather than one that writes the story.
"From Prediction Machines (Agrawal, Gans, and Goldfarb): AI reduces the cost of prediction. Ojuit applies this across two domains — colour correction prediction for filmmakers who can't afford a colorist, and story structure prediction for writers who don't yet know the craft. When prediction is cheap, solo creators can achieve professional outcomes without professional support."
User Personas
Ojuit serves three distinct solo filmmaking profiles: a documentary filmmaker who needs colour consistency without crew, a content creator making the leap to narrative film without technical training, and a micro-budget producer who needs to catch problems on set before post begins. Each persona maps directly to a specific stage of the colour pipeline — pre-shoot reference analysis, on-shoot drift detection, and post correction.
Kofi
Solo Documentary Filmmaker, Manchester
Goals:
- Achieve consistent professional colour across all scenes without hiring a colorist
- Spend creative time on storytelling rather than technical correction
- Deliver a final grade that looks intentional, not repaired
Pain Points:
- Spends 5–8 hours per project manually fixing colour continuity caused by camera drift between locations
- Cannot afford a colorist or DIT on a solo budget
- Has no reliable way to measure drift before it becomes a post-production problem
"I finish a multi-location shoot, run my footage through the tool, and get a precise measurement of how far each scene has drifted from my reference. I apply corrections in an hour instead of a day."
Priya
Content Creator Transitioning to Narrative Film, London
Goals:
- Achieve a cinematic, consistent look without deep technical expertise
- Understand which camera settings to adjust for a specific visual style
- Graduate from content that looks competent to content that looks intentional
Pain Points:
- Footage looks inconsistent between scenes but she cannot identify the cause
- Colour grading tutorials assume professional knowledge she does not have
- No clear path from a reference look she admires to camera settings she can actually use
"I find a reference film with the look I want. The tool tells me the colour profile of that look and the camera settings I need to replicate it. My whole film looks like it came from the same world."
Marcus
Micro-Budget Producer, Birmingham
Goals:
- Reduce post-production hours without sacrificing output quality
- Deliver consistent colour across multi-day shoots
- Catch drift on set rather than discovering it in post
Pain Points:
- Cannot afford a colorist for every project
- Colour consistency between Day 1 and Day 3 of a shoot is unreliable
- Current tools require manual work that consumes the post schedule
"I check colour drift between days while we are still on location. If something is off, I fix it before we wrap. Post-production becomes grading, not rescue."
Solution Overview
Ojuit is a two-product AI platform. The Colour product is a three-module pipeline covering pre-shoot, on-shoot, and post correction — with CIE Lab Delta E measurement, XGBoost correction predictions, and scene-to-reference LUT generation for DaVinci Resolve and Premiere Pro. The Story product is a five-stage AI story engine: Cold Open → Interrogation → Logline Forge → Character Forge → Beat Board. Theme development is embedded within Logline Forge. The Story Bible is a persistent panel available from Logline Forge onward — not a stage. Both products share a single Next.js frontend, a FastAPI backend on Render, a Supabase database, and a unified CSS design system built entirely on CSS custom properties.
AI Story Engine
A progressive story development tool guiding writers from raw idea through interrogation, logline generation, character psychology (Lie/Want/Need), Save the Cat moments, and a full beat board. Every AI suggestion is grounded in the writer's own committed inputs. The writer commits answers — the AI never locks a field.
Full-Stack AI Architecture
FastAPI backend with structured prompt chains, context-aware suggestion endpoints, and AVOID_LIST negative constraint injection to prevent AI output monoculture. Next.js frontend with real-time Supabase persistence — every action saved, every session resumable from the exact last step.
Colour Intelligence Pipeline
OpenCV per-frame colour extraction, XGBoost correction prediction, CIE Lab Delta E measurement, and scene-to-reference LUT generation. Upload two frames and get a downloadable .cube LUT file for DaVinci Resolve or Premiere Pro. Delta E below 5 is the professional continuity threshold. AI suggests — filmmaker decides.
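The Delta E check above can be sketched as a CIE76 distance between two Lab colours. This is a minimal sketch, not the production code: the real pipeline uses OpenCV for RGB-to-Lab conversion, whereas here Lab values are assumed to be already available. The 5 and 10 thresholds come from this document's continuity target and critical flag.

```python
import math

def delta_e_cie76(lab_scene, lab_reference):
    """Euclidean distance in CIE Lab space (CIE76 Delta E)."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(lab_scene, lab_reference)))

def drift_verdict(delta_e):
    """Plain-English verdict using the thresholds stated in this case study:
    below 5 meets the professional continuity target; above 10 flags
    a large correction for manual review."""
    if delta_e < 5:
        return "within continuity target"
    if delta_e <= 10:
        return "visible drift - correction recommended"
    return "critical drift - manual review"

# Example: a scene frame vs the locked reference, both as (L*, a*, b*)
scene = (62.0, 14.5, 21.0)
reference = (60.0, 12.0, 18.0)
de = delta_e_cie76(scene, reference)
print(round(de, 2), drift_verdict(de))
```

Measuring in Lab rather than RGB matters because Lab distances approximate perceptual difference, which is what continuity actually is.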
Data & Methodology
Data Dictionary
| Feature | Type | Description | Source |
|---|---|---|---|
| raw_idea | text | Writer's original idea or title — seed for all downstream AI calls | User input |
| interrogation_location / broken_relationship / private_behaviour | text | Three specificity answers that ground all AI suggestions in the writer's world — saved to Supabase on interrogation continue | User input |
| theme | text | Primal question beneath the story — AI-generated, user-editable, affects character and beat suggestions | Claude API → user edit |
| logline / logline_label | text | Locked logline and its angle label (External Stakes / Internal Stakes / Tonal Shift / Custom) | User selection |
| wound_answer / character_name | text | Protagonist wound and name — saved immediately on submission, restored on resume | User input |
| character_lie / want / need / save_the_cat_scene | text | Full character psychology profile — generated from wound + logline, user-editable per field | Claude API → user edit |
| beats | jsonb | Array of completed beats each with number, name, answer — incremental save after every beat submission | User input per beat |
| stage | integer | Current story stage (0–6) — used to route resume to correct component with full state | Derived |
| mean_r/g/b, colour_temperature_k, drift_magnitude | float | Per-frame colour statistics for XGBoost correction model (Colour product) | OpenCV extraction |
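The `beats` and `stage` fields above drive incremental save and stage-based resume. The sketch below illustrates that logic under stated assumptions: the component names and exact stage numbering are illustrative, and a plain dict stands in for the Supabase row that production code would UPDATE.

```python
def append_beat(story, beat_number, beat_name, answer):
    """Incremental save: append one completed beat to the story's
    beats array (stored as jsonb in Supabase). In production this
    would be an UPDATE against the Supabase row; here `story` is
    a plain dict standing in for that row."""
    story.setdefault("beats", []).append(
        {"number": beat_number, "name": beat_name, "answer": answer}
    )
    return story

# Illustrative stage -> component routing used on resume. The stage
# integer is the column from the data dictionary; these component
# names are assumptions, not the actual frontend components.
STAGE_COMPONENTS = {
    0: "ColdOpen",
    1: "Interrogation",
    2: "LoglineForge",
    3: "CharacterForge",
    4: "BeatBoard",
}

def resume_component(story):
    """Route a resumed session to the component for its saved stage."""
    return STAGE_COMPONENTS.get(story.get("stage", 0), "ColdOpen")

story = {"raw_idea": "A lighthouse keeper", "stage": 4}
append_beat(story, 1, "Opening Image", "Fog swallows the coastline.")
print(resume_component(story), len(story["beats"]))
```

Saving after every beat, rather than at stage completion, is what makes resume-from-exact-last-action possible.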
Methodology
Two distinct data strategies for two products. The Colour product uses synthetic training data — 8,000 scenes across four drift types (standard, mixed lighting, LOG profile, harsh clipping) with programmatically applied drift, so ground truth corrections are always known exactly. All colour difference calculations use CIE Lab space, not RGB Euclidean distance. Scene-to-reference LUT generation fits a degree-2 polynomial mapping from scene Lab values to reference Lab values per channel, producing a 33x33x33 .cube file. The Story product uses a progressive schema: the story record is created in Supabase at Cold Open when the user confirms Save and Begin — before interrogation starts. Every subsequent committed answer is saved immediately on commit. Synthetic data is a deliberate professional choice, not a shortcut. The XGBoost model was trained with a 3% noise factor, confirmed in model_metadata.json, to simulate real camera sensor variation.
Validation Approach
- Story: resume fidelity — every field restored from Supabase matches exact state when user left
- Story: suggestion grounding — all AI suggestions verified to use full committed story context, not just raw idea
- Colour: Delta E measurement — industry standard metric, target below 5 between corrected scenes
- Colour: override rate tracking — proxy for model alignment with creative intent (target below 25%)
Ethics & Responsible AI
Creative Autonomy
The writer commits every answer — the AI never locks a field. Every suggestion is optional. This is enforced at UX level: no AI output is applied without explicit user action. The story belongs to the writer, not the model.
Avoiding AI Monoculture
Every prompt injects an AVOID_LIST — negative constraints explicitly blocking overused AI defaults: absent parent wounds, chosen one structures, speech-at-the-end resolutions. Forces the model to find the specific human truth in each writer's idea rather than pattern-matching to familiar tropes.
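A minimal sketch of this injection follows. The list entries come from this document; the prompt wording and function shape are assumptions, not the production prompt chain.

```python
AVOID_LIST = [
    "absent parent wounds",
    "chosen one structures",
    "speech-at-the-end resolutions",
]

def with_avoid_list(task_prompt, avoid=AVOID_LIST):
    """Append negative constraints to every story-pipeline prompt so the
    model cannot fall back on overused defaults, however sparse the input."""
    constraints = "\n".join(f"- {item}" for item in avoid)
    return (
        f"{task_prompt}\n\n"
        "Do NOT use any of the following overused patterns:\n"
        f"{constraints}\n"
        "Find the specific human truth in THIS writer's idea instead."
    )

prompt = with_avoid_list("Suggest three possible wounds for the protagonist.")
print(prompt)
```

Because the constraint list is injected centrally rather than per prompt, adding a newly observed trope blocks it across every stage at once.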
Transparency in AI Assistance
The Story Bible is a persistent panel available from Logline Forge onward — it populates progressively as the writer commits answers and is always accessible via a side tab on desktop or a FAB on mobile. The Theme field surfaces the AI's primal question so the writer can interrogate, edit, or reject it. Nothing is hidden.
Honest Scope
Ojuit Story guides — it does not write. Ojuit Colour corrects continuity drift — it does not replace creative grading. UI language consistently says 'suggest', not 'fix'. Scope is clearly communicated to prevent over-reliance.
Guardrails & Safeguards
| Rule | Threshold | Rationale |
|---|---|---|
| AVOID_LIST injection | Applied to every AI prompt in the story pipeline | Prevents the model defaulting to overused patterns regardless of how sparse the input is |
| Suggestions always optional | No AI output applied without explicit user click | Writer creative autonomy is non-negotiable — the AI cannot write the story for them |
| Large Colour Correction Flag | Delta E above 10 (Critical) triggers manual review warning | Large corrections may indicate intentional creative choice, not accidental drift |
| Override Rate Monitor | Alert if colour override rate exceeds 40% | High override rate signals model misalignment with creative intent requiring retraining |
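The override-rate guardrail in the table above is a simple proportion check. A sketch, assuming a minimal event shape (the 40% alert threshold is from the table; the dict structure is an assumption):

```python
def override_rate(events):
    """Fraction of AI colour corrections the filmmaker overrode.
    `events` is a list of dicts with a boolean 'overridden' key."""
    if not events:
        return 0.0
    return sum(1 for e in events if e["overridden"]) / len(events)

def should_alert(events, threshold=0.40):
    """Alert when the override rate exceeds the 40% guardrail,
    signalling the model may be misaligned with creative intent."""
    return override_rate(events) > threshold

events = [{"overridden": True}, {"overridden": False}, {"overridden": True},
          {"overridden": True}, {"overridden": False}]
print(override_rate(events), should_alert(events))  # 0.6 True
```

The metric is deliberately coarse: it does not ask why a correction was overridden, only how often, which is enough to trigger a retraining review.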
Bias Audit & Fairness Assessment
Story product: primary risk is AI output monoculture — the model converging on the same archetypes regardless of the writer's input. Mitigation: AVOID_LIST negative constraints in every prompt block overused wounds, structures, and resolutions. Secondary risk: sparse input producing generic suggestions. Mitigation: title-awareness prompts treat minimal input as a seed for divergent possibilities. Colour product: primary risk is inconsistent correction across diverse skin tones. Mitigation: synthetic training data across full 2700K–8000K spectrum. Full bias validation deferred to V2 with real diverse footage. Ethical framework: Brian Christian, The Alignment Problem — building AI that serves human creative intent, not just the training objective.
OKRs & Success Metrics
Objective
Build and ship Ojuit — a two-product AI platform for solo indie filmmakers demonstrating full-stack AI product thinking, CIE Lab colour science, XGBoost ML, LUT generation, ethical AI design, and production-grade engineering
Key Results
Story Engine live at chromasync-app.vercel.app with all five stages functional end to end
100% · Target: Live URL
Full state persistence: every story action saved to Supabase and resumable from exact last step
100% · Target: 100% resumable
Three story frameworks supported: Save the Cat, Truby, and Story Circle (Short Story is a format, not a framework)
100% · Target: 3 frameworks
AVOID_LIST negative constraints injected into every AI prompt to prevent output monoculture
100% · Target: All prompts
Full PM artefact suite published: PRD, metrics, competitive analysis, ethics, risk register
100% · Target: 5 artefacts
Colour pipeline: Delta E below 5 on 10 diverse test clips
30% · Target: <5 Delta E units
Success Metrics
| Metric | Target | Achieved | Status |
|---|---|---|---|
| Story Engine live and functional | All 5 stages | Shipped | Achieved |
| State persistence | Every action saved | 100% via Supabase | Achieved |
| PM artefact suite | 5 documents | PRD, Metrics, Competitive, Ethics, Risk Register | Achieved |
| Colour pipeline Delta E | <5 units | In progress | In Progress |
Learnings & Reflections
What Went Well
- The two-product architecture was the right strategic call. Colour and Story serve the same user at different points in the creative workflow. They share a platform, a design system, and a philosophy — but each stands alone as a demonstrable product with distinct PM topics to discuss.
- Progressive disclosure in the Story engine significantly reduced perceived complexity. One question at a time, each stage unlocking only after the previous is committed — this mirrors how experienced story editors work and makes the AI feel like a collaborator rather than a form.
- State persistence designed as a product feature, not a technical afterthought. Designing the Supabase schema to capture every action — interrogation answers, wound, character fields, theme, beats — before building the UI forced better product decisions. Resume-from-exact-last-action is a differentiating feature.
- The AVOID_LIST negative constraint injection is the single most effective prompt engineering decision. Blocking overused defaults forces the model to find something specific in each writer's idea — which is exactly what a good story editor does.
Challenges Faced
- Context propagation across a multi-stage flow was harder than expected. Each stage needs the full accumulated story state to produce grounded suggestions. Getting the frontend to pass the right context to every API endpoint required multiple refactoring passes.
- Cold start latency on Render's free tier created a silent failure mode — the first user action after dormancy would fail because the API wasn't awake. Solved with a silent /health ping on mount, but the root cause is a product constraint needing addressing before real user growth.
- Sparse input is a genuine design problem. A writer who types only a title gets suggestions built from almost nothing. The title-awareness prompt helps, but the interrogation stage itself is the real solution — which is why it exists.
What I'd Do Differently
- Define the full Supabase schema before building any story stage. Adding columns incrementally worked but required multiple migration rounds. A full data model upfront — even with nullable columns — would have been cleaner.
- Build the Story Library earlier. It's the feature that makes the product feel real — a writer sees saved stories, resumes them, and experiences the value of persistence firsthand. It should have been Day 1, not Phase 4.
- Run prompt engineering and UI design in parallel. Several UX decisions were made before AI outputs were stable, which required rework when the suggestions changed shape.
"As Jesse Schell writes in The Art of Game Design, every design decision should be tested against one question: does this serve the experience? For Ojuit Story, every feature was tested against: does this keep the writer in their creative process, or does it pull them out? The progressive disclosure, commit-before-continuing mechanic, and AVOID_LIST all flow from that single design principle. For Ojuit Colour, the equivalent question is: does this give the filmmaker a decision they can actually make on set, or just a number they cannot interpret? CIE Lab Delta E and plain-English verdicts exist because of that question."
PM Artefacts
Written before any code. Every project ships with a full PM artefact set.
Let's Connect
I am actively seeking Junior AI PM, Technical PM, and AI BA roles at companies building AI-powered products — creative tech, media, e-commerce, fintech, or any domain where product thinking and technical depth matter equally. Ojuit demonstrates full-stack AI product development: prompt engineering, CIE Lab colour science, XGBoost ML pipelines, LUT generation, real-time persistence, mobile-first UX, and ethical AI design — built and shipped, not just planned. Let's connect.
Quick Links
© 2025 Ogbebor Osaheni. Built with Next.js, React, and Tailwind CSS.
