Mural is a visual collaboration platform — digital whiteboards, sticky notes, templates, workshops. Millions of people use it to brainstorm, plan, and align. But the CEO had a problem with where the product was headed.
People collaborate in Mural. They workshop. They create outputs. And then the work dies in the canvas. The emails still need to be written manually. The designs still need to be created manually. The action items still need to be assigned manually. The canvas captures thinking but doesn't carry it forward into doing.
"If all we get to are stickies, we've failed."
— Mural CEO

The question: how does a visual collaboration tool evolve when AI agents can participate in the work, not just assist with it? And how do you show that vision — concretely, viscerally — to a global sales team in a way that makes them believe it?
Bill Moore was brought in to work directly with Mural's CEO. The engagement had two deliverables: an interactive prototype demonstrating the future product vision, and a 60-second cinematic vision video for the CEO's keynote at Mural's global sales kickoff.
Bill owned creative direction, technical execution, and production, from the CEO's strategic framing to a finished video and working prototype in eight days.
The core reframe: Mural isn't a canvas. Mural is a visual system of work.
In the current product, humans do the work on the canvas. In the future state, the canvas does work with humans. AI agents observe a brainstorm and structure the outputs. Voice input becomes visual structure in real-time. A workshop session auto-generates action plans with owners and deadlines. A sales meeting in Mural triggers a follow-up email draft, a Figma mockup, a competitive analysis — not because someone asked, but because the system understands the intent.
Before: Users collaborate on a canvas. Outputs stay in the canvas. Everything downstream is manual.
After: The canvas understands intent. AI agents carry work forward — from sticky notes to structured outputs, from conversation to action, from collaboration to execution.
The vision was not about adding an AI chatbot to the sidebar. It was about making the entire collaboration surface intelligent — voice-to-visual, text-to-structure, system-to-system orchestration. The canvas stops being a whiteboard and starts being an operating surface for work.
A multi-layered HTML prototype showing the future product in action. Built with vanilla JavaScript, GSAP for animation sequencing, and D3.js for knowledge graph visualization. The prototype demonstrates real-time voice-to-visual transformation — participants speak in a meeting, and structured visual elements auto-populate on the canvas.
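A minimal sketch of what that sequencing could look like with GSAP; the selectors, class names, and transcript payload here are illustrative, not Mural's actual markup:

```js
// Sketch: animate structured cards onto the canvas as speech segments arrive.
// Assumes gsap is loaded globally, as in a vanilla-JS prototype.
function playVoiceToVisual(segments) {
  const tl = gsap.timeline({ defaults: { duration: 0.45, ease: "power2.out" } });
  segments.forEach((segment, i) => {
    const card = document.createElement("div");
    card.className = "canvas-card";
    card.textContent = segment.text;
    document.querySelector("#canvas").appendChild(card);
    // Stagger each card in after the previous one, matching the talk track.
    tl.fromTo(card, { opacity: 0, y: 24, scale: 0.95 },
                    { opacity: 1, y: 0, scale: 1 }, i * 0.3);
  });
  return tl;
}

playVoiceToVisual([
  { text: "Launch blockers" },
  { text: "Owner: design" },
  { text: "Deadline: Friday" },
]);
```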
18 separate PNG layers at precise pixel dimensions, composited into an animated sequence. Participant avatars, card-based information architecture, a knowledge graph radiating from a central hub, and orchestration connections showing how work flows from canvas to external systems. Every element positioned and timed to match the CEO's presentation talk track.
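One plausible way to drive a composite like that is a declarative layer manifest consumed at load time; the file names, coordinates, and cue times below are invented for illustration:

```js
// Hypothetical layer manifest: each PNG is absolutely positioned at exact pixel
// coordinates and revealed on a timed cue keyed to the talk track.
const layers = [
  { src: "layers/01-avatars.png",   x: 112, y: 64,  z: 3, cue: 0.0 },
  { src: "layers/02-cards.png",     x: 480, y: 210, z: 4, cue: 2.5 },
  { src: "layers/03-graph-hub.png", x: 820, y: 340, z: 5, cue: 5.0 },
  // ...and so on for the remaining layers
];

for (const layer of layers) {
  const img = document.createElement("img");
  img.src = layer.src;
  Object.assign(img.style, {
    position: "absolute",
    left: `${layer.x}px`,
    top: `${layer.y}px`,
    zIndex: layer.z,
    opacity: 0,
  });
  document.querySelector("#stage").appendChild(img);
  gsap.to(img, { opacity: 1, duration: 0.6, delay: layer.cue });
}
```

Keeping position and timing in data rather than hand-placed CSS is also what makes the layout programmatically verifiable later.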
A 60-second cinematic piece structured in three acts:
Act one: real Mural interface. Sticky notes, templates, workshops. Familiar. Functional. The user doing the work, not the tool helping.
Act two: the canvas shatters. Dissolves. Silence. A breath before the transformation.
Act three: what emerges. Voice-to-visual, text-to-structure, system orchestration, action beyond the canvas. Not a whiteboard anymore. An intelligent operating surface for work.
The video was assembled entirely from AI-generated footage and composited prototype layers. Multiple AI video generation models were tested and selected by shot type — stop-motion timelapse for the canvas work, photorealistic footage for human moments, abstract generation for the transformation sequence. No stock footage, no film crew.
Built a custom Playwright-based tool that renders the prototype headlessly, extracts bounding boxes for every element, detects overlaps, and saves annotated screenshots. When you're positioning 18 composited layers at exact pixel coordinates for a CEO keynote, you don't eyeball it — you measure it.
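A stripped-down sketch of that kind of verification script, assuming Playwright's Node.js API; the URL, selector, and viewport values are placeholders:

```js
// Sketch: render the prototype headlessly, pull bounding boxes, flag overlaps.
const { chromium } = require("playwright");

const intersects = (a, b) =>
  a.x < b.x + b.width && b.x < a.x + a.width &&
  a.y < b.y + b.height && b.y < a.y + a.height;

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage({ viewport: { width: 1920, height: 1080 } });
  await page.goto("http://localhost:8080/prototype.html"); // placeholder URL

  // Collect a bounding box for every composited layer.
  const boxes = [];
  for (const el of await page.locator(".layer").all()) {
    const box = await el.boundingBox();
    if (box) boxes.push(box);
  }

  // Report any pair of layers that collide.
  for (let i = 0; i < boxes.length; i++)
    for (let j = i + 1; j < boxes.length; j++)
      if (intersects(boxes[i], boxes[j]))
        console.warn(`Overlap: layer ${i} and layer ${j}`);

  // Save a full-page capture for side-by-side comparison between runs.
  await page.screenshot({ path: "layout-check.png", fullPage: true });
  await browser.close();
})();
```

Run against each layout candidate, a script like this turns "does it fit?" into a pass/fail check instead of a judgment call.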
The CEO didn't need a strategy deck about AI transformation. The sales team didn't need bullet points about agentic systems. They needed to see it — to watch a canvas come alive, to feel the moment the tool starts working with you instead of for you. The video and prototype are arguments made through experience, not explanation.
No single AI video model could do everything. Bill tested multiple models and selected the best one for each shot type: one for stop-motion timelapse, another for photorealistic human footage, another for the abstract transformation sequence. The pipeline optimizes for output quality, not tool simplicity.
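In code, that selection step can be as simple as a routing table; the shot types echo the ones above, while the model identifiers are deliberate placeholders, not the actual models used on the project:

```js
// Illustrative shot-type routing only; model names are placeholders.
const shotModels = {
  "stop-motion-timelapse": "model-a", // canvas work sequences
  "photorealistic-human":  "model-b", // human moments
  "abstract-transform":    "model-c", // the shatter and transformation act
};

function modelForShot(shotType) {
  const model = shotModels[shotType];
  if (!model) throw new Error(`No model selected for shot type: ${shotType}`);
  return model;
}
```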
The prototype renders inside a specific content zone in a PowerPoint deck projected at a sales kickoff. Every pixel matters. Off by 40 pixels and the CEO's presentation breaks. Bill built layout verification tooling rather than relying on visual inspection — automated bounding box extraction, overlap detection, screenshot comparison.
The three-act structure — familiar, break, transformation — is a narrative device, not a product walkthrough. The video earns the future vision by first grounding the audience in what they already know, then creating space for what's possible. This is storytelling, not a feature tour.
The vision video played as the centerpiece of the CEO's keynote at Mural's global sales kickoff. The interactive prototype was deployed for live demonstration during the presentation. The CEO's feedback on the deliverables: "Vision translated."
Eight days from signed contract to shipped deliverables. One creative technologist. A 60-second video and a working prototype that turned an abstract product strategy — "AI-powered visual system of work" — into something a global sales team could see, feel, and sell.
Vision work is translation work. A CEO knows what the future should feel like. Translating that intuition into something concrete — a video, a prototype, a sequence that makes people believe — is the actual job. The hardest part is not the production. It's understanding what the executive sees in their head and building the bridge to everyone else.
Compute pixels before committing. When multiple layouts are valid and the output is a keynote on a massive screen, don't nudge CSS values and hope. Calculate. Build measurement tools. Verify programmatically. The audience for this work is a CEO standing on stage — there is no "close enough."
Mock before you code. When multiple visual directions are possible, screenshot the options and present visuals to the decision-maker. Not descriptions. Not wireframes. Actual rendered frames. The decision quality goes up when the decision-maker can point at what they want.
Speed is the product. Eight days. The timeline was the constraint that made everything else sharper — no time for exploration that doesn't ship, no time for tools that don't produce output, no time for perfection that doesn't serve the keynote. The speed forced clarity.