
productivity

3 posts with the tag “productivity”

High-Speed AI-Driven Product Development with Gasoline MCP

What if your AI could see the UI, react to issues in real time, and tighten the feedback loop from “something looks wrong” to “it’s fixed” to under a minute? That’s the development speed Gasoline enables.

The speed of software development is the speed of feedback loops. How fast can you go from:

  • Idea to prototype?
  • Code change to visual result?
  • Bug report to fix deployed?
  • “This looks wrong” to “now it looks right”?

Every great product development methodology — agile, lean, continuous deployment — is fundamentally about making these loops shorter. The faster you iterate, the better the product gets.

AI coding agents just compressed the “code change” part of the loop from hours to seconds. But there’s a gap: the AI can write code fast, but it can’t see the result. It pushes a fix and then asks you, “Did that work?” You check the browser, describe what you see, and the loop slows back down to the speed of human communication.

Gasoline closes that gap. The AI sees the browser. The loop runs at machine speed.

Real-Time React: See It, Fix It, Verify It


Here’s what a high-speed development session looks like with Gasoline:

You: “The signup form looks broken on mobile. Fix it.”

AI observes the page:

observe({what: "screenshot"})
observe({what: "errors"})
observe({what: "vitals"})

The AI sees the screenshot — the form is overflowing its container, the submit button is off-screen. It sees a CSS warning in the console. It notes the LCP is 4.2s (should be under 2.5s).

AI diagnoses: “The form container has max-width: 600px but no horizontal padding. On screens narrower than 600px, the content bleeds. The submit button has position: absolute with a fixed top that pushes it below the viewport on mobile.”

AI writes the fix. Updates the CSS — adds responsive padding, changes the button positioning to relative.

AI verifies the fix:

observe({what: "screenshot"})
observe({what: "errors"})
observe({what: "vitals"})

The AI sees the updated screenshot — form fits the viewport, button is visible. No new errors. LCP dropped to 1.8s.

Total time: under 60 seconds. The AI saw the problem, understood the visual context, wrote the fix, and verified it — all without you describing anything beyond “looks broken on mobile.”
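The observe → diagnose → fix → verify loop above can be sketched in code. This is purely illustrative: `makeObserver` is a stand-in for Gasoline's real observe() tool, returning canned snapshots so the control flow is visible end to end; the snapshot fields (`errors`, `vitals.lcpMs`) are assumptions, not Gasoline's actual schema.

```javascript
// Illustrative sketch of the fix-and-verify loop. `makeObserver` stands in
// for Gasoline's observe() MCP tool; it returns canned snapshots in order.
function makeObserver(snapshots) {
  let call = 0;
  return () => snapshots[Math.min(call++, snapshots.length - 1)];
}

// Observe, apply a fix, re-observe, and report whether the fix verified.
function fixAndVerify(observer, applyFix, { lcpBudgetMs = 2500 } = {}) {
  const before = observer();        // screenshot + errors + vitals
  applyFix(before);                 // the AI writes the CSS fix here
  const after = observer();         // re-observe to verify
  return {
    errorsCleared: after.errors.length === 0,
    lcpWithinBudget: after.vitals.lcpMs <= lcpBudgetMs,
    fixed: after.errors.length === 0 && after.vitals.lcpMs <= lcpBudgetMs,
  };
}

// Example run mirroring the signup-form session above.
const observer = makeObserver([
  { errors: ["CSS overflow warning"], vitals: { lcpMs: 4200 } },
  { errors: [], vitals: { lcpMs: 1800 } },
]);
const report = fixAndVerify(observer, () => { /* apply responsive padding */ });
```

The key design point is that verification is part of the same function as the fix: the loop does not return until the after-snapshot has been checked.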

Traditional AI coding assistants are blind to the visual result of their work. They can reason about code, but they can’t reason about what the code looks like when rendered.

With Gasoline, the AI becomes design-aware:

observe({what: "screenshot"})

The AI takes a screenshot after every significant change. It can compare before and after, catch layout regressions, verify that a modal actually appeared, confirm that an error banner is gone.

observe({what: "vitals"})

Every navigation and interaction includes Web Vitals. The AI knows if a change improved or degraded LCP, CLS, or INP. No separate performance testing step — it’s built into the development loop.
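The "improved or degraded" check can be made concrete. The thresholds below are the published "good" budgets from web.dev's Core Web Vitals guidance (LCP ≤ 2.5 s, CLS ≤ 0.1, INP ≤ 200 ms); the snapshot field names are illustrative, not Gasoline's schema.

```javascript
// "Good" thresholds per web.dev's Core Web Vitals guidance:
// LCP <= 2500 ms, CLS <= 0.1, INP <= 200 ms.
const GOOD = { lcpMs: 2500, cls: 0.1, inpMs: 200 };

// Report which vitals a change pushed past their budget AND made worse
// than the previous snapshot. Field names are assumptions for illustration.
function regressions(before, after) {
  return Object.keys(GOOD).filter(
    (metric) => after[metric] > GOOD[metric] && after[metric] > before[metric]
  );
}

// A change that degraded LCP from 1.8s to 2.9s while leaving CLS/INP alone:
const degraded = regressions(
  { lcpMs: 1800, cls: 0.05, inpMs: 120 },
  { lcpMs: 2900, cls: 0.05, inpMs: 120 }
);
```

Requiring both conditions (over budget *and* worse than before) keeps the AI from flagging pages that were already over budget before the change.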

observe({what: "errors"})

After every change, the AI checks for console errors. A CSS change that accidentally breaks a JavaScript selector? Caught immediately. A component that throws on re-render? Caught before you even look at the page.

interact({action: "list_interactive"})

The AI can verify that all expected interactive elements are present, visible, and accessible after a change. Did the redesign accidentally hide a button? The AI knows.
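That "did the redesign hide a button?" check is essentially a diff between what should be on the page and what list_interactive reports. A minimal sketch, assuming a hypothetical result shape of `{label, visible}` objects:

```javascript
// Sketch: verify expected interactive elements survived a change.
// `found` stands in for the result of interact({action: "list_interactive"});
// the {label, visible} shape is an assumption for illustration.
function missingOrHidden(expected, found) {
  const byLabel = new Map(found.map((el) => [el.label, el]));
  return expected.filter((label) => {
    const el = byLabel.get(label);
    return !el || !el.visible; // absent or present-but-hidden both count
  });
}

// The redesign dropped the "Sign Up" button:
const problems = missingOrHidden(
  ["Sign Up", "Log In"],
  [{ label: "Log In", visible: true }]
);
```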

Here’s where it gets powerful. You’re not just fixing bugs — you’re refining the product at high speed.

You: “The dashboard feels cluttered. Make it cleaner.”

The AI screenshots the page, identifies the visual elements, and starts making targeted changes:

  1. Increases whitespace between sections
  2. Reduces the number of visible metrics (hides secondary ones behind a toggle)
  3. Simplifies the header
  4. Screenshots after each change to compare

You: “Better, but the chart is too small now.”

The AI adjusts, screenshots, verifies. Three iterations in the time it would have taken to write one Jira ticket describing the problem.

This is the Lovable model of development — rapid visual iteration where the AI handles implementation and you guide the direction. Every critique becomes a fix becomes a verification in under a minute.

The AI doesn’t just respond to your feedback — it proactively catches issues through Gasoline’s continuous capture:

The AI monitors observe({what: "errors"}) and observe({what: "vitals"}) as you browse. It can interrupt with: “I noticed a new TypeError appearing on the settings page — it started after the last commit. Want me to investigate?”
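The proactive interruption pattern boils down to diffing successive error snapshots and surfacing only what is new. A sketch, with the snapshot shape simplified to an array of error strings (the real observe() payload carries stack traces and context):

```javascript
// Sketch of proactive monitoring: diff successive error snapshots and
// surface only errors new since the last poll. Snapshot shape is
// illustrative; the real observe({what: "errors"}) payload differs.
function makeErrorWatcher() {
  let seen = new Set();
  return (errors) => {
    const fresh = errors.filter((e) => !seen.has(e));
    seen = new Set([...seen, ...fresh]);
    return fresh; // e.g. the new TypeError on the settings page
  };
}

const watch = makeErrorWatcher();
const firstPoll = watch(["TypeError: cannot read settings"]);
const secondPoll = watch([
  "TypeError: cannot read settings",   // already known, suppressed
  "RangeError: invalid chart bounds",  // new, surfaced
]);
```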

Run your natural language test scripts against production:

1. Navigate to the homepage
2. Verify no console errors
3. Verify LCP is under 2.5 seconds
4. Click "Sign Up"
5. Verify the form loads without errors
6. Navigate to /dashboard
7. Verify the WebSocket connects successfully

If anything regresses, the AI has the full context: the error, the network state, the visual state, the performance metrics. It can start debugging before you even know there’s a problem.
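The numbered script above could be checked mechanically once the AI has observed the page. A sketch over a stubbed snapshot — every field name here (`errors`, `vitals.lcpMs`, `forms`, `websocket`) is an assumption for illustration:

```javascript
// Sketch: the regression script above as checks over a stubbed snapshot.
const checks = [
  ["no console errors", (s) => s.errors.length === 0],
  ["LCP under 2.5s", (s) => s.vitals.lcpMs < 2500],
  ["signup form loads", (s) => s.forms.includes("signup")],
  ["WebSocket connected", (s) => s.websocket === "open"],
];

// Return the names of checks that failed for this snapshot.
function runChecks(snapshot) {
  return checks.filter(([, ok]) => !ok(snapshot)).map(([name]) => name);
}

// A snapshot where everything passes except the WebSocket:
const failures = runChecks({
  errors: [],
  vitals: { lcpMs: 1900 },
  forms: ["signup"],
  websocket: "closed",
});
```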

Take a screenshot on desktop, then tell the AI to check the responsive viewport:

interact({action: "execute_js",
script: "window.innerWidth + 'x' + window.innerHeight"})

The AI can systematically check different viewport sizes and report visual issues at each breakpoint.
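A breakpoint sweep is just a loop over common viewport widths, flagging any width where the rendered content overflows. In the sketch below, `contentWidthAt` is a pure stand-in for actually driving the browser (which Gasoline would do via execute_js):

```javascript
// Sketch: sweep common breakpoints and collect widths where content
// overflows the viewport. `contentWidthAt` stands in for measuring the
// real page; here it is a pure function so the sweep logic is visible.
const BREAKPOINTS = [320, 375, 768, 1024, 1440]; // common device widths

function overflowingBreakpoints(contentWidthAt, breakpoints = BREAKPOINTS) {
  return breakpoints.filter((w) => contentWidthAt(w) > w);
}

// A fixed 600px container (like the signup form earlier) overflows
// every viewport narrower than 600px:
const broken = overflowingBreakpoints(() => 600);
```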

Each individual capability — screenshots, error checking, Web Vitals, interactive element discovery — is useful on its own. But the compound effect is what transforms development speed:

Traditional Loop                         | Gasoline Loop
Write code                               | Write code
Switch to browser                        | AI checks browser automatically
Visually inspect                         | AI analyzes screenshot
Open DevTools if something looks wrong   | AI already checked errors
Check Network tab                        | AI already checked network
Describe problem to AI                   | AI already knows the problem
Wait for AI suggestion                   | AI already wrote the fix
Apply fix, repeat                        | Fix is applied, verified, and committed

The traditional loop has eight steps with a human bottleneck at each one. In the Gasoline loop, everything after “write code” happens automatically, at machine speed.

Designers and PMs become directly effective. They describe what they want in natural language. The AI implements and verifies it in real time. The feedback loop between “I want this to look different” and “it looks different” drops from days (designer → Jira ticket → engineer → PR → deploy → review) to minutes.

Engineers focus on architecture, not pixel-pushing. The AI handles the visual iteration while engineers work on the hard problems — data models, system design, performance optimization, security.

QA shifts from catching bugs to preventing them. When the AI verifies every change visually and functionally in real time, bugs get caught at the moment they’re introduced — not three sprints later when QA runs the regression suite.

Product velocity compounds. Faster feedback loops mean more iterations per day. More iterations mean better product quality. Better quality means less time spent on bugs and more time on features. The cycle accelerates.

The gap between “AI can write code” and “AI can build products” is context. An AI that can see the browser, check the errors, verify the visuals, and confirm the performance isn’t just a coding assistant — it’s a development partner that operates at the speed you think.

Gasoline provides that context. Four tools, zero setup, everything the AI needs to see your product the way your users see it.

The fastest development teams in the world will be the ones where the feedback loop runs in seconds, not days. That future starts with giving the AI eyes.

One Tool Replaces Four: How Gasoline MCP Eliminates Loom, DevTools, Selenium, and Playwright

Most development teams juggle at least four tools to ship a feature: Loom for demos and bug reports, Chrome DevTools for debugging, Selenium or Playwright for automated testing, and some combination of all three for QA. Each tool has its own setup, its own learning curve, and its own context switch.

Gasoline MCP replaces all four with a single Chrome extension and one MCP server. And the result isn’t just fewer tools — it’s dramatically faster cycle times.

Loom — “Let Me Show You What’s Happening”


Product managers record Loom videos to demo features. Developers record Loom videos to show bugs. QA records Loom videos to document test failures. Everyone records Loom videos because the alternative — writing a detailed description with screenshots — takes even longer.

The problem: Loom videos are static. They can’t be replayed against a new build. They can’t be edited when the flow changes. They can’t be version-controlled. And they require $12.50/user/month.

Chrome DevTools — “Let Me Check the Console”


Every debugging session starts with opening DevTools, switching between Console, Network, and Elements tabs, copying error messages, and pasting them somewhere the AI or another developer can see them.

The problem: DevTools is manual and disconnected. The AI can’t see what’s in DevTools. You’re the human bridge between the browser and your tools.

Selenium / WebDriver — “Let Me Automate This”


Automated browser testing requires WebDriver binaries, a programming language (Java, Python, JavaScript), and coded selectors that break whenever the UI changes.

The problem: High setup cost, high maintenance cost, requires developer skills. Product managers and QA without coding experience can’t use it.

Playwright — “Let Me Write a Proper Test”


Playwright is modern browser automation, better than Selenium, but it still requires JavaScript/TypeScript, an npm project, and coded selectors.

The problem: Same fundamental issue — you need code to create tests. And when tests break (they always break), you need code to fix them.

Instead of recording a video:

"Navigate to the dashboard. Add a subtitle: 'Welcome to the Q1 report.'
Click the revenue tab. Subtitle: 'Revenue is up 23% quarter over quarter.'
Click the export button. Subtitle: 'One click to export to PDF.'"

The AI navigates the application while displaying narration text at the bottom of the viewport — like closed captions. Action toasts show what’s happening (“Click: Revenue Tab”). The audience watches a live, narrated walkthrough.
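The narration in a script like the one above is machine-readable. As a toy sketch, here is how the subtitle lines could be pulled out of the script text — the `Subtitle: '…'` grammar is an assumption for illustration, not Gasoline's actual script format:

```javascript
// Toy sketch: extract narration lines from a demo script. The
// "Subtitle: '...'" grammar is assumed for illustration only.
function subtitles(script) {
  return [...script.matchAll(/[Ss]ubtitle:\s*'([^']*)'/g)].map((m) => m[1]);
}

const script = `Navigate to the dashboard. Add a subtitle: 'Welcome to the Q1 report.'
Click the revenue tab. Subtitle: 'Revenue is up 23% quarter over quarter.'
Click the export button. Subtitle: 'One click to export to PDF.'`;

const captions = subtitles(script);
```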

Why it’s better than Loom:

  • Replayable — run the same script against tomorrow’s build
  • Editable — change one line of text, not re-record a whole video
  • Adaptive — semantic selectors survive UI redesigns
  • Versionable — store scripts in your repo, diff them in PRs
  • Free — no per-seat subscription

Instead of opening DevTools and copy-pasting:

"What browser errors do you see?"

The AI calls observe({what: "errors"}) and sees every console error with full stack traces. Then observe({what: "network_bodies", url: "/api"}) for the API response body. Then observe({what: "websocket_status"}) for WebSocket connection state. Then observe({what: "vitals"}) for performance metrics.

Why it’s better than DevTools:

  • The AI sees it directly — no human copy-paste bridge
  • Everything in one place — errors, network, WebSocket, performance, accessibility, security
  • Correlated — error_bundles returns the error with its network context and user actions
  • Persistent — data doesn’t vanish on page refresh
  • Actionable — the AI diagnoses and fixes, not just observes

Selenium → interact() + Natural Language


Instead of writing Java with WebDriver:

"Go to the registration page. Fill in 'Jane Doe' as the name,
'jane@example.com' as the email, and 'secure123' as the password.
Click Register. Verify you see the welcome message."

The AI navigates, types, clicks, and verifies — using semantic selectors (label=Name, text=Register) that survive UI changes.
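Why do semantic selectors survive UI changes? Because they match on what the user sees, not where it sits in the DOM. A toy resolver — Gasoline's real selector semantics may differ; this only illustrates the principle:

```javascript
// Toy resolver for semantic selectors like "label=Name" or "text=Register".
// Matching on labels/text survives layout changes that break positional
// selectors (nth-child, XPath paths, generated class names).
function resolve(selector, elements) {
  const [kind, value] = selector.split("=");
  return elements.find((el) => el[kind] === value) ?? null;
}

// A simplified page model: order and nesting can change freely,
// and the selectors below still resolve.
const page = [
  { tag: "input", label: "Name" },
  { tag: "input", label: "Email" },
  { tag: "button", text: "Register" },
];

const registerButton = resolve("text=Register", page);
```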

Why it’s better than Selenium:

  • No code — describe the test in English
  • No setup — no WebDriver, no JDK, no project scaffolding
  • Resilient — semantic selectors adapt to redesigns
  • Anyone can use it — PMs, QA, designers, not just developers

Playwright → generate(format: “test”)


After running a natural language test, lock it in for CI:

generate({format: "test", test_name: "registration-flow",
assert_network: true, assert_no_errors: true})

Gasoline generates a complete Playwright test from the session — real selectors, network assertions, error checking. The AI explored in English; Gasoline exports for CI/CD.
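The idea of generating a test from a recorded session can be sketched as a small emitter: recorded actions in, Playwright-style source out. Both the recorded-action shape and the emitted code below are illustrative, not Gasoline's actual output format:

```javascript
// Sketch: turn recorded browser actions into Playwright-style test source.
// The action shape and emitted code are illustrative assumptions.
function emitTest(name, actions) {
  const body = actions
    .map((a) => {
      if (a.type === "goto") return `  await page.goto(${JSON.stringify(a.url)});`;
      if (a.type === "fill")
        return `  await page.getByLabel(${JSON.stringify(a.label)}).fill(${JSON.stringify(a.value)});`;
      if (a.type === "click")
        return `  await page.getByText(${JSON.stringify(a.text)}).click();`;
      throw new Error(`unknown action: ${a.type}`);
    })
    .join("\n");
  return `test(${JSON.stringify(name)}, async ({ page }) => {\n${body}\n});`;
}

// The registration flow from earlier, as recorded actions:
const src = emitTest("registration-flow", [
  { type: "goto", url: "/register" },
  { type: "fill", label: "Name", value: "Jane Doe" },
  { type: "click", text: "Register" },
]);
```

Because the source is generated from actions that actually happened in the browser, the selectors are ones that actually resolved, which is the "accurate" claim below.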

Why it’s better than writing Playwright by hand:

  • Faster — describe the flow, don’t code it
  • Accurate — generated from real browser behavior, not guessed
  • Maintainable — when the test breaks, re-run in English and regenerate

The Compound Effect: Radical Cycle Time Reduction


Replacing four tools isn’t just about having fewer subscriptions. It’s about what happens when demo, debug, test, and automate are the same workflow. First, the traditional cycle:

  1. PM records a Loom showing the desired feature (10 minutes)
  2. Developer watches the Loom, opens DevTools, starts building (context switch)
  3. Developer debugs in DevTools, copies errors, pastes to AI, gets suggestions (context switch)
  4. Developer writes Playwright tests for the feature (30-60 minutes)
  5. QA records a Loom of a bug they found (10 minutes)
  6. Developer watches the Loom, reproduces, opens DevTools again (context switch)
  7. Developer fixes and re-runs tests (context switch)
  8. PM records another Loom for the stakeholder demo (10 minutes)

Four tools. Six context switches. Half the time spent on ceremony instead of building. Now the same cycle with Gasoline:

  1. PM describes the feature to the AI: “The user should be able to export the report as PDF”
  2. AI builds the feature, debugging in real time — it sees errors as they happen, fixes them, verifies with observe({what: "errors"}), checks performance with observe({what: "vitals"})
  3. AI generates a test: generate({format: "test", test_name: "pdf-export"})
  4. AI runs the demo with subtitles for the stakeholder
  5. If QA finds a bug, the AI already has the error context — observe({what: "error_bundles"}) — and fixes it in the same session
  6. AI regenerates the test if the fix changed the flow

One tool. Zero context switches. The cycle from “PM describes feature” to “tested, demo-ready feature” happens in a single conversation.

Activity              | 4-Tool Cycle                   | Gasoline Cycle
Feature demo (PM)     | 10 min Loom recording          | 0 — AI demos with subtitles
Debugging             | 20 min (DevTools + copy-paste) | 2 min (AI observes directly)
Test creation         | 30-60 min (Playwright)         | 2 min (generate from session)
Bug report            | 10 min Loom + reproduce        | 1 min (AI already has context)
Bug fix verification  | 5 min (re-run tests)           | 30 sec (refresh + observe)
Stakeholder demo      | 10 min (new Loom)              | 1 min (replay demo script)
Total                 | 85-115 min                     | ~7 min

That’s not an incremental improvement. It’s an order of magnitude.

Product velocity isn’t about how fast you type. It’s about how fast you can go from “idea” to “shipped and verified.” Every context switch adds latency. Every tool boundary adds friction. Every manual step adds error.

When demo, debug, test, and automate collapse into a single AI conversation:

  • Feedback loops tighten — the AI sees the result of every change in real time
  • Iteration cost drops — trying a different approach is a sentence, not a sprint
  • Quality increases — tests are generated from real behavior, not written from memory
  • Everyone participates — PMs can demo, test, and file bugs without developer involvement

This is what AI-native development looks like. Not “AI helps you write code faster” — but “AI collapses the entire build-debug-test-demo cycle into minutes.”

The one remaining advantage Loom has over Gasoline is shareability — you can send a Loom link to anyone with a browser. Gasoline’s demo scripts require the AI to replay them.

The fix: tab recording. Chrome’s tabCapture API can record the active tab as video while the AI runs a demo script. Subtitles and action toasts are already rendered in the page, so they’d be captured automatically. The output: a narrated demo video, generated from a replayable script, with burned-in captions. No Loom subscription. No manual recording. No re-takes.

That feature is on the roadmap. When it ships, the Loom replacement is complete.

You don’t need four tools. You need one browser extension, one MCP server, and an AI that can see your browser.

  • Loom → Gasoline subtitles + demo scripts (+ tab recording, coming soon)
  • Chrome DevTools → Gasoline observe()
  • Selenium → Gasoline interact() + natural language
  • Playwright → Gasoline generate(format: “test”)

One install. Zero subscriptions. Faster than all four combined.

Get started →

Why Gasoline Saves Hours on Product Demos

Product demos eat time — scripting, rehearsing, recovering from mistakes. Gasoline turns demos into repeatable, AI-driven presentations you write once and run forever.

A product demo looks like 15 minutes of clicking. Behind it is hours of preparation:

  • Scripting: Deciding the flow, what to show, what to skip, what order
  • Data setup: Creating demo accounts, populating sample data, resetting state
  • Rehearsal: Practicing the flow so you hit every screen without fumbling
  • Slide sync: Bouncing between slides and the live app, losing flow each time
  • Recovery planning: What to do when the API times out mid-demo, when the spinner won’t stop, when you click the wrong thing

And then you do it again next week for a different audience, with slightly different emphasis, and the whole prep cycle repeats.

The demo becomes a text file. You write the flow in natural language — what to click, what to type, what narration to show. The AI drives the browser. You talk to the audience.

Preparation drops from hours to minutes. Write the script once. Adjust a few lines for different audiences. Run it.

Rehearsal is instant. Run the script, watch it execute, tweak a line, run it again. No manual clicking through 30 screens to test one change.

Recovery is automatic. If the AI clicks something and a spinner appears, it waits. If an error pops up, it can observe the page and adapt. It’s not a rigid recording — it’s an intelligent agent.

Here’s a conservative estimate for a recurring weekly demo:

Activity              | Manual (per week) | Gasoline (first time) | Gasoline (repeat)
Script/plan the flow  | 30 min            | 15 min                | 0 min
Set up demo data      | 20 min            | 20 min                | 2 min (load state)
Rehearse              | 45 min            | 10 min (run + tweak)  | 0 min
Deliver the demo      | 15 min            | 15 min                | 15 min
Recover from mistakes | 10 min avg        | 0 min                 | 0 min
Total                 | 2 hours           | 1 hour                | 17 minutes

After the first run, each repeat costs you 17 minutes — the demo itself. Everything else is automated.

Over a quarter of weekly demos, that’s roughly 22 hours saved. Over a year, close to 90 hours.

Subtitles appear directly on the page — like closed captions for your demo. The audience watches one screen. You don’t bounce between slides and the app. You don’t lose them at the transition.

The AI types perfectly every time. No mistyped email addresses, no “let me just clear that and try again” moments. Every form fill is precise.

The AI uses semantic selectors — it clicks text=Create Project, not “the blue button that I think is third from the left.” It clicks the right thing every time, even if the layout shifted since your last rehearsal.

Save a checkpoint before the demo. Reset to it between runs. No manually deleting test data, no “let me just refresh and log in again.”

Save state as “demo-ready” → Run demo → Load state “demo-ready” → Run again
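The checkpoint pattern behind that flow is simple to sketch. In reality the saved state would cover cookies, storage, and backend fixtures; here it is a plain object, purely for illustration:

```javascript
// Sketch of the save-state / load-state checkpoint pattern. Real state
// would include cookies, storage, and backend fixtures; a plain object
// stands in here. Cloning prevents the demo from mutating the checkpoint.
const checkpoints = new Map();

const saveState = (name, state) =>
  checkpoints.set(name, structuredClone(state));
const loadState = (name) => structuredClone(checkpoints.get(name));

// Demo run: save a clean checkpoint, dirty the state, then reset.
let app = { projects: ["Q1 Report"], user: "demo" };
saveState("demo-ready", app);
app.projects.push("scratch project created during the demo");
app = loadState("demo-ready"); // back to the clean demo state
```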

Different audience? Edit the script:

  • For executives: Skip the technical details, emphasize the business metrics
  • For engineers: Show the API calls, the WebSocket traffic, the performance data
  • For prospects: Focus on the happy path, add more narration

Same product. Same demo infrastructure. Different scripts. Swap a few lines instead of re-planning from scratch.

What a Gasoline Demo Looks Like in Practice


You sit in the meeting. You share your screen. Your AI tool is open. You say:

“I’m going to show you Acme’s project management platform. My AI assistant will drive the demo while I walk you through what you’re seeing.”

You hit enter on the script. The browser navigates, fills forms, clicks buttons. Subtitles appear at the bottom explaining each step. You narrate over the top, adding context the subtitles don’t cover.

If someone asks “Can you go back to the timeline view?” — you tell the AI, it navigates back. No fumbling.

If someone asks “What happens if you enter an invalid date?” — you tell the AI, it tries it, and everyone sees what happens. Live, unscripted, confident.

The demo feels polished because it is polished. The mechanical parts are automated. The human parts — your storytelling, your answers, your energy — are where you focus.

Demos are high-leverage moments — a 15-minute demo can close a deal, align a team, or greenlight a project. But the prep cost means most teams under-invest in demo quality.

Gasoline makes demo quality cheap. Write it once, run it perfectly every time, adapt it in minutes. Spend your time on the story, not the choreography.