v0.7.0 is a ground-up rewrite delivering a complete browser observability platform. This is the first stable release — all prior versions are deprecated.
Gasoline v6.0.0 introduces the Link Health Analyzer, plus browser automation, recording, and performance analysis for AI agents. Check all links on your page, record full sessions with video, capture performance metrics, and let AI agents test, debug, and fix your app automatically. Complete visibility. You stay in control.
Link Health Analyzer — Automatically check all links on your page for issues (broken, redirects, auth-required). 20 concurrent checks, categorized results, and async tracking with correlation IDs.
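The core of such a checker is a small concurrency-limited worker pool. A minimal sketch of the pattern — the 20-concurrent figure comes from the notes above, but the function and field names here are illustrative, not Gasoline's actual API:

```javascript
// Check a list of URLs with at most `limit` requests in flight, categorizing
// each result the way the analyzer does: ok / redirect / auth-required / broken.
// `fetchFn(url)` is a stand-in that resolves to an HTTP status code.
async function checkLinks(urls, fetchFn, limit = 20) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < urls.length) {
      const url = urls[next++]; // claim the next URL before awaiting
      try {
        const status = await fetchFn(url);
        const category =
          status === 401 || status === 403 ? "auth-required" :
          status >= 400 ? "broken" :
          status >= 300 ? "redirect" : "ok";
        results.push({ url, status, category });
      } catch {
        results.push({ url, status: null, category: "broken" }); // network failure
      }
    }
  }
  // Spawn up to `limit` workers that drain the shared queue.
  await Promise.all(Array.from({ length: Math.min(limit, urls.length) }, worker));
  return results;
}
```

Because JavaScript is single-threaded, the shared `next` counter needs no locking: each worker claims a URL synchronously before its `await`.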
Full Recording System — Record browser tabs with video and audio. Videos stream to local disk. No cloud, no transcoding—raw WebM format.
Permission Prompts — When recording starts, you get a clear prompt to approve it. No silent recordings. You’re always in control.
CWE-942 Fixed — Replaced wildcard postMessage origins with window.location.origin across content scripts, test helpers, and background workers. Prevents message hijacking on cross-origin pages.
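The fix follows the standard two-sided pattern: pin the send target to a single origin instead of the `"*"` wildcard, and verify `event.origin` on receipt. A hypothetical sketch of that pattern (not Gasoline's actual content-script code):

```javascript
// Receiving side: wrap a handler so messages from any other origin are dropped.
function makeMessageHandler(expectedOrigin, onMessage) {
  return (event) => {
    // A wildcard listener would accept this; pinning prevents cross-origin hijacking.
    if (event.origin !== expectedOrigin) return;
    onMessage(event.data);
  };
}

// Sending side: target the page's own origin, never "*".
function sendToPage(targetWindow, data) {
  targetWindow.postMessage(data, window.location.origin);
}
```

`postMessage` with an explicit target origin also refuses to deliver if the window has since navigated elsewhere, which is exactly the guarantee the wildcard gives up.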
Secure Cookie Attributes — Cookie deletion and restoration now include Secure and SameSite attributes, preventing session fixation and CSRF vulnerabilities.
Path Traversal Protection — Hardened file operations in extension persistence layer to prevent directory traversal attacks.
Input Validation — The extension log queue is now capped at 2000 entries and the screenshot rate limiter's bounds are validated, preventing unbounded memory growth.
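A capped queue like this is typically a drop-oldest ring. A minimal sketch — the 2000-entry figure comes from the release notes, but the class and field names are illustrative:

```javascript
// Bounded log queue: once at capacity, the oldest entry is evicted so memory
// stays flat no matter how chatty the page is. Drops are counted, not silent.
class BoundedLogQueue {
  constructor(capacity = 2000) {
    this.capacity = capacity;
    this.entries = [];
    this.dropped = 0;
  }
  push(entry) {
    if (this.entries.length >= this.capacity) {
      this.entries.shift(); // evict the oldest entry
      this.dropped++;       // keep a tally so the loss is observable
    }
    this.entries.push(entry);
  }
}
```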
Smart HTTP Timeouts — 5s default timeout for localhost operations, extended to 30s+ only when accessibility features are requested. Reduces false positives while respecting slow connections.
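The policy reduces to a single decision point. A sketch using the values stated above (the function name and option shape are illustrative):

```javascript
// Pick a request timeout: short default for localhost operations, extended
// only when an accessibility audit is requested, since audits can
// legitimately take much longer than a simple fetch.
function requestTimeoutMs({ accessibility = false } = {}) {
  const DEFAULT_MS = 5_000;        // fast local operations fail fast
  const ACCESSIBILITY_MS = 30_000; // audits get the long leash
  return accessibility ? ACCESSIBILITY_MS : DEFAULT_MS;
}
```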
Atomic File Writes — Log rotation uses temp + rename pattern, preventing partial writes and data loss on disk full.
Efficient Deduplication — SeenMessages pruning optimized for large event volumes.
What if you could test your web application by describing what should happen — in plain English — and have an AI actually run the tests?
No Playwright scripts. No Selenium WebDriver setup. No npm install or pip install. No learning CSS selectors, XPath, or assertion libraries. Just tell the AI what to test, and it tests it.
This isn’t a future vision. It works today with Gasoline MCP.
"Go to the login page. Enter 'test@example.com' as the email and 'password123'
as the password. Click Sign In. Verify that you land on the dashboard and there
are no console errors."
The AI:
Navigates to the login page
Finds the email field (using semantic selectors — label=Email, not #email-input-field-v2)
Types the email
Finds the password field
Types the password
Clicks the Sign In button (by text, not by CSS selector)
Waits for navigation
Checks the URL contains /dashboard
Checks for console errors
If anything fails, the AI reports exactly what happened: “The Sign In button was found and clicked, but the page navigated to /error instead of /dashboard. The API returned a 401 with {"error": "invalid credentials"}.”
Traditional selector-based tests break whenever the markup shifts:
The email field ID changes from #email-input to #email-field
The submit button gets a new class or is replaced with a different component
The form structure changes (inputs wrapped in a new div)
The natural language test survives all of these because the AI uses meaning-based selectors: “the email field” → label=Email, “the sign in button” → text=Sign In.
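A toy sketch of that phrase-to-selector step — the mapping rules and function names here are illustrative, not how Gasoline actually resolves selectors:

```javascript
// Resolve a natural-language phrase to a semantic locator: labels for fields,
// visible text for buttons, rather than brittle IDs or class names.
const SEMANTIC_RULES = [
  [/the (.+) field/i, (m) => `label=${titleCase(m[1])}`],
  [/the (.+) button/i, (m) => `text=${titleCase(m[1])}`],
];

function titleCase(s) {
  return s.replace(/\b\w/g, (c) => c.toUpperCase());
}

function resolveSelector(phrase) {
  for (const [pattern, build] of SEMANTIC_RULES) {
    const m = phrase.match(pattern);
    if (m) return build(m);
  }
  return null; // unknown phrase: fall back to page discovery
}
```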
Gasoline generates a complete Playwright test from the session — every action translated to Playwright commands with proper selectors, network assertions, and error checking. The AI ran the test in natural language; Gasoline converts it to code for CI.
This is the best of both worlds:
Write tests in English — fast, no setup
Export to Playwright — repeatable, CI-ready
Re-run in English — if the generated test breaks, describe the flow again and regenerate
You know the user flows better than anyone. You shouldn’t need to write JavaScript to verify them. Describe the flow, the AI tests it, and you see the results.
You don’t have dedicated QA engineers, and your developers are building features, not writing tests. Natural language testing gives you test coverage without the headcount.
You already know how to test. Natural language testing lets you work faster — describe 10 test cases in the time it takes to code 1. Generate Playwright tests from the ones that should be permanent.
You just shipped a feature and want to verify the happy path before the PR review. A 30-second natural language test is faster than writing a proper test and faster than manual testing.
text=Submit → If the button now says “Place Order”, the AI reads the page and finds the new text
label=Email → Works regardless of whether it’s an <input>, a Material UI <TextField>, or a custom component
role=button → Works regardless of styling or class names
And if a selector doesn’t match, the AI doesn’t just fail — it calls interact({action: "list_interactive"}) to discover what’s actually on the page and adapts.
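That fallback flow can be sketched as try-then-discover. Only the `list_interactive` action name comes from the text above; the `interact` client and the recovery heuristic here are stand-ins:

```javascript
// Try the semantic selector first; if it misses, discover what's actually on
// the page instead of failing outright.
async function clickButton(text, interact) {
  const hit = await interact({ action: "click", selector: `text=${text}` });
  if (hit.ok) return hit;
  const { elements } = await interact({ action: "list_interactive" });
  const buttons = elements.filter((el) => el.role === "button");
  if (buttons.length === 1) {
    // A single button is very likely the renamed target (e.g. "Place Order").
    return interact({ action: "click", selector: `text=${buttons[0].text}` });
  }
  // Ambiguous: surface the candidates so the agent can choose.
  return { ok: false, candidates: buttons.map((el) => el.text) };
}
```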
MCP (Model Context Protocol) lets AI coding assistants plug into external tools — browsers, databases, APIs, and more. The right combination of MCP servers turns your AI assistant from a code-only tool into a full-stack development partner.
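Wiring a server into an MCP-capable client is usually one small config entry. A sketch in the `mcpServers` format several clients use — the server name and launch command below are placeholders, not real install strings:

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    }
  }
}
```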
Here are the most useful MCP servers for web developers, what they do, and how they work together.
What it does: Streams real-time browser telemetry to your AI — console logs, network errors, WebSocket events, Web Vitals, accessibility audits, user actions — and gives the AI browser control.
Why it matters: Without browser observability, your AI can read code but can’t see what happens when it runs. Every debugging session requires you to manually describe the problem. With Gasoline, the AI observes the bug directly.
Most AI coding tools (Claude Code, Cursor, Windsurf) have built-in filesystem access. If yours doesn’t, the reference filesystem MCP server handles it:
What it does: Read, write, search, and navigate files.
Why it matters: The foundation. Everything else builds on the AI being able to read and edit your code.
Key capabilities: Read files, write files, search by name or content, directory listing.
What it does: Lets the AI query your database directly — read schemas, run SELECT queries, inspect data.
Why it matters: When debugging a “wrong data” bug, the AI can check the database instead of you running psql and pasting results. It can also verify that migrations ran correctly.
Key capabilities: Schema inspection, read queries, data exploration. Most implementations are read-only by default (safe for production databases).
Use case: “Why is the user’s email wrong on the profile page?” → AI checks the database, finds the email was never updated after the migration, identifies the migration bug.
What it does: Create PRs, read issues, check CI status, review code, manage releases.
Why it matters: The AI can close the loop — fix a bug, create a PR, link it to the issue, and check if CI passes. Without GitHub access, you’re the intermediary for every PR and issue interaction.
What it does: Searches the web and fetches page content.
Why it matters: When your AI encounters an unfamiliar error or needs documentation for a third-party library, it can search instead of guessing. This is especially useful for new APIs, recent library versions, and obscure error messages.
Key capabilities: Web search, URL fetching, content extraction.
Use case: “I’m getting an ERR_OSSL_EVP_UNSUPPORTED error” → AI searches, finds it’s a Node.js 17+ OpenSSL 3.0 issue, applies the fix.
What it does: List containers, read logs, start/stop services, check health.
Why it matters: If your backend runs in Docker, the AI can check container logs when the API returns 500s. No more “can you check the Docker logs?” copy-paste cycles.
Key capabilities: Container listing, log reading, service management, health checks.
Use case: “The API is returning 500s” → AI checks Gasoline for the error response, then checks Docker logs for the backend container, finds the database container is down, restarts it.
What it does: Check build status, read test results, manage tickets.
Why it matters: The AI can check if CI is green after pushing a fix, read test failure logs, and update tickets with results — closing the loop without tab-switching.
Start with Gasoline and your built-in filesystem access. Add GitHub and database when you find yourself copy-pasting between those tools and your AI. Add the rest as needed.
MCP adoption is accelerating. Every major AI coding tool now supports MCP, and new servers appear weekly. The pattern is clear: AI assistants are becoming environment-aware, connecting to every data source and tool a developer uses.
The developers who set up the right MCP servers today work significantly faster — not because the AI is smarter, but because the AI can see more of the picture.