
Blog

Gasoline v0.7.0 Released

v0.7.0 is a ground-up rewrite delivering a complete browser observability platform. This is the first stable release — all prior versions are deprecated.

  • Zero-dependency Go daemon with MCP JSON-RPC 2.0 protocol — no runtime dependencies, single binary
  • Chrome MV3 extension capturing console logs, network requests, DOM state, screenshots, and Web Vitals in real-time
  • 5 MCP tools — observe, generate, configure, interact, analyze — giving AI agents full browser visibility
  • File upload pipeline with 4-stage escalation and OS automation for native file dialogs
  • Draw mode for visual region selection and annotation directly in the browser
  • SARIF export for integrating accessibility and security findings into CI/CD pipelines
  • Session recording with WebM video capture
  • Pilot mode for autonomous browser interaction via AI agents
  • Link health analysis with CORS detection
  • npm + PyPI distribution with auto-update daemon lifecycle
  • Real-time browser telemetry: console, network, WebSocket, DOM, performance, errors
  • HAR and SARIF export for network traces and accessibility audits
  • Test generation from browser interactions (Playwright, Vitest)
  • Noise filtering with persistent rules
  • CSP policy generation from observed network traffic
  • Network waterfall analysis with body capture
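One item above, CSP policy generation, is easy to picture: group every observed request origin under its directive and emit a header. A toy sketch follows; it is not Gasoline's actual generator, and the resource-type-to-directive mapping is deliberately simplified:

```javascript
// Toy sketch: derive a Content-Security-Policy header from observed request
// origins. Real generators cover many more directives and source keywords.
function buildCsp(requests) {
  const directiveFor = { script: 'script-src', style: 'style-src', img: 'img-src' };
  const policy = {};
  for (const { type, origin } of requests) {
    const directive = directiveFor[type] || 'default-src';
    // Collect the unique origins seen for each directive.
    (policy[directive] ||= new Set()).add(origin);
  }
  return Object.entries(policy)
    .map(([directive, origins]) => `${directive} 'self' ${[...origins].join(' ')}`)
    .join('; ');
}
```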
npx gasoline-mcp@0.7.0

Or with pip:

pip install gasoline-mcp==0.7.0

View on GitHub

Gasoline v6.0.0 Released

Gasoline v6.0.0 introduces the Link Health Analyzer, plus browser automation, recording, and performance analysis for AI agents. Check all links on your page, record full sessions with video, capture performance metrics, and let AI agents test, debug, and fix your app automatically. Complete visibility. You stay in control.

  • Link Health Analyzer — Automatically check all links on your page for issues (broken, redirects, auth-required). 20 concurrent checks, categorized results, and async tracking with correlation IDs.

  • Full Recording System — Record browser tabs with video and audio. Videos stream to local disk. No cloud, no transcoding—raw WebM format.

  • Permission Prompts — When recording starts, you get a clear prompt to approve it. No silent recordings. You’re always in control.

  • CWE-942 Fixed — Replaced wildcard postMessage origins with window.location.origin across content scripts, test helpers, and background workers. Prevents message hijacking on cross-origin pages.

  • Secure Cookie Attributes — Cookie deletion and restoration now include Secure and SameSite attributes, preventing session fixation and CSRF vulnerabilities.

  • Path Traversal Protection — Hardened file operations in extension persistence layer to prevent directory traversal attacks.

  • Input Validation — Comprehensive validation of extension log queue capacity (2000-entry cap) and screenshot rate limiter bounds to prevent unbounded memory growth.

  • Smart HTTP Timeouts — 5s default timeout for localhost operations, extended to 30s+ only when accessibility features are requested. Reduces false positives while respecting slow connections.

  • Atomic File Writes — Log rotation uses temp + rename pattern, preventing partial writes and data loss on disk full.

  • Efficient Deduplication — SeenMessages pruning optimized for large event volumes.
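The Link Health Analyzer's "20 concurrent checks" is a bounded worker pool. A minimal sketch of that pattern, where `checkLink` is a hypothetical stand-in for the real checker (which issues HTTP requests and categorizes broken links, redirects, and auth walls):

```javascript
// Hypothetical stand-in for Gasoline's per-link checker.
async function checkLink(url) {
  return { url, status: 'ok' };
}

// Bounded-concurrency pool: at most `limit` checks are in flight at once.
async function checkAll(urls, limit = 20) {
  const results = new Array(urls.length);
  let next = 0;
  async function worker() {
    // Each worker repeatedly claims the next unchecked URL until none remain.
    while (next < urls.length) {
      const i = next++;
      results[i] = await checkLink(urls[i]);
    }
  }
  const workers = Array.from({ length: Math.min(limit, urls.length) }, worker);
  await Promise.all(workers);
  return results;
}
```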

  • 99.4% Pass Rate — 154 out of 155 smoke tests pass (one known edge case with watermark on rapid navigation).
  • Comprehensive UAT Suite — 140 tests covering recording, permissions, security, performance, and WebSocket capture.
  • Full TypeScript Strict Mode — No implicit any, zero Codacy security issues.
  • Extension v5.x → v6.0.0 — Auto-update via Chrome. Manual re-add may be required if permissions are denied.
  • MCP Server — Same 4-tool interface; no API changes.

📥 Download gasoline-extension-v6.0.0.crx (480 KB)

Extension ID: behrmkvjipzkr7hu6mwmbt5vpdgcdyvk

Drag and drop the signed CRX into chrome://extensions/:

  1. Download the file above
  2. Open chrome://extensions/
  3. Enable Developer mode (top right)
  4. Drag and drop the .crx file into the page
  5. Click “Add extension” when prompted

Full installation guide →

npm install -g gasoline-mcp@6.0.0
gasoline-mcp --help

Or via pip:

pip install gasoline-mcp==6.0.0

Known issues:

  • Recording audio on muted tabs — Tab audio capture requires the tab to be playing sound; silent tabs record video only.
  • Watermark on rapid navigation — The watermark may not reappear if the user navigates during recording. The next navigation resets it correctly.
  • Chrome 120+ only — Manifest V3 (MV3) requires Chrome 120 or later. No Safari or Firefox support in v6.

Coming next:

  • File Upload API — Automated file form handling for bulk uploads to platforms without APIs.
  • Replay System — Event playback with timeline scrubbing.
  • Deployment Integration — Capture git-linked deploy events for post-incident correlation.

v5.8.0…v6.0.0

Gasoline v5.7.4 Released

Gasoline v5.7.4 improves stability and MCP protocol reliability based on production feedback.

  • Better handling of slow client connections
  • Improved timeout recovery and reconnection logic
  • Enhanced message serialization performance
  • More robust error reporting to clients
  • Fixed observer timeout on pages with extremely high event volume
  • Resolved occasional message ordering issues
  • Improved cleanup of abandoned connections
  • Better resilience to malformed MCP requests
  • Reduced latency for high-frequency events
  • Optimized buffer management for large responses
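The reconnection improvements follow a standard pattern: exponential backoff with jitter. A generic illustration of that pattern, not Gasoline's actual source:

```javascript
// Delay doubles each attempt, is capped, and is jittered so many clients
// reconnecting at once don't stampede the daemon.
function backoffDelay(attempt, baseMs = 250, capMs = 10_000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  // Uniform jitter over the upper half of the window ("equal jitter").
  return exp / 2 + Math.random() * (exp / 2);
}
```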
npm install -g gasoline-mcp@5.7.4

v5.7.4 Release

AI-Powered QA: How to Test Your Web App Without Writing Test Code

What if you could test your web application by describing what should happen — in plain English — and have an AI actually run the tests?

No Playwright scripts. No Selenium WebDriver setup. No npm install or pip install. No learning CSS selectors, XPath, or assertion libraries. Just tell the AI what to test, and it tests it.

This isn’t a future vision. It works today with Gasoline MCP.

Writing automated tests is expensive:

  • Setup cost: Install Node.js, install Playwright, configure the test runner, set up CI/CD
  • Writing cost: Learn the API, figure out selectors, handle async operations, manage test data
  • Maintenance cost: Every UI change breaks selectors. Every flow change breaks sequences. Tests that took 2 hours to write take 4 hours to maintain.

The result? Most teams have either:

  1. No automated tests — manual QA only
  2. Fragile tests — break on every deploy, ignored by the team
  3. Expensive tests — dedicated QA engineers maintaining a test suite that’s always behind

With Gasoline, testing looks like this:

"Go to the login page. Enter 'test@example.com' as the email and 'password123'
as the password. Click Sign In. Verify that you land on the dashboard and there
are no console errors."

The AI:

  1. Navigates to the login page
  2. Finds the email field (using semantic selectors — label=Email, not #email-input-field-v2)
  3. Types the email
  4. Finds the password field
  5. Types the password
  6. Clicks the Sign In button (by text, not by CSS selector)
  7. Waits for navigation
  8. Checks the URL contains /dashboard
  9. Checks for console errors

If anything fails, the AI reports exactly what happened: “The Sign In button was found and clicked, but the page navigated to /error instead of /dashboard. The API returned a 401 with {"error": "invalid credentials"}.”
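Under the hood, each step becomes a Gasoline tool call. The shapes below are illustrative pseudocode: `navigate` and the parameter names are assumptions for this sketch, not the documented schema:

```
interact({action: "navigate", url: "/login"})
interact({action: "type", target: "label=Email", text: "test@example.com"})
interact({action: "click", target: "text=Sign In"})
observe({what: "errors"})
```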

Selenium/Playwright test:

import { test, expect } from '@playwright/test';

test('login reaches dashboard', async ({ page }) => {
  await page.goto('https://myapp.com/login');
  await page.locator('#email-input').fill('test@example.com');
  await page.locator('#password-input').fill('password123');
  await page.locator('button[type="submit"]').click();
  await expect(page).toHaveURL(/.*dashboard/);
});

Gasoline natural language:

Log in with test@example.com / password123.
Verify you reach the dashboard.

The Selenium test breaks when:

  • The email field ID changes from #email-input to #email-field
  • The submit button gets a new class or is replaced with a different component
  • The form structure changes (inputs wrapped in a new div)

The natural language test survives all of these because the AI uses meaning-based selectors: “the email field” → label=Email, “the sign in button” → text=Sign In.
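That resilience can be modeled in a few lines. This toy sketch matches elements by label or visible text instead of ids; the element objects are a simplified stand-in for the real DOM, and Gasoline's actual matching is richer:

```javascript
// Meaning-based lookup: match by accessible label or visible text, so a
// renamed id (or a swapped component) doesn't break the test.
function findByMeaning(elements, description) {
  return elements.find(
    (el) => el.label === description || el.text === description
  );
}

// The id changed from #email-input to #email-field-v2: still found by label.
const page = [
  { tag: 'input', id: 'email-field-v2', label: 'Email' },
  { tag: 'button', id: 'btn-8841', text: 'Sign In' },
];
```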

"Sign up with a new account, verify the welcome email prompt appears,
dismiss it, navigate to settings, change the display name, and verify
the change is reflected in the header."

"Submit the contact form with an empty email. Verify an error message
appears. Then enter a valid email and submit. Verify it succeeds."

"Navigate to a product page that doesn't exist (/products/99999).
Verify a 404 page is shown and there are no console errors."

"Navigate to the homepage. Check that LCP is under 2.5 seconds and
there are no layout shifts above 0.1."

"Run an accessibility audit on the checkout page. Report any critical
or serious violations."

"Submit an order. Verify the API returns a 201 status and the response
includes an order ID."

Natural language tests are great for exploratory testing and quick validation. But for CI/CD, you need repeatable tests.

After running a natural language test session:

generate({format: "test", test_name: "guest-checkout",
assert_network: true, assert_no_errors: true})

Gasoline generates a complete Playwright test from the session — every action translated to Playwright commands with proper selectors, network assertions, and error checking. The AI ran the test in natural language; Gasoline converts it to code for CI.

This is the best of both worlds:

  1. Write tests in English — fast, no setup
  2. Export to Playwright — repeatable, CI-ready
  3. Re-run in English — if the generated test breaks, describe the flow again and regenerate

You know the user flows better than anyone. You shouldn’t need to write JavaScript to verify them. Describe the flow, the AI tests it, and you see the results.

You don’t have dedicated QA engineers, and your developers are building features, not writing tests. Natural language testing gives you test coverage without the headcount.

You already know how to test. Natural language testing lets you work faster — describe 10 test cases in the time it takes to code 1. Generate Playwright tests from the ones that should be permanent.

You just shipped a feature and want to verify the happy path before the PR review. A 30-second natural language test is faster than writing a proper test and faster than manual testing.

Resilience: Why AI Tests Survive UI Changes


Traditional tests are tightly coupled to the UI implementation:

// Breaks when the button text changes from "Submit" to "Place Order"
await page.locator('button:has-text("Submit")').click();
// Breaks when the ID changes
await page.locator('#checkout-submit-btn').click();
// Breaks when the class changes
await page.locator('.btn-primary.submit').click();

The AI uses semantic selectors that adapt:

  • text=Submit → If the button now says “Place Order”, the AI reads the page and finds the new text
  • label=Email → Works regardless of whether it’s an <input>, a Material UI <TextField>, or a custom component
  • role=button → Works regardless of styling or class names

And if a selector doesn’t match, the AI doesn’t just fail — it calls interact({action: "list_interactive"}) to discover what’s actually on the page and adapts.

For tests you run regularly:

"Save this test flow as 'checkout-happy-path'."

configure({action: "store", store_action: "save",
  namespace: "tests", key: "checkout-happy-path",
  data: {steps: ["navigate to /checkout", "fill in shipping...", ...]}})

"Load and run the 'checkout-happy-path' test."

configure({action: "store", store_action: "load",
  namespace: "tests", key: "checkout-happy-path"})

Save browser state at key points:

interact({action: "save_state", snapshot_name: "logged-in"})

Later, restore that state instead of repeating the login flow:

interact({action: "load_state", snapshot_name: "logged-in", include_url: true})

To try it yourself:

  1. Install Gasoline (Quick Start)
  2. Open your web app
  3. Tell your AI: “Test the login flow — go to the login page, enter test credentials, sign in, and verify you reach the dashboard.”

No setup. No dependencies. No test code. Just describe what should happen.

Best MCP Servers for Web Development in 2026

MCP (Model Context Protocol) lets AI coding assistants plug into external tools — browsers, databases, APIs, and more. The right combination of MCP servers turns your AI assistant from a code-only tool into a full-stack development partner.

Here are the most useful MCP servers for web developers, what they do, and how they work together.

A good MCP server:

  1. Gives the AI information it can’t get otherwise — runtime data, live state, external services
  2. Reduces copy-paste — the AI reads data directly instead of you pasting it in
  3. Enables actions — the AI can do things, not just observe
  4. Works locally — your data stays on your machine

With that in mind, here are the servers worth setting up.

Gasoline (Browser Observability)

What it does: Streams real-time browser telemetry to your AI — console logs, network errors, WebSocket events, Web Vitals, accessibility audits, user actions — and gives the AI browser control.

Why it matters: Without browser observability, your AI can read code but can’t see what happens when it runs. Every debugging session requires you to manually describe the problem. With Gasoline, the AI observes the bug directly.

Key capabilities:

  • 4 tools: observe (23 modes), generate (7 formats), configure (12 actions), interact (24 actions)
  • Real-time: Console errors, network failures, WebSocket traffic as they happen
  • Browser control: Navigate, click, type, run JavaScript, take screenshots
  • Artifact generation: Playwright tests, reproduction scripts, HAR exports, CSP headers, SARIF reports
  • Security auditing: Credential detection, PII scanning, third-party script analysis
  • Performance: Web Vitals with before/after comparison on every navigation

Setup: Chrome extension + npx gasoline-mcp

Zero dependencies: Single Go binary, no Node.js runtime. Localhost only.

Get started with Gasoline →

Filesystem

Most AI coding tools (Claude Code, Cursor, Windsurf) have built-in filesystem access. If yours doesn’t, the reference filesystem MCP server handles it:

What it does: Read, write, search, and navigate files.

Why it matters: The foundation. Everything else builds on the AI being able to read and edit your code.

Key capabilities: Read files, write files, search by name or content, directory listing.

Database

What it does: Lets the AI query your database directly — read schemas, run SELECT queries, inspect data.

Why it matters: When debugging a “wrong data” bug, the AI can check the database instead of you running psql and pasting results. It can also verify that migrations ran correctly.

Key capabilities: Schema inspection, read queries, data exploration. Most implementations are read-only by default (safe for production databases).

Use case: “Why is the user’s email wrong on the profile page?” → AI checks the database, finds the email was never updated after the migration, identifies the migration bug.

GitHub

What it does: Create PRs, read issues, check CI status, review code, manage releases.

Why it matters: The AI can close the loop — fix a bug, create a PR, link it to the issue, and check if CI passes. Without GitHub access, you’re the intermediary for every PR and issue interaction.

Key capabilities: Create/update PRs, read/comment on issues, check workflow runs, view PR reviews.

Use case: “Fix this bug and open a PR” → AI fixes the code, commits, pushes, creates the PR with a summary, and links it to the issue.

Web Search

What it does: Searches the web and fetches page content.

Why it matters: When your AI encounters an unfamiliar error or needs documentation for a third-party library, it can search instead of guessing. This is especially useful for new APIs, recent library versions, and obscure error messages.

Key capabilities: Web search, URL fetching, content extraction.

Use case: “I’m getting an ERR_OSSL_EVP_UNSUPPORTED error” → AI searches, finds it’s a Node.js 17+ OpenSSL 3.0 issue, applies the fix.

Docker

What it does: List containers, read logs, start/stop services, check health.

Why it matters: If your backend runs in Docker, the AI can check container logs when the API returns 500s. No more “can you check the Docker logs?” copy-paste cycles.

Key capabilities: Container listing, log reading, service management, health checks.

Use case: “The API is returning 500s” → AI checks Gasoline for the error response, then checks Docker logs for the backend container, finds the database container is down, restarts it.

CI and Issue Tracking

What it does: Check build status, read test results, manage tickets.

Why it matters: The AI can check if CI is green after pushing a fix, read test failure logs, and update tickets with results — closing the loop without tab-switching.

The real power is composition. Here’s a debugging workflow using multiple MCP servers:

  1. Gasoline: observe({what: "error_bundles"}) — sees a TypeError correlated with a 500 from /api/orders
  2. Gasoline: observe({what: "network_bodies", url: "/api/orders"}) — the 500 response says "column 'discount_code' does not exist"
  3. Filesystem: Reads the migration files — finds the discount_code column was added in a migration that hasn’t run
  4. Docker: Checks the database container logs — confirms the migration wasn’t applied
  5. Filesystem: Reads the deployment script — finds migrations don’t auto-run
  6. Filesystem: Fixes the deployment script to run migrations
  7. Gasoline: interact({action: "refresh"}) — refreshes the page, verifies the error is gone
  8. GitHub: Creates a PR with the fix

Six MCP servers. One conversation. No copy-paste. No tab-switching. The AI moved from symptom to root cause to fix to PR in a single flow.

For a typical web development workflow:

| Priority   | Server                        | Why                                  |
| ---------- | ----------------------------- | ------------------------------------ |
| Essential  | Filesystem (usually built-in) | Read and edit code                   |
| Essential  | Gasoline (browser)            | See runtime errors, debug, test      |
| High value | GitHub                        | PRs, issues, CI status               |
| High value | Database                      | Data inspection, schema verification |
| Useful     | Search                        | Documentation, error lookup          |
| Useful     | Docker                        | Container log access                 |

Start with Gasoline and your built-in filesystem access. Add GitHub and database when you find yourself copy-pasting between those tools and your AI. Add the rest as needed.

Most AI tools support multiple MCP servers in their config. Example for Claude Code (.mcp.json):

{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    }
  }
}

Each server gets its own entry. The AI discovers all available tools on startup and uses them as needed.
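Adding more servers follows the same shape. The second entry below is illustrative; the exact package name and args depend on which GitHub server implementation you install:

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```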

MCP adoption is accelerating. Every major AI coding tool now supports MCP, and new servers appear weekly. The pattern is clear: AI assistants are becoming environment-aware, connecting to every data source and tool a developer uses.

The developers who set up the right MCP servers today work significantly faster — not because the AI is smarter, but because the AI can see more of the picture.