
mcp

4 posts with the tag “mcp”

Best MCP Servers for Web Development in 2026

MCP (Model Context Protocol) lets AI coding assistants plug into external tools — browsers, databases, APIs, and more. The right combination of MCP servers turns your AI assistant from a code-only tool into a full-stack development partner.

Here are the most useful MCP servers for web developers, what they do, and how they work together.

A good MCP server:

  1. Gives the AI information it can’t get otherwise — runtime data, live state, external services
  2. Reduces copy-paste — the AI reads data directly instead of you pasting it in
  3. Enables actions — the AI can do things, not just observe
  4. Works locally — your data stays on your machine

With that in mind, here are the servers worth setting up.

Gasoline (browser observability)

What it does: Streams real-time browser telemetry to your AI — console logs, network errors, WebSocket events, Web Vitals, accessibility audits, user actions — and gives the AI browser control.

Why it matters: Without browser observability, your AI can read code but can’t see what happens when it runs. Every debugging session requires you to manually describe the problem. With Gasoline, the AI observes the bug directly.

Key capabilities:

  • 4 tools: observe (23 modes), generate (7 formats), configure (12 actions), interact (24 actions)
  • Real-time: Console errors, network failures, WebSocket traffic as they happen
  • Browser control: Navigate, click, type, run JavaScript, take screenshots
  • Artifact generation: Playwright tests, reproduction scripts, HAR exports, CSP headers, SARIF reports
  • Security auditing: Credential detection, PII scanning, third-party script analysis
  • Performance: Web Vitals with before/after comparison on every navigation

Setup: Chrome extension + npx gasoline-mcp

Zero dependencies: Single Go binary, no Node.js runtime. Localhost only.

Get started with Gasoline →

Filesystem

Most AI coding tools (Claude Code, Cursor, Windsurf) have built-in filesystem access. If yours doesn’t, the reference filesystem MCP server handles it:

What it does: Read, write, search, and navigate files.

Why it matters: The foundation. Everything else builds on the AI being able to read and edit your code.

Key capabilities: Read files, write files, search by name or content, directory listing.

Database

What it does: Lets the AI query your database directly — read schemas, run SELECT queries, inspect data.

Why it matters: When debugging a “wrong data” bug, the AI can check the database instead of you running psql and pasting results. It can also verify that migrations ran correctly.

Key capabilities: Schema inspection, read queries, data exploration. Most implementations are read-only by default (safe for production databases).

Use case: “Why is the user’s email wrong on the profile page?” → AI checks the database, finds the email was never updated after the migration, identifies the migration bug.

GitHub

What it does: Create PRs, read issues, check CI status, review code, manage releases.

Why it matters: The AI can close the loop — fix a bug, create a PR, link it to the issue, and check if CI passes. Without GitHub access, you’re the intermediary for every PR and issue interaction.

Key capabilities: Create/update PRs, read/comment on issues, check workflow runs, view PR reviews.

Use case: “Fix this bug and open a PR” → AI fixes the code, commits, pushes, creates the PR with a summary, and links it to the issue.

Web search

What it does: Searches the web and fetches page content.

Why it matters: When your AI encounters an unfamiliar error or needs documentation for a third-party library, it can search instead of guessing. This is especially useful for new APIs, recent library versions, and obscure error messages.

Key capabilities: Web search, URL fetching, content extraction.

Use case: “I’m getting an ERR_OSSL_EVP_UNSUPPORTED error” → AI searches, finds it’s a Node.js 17+ OpenSSL 3.0 issue, applies the fix.

Docker

What it does: List containers, read logs, start/stop services, check health.

Why it matters: If your backend runs in Docker, the AI can check container logs when the API returns 500s. No more “can you check the Docker logs?” copy-paste cycles.

Key capabilities: Container listing, log reading, service management, health checks.

Use case: “The API is returning 500s” → AI checks Gasoline for the error response, then checks Docker logs for the backend container, finds the database container is down, restarts it.

CI/CD and ticketing

What it does: Check build status, read test results, manage tickets.

Why it matters: The AI can check if CI is green after pushing a fix, read test failure logs, and update tickets with results — closing the loop without tab-switching.

The real power is composition. Here’s a debugging workflow using multiple MCP servers:

  1. Gasoline: observe({what: "error_bundles"}) — sees a TypeError correlated with a 500 from /api/orders
  2. Gasoline: observe({what: "network_bodies", url: "/api/orders"}) — the 500 response says "column 'discount_code' does not exist"
  3. Filesystem: Reads the migration files — finds the discount_code column was added in a migration that hasn’t run
  4. Docker: Checks the database container logs — confirms the migration wasn’t applied
  5. Filesystem: Reads the deployment script — finds migrations don’t auto-run
  6. Filesystem: Fixes the deployment script to run migrations
  7. Gasoline: interact({action: "refresh"}) — refreshes the page, verifies the error is gone
  8. GitHub: Creates a PR with the fix

Four MCP servers. One conversation. No copy-paste. No tab-switching. The AI moved from symptom to root cause to fix to PR in a single flow.

For a typical web development workflow:

| Priority | Server | Why |
| --- | --- | --- |
| Essential | Filesystem (usually built-in) | Read and edit code |
| Essential | Gasoline (browser) | See runtime errors, debug, test |
| High value | GitHub | PRs, issues, CI status |
| High value | Database | Data inspection, schema verification |
| Useful | Search | Documentation, error lookup |
| Useful | Docker | Container log access |

Start with Gasoline and your built-in filesystem access. Add GitHub and database when you find yourself copy-pasting between those tools and your AI. Add the rest as needed.

Most AI tools support multiple MCP servers in their config. Example for Claude Code (.mcp.json):

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    }
  }
}
```

Each server gets its own entry. The AI discovers all available tools on startup and uses them as needed.
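For example, a two-server config might look like this. The second entry's package name and connection string are illustrative; check each server's own docs for the exact command:

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```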

MCP adoption is accelerating. Every major AI coding tool now supports MCP, and new servers appear weekly. The pattern is clear: AI assistants are becoming environment-aware, connecting to every data source and tool a developer uses.

The developers who set up the right MCP servers today work significantly faster — not because the AI is smarter, but because the AI can see more of the picture.

How Gasoline MCP Improves Your Application Security

Most developers discover security issues in production. A penetration test finds exposed credentials in an API response. A security review flags missing headers. A breach notification reveals that a third-party script was exfiltrating form data.

Gasoline MCP flips the timeline. Your AI assistant audits security while you develop, catching issues before they ship.

In the typical development cycle, security checks happen late:

  1. Development — features built, tested, deployed
  2. Security review — weeks later, if at all
  3. Penetration test — quarterly, expensive, findings arrive after context is lost
  4. Incident — the worst time to learn about a vulnerability

Every step between writing the code and finding the issue adds cost. A missing HttpOnly flag caught during development takes 30 seconds to fix. The same flag caught in a pen test takes a meeting, a ticket, a sprint, and a deploy.

Real-Time Security Auditing During Development


Gasoline gives your AI assistant six categories of security checks that run against live browser traffic:

Credential detection

Your AI can scan every network request and response for exposed secrets:

observe({what: "security_audit", checks: ["credentials"]})

This catches:

  • AWS Access Keys (AKIA...) in API responses
  • GitHub PATs (ghp_..., ghs_...) in console logs
  • Stripe keys (sk_test_..., sk_live_...) in client-side code
  • JWTs in URL parameters (a common mistake)
  • Bearer tokens in responses that shouldn’t contain them
  • Private keys accidentally bundled in source maps

Every detection runs regex plus validation (Luhn algorithm for credit cards, structure checks for JWTs) to minimize false positives.
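As an illustration of that validation step, here is a standard Luhn check in Go. This is a sketch of the technique, not Gasoline's actual code:

```go
package main

import "fmt"

// luhnValid reports whether the digit string passes the Luhn checksum,
// the validation step applied after a credit-card regex match to filter
// out random 16-digit numbers.
func luhnValid(s string) bool {
	sum, double := 0, false
	for i := len(s) - 1; i >= 0; i-- {
		c := s[i]
		if c < '0' || c > '9' {
			return false
		}
		d := int(c - '0')
		if double {
			d *= 2
			if d > 9 {
				d -= 9
			}
		}
		sum += d
		double = !double
	}
	return len(s) > 0 && sum%10 == 0
}

func main() {
	fmt.Println(luhnValid("4111111111111111")) // well-known test card number: true
	fmt.Println(luhnValid("4111111111111112")) // checksum off by one: false
}
```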

PII detection

observe({what: "security_audit", checks: ["pii"]})

Finds personal data flowing through your application:

  • Social Security Numbers
  • Credit card numbers (with Luhn validation — not just pattern matching)
  • Email addresses in unexpected API responses
  • Phone numbers in contexts where they shouldn’t appear

This matters for GDPR, CCPA, and HIPAA compliance. If your user list API is returning full SSNs when the frontend only needs names, your AI catches it during development.

Security headers

observe({what: "security_audit", checks: ["headers"]})

Validates that your responses include critical security headers:

| Header | What It Prevents |
| --- | --- |
| Strict-Transport-Security | Downgrade attacks, cookie hijacking |
| X-Content-Type-Options | MIME sniffing attacks |
| X-Frame-Options | Clickjacking |
| Content-Security-Policy | XSS, injection attacks |
| Referrer-Policy | Referrer leakage to third parties |
| Permissions-Policy | Unauthorized browser feature access |

Missing any of these? Your AI knows immediately — and can fix it.

Cookie flags

observe({what: "security_audit", checks: ["cookies"]})

Session cookies without HttpOnly are accessible to XSS attacks. Cookies without Secure can be intercepted over HTTP. Missing SameSite enables CSRF. Gasoline checks every cookie against every flag and rates severity based on whether it’s a session cookie.
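A minimal sketch of that kind of cookie-flag check. The parsing here is simplified; a real audit also weighs whether the cookie carries a session:

```go
package main

import (
	"fmt"
	"strings"
)

// missingFlags returns which of HttpOnly, Secure, and SameSite are absent
// from a raw Set-Cookie header value. Attribute names are matched
// case-insensitively, as RFC 6265 requires.
func missingFlags(setCookie string) []string {
	lower := strings.ToLower(setCookie)
	var missing []string
	for _, flag := range []string{"httponly", "secure", "samesite"} {
		found := false
		for _, part := range strings.Split(lower, ";") {
			attr := strings.TrimSpace(part)
			if attr == flag || strings.HasPrefix(attr, flag+"=") {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, flag)
		}
	}
	return missing
}

func main() {
	fmt.Println(missingFlags("session=abc123; Path=/; Secure")) // [httponly samesite]
}
```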

Transport security

observe({what: "security_audit", checks: ["transport"]})

Detects:

  • HTTP usage on non-localhost origins (unencrypted traffic)
  • Mixed content (HTTPS page loading HTTP resources)
  • HTTPS downgrade patterns

Authentication gaps

observe({what: "security_audit", checks: ["auth"]})

Identifies API endpoints that return PII without requiring authentication. If /api/users/123 returns a full user profile without an Authorization header, that’s a finding.

Third-party script analysis

Third-party scripts are one of the largest attack surfaces in modern web applications. Every <script src="..."> from an external CDN is a trust decision.

observe({what: "third_party_audit"})

Gasoline classifies every third-party origin by risk:

  • Critical risk — scripts from suspicious domains, data exfiltration patterns
  • High risk — scripts from unknown origins, data sent to third parties with POST requests
  • Medium risk — non-essential third-party resources, suspicious TLDs (.xyz, .top, .click)
  • Low risk — fonts and images from known CDNs

It detects domain generation algorithm (DGA) patterns — high-entropy hostnames that indicate malware communication. It flags when your application sends PII-containing form data to third-party origins.
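One common DGA heuristic is Shannon entropy over the hostname's first label. This sketch uses an illustrative 12-character minimum and 3.5-bits-per-character threshold; Gasoline's actual cutoffs may differ:

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// shannonEntropy returns the bits of entropy per character of s.
func shannonEntropy(s string) float64 {
	if len(s) == 0 {
		return 0
	}
	freq := map[rune]float64{}
	for _, r := range s {
		freq[r]++
	}
	n := float64(len([]rune(s)))
	h := 0.0
	for _, count := range freq {
		p := count / n
		h -= p * math.Log2(p)
	}
	return h
}

// looksGenerated flags hostnames whose left-most label is long and
// high-entropy, a rough stand-in for the DGA detection described above.
func looksGenerated(host string) bool {
	label := strings.Split(host, ".")[0]
	return len(label) >= 12 && shannonEntropy(label) > 3.5
}

func main() {
	fmt.Println(looksGenerated("cdn.mycompany.com"))  // short, familiar label: false
	fmt.Println(looksGenerated("xj9qk2vbw7mzp4.top")) // high-entropy label: true
}
```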

And it’s configurable. Specify your first-party origins and custom allow/block lists:

```
observe({
  what: "third_party_audit",
  first_party_origins: ["https://api.myapp.com"],
  custom_lists: {
    allowed: ["https://cdn.mycompany.com"],
    blocked: ["https://suspicious-tracker.xyz"]
  }
})
```

Security isn’t just about finding issues — it’s about making sure fixes stay fixed.

```
// Before your deploy
configure({action: "diff_sessions", session_action: "capture", name: "before-deploy"})

// After
configure({action: "diff_sessions", session_action: "capture", name: "after-deploy"})

// Compare
configure({
  action: "diff_sessions",
  session_action: "compare",
  compare_a: "before-deploy",
  compare_b: "after-deploy"
})
```

The security_diff mode specifically tracks:

  • Headers removed — did someone drop the CSP header?
  • Cookie flags removed — did HttpOnly get lost in a refactor?
  • Authentication removed — did an endpoint become public?
  • Transport downgrades — did something switch from HTTPS to HTTP?

Each change is severity-rated. A removed CSP header is high severity. A transport downgrade is critical.

Gasoline doesn’t just find problems — it generates the artifacts you need to fix and prevent them.

generate({format: "csp", mode: "strict"})

Gasoline observes which origins your page actually loads resources from during development and generates a CSP that allows exactly those origins — nothing more. It uses a confidence scoring system (3+ observations from 2+ pages = high confidence) to filter out extension noise and ad injection.

generate({format: "sri"})

Every third-party script and stylesheet gets a SHA-384 hash. If a CDN is compromised and serves modified JavaScript, the browser refuses to execute it.

The output includes ready-to-paste HTML tags:

```html
<script src="https://cdn.example.com/lib.js"
        integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8w"
        crossorigin="anonymous"></script>
```
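Computing such a hash is straightforward: SHA-384 over the exact bytes served, base64-encoded, prefixed with the algorithm name. A sketch in Go:

```go
package main

import (
	"crypto/sha512"
	"encoding/base64"
	"fmt"
)

// sriHash computes a Subresource Integrity value for a script body:
// the same format browsers check against the integrity attribute.
func sriHash(body []byte) string {
	sum := sha512.Sum384(body)
	return "sha384-" + base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	// Hash the exact bytes the CDN serves; any change invalidates it.
	fmt.Println(sriHash([]byte("alert('hello');")))
}
```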

Even before auditing, Gasoline protects against accidental data exposure. The redaction engine automatically scrubs sensitive data from all MCP tool responses before they reach the AI:

  • AWS keys become [REDACTED:aws-key]
  • Bearer tokens become [REDACTED:bearer-token]
  • Credit card numbers become [REDACTED:credit-card]
  • SSNs become [REDACTED:ssn]

This is a double safety net. The extension strips auth headers before data reaches the server. The server’s redaction engine catches anything else before it reaches the AI. Two layers, zero configuration.
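A toy version of the pattern-replacement layer might look like this. The regexes mirror the examples above; the real engine adds validation passes (Luhn, JWT structure) that this sketch omits:

```go
package main

import (
	"fmt"
	"regexp"
)

// Ordered list of redaction patterns and their replacement labels.
var redactions = []struct {
	re    *regexp.Regexp
	label string
}{
	{regexp.MustCompile(`AKIA[0-9A-Z]{16}`), "[REDACTED:aws-key]"},
	{regexp.MustCompile(`(?i)bearer\s+[A-Za-z0-9._~+/-]+=*`), "[REDACTED:bearer-token]"},
	{regexp.MustCompile(`\b\d{3}-\d{2}-\d{4}\b`), "[REDACTED:ssn]"},
}

// redact scrubs sensitive matches from a string before it would be
// included in a tool response.
func redact(s string) string {
	for _, r := range redactions {
		s = r.re.ReplaceAllString(s, r.label)
	}
	return s
}

func main() {
	fmt.Println(redact("Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.e30.sig"))
	// Authorization: [REDACTED:bearer-token]
	fmt.Println(redact("key=AKIAIOSFODNN7EXAMPLE ssn=123-45-6789"))
	// key=[REDACTED:aws-key] ssn=[REDACTED:ssn]
}
```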

Here’s the workflow that makes Gasoline transformative for security:

  1. Develop normally — write code, test features
  2. AI audits continuously — security checks run against live traffic
  3. Issues found immediately — in the same terminal where you’re coding
  4. Fix in context — the AI has the code open and the finding in hand
  5. Verify the fix — re-run the audit, confirm the finding is gone
  6. Prevent regression — capture a security snapshot, compare after future changes

The entire cycle takes minutes, not months. No separate tool. No context switch. No ticket in a backlog that nobody reads.

For developers: Security becomes part of your flow, not an interruption to it. The AI catches what you’d need a security expert to find — and you fix it while the code is still fresh in your mind.

For security teams: Shift-left isn’t a buzzword anymore. Developers arrive at security review with most issues already caught and fixed. Reviews focus on architecture and design, not missing headers.

For compliance: Every audit finding is captured with timestamp, severity, and evidence. SARIF export integrates directly with GitHub Code Scanning. The audit log records every security check the AI performed.

For enterprises: Zero data egress. All security scanning happens on the developer’s machine. No credentials sent to cloud services. No browser traffic leaving the network. Localhost only, zero dependencies, open source.

Install Gasoline, open your application, and ask your AI:

“Run a full security audit of this page and tell me what you find.”

You might be surprised what’s been hiding in plain sight.

What Is MCP? The Model Context Protocol Explained for Web Developers

MCP — the Model Context Protocol — is the USB-C of AI tools. It’s a standard that lets AI assistants plug into external data sources and capabilities without custom integrations. If you’ve ever wished your AI coding assistant could see your browser, read your database, or check your CI pipeline, MCP is how that works.

Here’s what MCP means for web developers and why it changes how you build software.

AI coding assistants are powerful but blind. They can read your source code, but they can’t see:

  • The runtime error in your browser console
  • The 500 response from your API
  • The layout shift that happens after your component mounts
  • The WebSocket connection that silently drops
  • The third-party script that’s loading slowly

Without this context, every debugging session starts with you describing the problem to the AI instead of the AI observing it directly. You become a human copy-paste bridge between your browser and your terminal.

MCP eliminates that bridge.

MCP is a JSON-RPC 2.0 protocol with a simple contract:

  1. Servers expose tools (functions the AI can call) and resources (data the AI can read)
  2. Clients (AI assistants like Claude Code, Cursor, Windsurf) discover and invoke those tools
  3. Transport is flexible — stdio pipes, HTTP, or any bidirectional channel

A typical MCP server might expose tools like:

```
observe({what: "errors"})                            → returns browser console errors
generate({format: "test"})                           → generates a Playwright test
configure({action: "health"})                        → returns server status
interact({action: "click", selector: "text=Submit"}) → clicks a button
```

The AI assistant discovers what tools are available, reads their descriptions, and calls them as needed during a conversation. No custom plugin architecture. No vendor-specific API. Just a protocol.
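Under the hood, each tool call is one JSON-RPC 2.0 message. A tools/call request carrying the observe example above looks roughly like this (the id and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "observe",
    "arguments": { "what": "errors" }
  }
}
```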

Before MCP, debugging with AI looked like this:

You: “I’m getting an error when I submit the form.”
AI: “What error? Can you paste the console output?”
You: [switches to browser, opens DevTools, copies error, pastes]
AI: “Can you also show me the network request?”
You: [switches to Network tab, finds request, copies, pastes]

With an MCP server like Gasoline connected:

You: “I’m getting an error when I submit the form.”
AI: [calls observe({what: "errors"})] “I can see the TypeError. The API returned a 422 because the email field is missing from the request body. Let me check the form handler…”

The AI skips the back-and-forth and goes straight to diagnosing.

MCP tools compose naturally. An AI assistant with a browser MCP server and a filesystem MCP server can:

  1. Observe a runtime error in the browser
  2. Read the relevant source file
  3. Edit the code to fix the bug
  4. Refresh the browser
  5. Verify the error is gone

That’s a complete debugging loop without human intervention beyond the initial request.

Because MCP is a standard protocol, the same server works with every compatible client:

| AI Tool | MCP Support |
| --- | --- |
| Claude Code | Built-in |
| Cursor | Built-in |
| Windsurf | Built-in |
| Claude Desktop | Built-in |
| Zed | Built-in |
| VS Code + Continue | Plugin |

You configure the server once. Every AI tool that speaks MCP can use it.

MCP servers exist for many data sources:

| Category | Examples |
| --- | --- |
| Browser | Gasoline (real-time telemetry, browser control) |
| Filesystem | Read, write, search files |
| Databases | PostgreSQL, SQLite, MongoDB |
| APIs | GitHub, Slack, Jira, Linear |
| DevOps | Docker, Kubernetes, CI/CD |
| Search | Brave Search, web fetch |

The power comes from combining them. A browser MCP server plus a GitHub MCP server means your AI can observe a bug, fix it, and open a PR — all in one conversation.

Not all browser MCP servers are equal. The critical capabilities for web development:

The server should capture browser state as it happens — console logs, network errors, exceptions, WebSocket events — not just static snapshots. When you’re debugging a race condition, you need the sequence of events, not a point-in-time dump.

Observation alone isn’t enough. The AI needs to navigate, click, type, and interact with the page. Otherwise it’s reading but not testing. Semantic selectors (text=Submit, label=Email) are more resilient than CSS selectors that break with every redesign.

Captured session data should translate into useful outputs: Playwright tests, reproduction scripts, accessibility reports, performance summaries. The AI has the data — let it produce the artifacts.

A browser MCP server sees everything — network traffic, form inputs, cookies. It must:

  • Strip credentials before storing or transmitting data
  • Bind to localhost only (no network exposure)
  • Minimize permissions (no broad host access)
  • Keep all data on the developer’s machine

Web Vitals, resource timing, long tasks, layout shifts — performance data should flow alongside error data. The AI shouldn’t need a separate tool to check if the page is fast.

If you want to add browser observability to your AI workflow:

```sh
git clone https://github.com/brennhill/gasoline-mcp-ai-devtools.git
```

Load the extension/ folder as an unpacked Chrome extension.

Add to your MCP config (example for Claude Code’s .mcp.json):

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp"]
    }
  }
}
```

Open your app, restart your AI tool, and ask:

“What browser errors do you see?”

The AI calls observe({what: "errors"}), gets the real-time error list, and starts diagnosing. No copy-paste. No screenshots. No description of the problem. The AI sees it directly.

MCP is still early. The protocol is evolving, new servers appear weekly, and AI tools are deepening their integration. But the direction is clear: AI assistants are becoming aware of their environment, not just their context window.

For web developers, this means the feedback loop between writing code and seeing results gets tighter. The AI sees the browser. The AI sees the error. The AI sees the fix work. All in real time.

That’s what MCP enables. And it’s just getting started.

Gasoline v5.5.0: Rock-Solid MCP Protocol Compliance

Gasoline v5.5.0 is a stability release focused on MCP protocol compliance. If you experienced “Unexpected end of JSON input” errors or connection issues with Claude Desktop or Cursor, this release fixes them all.

The Problem: Intermittent Connection Failures

Section titled “The Problem: Intermittent Connection Failures”

Users reported sporadic errors when connecting to Gasoline via Claude Desktop:

[error] Unexpected end of JSON input

The MCP server appeared to be working — valid JSON responses were logged — but immediately after each response, a parse error occurred.

Our investigation uncovered three distinct MCP protocol violations:

1. Doubled newlines between messages. Go’s json.Encoder.Encode() adds a trailing newline to JSON output. Our stdio bridge then called fmt.Println(), adding a second newline. The empty line between messages was parsed as an empty JSON message, causing the parse error.

Fix: Changed fmt.Println(string(body)) to fmt.Print(string(body)) in the bridge — the HTTP response already includes the trailing newline.
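The underlying behavior is easy to reproduce: json.Encoder.Encode terminates every message with a newline of its own, so any additional Println yields a blank line between messages. A small demonstration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// encodeMessage marshals one JSON-RPC message the way a stdio bridge
// would: json.Encoder.Encode always appends exactly one trailing newline.
func encodeMessage(v any) string {
	var buf bytes.Buffer
	json.NewEncoder(&buf).Encode(v) // error ignored in this sketch
	return buf.String()
}

func main() {
	msg := encodeMessage(map[string]any{"jsonrpc": "2.0", "id": 1})
	fmt.Printf("%q\n", msg) // note the trailing \n that Encode added

	// The bug: fmt.Println(msg) would append a second newline, and the
	// blank line between messages parses as an empty (invalid) message.
	// The fix: fmt.Print(msg) writes exactly one newline per message.
	fmt.Print(msg)
}
```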

2. Responses sent to notifications. JSON-RPC 2.0 notifications (requests without an id field) must not receive responses. We were responding to notifications/initialized with an empty response, violating the spec.

Fix: Notifications now return nil from the handler and receive no response. HTTP transport returns 204 No Content.
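In sketch form (simplified from the real dispatch code), the handler's contract looks like this: a message without an id is a notification and produces no bytes at all:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type request struct {
	JSONRPC string           `json:"jsonrpc"`
	ID      *json.RawMessage `json:"id,omitempty"`
	Method  string           `json:"method"`
}

// handle returns the serialized response for a request, or nil for a
// notification (or malformed input, in this simplified sketch).
func handle(raw []byte) []byte {
	var req request
	if err := json.Unmarshal(raw, &req); err != nil || req.ID == nil {
		return nil // notification: per JSON-RPC 2.0, send nothing back
	}
	resp, _ := json.Marshal(map[string]any{
		"jsonrpc": "2.0",
		"id":      req.ID, // echo the caller's id, never null
		"result":  map[string]any{},
	})
	return resp
}

func main() {
	noReply := handle([]byte(`{"jsonrpc":"2.0","method":"notifications/initialized"}`))
	fmt.Println(noReply == nil) // true: notifications get no response
}
```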

3. Truncated final message. The stdio bridge could exit before the final response was written to stdout, truncating the last message.

Fix: Implemented an exit gate pattern — the process waits for any pending responses to flush before exiting.
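An exit gate can be as simple as a WaitGroup that shutdown blocks on. This is a sketch of the pattern, not the bridge's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// exitGate tracks in-flight response writes so shutdown can wait for
// them, ensuring the process never exits with a half-written message.
type exitGate struct{ wg sync.WaitGroup }

// track runs a write asynchronously and registers it with the gate.
func (g *exitGate) track(write func()) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		write()
	}()
}

// wait blocks until every tracked write has completed.
func (g *exitGate) wait() { g.wg.Wait() }

func main() {
	var g exitGate
	done := make(chan string, 1)
	g.track(func() { done <- "response flushed" })
	g.wait() // gate: no exit until pending writes finish
	fmt.Println(<-done)
}
```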

v5.5.0 adds 10 new Go tests that verify MCP protocol compliance:

  • TestMCPProtocol_ResponseNewlines — exactly one trailing newline per response
  • TestMCPProtocol_NotificationNoResponse — notifications receive no response
  • TestMCPProtocol_JSONRPCStructure — valid JSON-RPC 2.0 structure
  • TestMCPProtocol_IDNeverNull — response ID is never null (Cursor requirement)
  • TestMCPProtocol_ErrorCodes — standard JSON-RPC error codes
  • TestMCPProtocol_InitializeResponse — MCP initialize handshake
  • TestMCPProtocol_ToolsListStructure — tools/list response format
  • TestMCPProtocol_HandlerUnit — handler method dispatch
  • TestMCPProtocol_HTTPHandler — HTTP transport compliance
  • TestMCPProtocol_BridgeCodeVerification — static analysis of bridge code

These tests are intentionally spec-anchored — they verify the MCP specification, not implementation details — so any future change that breaks MCP compliance will fail them.

The GitHub API version check now fails silently. Previously, rate limit errors (403) would log warnings even though version checking is non-critical.

All npm packages prior to v5.5.0 have been deprecated. Users installing old versions will see a warning directing them to upgrade.

```sh
npx gasoline-mcp@5.5.0
```

Or update your MCP configuration:

```json
{
  "mcpServers": {
    "gasoline": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "gasoline-mcp@5.5.0", "--port", "7890", "--persist"]
    }
  }
}
```

GitHub Release