
how-to

4 posts with the tag “how-to”

How to Debug CORS Errors with AI Using Gasoline MCP

CORS errors are the most misleading errors in web development. The browser tells you “access has been blocked” — but the actual problem could be a missing header, a wrong origin, a preflight failure, a credentials mismatch, or a server that’s simply crashing and returning a 500 without CORS headers.

Here’s how to use Gasoline MCP to let your AI assistant see the full picture and fix CORS issues in minutes instead of hours.

The browser console shows you something like:

Access to fetch at 'https://api.example.com/users' from origin 'http://localhost:3000'
has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present
on the requested resource.

This tells you what happened but not why. Common causes:

  1. The server doesn’t send CORS headers at all — needs configuration
  2. The server sends the wrong origin — * doesn’t work with credentials
  3. The preflight OPTIONS request fails — the server doesn’t handle OPTIONS
  4. The server errors out — a 500 response won’t have CORS headers either
  5. A proxy strips headers — nginx, Cloudflare, or your reverse proxy eats the headers
  6. Credentials mode mismatch — withCredentials: true requires an explicit origin, not *
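
The branching above can be sketched as a small decision helper. This is a hypothetical function, not part of Gasoline; it assumes response header names have been lowercased, and it covers only four of the six causes (the preflight and proxy cases need the waterfall, not a single response).

```javascript
// Hypothetical helper: map the evidence (origin, status, response headers,
// credentials mode) to one of the common causes listed above.
function diagnoseCors({ origin, status, headers, credentials }) {
  const allow = headers['access-control-allow-origin']; // assumes lowercased keys
  if (status >= 500) return 'server error';        // cause 4: 5xx often skips CORS middleware
  if (allow === undefined) return 'no CORS headers';   // cause 1: server not configured
  if (credentials && allow === '*') return 'credentials mismatch'; // cause 6
  if (allow !== '*' && allow !== origin) return 'wrong origin';    // cause 2
  return 'headers look consistent';
}
```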

Chrome DevTools shows the failed request in the Network tab, but the response body is hidden for CORS-blocked requests. You can’t see what the server actually returned. You’re debugging blind.

With Gasoline connected, your AI can see the error, the network request details, and the response headers — everything needed to diagnose the root cause.

observe({what: "errors"})

The AI sees the CORS error message with the exact URL, origin, and which header is missing.

observe({what: "network_bodies", url: "/api/users"})

This shows the full request/response pair:

  • Request headers — the Origin header the browser sent
  • Response headers — whether Access-Control-Allow-Origin is present, and what value it has
  • Response status — is it a 200 with missing headers, or a 500 that also lacks headers?
  • Response body — the actual error payload (which Chrome hides for CORS failures)

observe({what: "network_waterfall", url: "/api/users"})

The waterfall shows if there are two requests — the preflight OPTIONS and the actual request. If the OPTIONS request fails or returns the wrong status, the browser never sends the real request.

observe({what: "timeline", include: ["network", "errors"]})

The timeline shows the sequence: did the preflight succeed? Did the main request fire? When did the error appear relative to the request? This catches timing-related CORS issues like the server sending headers on GET but not POST.

What the AI sees: Request to api.example.com, response status 200, no Access-Control-Allow-Origin header.

The fix: Add CORS headers to the server. The AI can look at your server code and add the appropriate middleware:

// Express (requires the cors package)
const cors = require('cors');
app.use(cors({ origin: 'http://localhost:3000' }));

// Go
w.Header().Set("Access-Control-Allow-Origin", "http://localhost:3000")

# Nginx
add_header 'Access-Control-Allow-Origin' 'http://localhost:3000';
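
A framework-free version of the same fix can be sketched with the Node standard library. This is a hypothetical route, not Gasoline code: it sets the CORS headers on every response and answers the preflight before any route logic runs.

```javascript
// Minimal sketch (Node stdlib only, hypothetical handler): CORS headers on
// every response, with the OPTIONS preflight short-circuited to a 204.
function handleRequest(req, res) {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:3000');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    res.writeHead(204); // preflight handled: empty body, headers already set
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ users: [] }));
}

// In a real server: require('http').createServer(handleRequest).listen(8080);
```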

What the AI sees: Request to /api/users, response status 500, body contains {"error": "database connection failed"}, no CORS headers.

The real problem: The server is crashing, and crash responses don’t go through the CORS middleware. The CORS error is a red herring.

This is why seeing the response body matters. Without Gasoline, you’d spend an hour debugging CORS headers when the actual issue is a database connection string.
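
One way to make crash responses survive CORS checks is to set the header before the route logic can fail. A hedged sketch, with a hypothetical withCors wrapper standing in for whatever error-handling middleware your framework provides:

```javascript
// Sketch: wrap a handler so the CORS header is set even when it throws.
function withCors(handler, origin) {
  return function (req, res) {
    // Header first: even the 500 below will pass the browser's CORS check.
    res.setHeader('Access-Control-Allow-Origin', origin);
    try {
      handler(req, res);
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: err.message }));
    }
  };
}
```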

What the AI sees: Two requests in the waterfall — an OPTIONS request returning 404, and no follow-up request.

The fix: The server doesn’t handle OPTIONS requests for that route. Add an OPTIONS handler or configure your framework’s CORS middleware to handle preflight requests.

What the AI sees: Response has Access-Control-Allow-Origin: *, request has credentials: include. Error says “wildcard cannot be used with credentials.”

The fix: Replace * with the specific origin. The AI can read the Origin header from the request and configure the server to echo it back (with a whitelist).
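
The echo-with-whitelist pattern can be sketched as a pure function. The allowed-origins set here is an invented example; the point is to never reflect an arbitrary Origin header back.

```javascript
// Hypothetical whitelist: echo the request Origin only when it is allowed.
// With credentials, the browser rejects '*', so the server must return the
// exact origin or no header at all.
const ALLOWED_ORIGINS = new Set([
  'http://localhost:3000',
  'https://app.example.com',
]);

function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```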

What the AI sees: Server code sends CORS headers (the AI can read the source), but the response in the browser doesn’t have them.

Diagnosis: Something between the server and browser is stripping headers. The AI checks nginx configs, Cloudflare settings, or reverse proxy configuration.

Here’s what it looks like end-to-end:

You: “I’m getting a CORS error when calling the API.”

The AI:

  1. Calls observe({what: "errors"}) — sees the CORS error with URL and origin
  2. Calls observe({what: "network_bodies", url: "/api"}) — sees the actual response (a 500 with a database error)
  3. Reads the server code — finds the missing error handler that skips CORS middleware
  4. Fixes the error handler to pass through CORS middleware even on errors
  5. Calls interact({action: "refresh"}) — reloads the page
  6. Calls observe({what: "errors"}) — confirms the CORS error is gone

Total time: 2 minutes. No manual DevTools inspection. No guessing about headers. No Stack Overflow rabbit holes.

Chrome DevTools has a fundamental limitation for CORS debugging: it hides the response body for CORS-blocked requests. The Network tab shows the request was blocked, but you can’t see what the server actually returned.

This means you can’t tell the difference between:

  • A correctly configured server that’s missing one header
  • A server that’s completely crashing and returning a 500

Gasoline captures the response at the network level before CORS enforcement, so the AI sees everything — headers, body, status code. The diagnosis goes from “something is wrong with CORS” to “the server returned a 500 because the database is down, and the error handler doesn’t set CORS headers.”

Check the timeline, not just the error. CORS errors sometimes cascade — one failed preflight blocks ten subsequent requests. The timeline shows the cascade pattern so you fix the root cause, not the symptoms.

Look at both staging and production headers. CORS works in staging with * but breaks in production with credentials? The network bodies show exactly which headers each environment returns.

Watch for mixed HTTP/HTTPS. http://localhost:3000 and https://localhost:3000 are different origins. The AI’s transport security check (observe({what: "security_audit", checks: ["transport"]})) catches this mismatch.

Use error_bundles for context. observe({what: "error_bundles"}) returns the CORS error along with the correlated network request and recent actions — everything in one call instead of three.

How to Debug React and Next.js Apps with AI Using Gasoline MCP

React and Next.js applications have a unique set of debugging challenges — hydration mismatches, stale closures, useEffect dependency bugs, SSR/client divergence, and API route failures. Your AI coding assistant can fix all of these faster if it can actually see your browser.

Here’s how Gasoline MCP gives your AI the runtime context it needs to debug React and Next.js apps effectively.

What Makes React/Next.js Debugging Different


React errors are notoriously unhelpful:

Uncaught Error: Minified React error #418

Even in development mode, React errors like “Cannot update a component while rendering a different component” don’t tell you which component or what triggered the update. And Next.js adds its own layer of complexity:

  • Hydration mismatches — server HTML differs from client render
  • SSR errors — server-side code fails but the page looks fine on the client
  • API route failures — /api/* routes return 500s that the client silently swallows
  • Middleware issues — redirects and rewrites that happen before the page loads
  • Client/server boundary confusion — "use client" and "use server" scope mistakes

Your AI assistant can read your source code, but without browser data it can’t see what’s actually happening at runtime.

observe({what: "errors"})

Your AI sees every console error with the full message, stack trace, and source file location. For minified builds, Gasoline resolves source maps — so even in production, the AI sees the original component name and line number.

Most React bugs involve data:

observe({what: "error_bundles"})

Error bundles return each error with its correlated context — the network requests that happened around the same time, the user actions that preceded it, and relevant console logs. One call gives the AI the complete picture:

  • The error: TypeError: Cannot read properties of undefined (reading 'map')
  • The API call: GET /api/products → 200, but the response body was { products: null } instead of { products: [] }
  • The user action: Clicked “Load More” button

The AI immediately knows: the API returned null where the component expected an array.
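
The resulting fix is usually a small normalization step at the API boundary. A sketch (hypothetical helper name):

```javascript
// Tolerate { products: null } or a missing body: always hand the component
// an array it can .map() over.
function normalizeProducts(body) {
  return Array.isArray(body?.products) ? body.products : [];
}
```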

For race conditions and ordering issues:

observe({what: "timeline"})

The timeline shows actions, network requests, and errors in chronological order. This reveals:

  • Components that fetch data before mounting
  • Effects that fire in unexpected order
  • Network requests that resolve after the component unmounts

Symptom: “Text content does not match server-rendered HTML” or “Hydration failed because the initial UI does not match.”

observe({what: "errors"})

The AI sees the hydration warning with the mismatched content. Common causes:

  • Using Date.now() or Math.random() during render (different on server vs client)
  • Checking window or localStorage during initial render
  • Conditional rendering based on typeof window !== 'undefined'

The AI can find the component, identify the non-deterministic code, and move it into a useEffect or behind a suppressHydrationWarning.
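
Why the useEffect move works can be illustrated without React at all. In this sketch, a hypothetical render function stands in for the component: the first client render must produce exactly the server's output, so the non-deterministic value is filled in only after mount (where useEffect runs in real code).

```javascript
// No-React illustration of the hydration-safe pattern.
function render(state) {
  return state.now === null ? '<time></time>' : `<time>${state.now}</time>`;
}

const serverHtml = render({ now: null });       // what the server streamed
const hydrationHtml = render({ now: null });    // first client render: identical
const afterMount = render({ now: Date.now() }); // an effect fills this in later
```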

Symptom: A feature silently fails. No error in the UI, but the data is wrong.

observe({what: "network_bodies", url: "/api"})

The AI sees every API route call with the full request and response body. A 500 response from /api/checkout with {"error": "STRIPE_KEY is undefined"} tells the AI exactly what’s wrong — an environment variable isn’t set.

Symptom: The component re-renders endlessly, or an effect doesn’t fire when it should.

observe({what: "network_waterfall", url: "/api"})

If an effect with a missing dependency is refetching on every render, the waterfall shows dozens of identical API calls in rapid succession. The AI sees the pattern and checks the effect’s dependency array.

Symptom: “Can’t perform a React state update on an unmounted component.”

observe({what: "timeline", include: ["actions", "errors", "network"]})

The timeline shows: user navigates away → API call from the previous page resolves → state update on the now-unmounted component. The AI adds cleanup logic to the effect.
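
The cleanup logic boils down to a liveness flag. A sketch of the pattern outside React (makeEffect is a hypothetical name; in a real component, the returned cleanup function is what the effect returns):

```javascript
// Ignore async results that arrive after "unmount".
function makeEffect(applyState) {
  let alive = true;
  return {
    onResult: (data) => { if (alive) applyState(data); }, // guarded state update
    cleanup: () => { alive = false; },                    // run on unmount
  };
}
```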

Symptom: Page transitions feel sluggish.

observe({what: "vitals"})
observe({what: "performance"})

The AI checks INP (responsiveness) and long tasks. If client-side navigation triggers heavy re-renders, the performance snapshot shows the blocking time. The AI can suggest React.memo, useMemo, code splitting, or moving work to a Web Worker.

Server components run on the server and stream HTML to the client. Errors in server components don’t always appear in the browser console.

observe({what: "network_bodies", url: "/"})

The response body for a Next.js page includes the serialized server component tree. If a server component throws, the error boundary HTML is visible in the response.

Next.js middleware runs before the page loads. If a redirect or rewrite misbehaves:

observe({what: "network_waterfall"})

The waterfall shows every request including redirects (301, 307, 308). The AI can see if middleware is redirecting to the wrong URL or creating redirect loops.

Next.js <Image> component can cause CLS if dimensions aren’t right:

observe({what: "vitals"}) // Check CLS
configure({action: "query_dom", selector: "img"}) // Check image dimensions

After adding a new dependency:

observe({what: "network_waterfall"})
observe({what: "performance"})

The network summary shows total JavaScript transfer size. If it jumped from 300KB to 800KB, the waterfall identifies which new bundles appeared.

You: “The product page is broken — it shows a blank screen after I click ‘Add to Cart’.”

The AI:

  1. Calls observe({what: "error_bundles"}) — sees a TypeError: Cannot read properties of undefined (reading 'quantity') correlated with POST /api/cart → 201 that returned {item: {id: 5}} (no quantity field)

  2. Reads the cart component — finds cartItem.quantity.toString() without null checking

  3. Checks the API route — finds the response omits quantity for new items (it defaults to 1 on the backend but isn’t serialized)

  4. Fixes both: adds quantity to the API response and adds a fallback in the component

  5. Calls interact({action: "refresh"}) then observe({what: "errors"}) — confirms zero errors

Total time: 3 minutes. No manual DevTools inspection. No reproducing the bug by clicking through the UI.

Use error_bundles as your first call. It returns errors with their network and action context in one shot — faster than calling errors, then network_bodies, then actions separately.

Check the waterfall after deploys. New React bundles, changed chunk names, and different loading order are all visible in the network waterfall. The AI spots unexpected changes immediately.

Profile page transitions. Use interact({action: "navigate", url: "/products"}) to trigger a client-side navigation. The perf_diff shows the performance impact of that navigation including any heavy re-renders.

For SSR issues, check response bodies. The HTML response for a Next.js page contains the server-rendered markup. If something is wrong on the server side, it’s visible in the network body before hydration even starts.

How to Debug WebSocket Connections in 2026

WebSocket debugging in Chrome DevTools is painful. You get a flat list of frames, no filtering, no search, no way to correlate messages with application state, and if you close the tab, everything is gone.

For real-time applications — chat, live dashboards, collaborative editors, trading platforms — you need better tools. Here’s the modern approach using AI-assisted debugging.

The Problem with DevTools WebSocket Debugging


Open Chrome DevTools, go to the Network tab, filter by WS, click on your connection, and look at the Messages tab. That’s the entire experience. Here’s what’s missing:

No filtering by message type. If your WebSocket sends 10 message types (chat, typing indicators, presence updates, notifications), you can’t filter to just one. You scroll through hundreds of messages hunting for the one you need.

No directional filtering. You can’t show only incoming or only outgoing messages without reading every row.

No correlation. When a WebSocket message causes an error, there’s no link between the Network tab and the Console tab. You’re manually matching timestamps.

No persistence. Navigate away or refresh, and the WebSocket data is gone. You can’t compare messages across page loads.

No AI access. Even if you find the problematic message, you can’t easily get it to your AI assistant. You’re back to copy-pasting.

With Gasoline MCP, your AI can observe WebSocket traffic directly, filter it, correlate it with errors, and diagnose issues without you touching DevTools.

observe({what: "websocket_status"})

The AI immediately knows:

  • How many WebSocket connections are open
  • Their URLs and states (connecting, open, closed, error)
  • Message rates per connection
  • Total messages sent and received
  • Inferred message schemas (if JSON)

observe({what: "websocket_events", direction: "incoming", last_n: 20})

The AI sees the actual message payloads, filtered to just what’s relevant. No scrolling through thousands of frames.

observe({what: "timeline", include: ["websocket", "errors"]})

The timeline shows WebSocket events and console errors chronologically. The AI sees: “The user_presence message arrived at 14:23:05.123, and a TypeError occurred at 14:23:05.125 — the presence handler is crashing.”

Your real-time dashboard stopped updating. No error in the console. The data just went stale.

You: “The dashboard stopped getting live updates.”

The AI calls observe({what: "websocket_status"}) and sees:

Connection ws-1: wss://api.example.com/live
State: closed
Close code: 1006 (abnormal closure)
Messages received: 3,847
Last message: 2 minutes ago

Close code 1006 means the connection dropped without a proper close handshake — likely a network interruption or server crash. The AI checks:

observe({what: "websocket_events", connection_id: "ws-1", last_n: 5})

The last messages were normal data frames, then nothing. No close frame from the server. The AI looks at the client-side reconnection logic and finds it has a bug — it tries to reconnect but uses the wrong URL after a server failover.
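
The corrected reconnect logic might look like this. The state shape (failoverUrl, originalUrl) is invented for illustration; the essentials are reconnecting to the most recently announced endpoint and backing off exponentially with a cap.

```javascript
// Sketch: pick the reconnect target and delay for a given attempt number.
function reconnectPlan(state, attempt) {
  return {
    url: state.failoverUrl ?? state.originalUrl, // prefer the failover endpoint
    delayMs: Math.min(30000, 500 * 2 ** attempt), // 500ms, 1s, 2s, ... capped at 30s
  };
}
```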

After a backend deploy, the chat stops working. Messages send but nothing appears.

The AI calls observe({what: "websocket_events", direction: "outgoing", last_n: 5}):

{"type": "message", "payload": {"text": "hello", "room": "general"}}

Then observe({what: "websocket_events", direction: "incoming", last_n: 5}):

{"type": "error", "code": "INVALID_PAYLOAD", "message": "missing field: channel"}

The backend renamed room to channel but the frontend still sends room. The AI finds the mismatch, updates the frontend, and the chat works again.
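
A cheap guard against this class of bug is validating outgoing frames against the backend's expected fields, so a renamed field fails loudly in development instead of silently in production. A sketch, with an assumed one-entry schema:

```javascript
// Hypothetical contract: messages of type "message" must carry these fields.
const MESSAGE_SCHEMA = { message: ['text', 'channel'] };

function validateFrame(frame) {
  const required = MESSAGE_SCHEMA[frame.type] ?? [];
  const missing = required.filter((f) => !(f in (frame.payload ?? {})));
  return { ok: missing.length === 0, missing };
}
```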

The page slows down when connected to the WebSocket. CPU usage spikes.

observe({what: "websocket_status"})
Connection ws-2: wss://api.example.com/stream
State: open
Incoming rate: 340 msg/sec
Total messages: 48,291

340 messages per second is flooding the client. The AI checks:

observe({what: "vitals"})

INP is 890ms — the main thread is completely blocked processing messages. The AI looks at the message handler, finds it’s updating React state on every message (triggering a re-render 340 times per second), and refactors it to batch updates with requestAnimationFrame or useDeferredValue.
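
The batching refactor can be sketched as a small buffer that flushes once per frame instead of once per message. makeBatcher is a hypothetical name; in the browser, flush() would be driven by requestAnimationFrame.

```javascript
// Buffer incoming frames; apply them as one batch per flush.
function makeBatcher(apply) {
  let buffer = [];
  return {
    push(msg) { buffer.push(msg); },       // called 340 times/sec is fine
    flush() {                              // called once per animation frame
      if (buffer.length) { apply(buffer); buffer = []; }
    },
  };
}
```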

WebSocket connections fail immediately after a deploy.

observe({what: "websocket_events", last_n: 10})

Shows open followed immediately by close with code 1008 (policy violation). The AI checks the server’s WebSocket authentication — the new deploy requires a different auth token format, but the client is sending the old format.

The most powerful pattern: combining WebSocket data with error tracking.

observe({what: "error_bundles"})

Error bundles include WebSocket events in the correlation window. When a WebSocket message triggers a JavaScript error, the AI sees both together:

  • Error: TypeError: Cannot read properties of undefined (reading 'user')
  • Correlated WebSocket message: {"type": "presence_update", "data": null} (arrived 50ms before the error)
  • User action: None (this was server-pushed)

The AI knows the server sent a presence_update with null data, and the handler doesn’t check for null. One fix: add a null guard in the handler. Better fix: also fix the server so it doesn’t send null presence data.
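
The client-side half of that fix is a one-line null guard. A sketch (handlePresence is a hypothetical handler name):

```javascript
// Guard against {"type": "presence_update", "data": null} from the server.
function handlePresence(msg) {
  if (!msg?.data?.user) return null; // ignore null or malformed presence data
  return msg.data.user;
}
```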

Real-time features are everywhere in 2026:

  • AI chat interfaces with streaming responses
  • Collaborative editing (Notion, Figma, Google Docs style)
  • Live dashboards and monitoring
  • Multiplayer applications
  • Real-time notifications

These applications live and die by their WebSocket connections. A dropped connection means lost messages. A format change means silent failures. A flooding server means frozen UIs.

DevTools hasn’t evolved to match. The WebSocket debugging experience in Chrome is fundamentally the same as it was in 2018. Meanwhile, applications have moved from “we have one WebSocket for notifications” to “we have five WebSocket connections handling different data streams.”

AI-assisted debugging — where the AI can filter, correlate, and diagnose WebSocket issues programmatically — is the first real advancement in WebSocket debugging in years.

  1. Install Gasoline (Quick Start)
  2. Open your real-time application
  3. Ask your AI: “Show me all active WebSocket connections and their status.”

Your AI calls observe({what: "websocket_status"}) and you’re debugging WebSockets without opening DevTools.

How to Fix Slow Web Vitals with AI Using Gasoline MCP

Your Core Web Vitals are red. LCP is 4.2 seconds. CLS is 0.35. Google Search Console is sending angry emails. Lighthouse gives you a list of suggestions, but they’re generic — “reduce unused JavaScript” doesn’t tell you which JavaScript or why it’s slow.

Here’s how to use Gasoline MCP to give your AI assistant real-time performance data, so it can identify exactly what’s wrong and fix it.

The Problem with Traditional Performance Tools


Lighthouse runs a synthetic test on a throttled connection. It’s useful for benchmarking but disconnects from your actual development experience:

  • It’s a snapshot, not real-time — you fix something, re-run Lighthouse, wait 30 seconds, check the score, repeat
  • Suggestions are generic — “eliminate render-blocking resources” doesn’t tell you which stylesheet is the problem
  • No before/after — you can’t easily compare metrics across changes
  • No correlation — it doesn’t connect slow performance to specific code changes or network requests

Gasoline solves all four problems.

observe({what: "vitals"})

Your AI gets the real numbers immediately:

| Metric | Value | Rating |
| ------ | ----- | ------ |
| FCP | 2.1s | needs_improvement |
| LCP | 4.2s | poor |
| CLS | 0.35 | poor |
| INP | 280ms | needs_improvement |

No waiting for Lighthouse. No throttled simulation. These are the real metrics from your real browser on your real page.

observe({what: "performance"})

This returns everything — not just vitals, but the full diagnostic picture:

Navigation timing: TTFB, DomContentLoaded, Load event — shows where time is spent during page load.

Network summary by type: How many scripts, stylesheets, images, and fonts loaded. Total transfer size and decoded size per category. Your AI can immediately see “you’re loading 2.1MB of JavaScript across 47 files.”

Slowest requests: The top resources by duration. If a single API call takes 3 seconds, it shows up here.

Long tasks: JavaScript execution that blocks the main thread for more than 50ms. The count, total blocking time, and longest task. If INP is bad, this is where you find out why.

LCP measures when the main content becomes visible. Common causes of slow LCP:

High TTFB: If time_to_first_byte is over 800ms, the server is the bottleneck. The AI checks your server code, database queries, or caching configuration.

Render-blocking resources: The network waterfall shows which scripts and stylesheets load before content paints:

observe({what: "network_waterfall"})

The AI looks for CSS and JavaScript files with early start_time and long duration. These are the render-blocking resources. The fix: defer non-critical scripts, inline critical CSS, use media attributes on non-essential stylesheets.

Large hero images: If the LCP element is an image, the performance snapshot shows its transfer size. A 2MB uncompressed PNG as the hero image? The AI suggests WebP, proper sizing, and fetchpriority="high".

Late-loading content: If FCP is fast but LCP is slow, the main content loads late — maybe behind an API call or a client-side render. The timeline shows the gap:

observe({what: "timeline", include: ["network"]})

CLS measures visual stability. Things that cause layout shifts:

Images without dimensions: An <img> without width and height causes the browser to reflow when the image loads. The AI can audit your images:

configure({action: "query_dom", selector: "img"})

Dynamic content insertion: Ads, banners, or lazy-loaded content that pushes existing content down. The timeline shows when shifts happen relative to network requests.

Font loading: Web fonts that cause text to resize. The AI checks for font-display: swap or font-display: optional in your CSS.

CSS without containment: The AI can check if your dynamic containers use contain: layout or explicit dimensions.

INP measures the worst-case responsiveness to user input. If INP is high, the main thread is busy when the user interacts.

Long tasks are the smoking gun: The performance snapshot shows total blocking time and the longest task. If you have 800ms of blocking time from 12 long tasks, the AI knows exactly what to target.

Heavy event handlers: The AI can read your click and input handlers to find expensive operations (DOM manipulation, synchronous computation, large state updates) that should be deferred or moved to a Web Worker.

Third-party scripts: The network waterfall shows which third-party scripts are loading and how long their execution takes:

observe({what: "third_party_audit"})

A third-party analytics script running 200ms of JavaScript on every page load directly impacts INP.

This is where Gasoline shines. After the AI makes a change:

interact({action: "refresh"})

Gasoline automatically captures before and after performance snapshots and computes a diff. The result includes:

  • Per-metric comparison: LCP went from 4200ms to 2800ms (-33%, improved, rating: needs_improvement)
  • Resource changes: “Removed analytics-v2.js (180KB), resized bundle.js from 450KB to 320KB”
  • Verdict: “improved” — more metrics got better than worse

The AI says: “LCP improved from 4.2s to 2.8s after removing the synchronous analytics script. CLS dropped from 0.35 to 0.08 after adding image dimensions. INP is still 250ms — let me look at the long tasks.”

No re-running Lighthouse. No waiting. Instant feedback.

If INP is the remaining problem, profile the actual interactions:

interact({action: "click", selector: "text=Load More", analyze: true})

The analyze: true parameter captures before/after performance around that specific click. The AI sees exactly how much main-thread time that button click consumes.

When you’re done optimizing:

generate({format: "pr_summary"})

This produces a before/after performance summary suitable for your pull request description — showing stakeholders exactly what improved and by how much.

Here’s a real workflow condensed:

Initial vitals: LCP 5.1s, CLS 0.42, INP 380ms

AI diagnosis:

  1. Network waterfall shows 3.2MB of JavaScript across 62 requests
  2. TTFB is 1.8s — slow API call blocks server-side rendering
  3. Five images without width/height attributes cause CLS
  4. Long tasks total 1.2s of blocking time — mostly from a charting library initializing synchronously

AI fixes:

  1. Adds loading="lazy" to below-fold charts, defers non-critical scripts → JS drops to 1.4MB initial
  2. Adds Redis caching to the slow API endpoint → TTFB drops to 200ms
  3. Adds explicit dimensions to all images → CLS drops to 0.02
  4. Wraps chart initialization in requestIdleCallback → blocking time drops to 180ms
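
Fix 4 can be sketched as a small deferral helper. This assumes a hypothetical initCharts function; the scheduler parameter is there so the fallback path is testable, and setTimeout covers environments without requestIdleCallback.

```javascript
// Defer expensive initialization off the critical path.
function deferInit(initCharts, schedule) {
  const idle = schedule
    ?? (typeof requestIdleCallback === 'function'
      ? requestIdleCallback            // browser: run when the main thread is idle
      : (fn) => setTimeout(fn, 0));    // fallback: next tick
  idle(initCharts);
}
```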

Final vitals: LCP 1.9s (good), CLS 0.02 (good), INP 150ms (good)

Total time: One conversation, about 20 minutes. Each fix was verified immediately with perf_diff.

| | Lighthouse | Gasoline |
| --- | --- | --- |
| Speed | 30s synthetic run per check | Real-time, instant |
| Comparison | Manual before/after | Automatic perf_diff |
| Diagnosis | Generic suggestions | Your actual bottlenecks |
| Fix cycle | Run → fix → re-run → check | Fix → refresh → see diff |
| Context | Score and suggestions | Full waterfall, timeline, long tasks |
| Integration | Separate tool | Same terminal as your AI assistant |

Lighthouse tells you your LCP is 4.2 seconds and suggests “reduce unused JavaScript.” Gasoline tells your AI that analytics-v2.js (180KB) loads synchronously in the head, blocks FCP by 800ms, and can be deferred without breaking anything.

Set budgets in .gasoline.json to catch regressions automatically:

{
  "budgets": {
    "default": {
      "lcp_ms": 2500,
      "cls": 0.1,
      "inp_ms": 200,
      "total_transfer_kb": 500
    },
    "routes": {
      "/": { "lcp_ms": 2000 },
      "/dashboard": { "lcp_ms": 3000, "total_transfer_kb": 800 }
    }
  }
}

When any metric exceeds its budget, the AI gets an alert. Regressions are caught during development, not after deploy.
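
The check this implies is straightforward to sketch. The helper below is hypothetical (not Gasoline's actual implementation) and hard-codes the "default" budget from the config above:

```javascript
// Return the names of metrics that exceed their budget.
const BUDGET = { lcp_ms: 2500, cls: 0.1, inp_ms: 200, total_transfer_kb: 500 };

function overBudget(metrics) {
  return Object.entries(BUDGET)
    .filter(([metric, limit]) => metrics[metric] > limit)
    .map(([metric]) => metric);
}
```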

  1. Install Gasoline and connect your AI tool (Quick Start)
  2. Navigate to your slowest page
  3. Ask: “What are the Web Vitals for this page, and what’s causing the worst ones?”

Your AI sees the numbers, identifies the bottlenecks, and starts fixing. Real metrics, real fixes, real-time feedback.