SEO Development: Where SEO and Web Dev Actually Overlap

SEO development is the overlap zone where SEO strategy turns into code: indexability, rendering, schema, Core Web Vitals, redirects, canonicals, and server response. If your developers ignore SEO, your rankings break. If your SEO team ignores how the site is built, nothing they recommend will actually ship.

I’ve sat on both sides of this table for years. I write the ticket, then I’m also the one closing it. Most of the friction between SEO and engineering comes from people speaking past each other, not from either side being wrong. This piece is the translation layer.

What SEO development actually is

SEO development is the engineering work required to make a site rank: clean rendering, correct status codes, valid schema, fast Core Web Vitals, crawlable URLs, and indexable content. It’s not content. It’s not link building. It’s the plumbing underneath both.

Think of it as three layers. The server layer (response times, status codes, redirects). The rendering layer (HTML, JavaScript, hydration). The markup layer (meta tags, schema, internal links). Every SEO problem lives in one of those three layers.

A good SEO developer owns all three. A good SEO strategist knows enough to file the right ticket. That’s the whole job description.

Where SEO and engineering disagree, and who’s usually right

Developers want to ship clean code. SEOs want to rank. These goals overlap about 80% of the time. The other 20% is where the arguments live.

Engineers usually win on: framework choice, database structure, deployment pipeline, server config. SEOs usually win on: URL structure, redirect maps, canonical strategy, schema types, rendering method for indexed pages. Where it gets messy is stuff like JavaScript hydration, client-side routing, and dynamic rendering, because both sides have legitimate concerns.

Rule of thumb I use with Gatilab clients: if it shows up in Google Search Console, the SEO team owns the call. If it never touches the crawler, engineering owns it. That splits 95% of disputes.

Core Web Vitals: what devs need to actually ship

Core Web Vitals are three metrics: LCP (Largest Contentful Paint) under 2.5s, INP (Interaction to Next Paint) under 200ms, CLS (Cumulative Layout Shift) under 0.1. Miss any one of these at the 75th percentile and your “Good” score goes to “Needs Improvement.”

LCP is almost always a hero image problem. Either it’s too big (you shipped a 400KB JPEG when 80KB WebP would do), it’s lazy-loaded when it shouldn’t be (above-the-fold images must eager-load with fetchpriority="high"), or your LCP element is a blocking CSS background instead of a proper tag. Fix those three things and LCP drops by half on most sites.
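A hero image that satisfies all three fixes looks roughly like this sketch (file names and dimensions are placeholders):

```html
<!-- Preload the hero so the browser fetches it before layout settles -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">

<!-- Eager-load above the fold as a proper tag; explicit width/height also prevents CLS -->
<img src="/hero.webp" alt="Product hero"
     width="1200" height="630"
     fetchpriority="high" loading="eager" decoding="async">
```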

INP replaced FID in March 2024. It measures the slowest interaction on the page, not the first. If your checkout button runs a 400ms JavaScript handler, INP will catch it. The fix is usually chunking long tasks with scheduler.yield() or moving work off the main thread with a Web Worker. Don’t just trust your framework. Measure.
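The chunking fix is small. A sketch, assuming scheduler.yield() where the browser supports it (Chrome 129+) and falling back to setTimeout elsewhere:

```typescript
// Hands control back to the main thread between slices of work so input
// handlers can run. Uses scheduler.yield() when available, setTimeout otherwise.
const yieldToMain = (): Promise<void> =>
  typeof (globalThis as any).scheduler?.yield === "function"
    ? (globalThis as any).scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Processes items in bounded chunks so no single task blocks long enough
// to show up in INP.
async function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 50
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    // One bounded slice of work...
    for (const item of items.slice(i, i + chunkSize)) work(item);
    // ...then yield so a pending click or keypress gets handled.
    await yieldToMain();
  }
}
```

The same pattern applies to a slow checkout handler: split the 400ms of work into slices and yield between them.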

CLS is the easiest to fix and the most often ignored. Every image needs width and height attributes. Every ad slot needs a reserved container. Every font needs font-display: swap with a matched metric override so the fallback doesn’t reflow when the web font loads. I’ve cut CLS from 0.24 to 0.02 on client sites just by auditing font loading.
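The matched metric override looks roughly like this (the percentages are illustrative; generate real values for your fallback font with a tool such as fontaine or capsize):

```css
/* Web font swaps in without reflow because the local fallback is
   size-adjusted to occupy the same space. */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;       /* illustrative values -- compute per font pair */
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}
body { font-family: "Inter", "Inter Fallback", sans-serif; }
```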

JavaScript rendering: the part that breaks SEO quietly

Google renders JavaScript, but not on your timeline. It crawls the HTML first, queues the page for rendering, then comes back hours or days later to see the JS-rendered version. If your content only exists after hydration, you’re asking the crawler to be patient. It often isn’t.

Three rendering approaches actually work for SEO. Server-side rendering (SSR): Next.js in app-router mode, Nuxt, Remix. The HTML is complete on first request. Google indexes it immediately. Static site generation (SSG): same idea, built at deploy time. Fastest to serve. Dynamic rendering: detect the crawler, serve pre-rendered HTML, serve the SPA to real users. This is a fallback, not a strategy. Google officially deprecated the recommendation, but it still works.
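If you do fall back to dynamic rendering, the crawler check is the fragile part. A minimal sketch; the user-agent list here is partial and illustrative, so a real deployment should use a maintained list:

```typescript
// Partial, illustrative list of crawler user-agent substrings.
const BOT_PATTERNS = [
  "googlebot",
  "bingbot",
  "duckduckbot",
  "baiduspider",
  "yandexbot",
];

function isCrawler(userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return BOT_PATTERNS.some((pattern) => ua.includes(pattern));
}

// In a hypothetical Express-style middleware you would branch on it:
// if isCrawler(req.headers["user-agent"]) serve the pre-rendered HTML,
// otherwise serve the SPA shell.
```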

The one to avoid: client-side-only rendering (CSR) with dynamic content. Create React App without SSR. Vue SPAs without Nuxt. Angular without Angular Universal. These ship an empty root div and ask Googlebot to wait. It waits badly.

I tested this on a React SPA client site in Q3 2025. 1,200 product pages, all CSR. Search Console showed only 340 indexed. Migrated to Next.js SSR over three weeks. Six weeks later, 1,147 indexed. Same content, just rendered on the server. That’s the whole lesson.

Schema markup: what to implement, what to skip

Structured data helps search engines understand what your page is about and unlocks rich results. But most teams over-implement it. You don’t need 14 schema types on every page. You need the right one.

High-value schema by page type:

| Page Type | Primary Schema | Secondary Schema | Actual Rich Result? |
|---|---|---|---|
| Blog article | Article | BreadcrumbList, Author (Person) | Top Stories, article cards |
| How-to guide | HowTo (deprecated 2023) | Article + FAQPage | No, use FAQPage instead |
| Product page | Product | AggregateRating, Offer, Review | Yes, price and stars in SERP |
| Local business | LocalBusiness | PostalAddress, OpeningHoursSpecification | Knowledge panel data |
| FAQ page | FAQPage | WebPage | Dropdown in SERP |
| Event | Event | Offer, Place | Event cards |
| Video | VideoObject | Clip, BroadcastEvent | Video rich result |
| Review | Review | Rating, ItemReviewed | Yes (with policy compliance) |

Implementation notes: use JSON-LD in the document head, not microdata woven into the markup. Google prefers JSON-LD and it separates concerns. Validate every schema change at validator.schema.org AND search.google.com/test/rich-results, because they catch different things.
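As a sketch, an Article block in JSON-LD (the URL, dates, and author name are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "SEO Development: Where SEO and Web Dev Actually Overlap",
  "datePublished": "2025-01-15",
  "dateModified": "2025-02-01",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "image": "https://example.com/images/cover.webp"
}
</script>
```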

Skip: Organization schema on every page (once on the homepage is enough), WebSite schema beyond Sitelinks search box, and anything marked “Thing” in schema.org. If the schema type doesn’t unlock a rich result, it’s SEO theater.

URL structure, redirects, and canonicals

URL decisions are hard to undo, so get them right the first time. Short, readable, keyword-aligned, no parameters if you can avoid them. Hyphens, not underscores. Lowercase. Trailing slashes everywhere or nowhere: pick one and 301 the other.

Redirect rules that save ranking:

  1. Use 301 for permanent moves. Never 302 unless it’s genuinely temporary.
  2. Redirect chains lose equity at each hop. Audit quarterly with Screaming Frog. Flatten chains over 2 hops.
  3. When you kill a page, redirect to the closest parent, not the homepage. A homepage redirect tells Google the page is gone but you’re too lazy to map it.
  4. Update internal links to the final URL. Don’t rely on redirects to do cleanup work for you.
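Rule 2 above, flattening chains, is mechanical enough to script. A sketch that resolves every source straight to its final target and refuses to loop (the redirect-map shape is an assumption, not a specific tool's format):

```typescript
// redirects maps source path -> destination path. Flattening rewrites each
// entry to point at the final hop, so no request ever follows more than one 301.
function flattenRedirects(redirects: Record<string, string>): Record<string, string> {
  const flat: Record<string, string> = {};
  for (const source of Object.keys(redirects)) {
    const seen = new Set<string>([source]);
    let target = redirects[source];
    // Follow the chain until the target is no longer itself redirected.
    while (target in redirects) {
      if (seen.has(target)) throw new Error("redirect loop at " + target);
      seen.add(target);
      target = redirects[target];
    }
    flat[source] = target;
  }
  return flat;
}
```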

Canonical tags tell Google which URL is the authoritative version when similar content exists. Rules: self-canonical on every page (points to itself), pagination uses self-canonical (not rel="next/prev" anymore, Google deprecated that in 2019), and cross-domain syndication uses canonical back to the original.

The canonical sin I see weekly: pointing every page to the homepage. It happens when a developer copies a canonical tag from a template and forgets to make it dynamic. Google then thinks your whole site is one page. Catastrophic. Audit with a quick curl-and-grep (the canonical tag lives in the HTML body, so curl -I alone only catches Link headers) or a full crawl every release.
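Catching the template-copy mistake is a one-function audit: pull the canonical out of each page's HTML and compare it to the URL you fetched. A regex-based sketch, fine for auditing your own templates but not a general HTML parser:

```typescript
// Extracts the canonical href from raw HTML, or null if none is present.
// Handles either attribute order: rel before href, or href before rel.
function extractCanonical(html: string): string | null {
  const match =
    html.match(/<link[^>]*rel=["']canonical["'][^>]*href=["']([^"']+)["']/i) ??
    html.match(/<link[^>]*href=["']([^"']+)["'][^>]*rel=["']canonical["']/i);
  return match ? match[1] : null;
}
```

In the audit loop, flag every page where extractCanonical() returns the homepage instead of the page's own URL.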

Server response time and TTFB

Time to First Byte (TTFB) under 600ms is the Google-stated target. Under 200ms is where you want to be. Everything above that compounds into LCP and INP problems downstream.
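Measuring it doesn't need a SaaS tool. A self-contained Node sketch that times how long the response headers take to arrive, pointed here at a demo server whose artificial 100ms delay stands in for slow queries or missing caches:

```typescript
import http from "node:http";
import { performance } from "node:perf_hooks";

// Times from request start until response headers arrive -- a close
// stand-in for TTFB when pointed at a real origin.
function measureTtfb(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = performance.now();
    http
      .get(url, { agent: false }, (res) => {
        resolve(performance.now() - start);
        res.resume(); // drain the body so the socket closes cleanly
      })
      .on("error", reject);
  });
}

// Demo origin with an artificial 100ms server-side delay.
const server = http.createServer((_req, res) => {
  setTimeout(() => res.end("ok"), 100);
});

server.listen(0, async () => {
  const port = (server.address() as { port: number }).port;
  const ttfb = await measureTtfb(`http://127.0.0.1:${port}/`);
  console.log(`TTFB: ${ttfb.toFixed(0)}ms`); // the 100ms server delay dominates
  server.close();
});
```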

What kills TTFB in real sites: unoptimized database queries (the #1 cause on WordPress), no object cache (install Redis or Memcached), uncached dynamic pages (add page caching, even on logged-out traffic), bloated PHP/Node processes (profile with Blackfire, Xdebug, or Node’s built-in profiler), and cheap hosting (a $5/mo shared host will never hit 200ms TTFB, move to a $20/mo VPS minimum).

On WordPress specifically: object caching with Redis cuts query count 80% on a 400-post site. Page caching with a plugin like FlyingPress or WP Rocket cuts TTFB from 800ms to 80ms on cached hits. These two changes alone fix most “my site is slow” tickets without touching code.

Indexability: robots.txt, noindex, and the crawl budget

Indexability is the part SEO teams obsess over and dev teams forget exists. It’s the set of signals that tell Google what to crawl and what to skip. Get it wrong and your pages either don’t get indexed or your crawl budget burns on low-value URLs.

The hierarchy of blocking signals:

  • robots.txt Disallow: don’t crawl (but the URL can still appear in search without content)
  • noindex meta tag: crawl, but don’t index
  • X-Robots-Tag HTTP header: same as noindex but works for non-HTML files (PDFs, images)
  • 401/403 status: blocked by access control
  • 404/410: gone, remove from index
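As a sketch, a sane robots.txt that blocks only low-value paths and leaves CSS and JavaScript crawlable (the paths and domain are placeholders):

```
# robots.txt -- block low-value crawl paths; never block CSS or JS
User-agent: *
Disallow: /cart/
Disallow: /search
Sitemap: https://example.com/sitemap_index.xml
```

For the non-HTML case, the equivalent is a response header, e.g. add_header X-Robots-Tag "noindex" always; in nginx-style syntax.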

Common mistakes I clean up on audits: blocking JavaScript or CSS in robots.txt (Google needs to render the page, let it), using noindex AND robots.txt on the same URL (Google can’t see the noindex if it can’t crawl), and blocking the staging site with robots.txt then forgetting to remove it on launch. That last one happens roughly every third client migration. Always grep the codebase for “Disallow: /” before going live.

Crawl budget only matters if you have more than 100,000 URLs. Below that, Google can crawl your whole site in a day. Above that, clean up. Kill thin tag pages, remove infinite filter combinations, canonicalize paginated duplicates, and make sure 404s return 404, not 200.

Sitemaps: the handshake with search engines

XML sitemaps tell Google what URLs exist, when they changed, and how important they are. They don’t guarantee indexing, but they help discovery, especially for sites with weak internal linking or fresh content.

What belongs in the sitemap: canonical, indexable, 200-status URLs that you actually want in search. That’s it. Not tag pages if they’re noindexed. Not redirect targets. Not 404s. Not pagination URLs past page 1 (most of the time).

Split sitemaps when you have more than 50,000 URLs or 50MB uncompressed. Use a sitemap index file. Submit the index, not every child. For WordPress, Rank Math and Yoast both handle this. For custom stacks, generate at build time or via cron, not on every request.

The lastmod field actually matters now. Google started trusting it again in 2023 after years of ignoring it. Populate it from your CMS’s actual modified date, not the current date. If every URL has today’s lastmod, Google ignores the signal completely.
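Generating the file at build time is a few lines. A sketch; the entry shape is an assumption about your CMS export, and a real generator should also XML-escape URLs:

```typescript
interface SitemapEntry {
  loc: string;     // canonical, indexable, 200-status URL
  lastmod: string; // the CMS's real modified date, ISO 8601 -- never "today"
}

// Builds one <urlset> document. Split into multiple files behind a sitemap
// index once you pass 50,000 URLs or 50MB uncompressed.
function buildSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map((e) => `  <url><loc>${e.loc}</loc><lastmod>${e.lastmod}</lastmod></url>`)
    .join("\n");
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls +
    "\n</urlset>"
  );
}
```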

Internal linking as a dev responsibility

Internal linking is usually punted to the content team, but it’s a dev problem too. Link structure determines how PageRank flows through your site, and it’s hardcoded in navigation, footer, templates, and block components.

What devs should build:

  • Breadcrumbs with BreadcrumbList schema on every page
  • Contextual related-posts modules (not just “more from this category”)
  • Footer links to your highest-converting pages, not your About page
  • A clean taxonomy with no orphan pages (every page reachable in 3 clicks from the homepage)
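The breadcrumb component and its schema should come from the same data so they never drift. A sketch of the schema half (the crumb shape is an assumption about your route data):

```typescript
interface Crumb {
  name: string;
  url: string;
}

// Builds BreadcrumbList JSON-LD from the same trail the visible breadcrumb renders.
function breadcrumbJsonLd(trail: Crumb[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: trail.map((crumb, i) => ({
      "@type": "ListItem",
      position: i + 1, // 1-based, per the schema.org spec
      name: crumb.name,
      item: crumb.url,
    })),
  });
}
```

Drop the output into a script tag with type="application/ld+json" next to the rendered breadcrumb.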

What to avoid: links in JavaScript-only navigation (Googlebot has to render to find them), rel="nofollow" on internal links (you're throwing away your own PageRank), and navigation menus with 200 links stuffed into mega-menus (dilutes every link's weight).

I audit internal linking with Screaming Frog’s “Crawl Depth” report. Any page more than 4 clicks from the homepage is a candidate for better internal linking. For a Gatilab travel-site client, fixing this moved 312 pages from depth 5-7 down to depth 2-3 over six weeks. Organic traffic to those pages went up 41%.

Monitoring and ongoing work

SEO development isn’t one-and-done. Deployments break things. Frameworks change defaults. Schema specs update. You need monitoring.

Tools I actually use: Google Search Console (free, non-negotiable, set up Property verification for every environment), Screaming Frog (desktop crawler, catches 80% of technical issues), Ahrefs Site Audit (continuous monitoring, alerts on regressions), Lighthouse CI (runs on every deploy, fails builds that regress Core Web Vitals), and Sentry or similar for JavaScript errors that might be breaking crawl.

Bake SEO into the PR checklist, not a quarterly audit. If every PR has to pass Lighthouse CI, schema validation, and a link-checker, regressions get caught before they ship. If SEO is only reviewed quarterly, you spend the quarter between audits fighting fires.
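A minimal Lighthouse CI config for the fail-the-build behavior might look like this (the URL and budget numbers are examples to tune per page type, not recommendations):

```json
{
  "ci": {
    "collect": { "url": ["http://localhost:3000/"], "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "interactive": ["warn", { "maxNumericValue": 4000 }]
      }
    }
  }
}
```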

The handoff document that actually works

Every Gatilab engagement ends with a handoff doc. It’s boring. It’s also what keeps the work alive after we leave. Copy this structure.

Sections: current tech stack (framework, host, CDN, rendering method), SEO-critical configs (robots.txt rules, sitemap generation, canonical logic, schema templates), Core Web Vitals targets (per page type, with thresholds), deployment checks (what must pass before merge), and a “known issues” list with links to tickets.

The doc lives in the repo, not in Notion. When the repo is the source of truth, it stays current. When the doc lives somewhere else, it rots within a quarter.

The verdict

SEO development is a specialty, not a side quest. It sits between two disciplines that rarely communicate, and the people who can translate between them are rare, valuable, and busy. If you’re a developer, learn enough SEO to know which tickets actually matter. If you’re an SEO, learn enough dev to write tickets engineers will take seriously.

The sites that win at search aren’t the ones with the cleverest strategy or the most content. They’re the ones where strategy and code line up, release after release, without friction. That alignment is the whole game. Everything else is noise.

What is SEO development?

SEO development is the engineering work that makes a website rank in search engines: clean HTML rendering, fast server responses, valid schema markup, Core Web Vitals optimization, proper redirects, canonicals, and indexability controls. It’s the plumbing underneath content and links.

Do SEOs need to know how to code?

SEOs don’t need to write production code, but they need to read HTML, understand HTTP status codes, interpret JavaScript rendering behavior, and write precise tickets. The best technical SEOs can read a crawl report, diagnose the root cause, and hand engineering a reproducible fix.

What Core Web Vitals scores does Google require?

Google uses three Core Web Vitals thresholds at the 75th percentile: LCP (Largest Contentful Paint) under 2.5 seconds, INP (Interaction to Next Paint) under 200ms, and CLS (Cumulative Layout Shift) under 0.1. Missing any single metric drops you from Good to Needs Improvement.

Does Google render JavaScript for SEO?

Yes, but on a delay. Google crawls the HTML first, queues the page for rendering, then returns hours or days later to index the JavaScript-rendered version. Server-side rendering (Next.js, Nuxt) or static generation gets you indexed immediately. Client-side-only rendering (CRA, vanilla Vue) often causes indexing delays or gaps.

Which schema markup should I add to my site?

Add schema that unlocks an actual rich result. Article for blog posts, Product for ecommerce, LocalBusiness for local, FAQPage for FAQs, Event for events, VideoObject for video, Review for reviews. Add BreadcrumbList everywhere. Skip schema types that don’t produce SERP features, because they add markup weight without ranking benefit.

What’s the difference between robots.txt, noindex, and canonical?

robots.txt blocks crawling (Google won’t fetch the page). Noindex allows crawling but prevents indexing (Google sees the page but keeps it out of search). Canonical picks the authoritative version when similar content exists on multiple URLs. Never combine noindex with robots.txt disallow, because Google can’t see the noindex tag if it can’t crawl the page.

How often should I run a technical SEO audit?

Bake SEO checks into every PR (Lighthouse CI, schema validation, link checks) so regressions get caught before deployment. Run a full technical audit quarterly. Review Search Console weekly for crawl errors, coverage issues, and manual actions. A quarterly-only cadence means you spend three months fighting fires you could have caught at merge time.

What’s a good TTFB for SEO?

Google’s stated target is TTFB under 600ms, but under 200ms is where competitive sites live. Below 200ms, you have headroom for LCP and INP. Above 600ms, everything downstream suffers. The usual fixes are object caching (Redis or Memcached), page caching, database query optimization, and moving off shared hosting to a VPS with at least 4GB RAM.
