
Post-launch rot: why your site falls apart 6 months after handover

Production sites decay after handover. Dependencies drift, certificates expire, Web Vitals slide. Why six months is the breakpoint, and how to prevent it.

May 8, 2026 · 7 min read

Post-launch rot is the gradual decay of a production website after handover, where dependencies drift out of date, certificates expire, Core Web Vitals slide, and content stops matching what users actually search for. The site shipped on time. Six months later, nobody is on call, nothing is monitored, and the small fixes that would have cost an hour in week one now cost a quarter of the rebuild budget.

This is the most common reason mid-market companies hire us for a second engagement on a site that was working fine at launch. The codebase has not gotten worse on its own. The world around it has moved on, and the site has not.

Why this hurts more than people expect

The numbers are not subtle. The IEEE Computer Society puts software maintenance at 60 to 80 percent of total lifecycle cost, leaving only 20 to 40 percent for initial development. McKinsey reports that 30 percent of CIOs see more than 20 percent of their technology budget diverted to resolving technical debt. Keyfactor's 2024 PKI and Digital Trust Report found that 88 percent of companies experienced unplanned outages from expired certificates in the previous two years.

On the dependency side, the picture is worse. Research from 2025 shows that 80 percent of application dependencies remain un-upgraded for over a year, even when 95 percent of vulnerable components have fixed versions available. The average npm project pulls in 79 transitive dependencies, and in 2025 alone attackers published 454,648 malicious npm packages. Every untouched site is sitting in that current.

Add Core Web Vitals drift, content decay, and broken internal links, and the site that ranked at launch loses ground week by week. Google's algorithm does not announce the demotion. The traffic just bleeds.

Why the obvious fix doesn't work

The default response is to call the original developer back for ad-hoc fixes. It feels efficient. They built it, they know it. We have watched this break in three predictable ways.

First, the pricing is wrong. A break-fix arrangement pays the developer to wait for things to break, which is the opposite of the incentive that keeps a site healthy. Second, the knowledge has already drained. Six months after launch, the developer who shipped the site has shipped four others. The git log is the only thing remembering why a particular middleware exists. Third, the cycle is too long. By the time a stakeholder notices the dashboard is slow, three months of Core Web Vitals data have already moved against the site in Search Console.

What actually works

Treat the site as a product with a small recurring backlog

Maintenance is not a phase. It is a posture. We run a single recurring sprint of two to four hours every two weeks for sites we keep on retainer. The backlog fills itself: a Renovate PR to merge, a Lighthouse regression to investigate, a deprecated API to swap, an article whose stats are now wrong. Two hours a fortnight beats one panicked rebuild.

Automate dependency hygiene before it automates you

Renovate or Dependabot, configured to open pull requests immediately for security updates and weekly for minor versions, catches roughly 90 percent of dependency drift before it matters. The job is not to merge every PR. It is to look every week. With a CI suite that actually runs, the merge is a one-click decision. Without one, dependency updates are a cliff and the site stays on old versions for years. The September 2025 npm supply chain compromise, which hit chalk, debug, ansi-styles, and strip-ansi (collectively 2.6 billion weekly downloads), was a reminder that even popular packages are not safe by default.
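
A Renovate config in that spirit is a handful of lines. The sketch below is illustrative, not our production setup; the schedule, label, and grouping values are placeholders to tune per repo.

```json5
// renovate.json5 — a minimal sketch of the split described above.
// Schedule, labels, and grouping are illustrative values, not a prescription.
{
  "extends": ["config:recommended"],

  // Security advisories: open a PR as soon as the advisory lands.
  "vulnerabilityAlerts": {
    "labels": ["security"],
    "schedule": ["at any time"]
  },

  "packageRules": [
    {
      // Routine minor and patch bumps arrive once a week, grouped into one PR.
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "weekly minor and patch updates",
      "schedule": ["before 6am on monday"]
    },
    {
      // Majors wait for an explicit approval on the dependency dashboard.
      "matchUpdateTypes": ["major"],
      "dependencyDashboardApproval": true
    }
  ]
}
```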

Watch what users actually feel, not what your synthetic monitor says

Real User Monitoring on Core Web Vitals beats synthetic checks every time. The reason is content drift. New images, new ad scripts, new analytics tags creep in over months. A synthetic test on a fixed page misses what a real visitor on a real device with a real network is going through. We wire Vercel Analytics or Cloudflare Web Analytics on every site we ship, and we set alerts when LCP, INP, or CLS cross thresholds for two consecutive weeks.
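
The wiring is small. Here is a sketch of field reporting with the open-source web-vitals package; the /vitals endpoint is a stand-in for whichever collector you actually point it at.

```ts
// rum.ts — a sketch of field Core Web Vitals reporting with the `web-vitals` package.
// The /vitals endpoint is a placeholder; swap in your analytics provider or your own collector.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric) {
  const body = JSON.stringify({
    name: metric.name,     // 'CLS' | 'INP' | 'LCP'
    value: metric.value,   // measured value for this page view
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/vitals', body)) {
    fetch('/vitals', { method: 'POST', body, keepalive: true });
  }
}

onLCP(report);
onINP(report);
onCLS(report);
```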

Audit content every 90 days

Pages left untouched for more than 90 days tend to lose AI citations and ranking weight. The fix is a calendar, not a panic. Every 90 days we run a content audit on the top 20 pages by traffic and the top 10 by conversion: stats current, sources still live, internal links pointing at pages that still exist, FAQ matching what users actually ask. Most pages get a five-minute touch. A few get a real refresh. The aggregate effect on search visibility is significant.
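
The link half of that audit is easy to script. A rough sketch using Node 18's built-in fetch; the URL list is a stand-in for whatever your sitemap or CMS export produces.

```ts
// check-links.ts — a rough sketch of a quarterly link audit (Node 18+, run as an ES module).
// The URL list is illustrative; in practice it comes from a sitemap or CMS export.
const urls = [
  'https://example.com/blog/post-launch-rot',
  'https://example.com/pricing',
];

async function checkLink(url: string): Promise<void> {
  try {
    // HEAD is cheaper; some servers reject it, so fall back to GET.
    let res = await fetch(url, { method: 'HEAD', redirect: 'follow' });
    if (res.status === 405) {
      res = await fetch(url, { method: 'GET', redirect: 'follow' });
    }
    if (!res.ok) console.warn(`${res.status} ${url}`);
  } catch (err) {
    console.warn(`unreachable ${url}: ${(err as Error).message}`);
  }
}

await Promise.all(urls.map(checkLink));
```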

Treat certificates and DNS like fuses

SSL certificates and DNS records are the easiest part to automate and the most expensive when ignored. The 2018 Ericsson outage took down O2's network in the UK and cost roughly $1.4 billion in remediation, all because a certificate expired. The CA/Browser Forum is reducing public SSL/TLS validity to 200 days starting March 15, 2026, then to 100 days, then 47. Manual renewal is no longer viable. Use Let's Encrypt with autorenew, monitor expiry on every endpoint with a service like Uptime Kuma or BetterStack, and set DNS TTLs that let you fail over without a four-hour cache delay.
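
Expiry monitoring does not need another SaaS if you would rather script it. A sketch using Node's tls module; the host list and the 30-day threshold are placeholder values.

```ts
// cert-expiry.ts — a sketch of certificate expiry monitoring with Node's tls module.
// Hosts and the 30-day warning threshold are illustrative; run as an ES module on a schedule.
import tls from 'node:tls';

function daysUntilExpiry(host: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      // valid_to is the leaf certificate's expiry date as a string.
      const expiresAt = new Date(cert.valid_to).getTime();
      resolve((expiresAt - Date.now()) / 86_400_000);
    });
    socket.on('error', reject);
  });
}

for (const host of ['example.com', 'api.example.com']) {
  const days = await daysUntilExpiry(host);
  if (days < 30) console.warn(`${host}: certificate expires in ${Math.floor(days)} days`);
}
```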

Build the runbook before you need it

The site is going to break at 11pm on a Saturday at least once a year. The decision is whether the response takes 20 minutes or three days. We write a five-page runbook for every site we ship: who to call, where the secrets live, how to roll back the last deploy, how to put up a static maintenance page from CDN, where the Sentry dashboard lives, how to rotate a leaked key. Stored in the company password manager, not on a developer's laptop.

What this looks like in practice

A site we audited in early 2026 had been live 11 months. The original team had moved on. Lighthouse scores had dropped from 96 to 71 on mobile because three rounds of marketing edits added 800KB of unoptimized hero images. Eight dependencies were out of date, two with known CVEs. The blog had 14 broken outbound links, six pointing at competitors who had since rebranded. The contact form was sending leads to an SMTP password that IT had rotated four months earlier.

None of those problems were dramatic. Together, they had cut organic traffic 38 percent over six months. The fix was not a rebuild. It was four weeks of structured work: Renovate, image pipeline, content pass, link audit, runbook. Six months later, traffic is back above launch and the team is on a two-hour fortnightly retainer that catches the next round before it accumulates.

Post-launch rot is a budget question first and a code question second. The cheapest moment to fix a site is before it has rotted. The most expensive moment is after a slow drift has compounded for a year and the rebuild is on the table. Two hours every two weeks is the cheapest insurance product on the internet.

