6 min read

What Went Wrong: My Vinext Migration to Cloudflare Workers



Before I migrated this blog to Astro, I tried a different approach. I spent weeks attempting to modernize the existing Next.js setup — first replacing the abandoned Contentlayer with Velite, then moving to Vinext (Vite-based Next.js), and finally trying to deploy it all to Cloudflare Workers. After 17 commits and more debugging sessions than I care to count, I abandoned the entire effort. This is the story of what went wrong.

Why Contentlayer Had to Go

The blog was built on the Tailwind CSS Next.js Starter Blog template, which used Contentlayer to process MDX files into typed content. Contentlayer was a great idea — it gave you type-safe content with a nice API. But the original project was abandoned by its maintainer. The community fork, contentlayer2, kept things compiling, but it was clearly a temporary fix. I did not want to build on a foundation that could break with any Next.js update.

Contentlayer to Velite

Velite seemed like the natural successor. It offered a similar developer experience — define your content schema, get typed data — but with active maintenance and a cleaner architecture. The migration itself was straightforward: define schemas in velite.config.ts, update the imports, adjust the content querying patterns.
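The schema definition looked roughly like this. This is a hedged sketch using Velite's actual `defineConfig`/`defineCollection`/`s` API, but the field names and glob pattern here are illustrative, not the blog's exact config:

```typescript
// velite.config.ts — illustrative schema, not the blog's actual config
import { defineConfig, defineCollection, s } from "velite";

const posts = defineCollection({
  name: "Post",
  pattern: "blog/**/*.mdx",
  schema: s.object({
    title: s.string(),
    date: s.isodate(),
    draft: s.boolean().default(false),
    slug: s.path(), // derives a slug from the file path
  }),
});

export default defineConfig({
  collections: { posts },
});
```

From there, content queries import the generated typed data instead of Contentlayer's `allBlogs`-style exports.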

The initial commit landed cleanly. Content was loading, types were working, and the development server ran fine. At this point, things looked promising.

Velite to Vinext

Around the same time, I wanted to move away from Webpack. Next.js had been pushing toward Turbopack, but the ecosystem was not fully there yet. Vinext — a community project that swapped Next.js’s bundler from Webpack to Vite — offered faster builds and better compatibility with Vite-native tools like Velite.

The migration to Vinext was more involved but went reasonably well in development. Hot module replacement was faster, builds were quicker, and the Vite ecosystem played nicely with Velite. Locally, everything worked.

The problems started when I tried to deploy.

Enter Cloudflare Workers

I wanted to move from Vercel to Cloudflare Workers for a few reasons: the generous free tier, global edge deployment, and the appeal of not being locked into a framework vendor’s hosting platform. Vercel is excellent, but coupling your hosting to your framework provider felt like unnecessary lock-in for a static blog.

Cloudflare Workers uses the V8 JavaScript engine directly, not Node.js. This matters because Workers enforce a strict security model — certain JavaScript features that work fine in Node.js are simply not available. The most painful restriction: new Function() is not allowed.

This single constraint broke server-side rendering. Many JavaScript libraries — including parts of the MDX processing pipeline — use new Function() or eval() internally. In Node.js or Vercel’s serverless functions, this works fine. In Cloudflare Workers, it throws a runtime error.
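You can see the difference with a tiny feature-detect helper (my own illustration, not part of the migration code). On Node.js the call succeeds; on Workers, `new Function()` throws an `EvalError` at runtime:

```typescript
// Detect whether the runtime permits dynamic code evaluation.
// Node.js: returns true. Cloudflare Workers: new Function() throws, so false.
function supportsDynamicEval(): boolean {
  try {
    // eslint-disable-next-line no-new-func
    new Function("return 1")();
    return true;
  } catch {
    return false;
  }
}
```

Any library that compiles code at runtime — template engines, some MDX evaluators, certain schema validators — hits this wall.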

The Fix Spiral

What followed was a series of increasingly desperate attempts to work around the Workers restrictions. Over 13 commits, I cycled through build fixes, routing workarounds, SSR hacks, and pre-rendering experiments — each one addressing a symptom while often introducing a new problem. The issues cascaded:

Build errors came first — Workers could not bundle certain dependencies. I worked through three iterations just to get the build to succeed.

404s on blog posts — the routing worked in development but posts returned 404 in production. The Workers routing model handled paths differently than Vercel’s serverless functions.

Blank pages after refresh — this was the most persistent issue. Blog posts would load fine on initial navigation from the home page (client-side routing), but refreshing the page or navigating directly to a post URL would render a blank page. The SSR path was broken because of the new Function() restriction.

The @mdx-js/rollup workaround — I tried replacing the MDX processing pipeline with @mdx-js/rollup, hoping to pre-compile MDX at build time rather than at runtime. This partially worked but introduced its own set of issues.
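The idea, sketched below, was to register the `@mdx-js/rollup` plugin in the Vite config so `.mdx` files become plain JavaScript modules during the build. The option shown is illustrative; the actual workaround involved more moving parts:

```typescript
// Vite config sketch: compile MDX at build time via @mdx-js/rollup,
// so nothing needs to evaluate MDX inside the Worker at runtime.
import { defineConfig } from "vite";
import mdx from "@mdx-js/rollup";

export default defineConfig({
  plugins: [
    mdx({
      // emit React JSX from the compiled MDX modules
      jsxImportSource: "react",
    }),
  ],
});
```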

Image flickering — with the SSR workarounds in place, images would flash or flicker during hydration as the client-side React took over from the server-rendered HTML.

Draft posts appearing — the content filtering logic that hid draft posts in production broke somewhere in the chain of workarounds.
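The draft-filtering logic was conceptually simple, which made its breakage all the more frustrating. A hedged reconstruction of the kind of check involved (names are illustrative):

```typescript
// Illustrative draft filter: show everything in development,
// hide posts marked draft in production builds.
interface Post {
  title: string;
  draft?: boolean;
}

function visiblePosts(posts: Post[], isProd: boolean): Post[] {
  return posts.filter((post) => !isProd || !post.draft);
}
```

Somewhere in the chain of SSR workarounds, the production flag this check depended on stopped being set correctly.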

Each fix addressed one symptom but often introduced another. The codebase accumulated conditional logic, deployment-specific workarounds, and configuration hacks that made it increasingly fragile.

Why I Gave Up

The site technically worked after the Vinext migration. You could visit it, read blog posts, navigate around. But “it works” was not good enough.

The accumulated workarounds made the codebase brittle. Every change risked breaking something in the delicate chain of fixes. The performance was not where I wanted it — the SSR workarounds added latency and complexity. And the developer experience had degraded significantly from where I started. A blog should not require this much infrastructure to serve markdown files.

Sometimes the right decision is to stop fixing and start over.

What I Learned

Check your deployment target’s constraints before migrating. If I had tested a simple Next.js SSR page on Cloudflare Workers at the start, I would have discovered the new Function() restriction immediately. Instead, I built up a full migration and only hit the wall at deployment time.

“Works locally” is not enough. Every one of those blank page fixes worked in the local development server. The production environment was fundamentally different, and I should have set up a staging deployment pipeline earlier.

Know when the approach is wrong, not just the implementation. I kept trying to fix the deployment when the real problem was the architecture. The combination of Next.js SSR and Cloudflare Workers was a bad match for this use case. The right answer was not better workarounds — it was a framework that produces fully static output.

The sunk cost trap is real. After 10+ commits of fixes, it was hard to walk away. But continuing to invest in a broken approach would have only made the eventual rewrite harder.

The answer turned out to be Astro — a framework that generates static HTML by default, with no SSR required. No new Function() issues, no blank pages, no hydration flickering. Just HTML, CSS, and a tiny amount of JavaScript for interactive components. The Cloudflare Workers deployment that took 13 commits to partially work with Vinext took 2 commits with Astro.
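The entire runtime problem dissolves because there is no runtime. In Astro's config, static output is simply the default — a sketch, shown explicitly here for contrast:

```typescript
// astro.config.mjs — "static" is Astro's default output mode:
// every page is prerendered to HTML at build time, so nothing
// evaluates code (or hits the new Function() ban) on the Worker.
import { defineConfig } from "astro/config";

export default defineConfig({
  output: "static",
});
```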