In full-stack development, we often joke that if the site is up and the latency is low, it’s a good day. But sometimes, the most dangerous signals are the quietest ones.
Last week, we were conducting a routine audit for a client’s web environment. What started as a standard review of traffic logs turned into a live fire exercise involving a Next.js application, a container vulnerability, and a hidden Monero miner.
Here is the technical breakdown of what we found, how we killed it, and why we’re changing our CI/CD pipelines because of it.
The Anomaly: When 500s Aren’t Just Bugs
The application in question is a standard Next.js build hosted on an App Service, containerized for easy deployment. It’s a setup we’ve seen a hundred times.

The first sign of trouble wasn’t a crash. It was a subtle pattern in Application Insights: a slow, rhythmic spike in 500 Internal Server Error responses mixed in with 404 Not Founds.

As a lead, my first instinct was a bad deployment or a misconfigured API key. But when we drilled down into the request logs, the “users” triggering these errors weren’t navigating the site; they were hitting specific endpoints over and over. This wasn’t a confused user. It was an automated script probing for weaknesses.
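That probing pattern is easy to surface from raw access logs. A minimal sketch, assuming combined log format (client IP in field 1, request path in field 7, status in field 9 — adjust the field positions and threshold for your own log shape):

```shell
# flag_probes: list client/path pairs that repeatedly trigger 404 or 5xx responses.
# $1 = access log file, $2 = minimum hit count before a pair is flagged (default 10).
flag_probes() {
  awk -v min="${2:-10}" '
    # Count every 404/5xx per "client IP + path" pair...
    $9 ~ /^(404|50[0-9])$/ { hits[$1 " " $7]++ }
    # ...and print the pairs that cross the threshold.
    END { for (k in hits) if (hits[k] >= min) print hits[k], k }
  ' "$1" | sort -rn
}
```

Running `flag_probes access.log 10` on our logs would have surfaced the scanner’s favorite endpoints at the top of the list long before a human noticed the 500s.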
Root Cause Analysis: The RCE Vector
We ssh’d into the container to investigate. A quick top command revealed the truth immediately. The CPU usage wasn’t spiking due to rendering React components; it was pegged by a process that had no business being there.
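What `top` showed us can be automated into a simple allowlist check — anything burning CPU that isn’t on the expected process list is a red flag. A sketch (the 50% threshold and the `ps -eo pcpu=,comm=` invocation are assumptions; check the flags against your container’s `ps`):

```shell
# suspect_procs: flag processes whose command name is NOT on an explicit allowlist.
# Reads "CPU COMMAND" pairs on stdin (e.g. from `ps -eo pcpu=,comm=`), so it works
# on live output or a captured snapshot. $@ = allowed command names.
suspect_procs() {
  awk -v allow="$*" '
    BEGIN { n = split(allow, a, " "); for (i = 1; i <= n; i++) ok[a[i]] = 1 }
    # Anything above 50% CPU that is not allowlisted gets printed.
    ($1 + 0) > 50 && !($2 in ok) { print $2, $1 }
  '
}
```

For a Next.js container, `ps -eo pcpu=,comm= | suspect_procs node npm sh` should print nothing on a healthy instance — and one line the moment a miner shows up.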

We had found a Remote Code Execution (RCE) vulnerability.

The attackers had used a “spray and pray” strategy: scanning public IP ranges for unpatched dependencies in the container’s OS layer. Once they found a foothold, they didn’t try to steal data or deface the site. Instead, they injected a payload that downloaded and ran a Monero (XMR) mining script.

The container had effectively become a zombie worker in a crypto-mining botnet. It was stealthy by design: keep the app running, but siphon off just enough compute power to generate profit without immediately tripping availability alarms.

The Fix: Nuke, Patch, and Pave
One of the benefits of containerized architecture is disposability. We didn’t waste time trying to “clean” the infected environment.
Step 1: The Nuke
We immediately killed the compromised container instance. In a cloud environment, you don’t fight hand-to-hand; you burn the ground. We spun down the service to stop the bleeding.
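The teardown amounts to three commands. A sketch using the Azure CLI, since the app lived on an App Service — the resource names are made up, the `--docker-custom-image-name` flag should be checked against your CLI version, and `DRY_RUN=1` lets you preview the commands before pulling the trigger:

```shell
# run: execute a command, or just print it when DRY_RUN=1 is set.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# nuke_app: stop the compromised instance, repoint it at a clean patched image,
# and restart. $1 = app name, $2 = resource group, $3 = known-good image tag.
nuke_app() {
  run az webapp stop --name "$1" --resource-group "$2"
  run az webapp config container set --name "$1" \
      --resource-group "$2" --docker-custom-image-name "$3"
  run az webapp start --name "$1" --resource-group "$2"
}
```

The point of the dry-run wrapper is that this script can live in the runbook and be rehearsed safely; infected infrastructure is replaced, never repaired.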
Step 2: The Audit
We reviewed the connection strings and environment variables. Although the attack was purely resource-parasitic (mining), RCE implies total control. As a precaution against lateral movement, we rotated all secrets, API keys, and database credentials.
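Building the rotation checklist starts with an inventory of what the container could see. A small sketch that prints the names (never the values) of environment variables that look like credentials — the name patterns are an assumption, so extend them for your stack:

```shell
# list_rotation_candidates: print the NAMES of env vars that look like secrets,
# to seed a credential-rotation checklist. Values are deliberately never printed.
list_rotation_candidates() {
  env | awk -F= '$1 ~ /(KEY|SECRET|TOKEN|PASSWORD|CONNECTION_?STRING)$/ { print $1 }' | sort
}
```

Anything this prints was readable by the attacker’s payload and goes on the rotate-now list.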
Step 3: The Pipeline Upgrade
This was the most critical step. A human can catch a log spike, but only automation can prevent the hole in the first place. We integrated a stricter layer of Static Application Security Testing (SAST) and dependency analysis into the build pipeline.
We moved to a model where the build fails automatically if:
- The base image has known CVEs.
- npm packages are outdated or flagged in vulnerability databases (via tools like Snyk).
- Static analysis (e.g., SonarQube) detects unsanitized inputs that could lead to injection.
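Wiring that fail-fast model into the pipeline can be as thin as a gate script that runs every check and fails the build if any one of them fails — running them all, rather than stopping at the first, so the build log shows every problem at once. A sketch (the example commands in the comment are real tools, but their exact flags should be verified against your installed versions):

```shell
# security_gate: run each check, fail the build if ANY check fails.
# Each argument is one shell command, e.g.:
#   security_gate \
#     "trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE" \
#     "npm audit --audit-level=high" \
#     "npx eslint --max-warnings 0 ."
security_gate() {
  rc=0
  for check in "$@"; do
    echo "--- $check"
    sh -c "$check" || rc=1   # record the failure but keep going
  done
  return $rc
}
```

A non-zero exit from `security_gate` is what flips the pipeline red before a vulnerable image ever ships.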
The Lesson: Security is Continuous, Not a Milestone
In the retrospective, I told the team: “We have to make every mistake a learning opportunity.”
The reality of modern full-stack engineering is that we are dependent on thousands of libraries and base images. Trusting them implicitly is a risk we can no longer afford.
Moving forward, we are treating our build pipeline not just as a delivery mechanism, but as a gatekeeper. By implementing automated vulnerability scanning, a mid-five-figure investment that pays for itself in a single prevented breach, we ensure that our containers remain boring, predictable, and miner-free.