The best way to handle a challenging situation is sometimes to do nothing.
I know how that sounds. Especially coming from a startup CEO. We’re supposed to be biased toward action, right? Move fast, break things, fail forward?
But here’s what nobody tells you: acting fast on the wrong problem is worse than acting slowly on the right one.
Last month, we learned this the expensive way.
We were hit by a tsunami of new users at NonBioS. We added as many users in a single month as we had added in the entire previous year. The challenge was obvious: our systems couldn’t keep up, and users were facing serious performance degradation.
Customer experience is the top priority at NonBioS. When that experience is compromised, it’s all hands on deck: we drop everything else to focus on the customer. That has been a central tenet since we started.
So the war room was activated. Everybody went heads down to surmount the challenge. We spent seven days scaling our systems, confident this would take care of the degradation. As the weekend rolled in, we expected NonBioS to calm down and the sirens to finally go silent. But on the eighth day, the sirens started blaring again. The performance degradation hadn’t budged.
Exhausted, we stepped back. This wasn’t a simple scaling issue. This was something else.
Over the next few days something dawned on us. What if the problem was upstream? You see, NonBioS relies on several core AI inference providers. But the specific provider we suspected was pretty much an industry standard. They had raised almost 100 million dollars in funding. Their systems could surely keep up.
But NonBioS is not a standard customer. Our requirements are unique. At times of heavy traffic, NonBioS can issue hundreds of inference calls within a very short window. No one else that we know of does this at the scale we do. So we reached out to their customer support. The engineer on call immediately replied that their systems were stable and the issue was on our end. We escalated to their CEO to dig deeper, and the issue is finally being addressed.
Here’s what haunts me. We never asked the basic questions. What if this surge was temporary? What if the problem wasn’t our infrastructure at all? What if there was an upstream issue we should have been investigating in parallel? We spent seven days heads down, tunnel vision, solving what we assumed was broken.
When you’re a horse in a race, the blinders come on naturally. You lose perspective. Maybe this isn’t your race. Maybe you’re running hard in the wrong direction.
As a resource-constrained startup, our scarcest resource isn’t money or engineering time—it’s attention. When we panic and act immediately, we commit our most valuable asset to potentially the wrong problem.
The week we spent frantically scaling our systems wasn’t wrong exactly—we needed to scale eventually. But it wasn’t the urgent crisis we treated it as. We could have paused. We could have asked better questions first. We could have investigated multiple possibilities before committing our entire team to one assumption.
The discipline to pause, even when systems are on fire, even when customers are complaining, is what separates reactive teams from strategic ones. Sometimes the fastest way to solve a problem is to spend an hour making sure you’re solving the right problem.

