A lot of the work I do comes down to negotiation: not just with people, but with the reality of trade-offs. Every decision in an early-stage startup has constraints, and it’s rarely a clean win. You’re always giving something up: time, capital, team bandwidth, or a future opportunity. So I’ve learned to slow down and articulate clearly: what are the pros and cons here, and who is responsible for the outcome if we commit to this path?
Take product decisions, for example. Deciding which feature to build next is a negotiation in itself. You weigh the cost to build, the perceived benefit to the user or the business, and critically, when that benefit is likely to materialize. That timing dimension matters more than most people admit. Some bets might pay off fast and help us hit a revenue milestone. Others are long-term compounding bets. Both are valid, but in a cash-constrained environment, the short-term usually needs to fund the long-term.
One thing I’ve had to become very strict about is not building features just because we can. Early on, we fell into that trap, building elegant or broad features that weren’t directly tied to user behavior or validated pain points. There’s always a temptation to think that way, especially if you believe that more features means an easier sell, but that’s rarely the right way to design a product. The trap taught me to press pause and ask: what hypothesis are we testing with this feature? If we can’t articulate that, it’s probably not worth building yet.
Startups at the pre-revenue stage exist to figure out what the market actually values enough to pay for. That means we’re in the business of testing hypotheses, not just shipping software. It is about constantly learning from experiments, deciding which hypotheses to test next, folding new information into our daily tactics, and keeping our strategy aligned with the company’s direction. But building even a simple MVP still costs real time and money, so we need to be surgical about how we validate.
Scoping out a feature
This July at Socratics.ai, we scoped out a feature (bottoms-up revenue modeling, price times quantity) that, by my initial estimates based on the requested requirements, would have taken three to four weeks to build end-to-end. The initial idea was expansive: among other new capabilities, it would break apart a financial model’s net revenues into separate lines, similar to how our ICPs do it, giving investment bankers and analysts more granular forecasting per revenue component (e.g. products, services).
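To make the shape of the feature concrete, here is a minimal sketch of what price-times-quantity bottoms-up modeling means. The component names and figures are hypothetical, not drawn from any real client model:

```python
from dataclasses import dataclass

@dataclass
class RevenueLine:
    """One forecast revenue component (hypothetical schema)."""
    name: str
    unit_price: float  # price per unit for this component
    quantity: float    # forecast units sold

def net_revenue(lines: list[RevenueLine]) -> float:
    """Bottoms-up net revenue: sum of price x quantity per component."""
    return sum(line.unit_price * line.quantity for line in lines)

# Illustrative forecast with two components.
forecast = [
    RevenueLine("products", unit_price=120.0, quantity=500),
    RevenueLine("services", unit_price=90.0, quantity=200),
]
print(net_revenue(forecast))  # 120*500 + 90*200 = 78000.0
```

The point of the granular breakdown is that an analyst can flex the price or quantity of a single component (say, services) and see the effect on net revenue, instead of forecasting one aggregate line.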
Together with my colleagues Varun Desai, our financial analyst intern who previously worked at an investment banking firm, and Pattaradorn Chaianunsawad, our product owner, we reviewed the necessary components for the new feature and found that it broke down into two parts: first, a separate worksheet analysts use, called a historical revenue model, which may contain backlogs; and second, the ability to separate the components of net revenue in the financial model itself. Before we touched a line of code, we reframed the hypothesis to be tested as: users value the ability to load a historical revenue model more than the ability to visually present the revenue components separately in the core financial model.
From that reframing, we realized we could run a rapid two-week prototype that ingests a revenue model from an Excel file, renders it on our web platform, and lets the user connect it as a newly designed forecast driver on Socratics.ai. This saved us roughly two weeks and, more importantly, got the feature in front of actual users sooner so we could gather their direct feedback on what is essential. Elon Musk captures this with a simple saying: make the requirements less dumb. Part of that is removing the ‘frills’ and ‘nice-to-haves’ until you get down to the core value you want to test. He claims you’ve found the sweet spot once you find yourself adding back about 10% of what you removed. Sometimes the fastest way to move forward is to test the riskiest assumption before you build.

Ownership of outcomes
This thinking also shapes how I run engineering teams. I push for ownership of outcomes, not just tickets. If someone is shipping a feature, I want them to know: what problem are we solving, what’s the intended user behavior, and what will we learn from this and how? It creates alignment between engineering and the broader strategy and helps people take pride in why they’re building, not just what they’re building.
One of the hardest parts of startup leadership is managing uncertainty without falling into indecision. I have felt the temptation to wait until I have perfect clarity. In reality, clarity comes after action, not before. The way I reframe it for myself now is: after identifying the critical hypotheses to test, what’s the cheapest way to test them? Sometimes testing doesn’t mean building the whole thing. This process taught me to make decisions with imperfect information, document the trade-offs, and own the consequences. Sometimes we get it right. Sometimes we learn something useful the hard way. Either way, we move forward.