“You can’t improve what you can’t measure” is probably the most commonplace aphorism you’ll hear in startup circles — among founders, execs, growth hackers, hustlers, et al. What you measure, of course, depends largely on your context. A great deal has been written about metrics you should take seriously, and you can choose what suits you best. An important thing you’ll notice is that all these metrics are essentially outward-facing. They are also first-order metrics: they can be measured directly.
In addition to these, I’ve realized that second-order metrics provide a higher-level view of a company’s progress over a longer time horizon. These metrics are introspective in nature, as opposed to the outward-facing ones I mentioned earlier. They’re not straightforward to measure, but they’re easy to understand nonetheless. And they uncover a number of things that the first-order metrics will miss.
What the engineering team is spending time on is one such metric. In most product startups, what the engineering team builds can be sorted into two broad buckets: optimizing existing stuff, and building new things.
When the company is in its nascent phase and hasn’t found product-market fit, nobody really knows what’s going to work. So they speculate about what could work, and the engineering team spends its time building new, speculative things. As time passes and the company gains users for these speculative things, the product inches closer to a market fit. The focus of the engineering team now must gradually shift towards optimizing these existing products rather than continuing to build new things. The ratio of time spent on optimization vs. building new things keeps increasing as the scale of usage of the product increases. And this becomes a necessity as well: hitting scale opens a Pandora’s box of problems that didn’t exist before. The code that worked well for the MVP a year ago must be refactored and cleaned, lest it hold back the new people who’ve joined the development team. You must work on adding a robust caching layer now, and yeah — you’re probably going to need to rewrite that subsystem in Go because the existing implementation chokes up when concurrent requests burst. ¯\_(ツ)_/¯
In mature product companies, engineers spend 70–80% of their bandwidth optimizing the existing products that serve most of their user base. Most of these changes are invisible to the users, and even to the internal non-engineering teams. One could argue, however, that these changes are what sustain the product’s growth. The rest of the time and effort goes into building new, speculative things — hoping that some of these might cross into the former category in the near future.
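To make this metric concrete, here’s a minimal sketch of how one might compute the optimization-vs-new split from an issue-tracker export. The ticket data, IDs, and category labels here are all hypothetical, purely for illustration — in practice you’d pull this from whatever labels or epics your tracker actually uses.

```python
from collections import Counter

# Hypothetical issue-tracker export: (ticket_id, category, hours_logged).
# The "optimize" / "new" labels are an assumption for this sketch.
tickets = [
    ("ENG-101", "optimize", 16),  # refactor MVP-era module
    ("ENG-102", "new",       8),  # speculative feature prototype
    ("ENG-103", "optimize", 24),  # add caching layer
    ("ENG-104", "optimize", 12),  # fix scaling bottleneck
    ("ENG-105", "new",       4),  # growth experiment
]

# Aggregate hours per bucket.
hours = Counter()
for _, category, h in tickets:
    hours[category] += h

total = sum(hours.values())
for category, h in sorted(hours.items()):
    print(f"{category}: {h}h ({h / total:.0%})")
```

Tracked over successive quarters, this single ratio is the second-order signal: in a maturing product you’d expect the “optimize” share to drift upward toward that 70–80% band.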
Visible proof of this is easy to find: take any popular application that you’ve been using for a long time — like Gmail, Facebook, or Uber. Now think about the last time you noticed a major change in the existing UI or workflows, or the addition of a new feature. You may struggle to recall one, yet the one essential thing you do in these apps has somehow kept on improving. Whatever changes there are tend to land in these core workflows, making them better.
Here’s another way to look at this metric: it can indicate the business progress of the company and provide an early sign of impending trouble. If the engineering team is regularly working on new things and not spending time in the existing code base, it clearly means the existing products aren’t being adopted and product-market fit hasn’t been achieved yet. If these new things are major new products, then each of them can be termed a pivot. It’s normal for companies to try multiple things before finding their fit — many great companies have taken up to 3 years to achieve it. But a recurring pattern of jumping to new things is generally a sign of serious trouble.