The book speaks to the C-Suite Core (CEO, COO, CFO) because they have the greatest incentive (and authority) to introduce change in this area. Although other CXOs might have less authority, they might still have enough incentive to drive change. If you belong to this category, one of the following ways of framing the topic might resonate more with you.
1. A level playing field for Commercial Leaders
2. Commercial Culture vs. Data Culture
3. Funding: Eminence vs. Evidence
Do you lead a commercial line or a function like marketing, sales, or operations? Do you feel that your initiatives are set aside or underfunded in favor of those pitched by your colleagues? Do you worry that the squeaky wheels get all the grease? Do you crave a level playing field? Is a lack of measurement infrastructure getting in the way of demonstrating the success of your initiatives? You could choose to lead by example. Activate impact intelligence for your book of work. In six months, you will be able to share your impact network and your performance in terms of proximate and downstream impact. Soon enough, the C-Suite Core might start wondering why others shouldn't follow suit.
When asked about impact intelligence, a finance chief at a B2B SaaS company said they were focused on progressing toward the top-right of Gartner's Magic Quadrant for their product category. They had made good progress in the last two years, and so they did not “micromanage” individual investments. Some investments surely paid off much better than others, but they had no mechanism in place to reach a shared, data-driven understanding of which ones. Times were good, and there was little interest in putting such a mechanism in place, given its perceived overhead.
I find this a bit disconcerting. When the market turns, every investment will come under the lens. It will be too late then to put an impact-feedback loop in place. They must develop the muscle now, while the going is good. It's a bad idea to start exercising only after finding out that the arteries are clogged.
Data leaders are often in a tough spot. An unhappy data scientist once said, “A data scientist's job is to launder business leaders' intuition using quantitative methods.” Does this sentiment exist in your team as well? Do you worry about how to achieve a true partnership with your business colleagues? You might want to consider expanding your portfolio beyond predictive use cases. Read Chapter 7 of the book and then pitch impact attribution to your COO or CFO. Pitch it as a way to understand and improve the true business impact of the initiatives they invest in.
Excellence ends when eminence eclipses evidence. In science and in business, continued funding (for projects/initiatives) ought to be evidence-based. But as Professor Donald Braben quipped, it is often eminence-based. No doubt, people rise to eminence with evidence of performance. But then, they sometimes grow to believe that their eminence grants them immunity from providing further evidence. The practice of impact intelligence helps organizations stem this sort of descent into mediocrity.
CTOs have been saying that the demand for new functionality far exceeds delivery capacity and tech budgets. Thanks to AI, delivery expectations have only increased. What's a CTO to do? Read my Reformist CTO's guide to impact intelligence.
Chasing speed (of delivery) at the potential expense of accuracy (business impact) is a symptom of siloed Product and Engineering. And in large orgs, I see a peculiar dysfunction: Engineering is held accountable for speed (and quality) far more than Product (or the BRMs) is held accountable for business impact (not just proximate impact). As a result, we see lots of action (and output) without commensurate business impact. There's a huge opportunity to do less and achieve more!
I was asked how I went from agile org design (transformation) to impact intelligence. On the face of it, the two topics seem unrelated. But if you ask, “What's the impact of our transformation efforts?”, you end up connecting them. I didn't ask that question when I first began; that was a mistake. As I started helping clients with project-to-product transitions, I realized the importance of having a business-relevant measure of success. Without one, leaders believed they had effected a successful transformation merely on the basis that they had moved from temporary project teams to long-lived (persistent) teams.
This soon led me to inquire about the business impact of other initiatives. One thing led to another, and the iRex framework took shape. It is explained in the book.
These days, hardly anyone knows the realized return on investment of an initiative. You may know the projected ROI, because that's typically how the initiative gets funded. But as they say, there's many a slip betwixt the cup and the lip. The actuals may turn out quite different from the projections, and you may never know. The projections might not even be in strict ROI terms; they might just say that by executing the initiative, some important metric will improve by 5%. Given this state of affairs, it is not possible to determine ROI.
But with a bit of effort, you might be able to calculate the next best thing: the return on projection (ROP). If the said metric improved by 4% against the projected 5%, the ROP is 80%. Knowing this is way better than knowing nothing. It's way better than believing that the initiative must have done well just because it was executed (delivered) correctly.
ROP is a measure of promise vs. performance. It's a kind of say-do ratio.
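To make the arithmetic concrete, here is a minimal sketch of the ROP calculation, assuming the projection and the actual are uplifts on the same metric; the function and parameter names are illustrative, not part of the iRex framework or any particular toolkit.

```python
# Minimal sketch of the ROP (return on projection) arithmetic.
# Assumption: projected and actual uplifts are fractions of the same metric
# (e.g., 0.05 for a projected 5% improvement).

def return_on_projection(projected_uplift: float, actual_uplift: float) -> float:
    """ROP as a percentage: how much of the promised improvement materialized."""
    if projected_uplift == 0:
        raise ValueError("Projected uplift must be non-zero to compute ROP.")
    return (actual_uplift / projected_uplift) * 100

# Example from the text: a 4% actual improvement against a projected 5%.
print(return_on_projection(projected_uplift=0.05, actual_uplift=0.04))  # 80.0
```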
You should track ROP to make better investment decisions. Asking for a thorough justification is good, but justifications rest on assumptions, and a promise is invariably embedded in them. If you decide based on promises alone, you incentivize people to make tall promises. Business leaders might be tempted to outdo one another in making tall promises to win investment (or resources like team capacity). After all, there is no way to verify later. That is, unless you have an impact intelligence framework in place.