Once upon a time I got an urgent call from a nationally known auto parts company. They described significant performance degradation in their online product catalog application, which was used in all of their stores. When I arrived on site, meetings were held across the organization with all the stakeholders represented. This was all very normal, but I noticed that few technical people had been made available, nor could any actual users corroborate the described degradation. In fact, about the only quantifiable information offered came from a technical director, who shared network data. That director's team was also the only group with any kind of analysis tooling in place, and the only group with any real longevity at the company.
Generally in an application performance triage effort, you get an early sense of the scope and shape of the problem. By the end of the first day, you've begun gathering real data, or you've at least formulated a plan for deploying tooling to get it. By the end of day one on this project, I'd met with multiple directors and vice presidents, all of whom regurgitated the company line that the catalog application was slow, but almost none of whom had any personal experience using the application.
Something didn’t seem right.
So I went to the project sponsor and I spelled it out: “Who am I here to fire?”
The CIO had overpromised a major project and was in trouble with line-of-business executives. That CIO and the management team they had brought with them were likely to lose their jobs if they couldn't deliver. But what if there was a major catastrophe? An unavoidable distraction from this pending project failure? The product catalog, along with the network team, had been selected as the disposable scapegoats.
In two weeks of work, no meaningful technical problems were found, and our team was railroaded out quickly once it became clear to the project sponsorship that they were at very real risk of being exposed.