Part 1: Reduce Cost and Risk Through Application Portfolio Rationalization

Traditional application rationalization efforts have historically been too narrow in their focus and often leave value on the table.  The starting point for most rationalization efforts is to gather information about an enterprise’s existing application portfolio and use it to make recommendations for retaining, enhancing, re-platforming or retiring.

After running a number of portfolio rationalization engagements in this manner, I found the results lacking in specificity. Without clear recommendations and supporting business cases, these initiatives tend to stall over time as executives lose patience with high-level strategy and demand tangible results.

To remedy this problem, IT organizations should focus on optimizing the application portfolio instead of rationalizing it.  Application portfolio optimization (APO) implies a much more granular approach to developing cost and risk reduction recommendations.  Let’s explore this in more detail…


3 reasons why portfolio rationalization goes wrong

Far too frequently, IT organizations kick off a rationalization initiative in response to external pressure to reduce IT costs.  Efforts often lose momentum after year 1 and the inventory information grows stale until the next initiative is launched some years later.  Such well-intentioned initiatives often fail to achieve their stated objectives due to:

  • Unrealistic expectations – Many portfolio optimization initiatives won’t break even until 1-3 years out. It takes time to renegotiate contracts, redeploy unused licenses, achieve cost avoidance and realize savings.
  • Lost momentum – Functional consolidation is expensive and time-consuming, and thus rarely pays off based purely on IT cost savings alone. Without a compelling business imperative and benefits (e.g., single view of the customer, integrated S&OP, global process harmonization), initiatives will often lose momentum.
  • Poor planning – Failing to think through exactly what information is required to make, and act on, decisions can result in multiple data requests that quickly try the patience of the application owner community.

An approach to success

1. Start with the end goal – By carefully thinking through each of the potential application disposition patterns and the information required to recommend and act on a specific pattern, leaders can avoid multiple follow-ups and keep the APO initiative on track. Decision trees are a helpful way to think through every scenario, as in the example below.

[Decision tree example diagram]
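The disposition logic can be expressed directly in code. Below is a minimal sketch of one such tree; the attribute names, thresholds, and disposition patterns are illustrative assumptions, not a prescribed model.

```python
# Hypothetical disposition decision tree: each branch tests one inventory
# attribute and either returns a pattern or falls through to the next test.

def recommend_disposition(app: dict) -> str:
    """Walk a simple decision tree and return a disposition pattern."""
    if not app["business_value"]:            # nothing depends on it anymore
        return "retire"
    if app["functional_redundancy"]:         # overlaps another application
        return "consolidate"
    if app["platform_end_of_life"]:          # runs on unsupported technology
        return "re-platform"
    if app["change_backlog"] > 10:           # heavy demand for new features
        return "enhance"
    return "retain"                          # healthy and still needed

example = {
    "business_value": True,
    "functional_redundancy": False,
    "platform_end_of_life": True,
    "change_backlog": 3,
}
print(recommend_disposition(example))  # re-platform
```

The useful side effect of writing the tree down this explicitly is that every attribute it tests becomes a mandatory survey question, and anything it never tests becomes a candidate for removal from the questionnaire.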

2. Only collect what’s needed – When compiling an inventory, it is tempting to ask a group of experts, “What data should we collect about each application?” Resist that temptation. The key is to minimize your data collection requirements by asking only for what you know you are going to use. A few guidelines will help avoid over-engineering decision trees:

    • Limit questions to what’s on a decision tree – If the information isn’t on one of your decision trees, ask yourself why you need it and how you are going to use it. An application inventory questionnaire should take no more than one hour to complete and contain no more than 100 questions per application.
    • Avoid fast-moving data – Avoid collecting information that will change quickly and be outdated before you can use it (e.g. ticket counts, downtime, etc.).
    • Pre-populate data – If you can pre-populate answers to questions for some applications based on existing inventory information, it will reduce the amount of data collection effort expended by the application owners and improve your data quality.
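One way to enforce the "only what's on a decision tree" rule mechanically is to derive the questionnaire from the trees themselves. The sketch below assumes each tree is recorded with the list of attributes it tests; the tree names and attributes are hypothetical.

```python
# Hypothetical registry: each disposition decision tree lists the inventory
# attributes its branches actually test.
DECISION_TREES = {
    "retire":      ["business_value", "active_users", "contract_end_date"],
    "re-platform": ["platform_end_of_life", "hosting_model"],
    "consolidate": ["capabilities_supported", "business_value"],
}

def survey_questions(trees: dict) -> list:
    """The union of attributes referenced by any tree is the only data to collect."""
    needed = set()
    for attributes in trees.values():
        needed.update(attributes)
    return sorted(needed)

print(survey_questions(DECISION_TREES))
```

Any question an expert proposes that does not appear in this derived list needs an explicit justification before it earns a place on the survey.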

3. Data collection – Getting hundreds of application owners to complete application surveys can be a daunting and time-consuming task.  To help increase the quality and timeliness of the survey responses:

  • Enlist strong sponsorship – The vocal and enthusiastic support of the CIO is critical to the success of any APO initiative.
  • Define what is and what is not a business application – I like to define a business application as a combination of software components, with a persistent data store and multiple concurrent users, that solves a business problem. This helps focus the effort on applications to which optimization patterns apply. Things that are not applications include:
      • End-user applications – Software like Microsoft Office or CAD tools is not considered a business application because it lacks a shared persistent data store and multiple concurrent users.
      • System software – Operating systems, databases and app servers are the individual components that make up business applications, but they do not solve a business problem until they are combined with other components.
      • Integrations – Interfaces, batch jobs, EAI, web services and APIs lack a persistent data store, so they are components that might be used by multiple business applications but are not stand-alone applications.
      • BI technology – BI tools like Tableau, Microsoft Power BI, and Business Objects lack a specific persistent data store; they are components of business applications, not business applications in their own right.
  • Identify application owners – Organizations must constantly strive to better align IT with the business. Therefore, consider identifying both an IT owner and a business owner for each application – and then assign the IT owner the task of following up with the business owner to collect the required information.
  • Leverage transition efforts – APO initiatives will sometimes run concurrently with an outsourcing effort to a managed service provider (MSP). The MSP will often need to complete an application information document as part of the knowledge transfer, so it is quite easy to extend the knowledge transfer activity to include the application survey data.
  • Use low-cost locations – For a globally distributed IT organization, or one that already makes extensive use of low-cost location strategies, it can be efficient to co-locate a portion of the APO team in a low-cost location.
  • Escalate follow-ups – A four-stage, five-week approach to application survey response follow-ups works well, with ongoing escalations up the organization with each follow-up (cc’ing the respective managers to ensure accountability).
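The business-application definition above reduces to a simple three-part test, which can be sketched as a predicate. The field names here are illustrative assumptions about how an inventory record might be structured.

```python
# Hypothetical test for the definition: software components + persistent
# data store + multiple concurrent users + solves a business problem.

def is_business_application(entry: dict) -> bool:
    return (entry["has_persistent_data_store"]
            and entry["concurrent_users"] > 1
            and entry["solves_business_problem"])

crm = {"has_persistent_data_store": True, "concurrent_users": 250,
       "solves_business_problem": True}
spreadsheet_tool = {"has_persistent_data_store": False, "concurrent_users": 1,
                    "solves_business_problem": True}

print(is_business_application(crm))               # True
print(is_business_application(spreadsheet_tool))  # False
```

Applying a rule like this up front keeps end-user tools, system software, integrations and BI components out of the survey population before any owner is asked to fill anything in.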

4. Create a common language – A functional architecture that all business applications can be mapped to is key to identifying potential functional redundancies.  A well-thought-out functional architecture should be:

    • At the appropriate fidelity – Too high level, and too many applications map to the same capabilities, producing lots of false positives. Too detailed, and application owners won’t fill it out correctly. A capability map should have three levels, with a total of 200-300 sub-capabilities and no more than 15 sub-capabilities per capability.
    • Easy to understand – A well-designed functional architecture should allow an application owner to quickly identify all the capabilities that their application supports.  Each capability should be expressed as a two-word noun/action combination.  (e.g., opportunity management, account management)
    • Capability based, not process based – A process represents ‘how’ you do something (the steps); a capability represents ‘what’ you do. The idea behind business architecture is to segregate those capabilities that are common across processes into a common capability that can be leveraged by multiple processes.
    • MECE (Mutually exclusive, collectively exhaustive) – Each capability within an enterprise functional architecture should be mutually exclusive (no overlap), and when taken as a whole they should be collectively exhaustive (cover the entirety of what the enterprise does).
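The sizing and MECE guidelines above lend themselves to automated sanity checks. The sketch below assumes a capability map stored as a three-level nested dictionary; the structure and thresholds mirror the rules stated above, but the representation itself is a hypothetical choice.

```python
# Sanity checks for a three-level capability map: 200-300 leaf
# sub-capabilities overall, at most 15 per capability, and no duplicate
# names (duplicates would violate 'mutually exclusive').

def check_capability_map(cap_map: dict) -> list:
    issues = []
    leaves = [leaf for l1 in cap_map.values()
                   for l2 in l1.values() for leaf in l2]
    if not 200 <= len(leaves) <= 300:
        issues.append(f"{len(leaves)} sub-capabilities (target 200-300)")
    for l1, l2s in cap_map.items():
        for l2, subs in l2s.items():
            if len(subs) > 15:
                issues.append(f"{l1}/{l2}: {len(subs)} sub-capabilities (max 15)")
    if len(leaves) != len(set(leaves)):
        issues.append("duplicate sub-capability names")
    return issues

tiny_map = {"Sales": {"Opportunity Management": ["Lead Qualification",
                                                 "Quote Generation"]}}
print(check_capability_map(tiny_map))  # flags too few sub-capabilities
```

Checks like these cannot prove the map is collectively exhaustive, which remains a judgment call, but they catch the structural problems that most often degrade survey data quality.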

5. A mile wide and an inch deep – To make efficient use of time and resources, it is best to start with a high-level analysis that goes a mile wide (looks for all potential opportunities) but only an inch deep (keeps the analysis fairly mechanized, without any hands on the code or detailed architecture analysis). By first identifying promising opportunity areas, then taking a second pass that goes a mile deep but only an inch wide, you can avoid a great deal of wasted effort.

In my view, application portfolio optimization provides a much more pragmatic approach to addressing the application sprawl endemic to large IT organizations. By taking a more systematic approach to identifying optimization opportunities, such an effort focuses on tangible recommendations with measurable cost and risk reduction.

In the third installment of this series, I will discuss specific cost reduction and risk reduction strategies that can be enabled by a systematic approach to application portfolio optimization.

Part 3: How to Leverage Application Portfolio Optimization for Risk Reduction and Cost Savings



Joshua Biggins

Partner – Enterprise Strategy & Architecture

Joshua Biggins is a partner with Infosys Consulting where he leads the Enterprise Strategy & Architecture practice for a number of industry verticals.  For the last 22 years he has focused on helping clients leverage technology to transform business models and unlock value.  His experience is focused on the most pressing issues on the CIO agenda, including AI and automation, IT cost reduction, application portfolio rationalization, managed services transformation and technology modernization.