Application Portfolio Management: TIME for the Application Masses
- Large inventories of small applications can often be categorized rapidly by frequency of use.
- Many infrequently used applications can be retired or consolidated.
- Proactively proposing retirement can accelerate portfolio simplification.
- Big, complex applications are often a fairly small percentage of the total application count, support a lot of business value, and merit a detailed analysis.
- How can we economically categorize and overhaul the thousands of smaller applications that form the rest of the portfolio?
- An “application-hunting license”: prescreening applications by usage and requiring user involvement in creating appropriate life cycle strategies.
- Create a routine process that will rapidly align the thousands of applications into tolerate, invest, migrate and eliminate (TIME) categories — ideally driving extensive application elimination (a minimal sketch of such a routine follows this list).
- Application Hunting
- Start this routine with any applications that haven’t been accessed in a year, then systematically bring the threshold down to nine months, then six months, and then three months.
- It’s rare that anyone thinks of retiring applications after the task is done or the project is completed. In extreme cases, none of the people who used the application remain with the organization.
- Send out a message listing five to 20 applications that have not been accessed in several quarters.
- If nobody responds, retiring the system is straightforward.
- Otherwise, ask: How often is the application used? Could you live without it? Is there another way to get that job done?
- One less system to maintain, fewer licenses for which to pay maintenance fees, and less storage and power being consumed
- “mock” retire the system — that is, bring it offline for a full quarter, and if no one complains, then officially retire it
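Below is a minimal sketch of what such a prescreening routine could look like in Python, assuming an application inventory that records a last-access timestamp and a rough, owner-supplied business-value score. The field names, thresholds, and scores are illustrative assumptions on my part, not something from the original notes.

```python
from datetime import datetime

# Hypothetical inventory records: application name, the last time anyone
# accessed it, and a rough business-value score supplied by the app owner.
inventory = [
    {"name": "legacy-reporting", "last_access": datetime(2011, 3, 1),  "business_value": 2},
    {"name": "order-entry",      "last_access": datetime(2012, 5, 20), "business_value": 9},
    {"name": "old-survey-tool",  "last_access": datetime(2010, 11, 5), "business_value": 1},
]

def time_category(app, today, idle_threshold_days=365):
    """Rough TIME bucketing: idle apps become elimination candidates,
    active apps are split by business value. Thresholds are illustrative."""
    idle_days = (today - app["last_access"]).days
    if idle_days >= idle_threshold_days:
        return "eliminate (candidate)"   # no access within the threshold
    if app["business_value"] >= 7:
        return "invest"
    if app["business_value"] >= 4:
        return "migrate"
    return "tolerate"

def hunting_list(inventory, today, idle_threshold_days=365, batch_size=20):
    """Pick up to 20 idle applications to list in the next 'application hunting' message."""
    idle = [a for a in inventory
            if (today - a["last_access"]).days >= idle_threshold_days]
    return idle[:batch_size]

today = datetime(2012, 7, 1)
for app in inventory:
    print(app["name"], "->", time_category(app, today))
print("Next hunting message:", [a["name"] for a in hunting_list(inventory, today)])
```

Tightening the routine over time is then just a matter of lowering idle_threshold_days from 365 to roughly 270, 180, and finally 90.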
Those who know me will attest that I tend to take a lot of notes. I do so because I truly believe that to remember is to record. The quote, "I'm not writing it down to remember it later. I'm writing it down to remember it now," really put this practice into perspective for me. As such, most of my field notes never see the light of day, not even for me.
1. Disaster Recovery – Cloud providers such as Amazon AWS that offer “pay as you go” pricing can reduce the cost of disaster recovery. Essentially, one pays only when a disaster happens and a recovery is needed. To be more precise, the 24/7 replication and storage of data from the production environment to the DR environment is the fixed cost; the application and data servers, however, do not cost a single penny unless a disaster happens, in which case they are started up. Even if a disaster lasts for months (e.g., Katrina), this is still considerably less expensive than an in-house data center that must purchase all of the hardware for the application and data servers upfront.
2. Batch Computing – Batch applications often follow a predictable “batch window” of high and low processing requirements. For example, nightly batch processes may require 1,000 servers to complete their processing from 12 a.m. to 8 a.m., before the next business day starts. Those 1,000 servers must be purchased up front and may sit idle (or nearly idle) during daytime business hours, resulting in a very low CPU utilization rate of 33% (8/24). With IaaS cloud computing and the ability to scale (or auto-scale) when needed, the CPU utilization rate is theoretically 100%, or realistically at least in the 90s (see the cost sketch after this list). Major savings.
3. Short-term Web Site – For example, a marketing professional may create a dedicated web site for a product. If that web site is mentioned in a commercial during the Super Bowl, with 100M+ viewers, there’s a good chance the site will get hammered, potentially with hundreds or thousands (or more) of unique hits within a few minutes, potentially requiring hundreds or thousands of servers. A few days after the Super Bowl, the marketing web site may need only two servers for the rest of the year. Again, with pay-as-you-go pricing and auto-scaling, the cost savings compared with traditionally purchasing all the equipment up front are through the roof.
4. Test & Dev – Cloud computing is also cost effective for test and development environments that may not need to be running 24/7. Again, pay only for the time the system is running.
IaaS Cloud Computing will not always reduce costs
It’s important to call out that IaaS cloud computing may not be cost effective for large, steady-state business workloads, and in many cases may even be more expensive. I predict, however, that this will change as automation capabilities improve and enable IT operations teams to work more efficiently. Today’s automation technology is still lacking and not yet mature enough to provide real steady-state savings. Examples of automation capabilities include automated patches, backups, database replication (e.g., Amazon AWS RDS), and the ability to quickly deploy and configure a complex, integrated environment of web, application, data, and network components in an automated fashion. Again, the tools exist, but in my opinion they are still several years from mainstream adoption. Put another way:
Sufficiently mature and integrated automation capabilities will be the tipping point for mainstream enterprise adoption of IaaS Cloud Computing. We are still several years away from this reality. Do you agree or disagree? Your thoughts are welcome.