Monday, June 29, 2009

Financial Order

An interesting and, in my opinion, welcome trend taking place nowadays is the financial accountability that IT departments are being held to. In some cases, companies are choosing to subcontract work to outside IT firms that deliver results without going over budget, rather than rely on their own IT departments that are constantly in arrears and perpetually asking for more funds.


The old paradigm was that IT would ask the board of directors for as much money as it could get and then try to deliver the promised work with the given funds. If the money ran out, it would go back to the board for more. Needless to say, this was not a solution that was going to work in the long term, and IT departments are now reaping what they have sown: being pushed aside as the work is given to those who can deliver as promised.


The long and short of it is that IT, even if it only services internal customers within the organization, must now handle its finances adequately. While it need not delve into the complexities that the organization's accounting and finance department deals with, such as 401(k)s and IRA accounts, certain basic bookkeeping and accounting is now mandatory for IT departments. Essentially, an IT department must now consider itself a separate company within its parent company, at least as far as its accounting and finances are concerned.


The solution is the application of adequate financial management within the IT department. The ITIL body of knowledge provides a framework for the application of financial management within the IT department as part of the larger framework of the overall financial department of the organization. Financial management essentially consists of three areas:


  • Budgeting

  • Accounting

  • Charging


Budgeting consists of predicting how the IT department will spend money during a budget period (usually a quarter or a year). This involves creating spending plans and estimating the work to be performed and the cost of performing it. Monitoring actual expenditure against the estimated budget and making corrections to the budget as needed is also part of the budgeting process.


Accounting consists of providing detailed information on the expenditure incurred by the IT department on a day-to-day basis, comparing the actual expenditure to the budgeted expenditure and taking corrective action as necessary.


Charging consists of recovering the cost of IT expenditure in a simple, fair and affordable way. Depending on the policy set by the organization, the IT department may choose to charge, or not to charge and simply provide budgeting and accounting information.
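To make the budgeting and accounting loop concrete, here is a minimal sketch of comparing actual expenditure against the plan and flagging overspend. The categories, figures and threshold are hypothetical, not taken from any real IT budget:

```python
# Minimal budget-vs-actual comparison; all categories and figures are hypothetical.
budget = {"hardware": 50000, "software": 30000, "staff": 120000}
actuals = {"hardware": 62000, "software": 28000, "staff": 121000}

def variance_report(budget, actuals, threshold=0.05):
    """Return the categories whose overspend exceeds the threshold fraction."""
    flagged = {}
    for category, planned in budget.items():
        spent = actuals.get(category, 0)
        variance = (spent - planned) / planned
        if variance > threshold:
            flagged[category] = variance
    return flagged

print(variance_report(budget, actuals))  # hardware is 24% over budget
```

A report like this is the raw material for the corrective action mentioned above: the earlier in the period a 24% overrun on a category is flagged, the cheaper it is to correct.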


When Financial Management is successfully applied, the organization enjoys:


  • Superior financial compliance and control

  • Enhanced decision making thanks to the availability of financial details

  • Superior portfolio management due to understanding of costs and benefits and the ability to accurately compute the ROI

  • Better operational visibility and control

  • Improved value capture and creation


The serious application of financial management is now a necessity in every IT department, and seeking a bailout from the board is fast disappearing as an option. IT must now consider itself accountable for providing services as well as for handling its finances adequately.

Monday, June 22, 2009

Support Amnesia

The product or service lifecycle can usually be broken into four parts: the strategy for providing the service, the design and development of the service, the transition of the service to the live environment and the support of the service in the live environment. Very often, the estimate of the cost and effort required during the support stage is unrealistic and falls below the actual numbers experienced. Why is this?


Organizations too often have an inflated sense of their ability to create a product or service that falls within the customer's requirements of quality and functionality. They therefore assume that once the product or service is released to the live environment, there is no further work involved beyond taking a few help desk calls from "non-techie" customers who don't understand the "jargon" of the manual. The reality, of course, is that customers will demand several changes, which in turn will require corresponding rework, retest and release cycles. This requires time and resources that were not planned for, which throws the schedule and budget into disarray. A great deal of trouble and anguish might have been averted if only the effort required during support had been accurately forecast.


Ideally, the product or service should match the needs of the customer so well that very little effort should be required during the support stage. Indeed, the less effort required during support, the more successful the organization has been in delivering the product to the customer. However, as organizations are generally not at this level of delivery competence, they should realistically factor in the effort that will be required during the support stage. At the same time they should also continuously improve their strategy, design and transition stages so that less support effort will be required in the future and the maturity of the organization increases with time.


Having stated the ideal case, the reality is that most organizations do not adequately evaluate the effort required during the support stage. This is caused, in my opinion, by the following factors:


  • An inability to face up to the unpleasant news of poor delivery and the subsequent requirement for further resources. Management here can play a crucial role by not shooting the messenger and dealing with the bad news in a mature and capable fashion.

  • A lack of organizational maturity and of metrics to adequately understand the true state of the product. A collective effort by the entire organization will have to be made to resolve this.

  • Unrealistic demands by management on the system. Management should not allow themselves to be seduced by market conditions and demand more than is possible from the capabilities of the organization. It is really their job to have properly forecasted market conditions and planned their strategy in advance with respect to the capabilities of the organization.

  • A poor understanding of the support stage itself by the stakeholders will naturally result in the inability to accurately forecast the time and resources required to implement support adequately.


When cost and schedule are thrown into disarray by the emergency allocation of resources to support, the areas from which these resources were drawn will also suffer. The breakdown of one project tends to have a "ripple" effect that affects the entire organization's activities to varying degrees.


Support is easily the most visible stage because this is where the customer actually makes contact with the organization, typically with an issue or problem needing resolution. Of course, in the previous stages customer representatives were in communication with the organization in order to provide requirements and evaluate and approve progress. However, this is where the customers themselves, and not just their representatives, actually connect with the organization for problem resolution. This is where the organization can really impress the customer, despite the fact that the customer may have a problem or be in a state of dissatisfaction. Despite all efforts, there will be problems and issues that customers experience. It is the way the organization handles the problem and the unsatisfied customer that will determine whether the customer takes their business elsewhere or not.


Therefore, it is evident that support must be planned for with an honest look at the organization's capabilities, and adequate time and resources should be allocated to provide proper support. Failure to do this will only cause needless chaos and instability, with disgruntled customers taking their business elsewhere.

Monday, June 15, 2009

Too Much Quality?

Quality is one of those concepts, like money, enlightenment or status, that everybody wants but never can quite get enough of. What is insidiously dangerous about organizations' and people's desire for quality is that they usually do not realize that it is simply a measure of the customer's expectation and must be delivered at that level: no more and no less.


Does that mean providing a lower level of quality because the customer wants it even if the organization is capable of producing a higher level? Yes! That is exactly what I am asserting. To make sense of this seemingly contradictory statement, let us first understand quality and its implied characteristics.


The definition of quality as per ISO is "the totality of features and characteristics of a product or service that bears its ability to satisfy stated or implied needs." This includes the characteristics of availability, reliability, maintainability, serviceability, performance and security as defined by the customer in their requirements.


Availability may be thought of as the ability of the product or service to perform its agreed function when required and is usually expressed as a percentage.


Reliability is the measure of how long a product or service can perform its agreed function before interruption. It is usually expressed in units of time.


Maintainability is a measure of how quickly and effectively a product or service can be restored to normal working after a failure. It is usually expressed in units of time.


Serviceability is the degree to which the servicing of an item can be accomplished with the given resources and timeframes. This is usually performed by a third party supplier and is measured in units of time.


Performance and security are specific to the customer requirements of what task the product or service should perform and what levels of security are necessary. These characteristics vary widely from product to product.
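The first three characteristics can be computed directly from outage records. A minimal sketch, assuming a hypothetical list of outage start/end times (in hours from the start of the service period):

```python
# Compute availability, MTBF and MTTR from outage records.
# The outage timestamps (hours from period start) are hypothetical.
period_hours = 720.0  # a 30-day month
outages = [(100.0, 102.0), (400.0, 400.5), (650.0, 651.5)]  # (start, end) pairs

downtime = sum(end - start for start, end in outages)
uptime = period_hours - downtime

availability = uptime / period_hours * 100  # a percentage, as noted above
mtbf = uptime / len(outages)                # mean time between failures (hours)
mttr = downtime / len(outages)              # mean time to restore (hours)

print(f"availability={availability:.2f}% mtbf={mtbf:.1f}h mttr={mttr:.2f}h")
```

With these assumed figures the service was available 99.44% of the time, ran roughly 239 hours between failures, and took about 1.3 hours on average to restore.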


So it may be stated that the sum of these characteristics, at the levels specified by the customer, makes up the overall characteristic of quality, which must then be delivered to the customer by IT. But the question under discussion is: should an organization provide a higher than requested level of service to the customer if it is capable of doing so? While this was an accepted practice in the past, in today's environment it is not recommended. But why not? Let us consider an example.


An internet service provider delivers two levels of internet service: one at 250 Kbps and the other at 500 Kbps, priced at $30 per month and $40 per month respectively. Now it might be logistically attractive for the company to provide 500 Kbps to everybody and simply charge $30 to those who signed up for the 250 Kbps service. However, once the $30-per-month customers get used to the higher level of service, they will complain of service degradation whenever the speed falls below the 500 Kbps they have become accustomed to. Even though the company is technically not failing to provide the agreed level of service, as the speed is still above 250 Kbps, the customers will in all likelihood switch to a competitor, even one providing only 250 Kbps. Furthermore, the company shortchanges itself, because it could enjoy lower operating costs (and therefore higher profits) if it provided 250 Kbps service to the $30 customers and held them at that level.
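A back-of-the-envelope calculation makes the point. The per-Kbps operating cost below is a purely hypothetical figure invented for illustration; only the prices and speeds come from the example:

```python
# Illustrative margin comparison for the ISP example.
# The operating cost per Kbps is a hypothetical, assumed figure.
COST_PER_KBPS = 0.05  # dollars per Kbps per month (assumed)

def monthly_margin(price, speed_kbps):
    """Monthly margin on one subscriber at a given delivered speed."""
    return price - speed_kbps * COST_PER_KBPS

# Serving the $30 customer at the agreed 250 Kbps vs. over-delivering 500 Kbps:
margin_agreed = monthly_margin(30, 250)
margin_overdelivered = monthly_margin(30, 500)
print(margin_agreed, margin_overdelivered)
```

At these assumed figures, over-delivering to the $30 customers cuts the margin from $17.50 to $5.00 per subscriber per month, before counting any churn from perceived degradation later on.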


Organizations where failure has a high impact, such as hospitals, the military or NASA, may choose to pursue higher-than-promised quality levels simply as a buffer against the catastrophic cost of failure. But this is a pre-planned, thought-out action, not a blind surge towards more quality whether or not the customer wants it and is willing to pay for it.


By and large, though, quality is not a holy grail to be pursued blindly to perfection. It is simply a metric that should be analyzed for customer demands, cost-effectiveness and return on investment, and set to levels that make sense. The quality levels should then be achieved and delivered to the customer at exactly the stated amounts. Any other course of action will reduce the organization's competitiveness.

Monday, June 8, 2009

Testing Tribulations

Testing is an essential component of the system development lifecycle. However, typically too much emphasis is placed on it to make up for earlier inefficiencies. I deliberated on this topic two weeks ago in the “Building Quality” post where I recommended that quality has to be built into a product or service by performing each step in the lifecycle correctly with frequent checks and balances. That being said, testing is an important part of the overall lifecycle and must be given the importance it deserves.


My goal in this post is to highlight some of the incorrect approaches to testing that I have observed during my travels. I do not propose to go into the details of proper testing methodologies here, as that is beyond the scope of a blog post and moreover is readily available on the internet.


As mentioned, one typical incorrect approach is to rely on testing to magically rectify poor requirements and design. Another common mistake is to produce inadequate documentation and information during the earlier stages, making it a challenge for testing staff to fully understand the scope of the testing required. This results in poorly prepared test plans and test cases, which lead to incomplete and inadequate testing that ultimately allows defects to reach production.


Even if an organization chooses not to incorporate frequent checks throughout its lifecycle, at the very least, care must be taken to ensure that the testing team is given the information required to complete its task successfully. This means involving representatives of the testing team in the discussions at the requirements stage. Test personnel should also review requirements documents and provide feedback from a testing perspective. Design and development should likewise involve the test team in the creation of the functional specifications and obtain review and feedback from them. Testers should be shown mockups and prototypes as well, in order to adequately prepare for the testing to come. In this way, when the time for testing actually arrives, the test team will have a deep understanding of the product to be tested and well-thought-out test plans and test cases already prepared. This, in turn, will result in thorough and detailed testing that effectively detects defects in the product.


The second misstep commonly made is the inadequate allocation of resources to the test team. I have personally worked at a company that had 3 testers when it really needed 20. As might be imagined, not only was the testing inadequate, it was an extremely stressful and frustrating time for the testers, most of whom left the company. Management often does not have an accurate idea of how time-consuming writing and executing a test case can be. Management also often insists on operating with the best-case scenario forecast, as if simply wishing for the best will make the best actually happen. Adequate resources must be planned for right at the beginning, in the project charter, and made available at the correct times for testing to be performed appropriately.


The third mistake frequently made is that testing is often the first candidate for outsourcing. A very important rule of thumb for outsourcing that I advocate is that an organization should never outsource a function that is not under a high level of control and maturity at its home location. If you don't have it under control at your own place, how on earth are you going to make a success of it when it is on the other side of the planet? Things will only go from bad to worse. Yet management, taken in by superficial analyses of reduced costs, moves testing to outsourced locations far from home, which then results in more problems.


The fourth mistake typically made is excessive trust in automation and other "tools" to bring about magical savings in cost and time. I covered this issue in my previous post, "Automation Angst," which explains that automation can easily result in increased time and cost if improperly implemented. Care must be taken to set up automation only for those situations that will provide a positive return on investment.


With these mistakes being made, testing usually (and naturally) performs inadequately, after which it comes under fire. I have always been fascinated by how the blame falls on the testers while the business analysts and programmers who are also responsible (if not more so, as they introduced the defects in the first place) get away scot-free. Perhaps readers might comment on this observation? This pressure results in test personnel leaving the organization, which only worsens the situation, as new personnel must then be hired and brought up to speed.


Truly, realistic expectations must be set for testing, and it must be supported in a mature and educated way by management, as opposed to the typical "test us out of trouble" mentality. It is in the organization's best interest to make the right choices, as the competitive nature of today's world will not allow for this kind of immaturity.

Monday, June 1, 2009

Automation Angst

Automation.


The mere mention of the word evokes feelings of magical fulfillment in the uninitiated. A promise of utopia is invoked, where machines perform the work, leading to reduced operating costs and improved profits. Unfortunately, the reality is that automation is a quagmire of decisions and implementation choices that, if undertaken incorrectly, can result in wasted resources and delayed, defect-ridden products and services. I myself have learnt this the hard way, during an automated testing implementation project I was involved in.


The definition of automation is "the use of control systems and IT applications to control machinery and processes, resulting in a reduced need for human intervention". Put another way, automation consists of programming applications to perform tasks that humans would otherwise have to do. So far, so good; even the ITIL body of knowledge devotes a section to tools and technology, emphasizing the benefits of automation. However, while manufacturing industries have employed automation quite successfully, implementing automation in the world of IT involves added challenges and obstacles.


Let us first consider the areas that could potentially be automated, if not fully then at least partially. A sample list of areas that could be set up for automation follows:


  • Testing: Testing, and particularly software testing, has been a candidate for automation for a long time now, and many organizations implement automated testing to some extent. Tools like Mercury Interactive's WinRunner and QuickTest Professional (QTP) are quite common in the industry nowadays.

  • Service Desk and Incident Management: While at some point in the incident handling process, human interaction is inevitable, a large portion of the process can be automated successfully, particularly for simple queries (like password resets). Automated phone menus that answer frequently asked questions, interactive web sites that answer customer queries etc. are examples of this concept.

  • Design: Again, while human involvement is inevitable, a lot of work can be automated utilizing design tools. The ability to perform modeling and forecasting during design, utilizing tools, is extremely beneficial.

  • Process automation: A variety of tools and applications exist that assist with the performance of various processes and procedures. Hewlett Packard and Computer Associates to name a few offer tools that are ITIL aligned.

  • Availability Monitoring: If an organization provides a network service, it will also have to monitor the network for disruptions. The ability to automate this monitoring capability is very helpful in providing good reliability. Other areas like security, capacity and continuity can also benefit from automated monitoring.
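The last bullet can be sketched as a simple polling loop. The probe below is a stub standing in for a real check (a real probe might, for instance, attempt a TCP connection to the monitored service):

```python
# A minimal availability-monitoring sketch: poll a service and record results.
# The probe here is a hypothetical stub; a real one would contact the service.
def monitor(probe, checks):
    """Run the probe `checks` times and return the observed availability (%)."""
    successes = sum(1 for _ in range(checks) if probe())
    return successes / checks * 100

# Stub simulating a service that fails every fifth check.
state = {"n": 0}
def flaky_probe():
    state["n"] += 1
    return state["n"] % 5 != 0

print(monitor(flaky_probe, 100))  # 80.0
```

In practice the loop would run continuously with a sleep between checks, and each failure would raise an incident rather than just lower a percentage.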


While this list is by no means comprehensive, it gives us an idea of the potential for automation that exists in IT. Now let us consider the potential benefits of automation listed below:

  • Reduced cost due to fewer requirements for human resources and reduced operating expenses.

  • Higher quality products and services (if implemented correctly).

  • Better repeatability, as machines show fewer variations than humans. This in turn leads to reduced defects.

  • Faster production as machines are generally faster than humans. This results in quicker delivery times.

  • Ability to operate at all times as machines can be run at night and on holidays.


Attracted by these potential benefits, most organizations take a stab at implementing automation in the workplace. However, there are pitfalls that most often go unconsidered. The potential dangers of automating are:

  • Unrealistic expectations of automation. For example, while some parts of testing can be automated, a machine will only report what it has been programmed to check. Any defects outside of its programming will go undetected. Therefore, the correct balance of automated and manual testing must be struck.

  • Inadequate understanding of all the costs and resources involved in implementing automation. Often, only the cost of the automated tool is taken into consideration. The cost of setting up the automated system, and of the constant adjustments that must be made as the product or service changes, is generally not taken into account, resulting in budget overruns.

  • Incorrect identification of areas that would benefit from automation. A product or service that is in a constant state of change is not a good candidate for automation as the effort expended in constantly updating the automated tool to accommodate the changes will nullify the benefits. An area that consists largely of repetitive actions is a better candidate for automation than an area with complex functionality that does not have repetitive tasks.
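The cost concern in the second bullet lends itself to a simple break-even calculation, sketched below with purely hypothetical figures:

```python
import math

# Break-even point for test automation: after how many regression runs does
# automation become cheaper than manual testing? All figures are hypothetical.
SETUP_COST = 8000.0        # tool licence plus initial scripting (dollars)
MAINTENANCE_PER_RUN = 15.0  # script upkeep per regression cycle
MANUAL_COST_PER_RUN = 400.0  # tester hours per manual regression cycle

def breakeven_runs(setup, maintenance, manual):
    """Smallest number of runs at which automation is cheaper, or None if never."""
    if manual <= maintenance:
        return None  # upkeep eats the entire saving; automation never pays off
    return math.ceil(setup / (manual - maintenance))

print(breakeven_runs(SETUP_COST, MAINTENANCE_PER_RUN, MANUAL_COST_PER_RUN))  # 21
```

Note how the third bullet shows up in the arithmetic: if the product changes so often that maintenance per run approaches the manual cost, the break-even point recedes towards infinity.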


Therefore, it is clear that implementing automation deserves a great deal of analysis and consideration. Generally, a combination of manual and automated work is necessary, and the correct ratio of the two must be properly understood. Furthermore, the pros and cons of implementation should be clearly analyzed and understood, and the right areas for automation chosen to provide maximum bang for the buck. Automation should not be viewed as a one-time implementation but as a repeated iteration of phased implementations, each providing more automation coverage than the last.


To summarize, intense competition and evolving technology necessitate the use of tools and applications to take the burden of work away from humans. However, a well-thought-out plan to implement automation must be undertaken; otherwise, the consequences are unforgiving.