Tuesday, December 29, 2009

Giants of Quality

The area of quality (for both IT and non-IT) has had a few champions who completely redefined the epistemology of quality and the means of achieving it. I speak of W. Edwards Deming, Philip B. Crosby and Joseph M. Juran. It is especially impressive to me, as a quality “evangelist” myself, that they achieved what they did at a time when quality was not as well understood or as significant as it is today. Championing quality today is a task I find extremely challenging, and a hard sell once top management’s initial enthusiasm fades. So the mind boggles at the difficulties these champions must have faced and overcome half a century ago. This week’s post is a dedication to their tenacity and passion for quality.


While Deming performed significant work for the World War II effort that resulted in improved statistical process control techniques, his true success came in Japan. The US experienced great demand for its manufactured goods across the world, and quality was sacrificed for mass production. The Japanese, however, understood the importance of quality and made the sacrifices necessary to achieve world-class results. This, of course, resulted in the Japanese overtaking the US in terms of the desirability of their manufactured products and transformed a small island nation into a major world power and economic giant. Deming’s work in quality improvement was so effective that he was awarded Japan’s Order of the Sacred Treasure. He then returned to the US to teach, author and consult. His years in Japan, however, remained the most effective in terms of the adoption and utilization of his techniques.


Crosby was famous for his zero defects philosophy and his belief that “quality is free”, authoring a book with that very title. He also championed the concept of “doing it right the first time”. Through his lectures and seminars, he contributed greatly to quality as a practice.


Juran pushed for the education and training of personnel and introduced the trilogy of three managerial processes: quality planning, quality control and quality improvement. Like the others, he authored, lectured and consulted about quality.


In writing about these giants of quality, I feel that we can learn a great deal from their lives, their work and the contribution they made to the world. It must have been difficult, but they persevered and won. How many like them exist today?

Monday, December 21, 2009

A Plethora of Sourcing

Procuring resources and capabilities for the tasks that need to be completed has been an important part of IT management since the introduction of IT into the business model. In the past, however, procuring talent meant visiting university campuses for entry-level positions and posting advertisements in newspapers and on job websites for more experienced candidates. With the advent of greater complexity and faster-changing technology and best practices, consultants were brought in to fill the gaps.


However, with significantly cheaper yet good-quality resources available in foreign countries, outsourcing was the buzzword for a while. Now, many types of sourcing possibilities exist and IT executives have a smorgasbord of options to choose from. Some of the lesser known types of sourcing are:


  • Multi or Co-Sourcing: where tasks are performed by both the internal organization and an external provider

  • Knowledge Process Sourcing: a type of sourcing where highly knowledge-intensive work is carried out by highly skilled staff. E.g. SOX auditing may be assigned to a third-party organization that specializes in SOX audits.

  • Global Sourcing: Global sourcing often aims to exploit global efficiencies in the delivery of a product or service which could include low cost skilled labor, low cost raw material and other economic factors like tax breaks and low trade tariffs

  • Strategic Sourcing: which consists of techniques to optimize the procurement of services and overall sourcing strategy of the organization

  • Corporate Sourcing: where divisions of companies coordinate the procurement and distribution of materials, parts, equipment, and supplies for the organization

  • Second-tier Sourcing: a procurement policy that rewards suppliers that achieve, or attempt to achieve, the minority-owned business enterprise (MBE) spending goals of their customer

  • Crowd Sourcing: a technique of assigning a task to a group of people or a community as an open call. Beta testing by PC game companies is an example of this technique, where a group of (typically teenage) game enthusiasts performs testing for a small fee or even for free.

  • Open Sourcing: utilizes previously proprietary software under an open source/free license. This may not always be a good choice but the price is certainly right.


So it emerges that there are quite a few types of sourcing techniques available, and they are no longer the rare occurrences they used to be. The method of choosing which type of sourcing to use remains the same, however. A careful analysis of the needs of the organization, along with consideration of its long-term goals and objectives and an evaluation of the pros and cons of each type of sourcing possible, will result in a mature procurement decision. A key here is to keep an open mind to the sourcing possibilities and to not be guided by one’s own prejudices in the matter.

Monday, December 14, 2009

SaaS: Pros and Cons

Software as a Service (SaaS) is a technique of software deployment whereby a provider licenses an application to customers for use as a service on demand. SaaS vendors may host the application on their own web servers or download the application to the consumer device, disabling it after use or after the on-demand contract expires. The advantage of this is the transfer of risks and responsibilities from the customer to the SaaS provider. There is also a potential cost benefit for the customer, as the “on demand” aspect of billing only charges the customer when the application is actually utilized. This also reduces the customer’s administrative burden of maintaining and tracking licenses across the organization. Furthermore, cost savings may be realized due to a multitenancy approach to the architecture of the application and its data handling. While this entails a greater initial development effort for the provider, economies of scale are achieved by requiring only one instance of the application to service multiple customers.
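
To make the multitenancy and on-demand billing ideas a little more concrete, here is a minimal, purely illustrative Python sketch. The class, the flat hourly rate and the tenant names are my own assumptions for illustration, not any vendor’s actual API: a single shared application instance meters usage per tenant and bills only for what was consumed.

```python
from collections import defaultdict

class SaaSInstance:
    """One shared application instance serving several tenants (multitenancy)."""

    def __init__(self, rate_per_hour=2.50):
        self.rate_per_hour = rate_per_hour      # assumed flat usage rate
        self.usage_hours = defaultdict(float)   # tenant -> hours consumed
        self.tenant_data = defaultdict(dict)    # logically separated tenant data

    def record_usage(self, tenant, hours):
        """Meter a tenant's use of the shared instance."""
        self.usage_hours[tenant] += hours

    def monthly_invoice(self, tenant):
        """Bill only for what was used; no per-seat licenses to track."""
        return round(self.usage_hours[tenant] * self.rate_per_hour, 2)

app = SaaSInstance()
app.record_usage("acme-corp", 120)
app.record_usage("globex", 40)
print(app.monthly_invoice("acme-corp"))  # 300.0
print(app.monthly_invoice("globex"))     # 100.0
```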


So to itemize the benefits that SaaS offers:


  • Cost: SaaS delivers applications at a lower cost than delivering them in-house.

  • Risks and responsibilities transferred: The risks and ownership of resources and capabilities required to deliver the applications are transferred from the customer to the provider. This is typically very attractive to smaller companies.

  • Efficient resource utilization: Freed from delivering the technology itself, IT staff can spend their time on the issues that most urgently impact the organization and the business.

  • Flexibility: The SaaS provider will typically offer flexible contracts and charging models. The customer will also be able to easily and with minimal risk try the service before committing to a contract. The ability to switch between providers is easier than with traditional outsourcing.


However, the following cons also exist:

  • Limited customization: As the SaaS provider caters to multiple organizations, they may not be capable of customizing the application for each individual organization to the extent required by them.

  • Scalability: SaaS currently scales well for smaller organizations, but larger deployments remain a challenge; this is expected to improve as the technology evolves.

  • Reliance on another party: while the transfer of responsibility is listed above as a benefit, it is also a risk, since all control of the application rests with the provider; if the provider fails, the customer organization suffers.


While SaaS is not a magic solution, in my opinion it does offer real benefits for organizations whose conditions and requirements match what SaaS has to offer.


SaaS does not replace in-house IT; however, research indicates that it could well represent 25% of the software market by 2010. Therefore, SaaS should be kept in mind as an alternative should the situation and conditions merit it.

Monday, December 7, 2009

How Much is Enough?

This week’s post was sparked by a phone conversation I had with a friend who is now an IT QA Manager at a company in Los Angeles (to go unnamed). What struck me was his comment on how there was a lot of chaos at the company due to a rapid rise in new business that was not matched by a proportional rise in IT resources and capabilities. When I commented that it sounded like poor management to me, he countered by claiming that IT management was doing well to manage the situation. But to me, the balancing act of taking on new business in proportion to the resources and capabilities available falls under the domain of management as well.


Which brings us to the question of when to say “no” to the customer. Or, to handle it another way, the company could raise prices high enough that demand falls to levels the organization can serve at adequate quality and without putting undue stress on staff. I suppose marketing purists might insist that any and all new orders must be taken on at all costs or there will be irrecoverable market share damage. However, I would counter that taking on new business to the point that quality levels drop and disruptions and defects are common is no way of maintaining market share either. In any case, this particular company (where my friend works) has obviously chosen the take-all-customers-at-any-cost approach. My personal experience in my own career has been that most companies tend to make this choice. But is this wise?


Now, there are no obvious answers here, and a lot depends on various factors such as the economy and the goals and objectives of the organization (long term and short term). However, in my experience, the results have always been negative in the long term when an organization has adopted the approach of taking on all orders, and actively seeking out more, even while the rest of the organization is struggling to keep up with demand. This is especially puzzling when we consider how easy it is to manage demand by simply charging more and allowing market forces to balance things out without hurting customers’ feelings. When the organization has upgraded its capabilities and capacity, it can always lower prices to re-stimulate demand for its services.


To me, it seems that the goal of meeting large quarterly targets is driven by a desire to rake in bonuses and stock price returns at the cost of the company’s long-term success. In other words, it is a case of greed. However, contrary to its depiction in fiction, greed is not good. How much is enough, Mr. Gekko?

Monday, November 30, 2009

The Art of Release

As customers expect modifications to services to be made more and more quickly, the ability to actually make these modifications successfully becomes more and more crucial. A lot of processes and capabilities need to be in place for this to happen, but it is in Release Management that the actual update to the live environment happens. Therefore, Release Management is a member of that special clique of processes that has direct contact with the customer.


Release Management is thought of in many organizations as simply scheduling and making the update in the live environment. However, this is a department-oriented organization’s view of the process. In a process-oriented organization, the Release Management process covers the tasks of building, testing and releasing to the live environment. These tasks are carried out using resources and staff from functions (departments) like Development, QA etc. Release Management interfaces significantly with the Change and Configuration Management processes in order to communicate change information back and forth as needed.

The Release Management process also takes ownership of a central storage location for the master software and hardware spares, formally known as the Definitive Software Library (DSL) and the Definitive Hardware Store. The DSL need not be a physical location; it could be a database where final builds are stored. It should not be confused with a day-to-day version control tool. The DSL is an important way of ensuring that the latest authorized builds are kept separate and that there is no confusion during release implementation. Licenses are also stored in the DSL, making it a useful tool for maintaining legal compliance and for identifying and locating unused licenses, which are pure waste for the organization.
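
As a purely hypothetical sketch of what a DSL entry might capture (the field names are my own, not a prescribed ITIL schema), consider:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DSLRecord:
    """One authorized build held in the Definitive Software Library."""
    application: str                 # e.g. "Billing Service"
    version: str                     # final, authorized build only - not daily commits
    build_checksum: str              # verifies a release uses the master copy
    release_date: date
    licenses_total: int              # supports license compliance reporting
    licenses_in_use: int = 0
    related_changes: list = field(default_factory=list)  # links to Change records

    def unused_licenses(self) -> int:
        """Unused licenses are pure waste; the DSL makes them visible."""
        return self.licenses_total - self.licenses_in_use

entry = DSLRecord("Billing Service", "2.4.1", "a9f31c", date(2009, 11, 20), 500, 430)
print(entry.unused_licenses())   # 70
```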


Details of the Release Management process are freely available on the net. My goal here is to highlight its usefulness and benefits. The benefits include:


  • fewer disruptions in the live environment due to changes

  • standardization of hardware and software versions

  • better management of risks involved in releases including the implementation of a rollback plan

  • legal compliance with licensing

  • better utilization of licenses


It is, therefore, in the organization’s best interest that Release Management is taken as seriously as possible and steps taken to implement it systematically and rigorously. In today’s competitive world, every little bit makes a difference.

Monday, November 23, 2009

Stress Point Analysis

Stress Point Analysis is a new technique that assists management in understanding the state of an operation: its strengths and weaknesses and where improvement effort should be expended for maximum results and returns. It is a data-driven technique in which most (if not all) members of the organization complete a web-based questionnaire providing their input on the state of the operation. This data is analyzed and the state of health of various stress points in the organization is made available.


Stress Points in this model are barriers to operational excellence. They are:


  • Improvement & Innovation

  • Alignment & Fit

  • Measurement & Control

  • Resource & Demand Management

  • Process Capability

Each of these five areas can be operating at one of the following levels:

  • Outstanding

  • Scope for Development

  • Cause for Concern

  • Stressed


Analysis and evaluation of the stress point areas can give management an idea of where the organization stands and what is required to improve the stressed areas. They can then take steps to reduce problems in those areas so that all five areas operate at a high level.
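
As a rough illustration of how such a data-driven rollup might work (the 1–5 response scale and the level thresholds below are my assumptions, not part of the published technique), the questionnaire scores for each area could be averaged and mapped to one of the four levels:

```python
AREAS = ["Improvement & Innovation", "Alignment & Fit", "Measurement & Control",
         "Resource & Demand Management", "Process Capability"]

def level(avg_score):
    """Map an average questionnaire score (1-5 scale assumed) to a level."""
    if avg_score >= 4.0:
        return "Outstanding"
    if avg_score >= 3.0:
        return "Scope for Development"
    if avg_score >= 2.0:
        return "Cause for Concern"
    return "Stressed"

def assess(responses):
    """responses: {area: [scores from all staff]} -> {area: level}"""
    return {area: level(sum(scores) / len(scores)) for area, scores in responses.items()}

sample = {area: [3, 4, 2, 3] for area in AREAS}
sample["Resource & Demand Management"] = [1, 2, 2, 1]
print(assess(sample))   # flags Resource & Demand Management as "Stressed"
```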


All this is the theory proposed by Stress Point Analysis. In my opinion, it is a useful tool that could be of value to an organization, but it is not a magic solution that will solve every problem. Like all other methodologies, much depends on successful implementation and the day-to-day operation of the system. Still, it is something new that is out there now, and I wanted to bring it to light for you to have a look at and judge its potential value.

Monday, November 16, 2009

The Honesty Policy

Last week’s post on Business Cases sparked some interesting feedback, with one reader asserting that business cases are always written with a bias benefitting the originator of the case, and that the committee in charge of analyzing and approving the case is unable to catch the bias and correct it. This got me thinking about the complex people dynamics present in all work environments, and the even more complex dynamics present in an IT environment (due to the extraordinarily rapid change ever-present in IT). Honesty is vital for any type of improvement to be successful, including IT processes, and I feel this topic deserves a post even if it isn’t “technical”.


I would like to focus on honesty as it pertains to evaluating and stating the state of the organization’s capability and maturity, not on employee theft or feigning-a-sick-day types of dishonesty. Recounting my personal experiences on this issue, I have always suffered whenever I have been honest. No matter how diplomatically, or at the other extreme how bluntly, I stated the truth, it wasn’t what people wanted to hear. But I ask myself: would the same people who hated me for speaking the truth also have liked their doctor to lie to them about the state of their health and sugar-coat their true medical situation? Apparently people do not consider their work, the source of their income, to be as important as their body and health, even though it brings home the paycheck. Yet the principle is the same for medicine and for an organization’s efficiency or process improvement initiative: one must first get an honest and competent diagnosis, after which options can be evaluated and a course of treatment pursued. However, people tend not to welcome an honest approach at the workplace, even though it ultimately affects their ability to earn and provide for themselves and their loved ones.


Partly, it is the ostrich approach: the proverbial ostrich buries its head in the sand when it sees a lion attacking, the logic being that if it cannot see the danger, the danger will pass it by without hurting it. Partly, it is also self-interest, in that new methodologies usher in change that can cause those holding a power base within the organization to lose it and end up less powerful than before. Of course, if these folks would simply keep up with the latest techniques, they would never be threatened. However, they wish to reap the fruit of the work without doing the hard work of keeping up with the latest in their profession. Staying at the cutting edge is hard work that not everyone is willing to perform.


On the other hand, an honest approach is extremely important, even crucial, in today’s competitive world. An organization has to make the correct decisions based on the reality of the situation it is facing. If it doesn’t, it ends up fooling nobody but itself, and misalignment with customers’ needs, defects, rework and other assorted problems will inevitably arise. If the competition is brave enough to face its problems squarely and head-on, then the competition will inevitably end up the winner with superior market share. So obviously an environment of honest evaluation must be fostered and maintained.


How might this be achieved? As always, the foundation remains in the hands of top management. They must lead by example and show the rest of the organization that they stand for honest evaluation and a “don’t shoot the messenger” approach. Educating staff in cutting-edge methodologies and best practices, and implementing them, is also important and sends a positive message across the organization. Moreover, educating the staff results in improved awareness, effectively banishing the fear of the unknown that causes so much staff discontent and resistance. Finally, those who pursue a dishonest approach should be effectively discouraged, deterring further such behavior from others.


In the end, all members of an organization have to record, evaluate, analyze and report in an honest fashion for the organization to remain competitive and profitable. A dishonest approach results in loss and misfortune for the organization and ultimately for the employee. Truly, an organization that takes a dishonest approach is fooling no one but itself.

Monday, November 9, 2009

A Case for the Business Case

Being in business requires making decisions based on which of the available choices makes the most sense and is best aligned with the organization’s goals and objectives. For each decision, various alternatives will typically exist, and different paths or avenues will be available, each with its specific pros and cons. To make sense of this situation and to work out the correct decision requires the implementation of business cases.


A Business Case is a decision-making tool that captures the reasoning behind initiating a project or task and the effect it will ultimately have on profitability. The financial impact of the spending is analyzed, including the rate of return, cash flow, length of the payback period and other financial criteria as appropriate.
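
As a tiny, hedged sketch of two of those criteria, here is how the payback period and net present value might be worked out for a hypothetical project (the cash flows and discount rate are invented for illustration):

```python
def payback_period(investment, annual_cash_flows):
    """Year in which cumulative cash flows recover the initial investment."""
    remaining = investment
    for year, cash in enumerate(annual_cash_flows, start=1):
        remaining -= cash
        if remaining <= 0:
            return year
    return None  # never pays back within the horizon considered

def npv(rate, investment, annual_cash_flows):
    """Net present value of the spending at a given discount rate."""
    return -investment + sum(cash / (1 + rate) ** year
                             for year, cash in enumerate(annual_cash_flows, start=1))

flows = [40_000, 60_000, 80_000]             # assumed three-year returns
print(payback_period(100_000, flows))        # 2 (recovered during year two)
print(round(npv(0.10, 100_000, flows), 2))   # about 46055.6 at a 10% discount rate
```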


Very often, the decision making is performed far too informally, with top management making snap decisions based on their past experiences. While the past experience of senior personnel is a valuable input, a formal business case analysis is a far superior decision-making technique. Such an analysis includes background on the project, expected business benefits, the options considered and the reasons for accepting or rejecting them, expected costs, gap analysis and potential risks. It results in more mature and responsible decisions that are in better alignment with organizational strategy and goals.


The benefits of proper business case analysis and implementation are:


  • Proper investment decisions are made, with fewer budget shortfalls during the course of the project

  • Proper understanding of the scope of the project resulting in adequate resource allocation and schedule expectations which in turn leads to superior project management

  • Correct decisions made on whether to take on the project or not due to a good understanding of the project requirements and the organization’s capabilities to meet those requirements.

  • Proper prioritization of projects

  • Good understanding of inter-dependencies within projects and the rest of the environment so that fewer unexpected errors occur.


Ultimately, the implementation of business cases should bring about a change in epistemology within the organization, one that causes all personnel (from the lowest to the highest) to think, for each decision, in terms of the benefits to the organization and to compare alternatives based on their alignment with organizational objectives.


It is only when all the members of an organization make decisions (large or small) with a systematic, structured decision-making process that the organization’s decision making will be fully optimized and the organization will reap the benefits presented above. If top management consider themselves above the need to perform business case analysis, the organization will pay the price for their arrogance with problems and issues caused by poorly thought-out decisions.

Tuesday, November 3, 2009

The Design of Design

As the marketplace has transitioned from primarily products to mostly services, the need to design services has emerged as an important area of knowledge and specialization. The design of products is now well understood and established within IT. The design of software utilizing object-oriented principles and methodologies is well known. The design of networks and firewalls is well understood and performed efficiently nowadays, which was not the case, say, a decade ago. However, the design of services is still not approached with the level of understanding and maturity that other areas of IT have achieved.


This state of affairs is understandable, as the concept of services within IT still elicits a great deal of confusion. To clarify, services differ from products in that, while both satisfy customers’ needs, in the case of a service the customer does not take ownership of the resources and risks associated with providing the service. Furthermore, a service generally provides the customer with a complete experience, as opposed to the solitary experience of purchasing and using a product. Therefore, the design of a service involves certain special considerations, listed below:


  • Services must be designed to satisfy business objectives, based on the quality, compliance, risk and security requirements

  • Services must be designed so that they can be easily and efficiently developed and enhanced within appropriate timescales and costs

  • Identification and management of risks so that they can be removed or mitigated before services go live

  • The design of secure and resilient IT infrastructures, environments, applications and data/information resources and capability that meet the current and future needs of the business and customers

  • The design of measurement methods and metrics for assessing the effectiveness and efficiency of the design processes and their deliverables

  • The production and maintenance of IT plans, processes, policies, architectures, frameworks and documents for the design of quality IT solutions, to meet current and future agreed business needs

  • Contribute to the improvement of the overall quality of IT service within the imposed design constraints, especially by reducing the need for reworking and enhancing services once they have been implemented in the live environment


To accomplish these objectives, the design of services can be broken down into the following aspects:

  • Service solutions, including all of the functional requirements, resources and capabilities needed and agreed

  • Service Management systems and tools, especially the Service Portfolio for the management and control of services through their lifecycle

  • Technology architectures and management architectures and tools required to provide the services

  • Processes needed to design, transition, operate and improve the services

  • Measurement systems, methods and metrics for the services, the architectures and their constituent components and the processes


These areas of design can be performed by the implementation of Design processes like Availability Management, Capacity Management, Security Management etc. Further information regarding these can readily be obtained online by the interested reader.


Design must evolve from product design to service design as the paradigm shifts from products to services. Attempting to design services with a product design structure in place will result in poorly thought-out services that do not satisfy the customer and that cause defects and incidents in production. Clearly the changes required in today’s IT environment reach deep into the organization’s structure and are not superficial by any means.

Monday, October 26, 2009

ISO Issues

A quick thanks to all who have commented and contributed to the blog site. To clarify some issues that have arisen, it is beyond the scope of this blog to provide detailed educational training. My vision with this is to get folks started off on a particular topic. Those who have expertise in the topic may not learn something new, but could (and should) contribute and add to what is presented by posting comments. On the other hand, those who are new to the topic can gain an introduction by reading the post and then further pursue the topic by obtaining the relevant study material if they are so inclined. With that stated, let’s move on to this week’s topic – ISO.


ISO (the International Organization for Standardization) has existed for a long time (since February 23, 1947, to be exact) and caters to many different industry domains and knowledge areas. Headquartered in Geneva, ISO is a non-governmental organization but is well known all over the world, with significant influence and power. As its name implies, the organization is primarily concerned with setting and maintaining worldwide industrial and commercial standards. ISO provides guidelines for over 17,500 standards. While numerous standards exist that relate to technology, the standards most relevant to this blog are ISO 20000:2005 (IT Service Management) and the ISO 27000 series (Information Security Management).


As a consultant, I am passionately in favor of standards. One of the most frustrating things for me is to spend my time (and therefore the client’s money) trying to understand the way things are set up and the terminology used at each organization I consult for. What is fascinating is that each organization has its own “lingo” and way of defining items and resources. One might expect their processes to differ, but the very language they speak differs as well. This is not just inconvenient for a consultant or new employee; it leads to confusion and problems/defects when interacting with other organizations. In today’s age of inter-dependency and outsourcing, it is important that all organizations speak the same language. Other benefits of implementing standards include compliance with governmental and regulatory requirements and the ability to enter global markets (some foreign countries require ISO certifications as a mandatory qualification for entering their market). Last but not least are the organizational efficiency and quality improvements inherent in improving the organization’s processes.


But for standards to work, they have to be implemented. So, how does one go about implementing an ISO standard? First, the decision must be taken and supported at the top management level and then accepted at the organizational level. I have too often observed the adoption of some standard or methodology by the top brass while the cubicle-level folks are dead-set against it. This almost always leads to the failure of the standard being employed. If not all, then at least a significant majority of the organization’s staff must be in favor of implementing the standard.


Next, adequate resources must be planned for and set aside for the implementation of the standard. Training should be provided to key players in the implementation, and outside consultants brought in as necessary.


If certification is desired, then an independent audit to assess and certify compliance to the standard’s requirements should be obtained.


ISO is a vast organization with a huge body of knowledge, and my attempt to bring some of its IT aspects to light is but a first step in the right direction. Interested readers may pursue the subject in more detail via the numerous resources available online.

Monday, October 19, 2009

Problem Management

In most IT organizations, a systematic process to handle problems does not exist. Rather, the functions of a problem management process are carried out by Project or Program Managers or some sort of committee or advisory board. A well thought-out problem management process is only rarely set up unless the organization is pursuing some sort of ISO 20000 certification program.


Problems are the underlying causes of incidents, incidents being disruptions to the levels of service that customers expect. Problem management aims at resolving incidents and problems caused by end-user errors or IT infrastructure issues, and at preventing the recurrence of such incidents. There are therefore two aspects to problem management: a proactive aspect and a reactive one. In the proactive aspect, services are monitored for possible problems and steps are taken before thresholds are breached. In the reactive aspect, a problem has already occurred and steps must be taken to resolve it. Problem management then works with other processes to resolve the problem in question.


The major sub-processes within Problem Management are listed below; a brief sketch of a problem record moving through them follows the list:


  • Problem and Error Control: To constantly monitor outstanding Problems with regards to their processing status, so that where necessary, corrective measures may be introduced.

  • Problem Identification and Categorization: To record and prioritize the Problem with appropriate diligence, in order to facilitate a swift and effective resolution.

  • Problem Diagnosis and Resolution: To identify the underlying root cause of a Problem and initiate the most appropriate and economical Problem solution. If possible, a temporary Workaround is supplied.

  • Problem Closure and Evaluation: To ensure that - after a successful Problem solution - the record contains a full historical description, and that related Known Error Records are updated.

  • Major Problem Review: To review the resolution of a Problem in order to prevent recurrence and learn any lessons for the future. Furthermore it is to be verified whether the Problems marked as closed have actually been eliminated.

  • Problem Management Reporting: To ensure that the other Service Management processes as well as IT Management are informed of outstanding Problems, their processing-status and existing Workarounds.
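
Here is the promised sketch of a problem record moving through these sub-processes. The statuses and field names are my own, purely for illustration, and not a prescribed ITIL data model.

```python
class ProblemRecord:
    """A problem record moving through the sub-processes listed above."""

    def __init__(self, description, priority):
        self.description = description
        self.priority = priority        # set during identification and categorization
        self.workaround = None
        self.root_cause = None
        self.known_error = False
        self.history = ["Identified"]
        self.status = "Identified"

    def diagnose(self, root_cause, workaround=None):
        self.root_cause = root_cause
        self.workaround = workaround    # temporary relief while the fix is prepared
        self.known_error = True         # becomes a Known Error Record
        self._move("Diagnosed")

    def resolve(self):
        self._move("Resolved")

    def close(self, review_notes=""):
        self.history.append(f"Review: {review_notes}")  # full historical description
        self._move("Closed")

    def _move(self, status):
        self.status = status
        self.history.append(status)

p = ProblemRecord("Nightly batch job overruns its window", priority="High")
p.diagnose("Missing index on orders table", workaround="Run the job in two batches")
p.resolve()
p.close("Verified no recurrence for 30 days")
print(p.status, p.history)
```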


The advantages of Problem Management are:

  • Reduction in service disruptions to the customer

  • Proactive identification and prevention of failures which leads to fewer defects experienced by the customer

  • Quicker resolution of an existing problem

  • Better communication and information management regarding problems and known errors

  • Better problem analysis and understanding of trends that could be utilized in a proactive manner


Therefore, it is clear that Problem Management provides significant benefits to an organization and should be implemented with the seriousness that it deserves.

Tuesday, October 13, 2009

Supply Stability

Supplier management in the past was usually handled by the departmental secretary, who chose which corner shop to buy the paper clips and pads from. Advanced versions of this function also included choosing the best take-out joint for lunch or snacks. Nowadays, however, supplier management is a major process that is becoming ever more crucial to an organization’s ability to function efficiently and remain competitive, due to the increasing complexity of inter-dependencies between organizations.


The products or services being supplied by supplying organizations are numerous and complex. Consulting, material, equipment, information, knowledge and people are a few examples of the resources and capabilities exchanged between organizations. While products need to be monitored for quality, price, delivery punctuality etc., the more intangible resources such as consulting and knowledge require further specialized skills in the management of their suppliers and delivery.

Suppliers can be broken down into the following categories by importance:


  • Strategic Suppliers: Where goods and services are hard to obtain and require adequate stockpiling for safety. The goods and services being supplied are crucial to the operation of the organization.

  • Tactical Suppliers: Less difficulty in obtaining goods and services. The items are not as crucial to the successful workings of the organization.

  • Operational Suppliers: Goods and services are relatively easy to obtain and there are alternatives to choose from. The items are not so crucial to the running of the organization.

  • Commodity Suppliers: Goods and services are easy to obtain and there are many supplying organizations to choose from. The items being supplied are not crucial to the operation of the organization.


Supplier Management is the process that ensures that external services and configuration items, which are necessary for the service delivery, are available as requested and as agreed at the service level. Some of the responsibilities of this process are:

  • To ensure that the supplies are made as per the pre-defined requirements and service levels.

  • To ensure that every supply runs through a set of standardized steps and procedures in order to ensure repeatable and predictable results every time.

  • To manage the risk to normal service operation arising from the lower level of control and accessibility inherent in using external suppliers. This involves the periodic assessment and testing of the supply quality and service levels provided by the supplying organization.

  • To document, analyze and review every supply decision and activity.


The best way to handle all this is to implement a well defined and formal Supplier Management process, complete with a Supplier and Contracts database (a sketch of which follows the list below) and a Supplier Manager. The basic sub-processes within the Supplier Management process are:

  • Supplier Request Recording

  • Supplier Selection

  • Supplier Evaluation

  • Supplier Negotiation

  • Supplier Service Delivery

  • Supplier Renewal/Termination
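
As a hedged sketch of what a record in the Supplier and Contracts database might hold, tying the supplier categories above to periodic evaluation (all field names, thresholds and supplier names here are assumptions for illustration):

```python
from dataclasses import dataclass
from enum import Enum

class SupplierCategory(Enum):
    STRATEGIC = "Strategic"
    TACTICAL = "Tactical"
    OPERATIONAL = "Operational"
    COMMODITY = "Commodity"

@dataclass
class SupplierRecord:
    name: str
    category: SupplierCategory
    contract_id: str
    agreed_service_level: float        # e.g. 0.99 for 99% on-time delivery
    measured_service_level: float = 0.0

    def meets_service_level(self) -> bool:
        """Checked during periodic supplier evaluation reviews."""
        return self.measured_service_level >= self.agreed_service_level

scd = [
    SupplierRecord("Acme Components", SupplierCategory.STRATEGIC, "C-1001", 0.99, 0.97),
    SupplierRecord("Office Basics", SupplierCategory.COMMODITY, "C-2040", 0.95, 0.98),
]
flagged = [s.name for s in scd if not s.meets_service_level()]
print(flagged)   # ['Acme Components'] - a strategic supplier missing its agreed level
```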


The proper execution of these sub-processes will ensure the smooth and efficient functioning of the supply chain. Receiving supplies from another organization is, therefore, an important activity and should be given the importance and respect it deserves through the proper planning and execution of a formal process.

Monday, October 5, 2009

Testing Maturity & Improvement

Testing is an important and significant part of the product or service lifecycle. This is true of any industry, but more so in the case of IT, where the sheer complexity of the trillions of bits and bytes zipping around brings about an incredible number of permutations and combinations of ways things could go wrong. To counter this complexity, a high level of testing maturity is essential within the IT organization.


For overall organizational maturity, organizations can avail themselves of CMMI and its 5 maturity levels. Testing also has 5 levels of maturity within the Testing Maturity Model (TMM), which integrates well with CMMI and other methodologies. Furthermore, there is the Test Process Improvement (TPI) model, which has been developed from practical experience and knowledge of test process development.


TMM was developed in 1996 at the Illinois Institute of Technology and was designed to be a counterpart to the CMMI model. Its 5 maturity levels are similar in definition to CMMI’s levels, which can easily be viewed online. TMM advocates the implementation of various test processes that increase testing maturity within the organization.


Similarly, TPI offers 20 key areas, with increasing levels of implementation for each area. Some of the key areas (not all) include:


  • Test Strategy

  • Moment of Involvement

  • Estimating and Time Planning

  • Metrics

  • Test Tools

  • Evaluation

  • Communication

  • Reporting


As may be deduced, this is a far more structured approach than the old-fashioned “write a few test cases at the last minute and frantically test till midnight” strategy that some organizations are utilizing to this day. Simply preparing for testing by creating test cases and test plans is not enough. It is now imperative to optimize the test processes and to improve continuously in order to hit the right combination of efficiency and quality.


Simply put, organizations should make themselves aware of the latest in testing techniques and methodologies like TMM and TPI and implement the recommended processes before the competition does. Not doing so will only put the organization at an unnecessary disadvantage that is a great handicap in today’s difficult times.

Monday, September 28, 2009

The Rain in Spain

When Professor Higgins attempts to improve Eliza Doolittle’s speech in My Fair Lady, he starts with the basics: practicing speaking with marbles in her mouth, repeating basic sounds and words, the most famous being “the rain in Spain stays mainly in the plain”. The parallel with an organization seeking to improve its processes is clear: the basics must be mastered first before one can be the belle of the embassy ball.


What are some of the basics that an organization can put into place while attempting to improve? Some choices are:


Strategy: Easily the most neglected area in organizations worldwide, and in IT organizations in particular. While full-blown strategy methodologies might be a bit much for an initial improvement effort, fundamental techniques of demand analysis, financial management and portfolio management should be implemented.


Customer Point of Contact for Negotiation: While organizations do have this in place in some fashion, it is rarely enacted formally enough to bring its true value and benefits to the table. ITIL’s Service Level Management process is a well defined methodology for achieving this objective. The ability not merely to interact with and form a point of contact for the customer, but to build a relationship and understand their needs, allows for superior alignment of IT with customer requirements. This effort returns rich rewards and is definitely good value for money.


Change & Configuration Management: Again, implemented by most organizations but not adequately. A good first step for organizations committed to improvement would be to evaluate what they have in place, tighten it up and further align it with what users require. One organization I consulted for had a home-grown Change/Configuration Management tool with fields and options that users did not need or use, while lacking fields and options they did need. Clearly they could have benefitted immensely from a properly thought-out tool that fitted their needs better.


Service Desk, Incident and Problem Management: Another set of processes that most organizations do have in place but that could desperately use an overhaul and update. Common service desk shortcomings are a lack of current information made available to service desk personnel, increasing call volumes, and increasingly frequent and complex changes to the service. Incident and Problem Management also typically suffer from a lack of communication from Change and Configuration Management.


Continuous improvement: While the organizational maturity to reach Six Sigma levels may not exist at present, certain basic improvement techniques can certainly be implemented. A basic technique of Root Cause Analysis (RCA) and resolution, to prevent similar mishaps occurring in the future, is easy and requires minimal investment. Therefore, there is no reason not to implement an RCA system of continuous improvement, no matter how limited the resources available in the organization.


Those not enthusiastic about improvements often argue that times are too challenging or that resources are not available to implement process improvements. However, there are small and simple steps that can be carried out that yield rich returns for the effort expended. It is possible to get started without a great deal of investment and disruption. With the improvement and stability gained from these initial steps, further and more complex process improvement endeavors can then be undertaken. Even if an organization is dedicated to a large-scale process improvement effort, the basics must first be completed successfully. Remember, the rain in Spain stays mainly in the plain.

Monday, September 21, 2009

Continuity

Service continuity is now an expected feature in any organization’s portfolio, whether IT or non-IT. In the past, customers were sympathetic and understanding regarding disaster events that unexpectedly disrupted services. Nowadays, however, organizations are expected to have accounted for and planned for possible disaster events, and to prepare and execute continuity plans in the event of a disaster actually occurring. Finally, after the dust clears, operations should be brought back to a normal state.


IT organizations are expected to manage service continuity and this is generally included in the Service Level Agreements when the services are being negotiated and agreed upon with the customer. An IT Service Continuity Process with a Service Continuity Manager as the process owner should be established to carry out this activity on an ongoing basis. The process should then create a set of IT Service Continuity Plans that support the overall business continuity plans of the organization. The plans should identify possible disaster events and the contingency and continuity activities that should occur if the disaster does strike. Furthermore, the plans should include a description of how a return to normal service operation should occur after the disaster is over and the contingency plan is no longer necessary.


After the creation of the continuity plans, regular Business Impact Analysis (BIA) activities should be carried out to ensure that all the plans are in sync with changes that have been made to the service and organization.


Other activities of the Service Continuity process include assisting Change Management in assessing changes for any possible impact on service continuity, and working with suppliers and the Supplier Management process to ensure that supplies continue to be made during a disaster event.


Of course, during a disaster event the IT Service Continuity process comes to the forefront and initiates the contingency plan in order to continue service delivery to the customer. Service Continuity monitors the situation until the disaster event subsides and then presides over the transition back to normal operations. To conclude, the process records how well the continuity plan performed and makes notes for future improvement.


Disaster recovery and service continuity are no longer a luxury but a necessity in today’s market. Organizations must take service continuity seriously in order to maintain customers in the competitive environment we live in now.

Tuesday, September 15, 2009

Security

In the good old days, security meant a guard with a gun or a well-trained Doberman that refused food from strangers. Now we have hacking, phishing, identity theft, viruses, spyware, adware and a host of other malicious attack techniques. Over and above this, there is an aspect of security that is generally not considered as deeply: the possibility of problems and issues occurring simply due to non-intentional, non-malicious errors. An example might be a bug in the code that leaves sensitive client information viewable by everybody. This isn’t a deliberate move on the programmer’s part but simply an error in the code; the net result, however, is a compromise in the security level of the application.


The solution to security issues is, of course, a well defined and implemented security management process. The cornerstone of the security management process is the overall security policy for the organization. The Service Level Agreements of each service should also include security requirements that can then be individually addressed.


Security activities can be divided into the following steps:


  • Planning

  • Implementing

  • Evaluating

  • Maintaining

  • Reporting

  • Controlling

Security activities can also be broken down into the following types:

  • Preventive – firewalls, login requirements, ID cards etc.

  • Reductive – backups, testing etc.

  • Detective – antivirus and antispyware software, network intrusion monitoring etc.

  • Repressive – blocking login after three failed attempts, retaining a card after a failed PIN entry etc. (see the sketch after this list)

  • Corrective – restoring backups, removing viruses that have entered the system etc.
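
Here is the promised sketch of the repressive control mentioned above: locking an account after three failed login attempts. The threshold, function names and toy credential check are assumptions for illustration only, not any particular product’s behavior.

```python
FAILED_ATTEMPTS = {}
LOCKED_ACCOUNTS = set()
MAX_ATTEMPTS = 3

def attempt_login(user, password, check_credentials):
    if user in LOCKED_ACCOUNTS:
        return "locked"                        # repression: block further attempts
    if check_credentials(user, password):
        FAILED_ATTEMPTS.pop(user, None)        # reset the counter on success
        return "ok"
    FAILED_ATTEMPTS[user] = FAILED_ATTEMPTS.get(user, 0) + 1
    if FAILED_ATTEMPTS[user] >= MAX_ATTEMPTS:
        LOCKED_ACCOUNTS.add(user)              # would also raise a security incident
        return "locked"
    return "failed"

# toy credential check, for the example only
creds = {"alice": "s3cret"}
check = lambda u, p: creds.get(u) == p
for pw in ("a", "b", "c", "s3cret"):
    print(attempt_login("alice", pw, check))   # failed, failed, locked, locked
```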

Therefore, it is clear that a lot of thought and work must be devoted to security in order to maintain the security requirements that are considered part and parcel of any product or service nowadays. Security must be a consideration from the very beginning, when a service is being conceived at the strategy stage, and should be designed into the service. Too often, only superficial security considerations are made at the beginning, which results in inadequate security in the final product. Organizations must now consider security as important and significant as any other aspect of their functioning.

Monday, September 7, 2009

Taking Stock

In my experience, most organizations do not have a good understanding of their capabilities. I do not mean that they have not taken a good inventory of what they possess. Sure, they probably have a list of how many laptops and desktops are scattered around the office and the number of employees pounding the keyboards. They know how many licenses of Windows and Office are out there and the number of desks and chairs. The problem is that they do not have a good understanding of their organization’s capabilities: what the organization can achieve, in how much time, and more importantly what it cannot achieve.


An asset, in the context of IT, is a combination of resources and capabilities. Resources are direct inputs for production; some examples are financial capital, applications, infrastructure and people. Capabilities represent an organization’s capacity and competency for action; some examples are management, knowledge and processes. Generally, organizations maintain a good checklist of their physical resources but have a poor understanding of the less tangible capabilities they possess. This lack of understanding makes management more challenging and, in particular, makes improvements difficult to implement. After all, how can you improve what you don’t understand in the first place?


Improvement is by no means the only aspect that suffers when an organization does not have a good understanding of itself. The ability of IT to align itself with the business and to support business processes also suffers. So do agility and the ability to make quick changes, which are crucial in today’s world. Financial estimating is also highly inaccurate when the capabilities of an organization are not completely understood.


Therefore, it is clear that an organization must understand its capabilities completely and move beyond mere inventory-keeping of its resources. How does an organization go about understanding its capabilities properly? The first step, of course, is to keep good stock of the organization’s resources, as they are the building blocks of capability. A well set-up Configuration Management System is crucial in keeping tabs on these resource items. The Configuration Management System should also maintain the relationships between items, which allow an understanding of how a change in one item will affect another item or a system.
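
As a rough illustration of the kind of relationship data a Configuration Management System can hold, and why it matters, here is a toy dependency graph with a simple impact query. The configuration items and relationships below are invented for the example.

```python
# Each configuration item (CI) lists the CIs it depends on; all names are invented.
DEPENDS_ON = {
    "Online Store": ["Web Server", "Payment Service"],
    "Payment Service": ["Database Cluster"],
    "Web Server": ["Database Cluster"],
    "Reporting": ["Database Cluster"],
}

def impacted_by(changed_ci):
    """Return every CI that directly or indirectly depends on the changed item."""
    impacted = set()
    frontier = {changed_ci}
    while frontier:
        newly = {ci for ci, deps in DEPENDS_ON.items()
                 if set(deps) & frontier and ci not in impacted}
        impacted |= newly
        frontier = newly
    return impacted

print(impacted_by("Database Cluster"))
# {'Payment Service', 'Web Server', 'Reporting', 'Online Store'} (set order may vary)
```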


Next, a reliable process of documentation must be setup and maintained. Arm in arm with the documentation process, a system of collecting and analyzing metrics must be created and maintained as well. Metrics must be carefully collected and archived for future reference.


Finally, a system of modeling should be set up that utilizes all the aforementioned data to provide a realistic estimate of the organization’s capabilities. The modeling should be set up to predict finances and costs, schedules, technical complexities, and project and service deliverables.


With all this in place and the relevant information available, management can make crucial decisions with confidence. Furthermore, the organization will make accurate estimates and will be extremely agile and better aligned to customers’ requirements. Simply by understanding itself.

Monday, August 31, 2009

Booms and Busts

The current financial conditions are unlike any previously seen in the history of the world. A perusal of the last 15 years indicates that cyclical patterns of high growth and frenzied activity alternate with periods of decline and layoffs. This pattern does not appear to be abating, and booms and busts will likely be a way of life for all of us for the next 15 years as well, whether we like it or not. This situation only reinforces the need for both individuals and organizations to position themselves strategically for the turbulent future advancing upon us. While it may not be possible to predict the exact timing and nature of the booms and busts, certain basic steps can be taken to ensure a smoother ride.


Firstly, individuals should undertake training and certifications in their chosen area of expertise in order to separate themselves from the herd. Continuous learning and self-improvement are no longer the activities of a few “nerds” but a necessary part of survival for everyone nowadays. Individuals must also keep up with industry innovations and stay aware of the latest tools, techniques, methodologies and standards. Those who have kept themselves at the cutting edge will be in a superior position for advancement as companies scramble to make themselves more efficient and competitive.


Which brings us to organizations and what they can do during financial swings from a process standpoint. During boom periods, companies have a tendency to focus entirely on taking advantage of the business available and not to care much about how the growth and the new business are being handled. This translates into skewed and mismanaged growth that is inefficient and costly. Furthermore, the profit generated during good times is rarely saved and kept aside for a rainy day. Companies, like individuals, must save and set aside revenue for use during lean times. Operating only for the quarterly result is not a good long-term strategy, and senior management and the board should understand and support this way of doing business.


During the busts, the companies should then call upon the revenue saved from the good times and instead of laying off people, put them to work in making improvements and efficiencies for the future. A lean period is a good time for a company or organization to become CMMI certified or ISO certified utilizing staff that are freed up due to diminished business. That way, when the good times roll in again, the company is now more efficient and better positioned to take advantage of the new business.


Granted, this is very theoretical and a bit of a Pollyanna, “in a perfect world” perspective, but what are the alternatives? Growing haphazardly and frantically during the boom period and then laying off people and losing market share during the bust? Clearly, both individuals and organizations must plan as intelligently as possible for the cyclical market conditions that are now a way of life. Assuming that things will go smoothly and steadily in the future is hazardous and foolhardy at best.

Monday, August 24, 2009

The Need for Strategy

In a study, it was determined that the area of strategy within IT organizations (and, for that matter, even non-IT organizations) is the most undeveloped and under-utilized, with the greatest scope for improvement and for realizing benefits. I have certainly found this to be true in my own career and dealings with various organizations.


The word strategy instantly brings to mind the concept of long-term planning. A highly reactive response to solving a customer’s immediate problem as quickly as possible is not a strategic activity. Deciding what new products and services to introduce three years down the line, however, is. What I have noticed too often in the past is that organizations get into a constant state of firefighting and reactive problem solving, with the result that adequate strategy is never realized. It is up to management to ensure that sufficient resources are dedicated to strategic activities and kept free of day-to-day firefighting tasks.


Strategy is important because it provides the initial roadmap to the organization’s long-term goals and objectives. A wrong decision taken in the initial plan can have disastrous consequences in the long term. Furthermore, possible risks and downturns need to be evaluated and accounted for in future planning. Over and above all this, the strategy team should evaluate the current products and services, and the customers’ happiness with them, and make course corrections as necessary. It is therefore apparent that the strategy step is crucially important and should not be neglected.


So now that we are convinced of the importance of strategy, how do we go about strategizing? The different areas of strategy, in my opinion, can be broken down into three main components: understanding your organization (which includes current products and services, resources and capabilities etc.), understanding the customer (demand patterns, market conditions etc.) and financial information (including budgeting, accounting and charging). These are found in the ITIL body of knowledge as the Portfolio Management, Demand Management and Financial Management processes within the Service Strategy module.


Therefore, with the information needed to adopt a strategy for IT services readily available, there is really no excuse for implementing poor strategy. All the greatest generals in history considered strategy the most important part of their military campaigns, beyond even the number and strength of their armies and the technological sophistication of their weapons. Indeed, Napoleon Bonaparte won numerous battles simply because of his superior strategic planning. On the battlefield of business, the implementation of correct strategy will ensure economic victory.

Monday, August 17, 2009

IT Information Systems

It is important that IT organizations design and maintain adequate information systems to facilitate the flow of information necessary to achieve their goals. There exist certain guidelines for these information systems within the ITIL body of knowledge.


The overall information database that houses all the others is called the Service Knowledge Management System (SKMS). All the information systems mentioned below as well as any other custom systems are housed within this system. Some of the recommended information systems are:


The Configuration Management System (CMS) which contains the details of the Configuration Items that exist within the organization and their relationships with each other. This system is within the purview of the Configuration Management Process and the Configuration Manager.


The Service Desk System which contains logs of all service requests and customer incidents. This is managed by the Service Desk function and the Service desk manager.


The Capacity Management Information System (CMIS) which contains details of the capacity requirements for the business, service and components. The existing capacity specifications for the systems in place and the Capacity Plan for all the services also exist in the CMIS. This is managed by the Capacity Management process and the Capacity Manager.


The Availability Management Information System (AMIS) which contains details of the availability requirements at both the service and component level. The existing availability specifications and the availability plan for all the services also exist in the AMIS. This system is managed by the Availability Management process and the Availability Manager.


The Security Management Information System (SMIS) which contains the Security Policy for the organization and the various details of the security system in place. This system is managed by the Security Management Process and the Security Manager.


The Supplier and Contracts Database which contains information pertaining to the organization’s suppliers and their contracts. This system is maintained by the Supplier Management Process and the Supplier Manager.


These are the primary information systems that ITIL recommends IT organizations maintain. Of course, there could be other company-specific systems as well that are beneficial if created and maintained by the organization.
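As a rough illustration of how these pieces fit together, the Python sketch below models the SKMS as an umbrella registry over the other information systems. The class and attribute names are my own assumptions; only the system names themselves come from ITIL.

    # Illustrative sketch: the SKMS as a registry that houses the other information systems.
    class InformationSystem:
        def __init__(self, name, owner_process, owner_role):
            self.name = name
            self.owner_process = owner_process
            self.owner_role = owner_role
            self.records = []  # each system's own data would live here

    class ServiceKnowledgeManagementSystem:
        """Houses every other information system, plus any custom ones."""
        def __init__(self):
            self._systems = {}

        def register(self, system):
            self._systems[system.name] = system

        def lookup(self, name):
            return self._systems[name]

    skms = ServiceKnowledgeManagementSystem()
    skms.register(InformationSystem("CMS", "Configuration Management", "Configuration Manager"))
    skms.register(InformationSystem("CMIS", "Capacity Management", "Capacity Manager"))
    skms.register(InformationSystem("AMIS", "Availability Management", "Availability Manager"))
    skms.register(InformationSystem("SMIS", "Security Management", "Security Manager"))
    skms.register(InformationSystem("SCD", "Supplier Management", "Supplier Manager"))
    print(skms.lookup("CMIS").owner_role)  # Capacity Manager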


The inter-communication between these systems is also crucial in making the whole information flow work and must be undertaken with care by the organization. If implemented correctly, these information systems provide a useful framework for communication and document control within the IT organization.

Monday, August 10, 2009

Excess Capacity

The characteristic feature of IT that makes it different from other industries is that there is very little potential for storing or maintaining an inventory of the services being provided. In manufacturing, for example, the manufactured product can be stored in a warehouse and sold later. In IT that is usually not possible. Of course, in the case of a packaged application like, say, Windows Vista, the boxes of Vista could be stored in a warehouse, but due to the short lifespan of software products this can only be done for so long before the application is obsolete and can no longer be sold. Furthermore, as software cannot be recycled (like steel pipes, for example), the stored quantities that aren’t sold are a complete loss. And in the case of non-product services, the resources (people, tools, applications, computers) simply sit idle if not used to full capacity. The loss in this case is instantaneous and unrecoverable.


Now a certain amount of buffer capacity is necessary so that in the event of some problem or spike in customer requirements, things are still under control and manageable. However, a disturbing trend that I have seen very often is that a lot of capacity is kept as a buffer to compensate for poor management of IT. The efficient and consequently competitive and profitable IT organizations manage their capacity so that they are only taking on the amount of resources and capabilities that deliver value and no more. How can this be accomplished?


Fine-tuning resources so that just what is needed is delivered requires a number of factors to be in place. The first and most important is the correct analysis and understanding of customer demand cycles. This is where the demand management process is of great value.
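As a back-of-the-envelope illustration, the Python sketch below contrasts a deliberate, bounded buffer driven by demand data with an arbitrary oversized one. All figures are invented.

    # Illustrative only: capacity to provision = forecast peak demand plus an agreed buffer.
    def required_capacity(peak_demand, buffer_fraction=0.15):
        return peak_demand * (1 + buffer_fraction)

    monthly_peak_transactions = [42_000, 45_500, 51_000, 48_200]  # from demand analysis
    forecast_peak = max(monthly_peak_transactions)

    print(required_capacity(forecast_peak))        # 58650.0 with a deliberate 15% buffer
    print(required_capacity(forecast_peak, 0.50))  # 76500.0 -- the oversized "just in case" buffer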


An ongoing formal relationship with the customer, through Service Level Management, is also crucial in order to establish the correct point of contact, fully understand requirements and implement continuous improvement measures.
Financial management is important in keeping track of expenses against the planned budget. This monetary bookkeeping can also greatly assist in understanding customer demand patterns.


At the center of it all, of course, is the capacity management process which plans for and monitors service capacity. However, this process cannot function adequately without correct inputs from the aforementioned processes and other sources.


It is possible to fine-tune and optimize capacity delivery to the customer, but only after a properly planned effort is made, with the other processes in place to provide the relevant information. Organizations seeking to be competitive must make the effort to optimize their delivery or else they will be overtaken by competitors that do.

Monday, August 3, 2009

Product and Service

An observation from my time as an ITIL teacher is that a common challenge for people learning ITIL is differentiating between a product and a service. Typically these are folks from a software development or QA background who can only see the world through the actions taken to develop the application.


Let us consider a situation where the IT department develops and releases an application to the business. Now it is easy to consider that the application itself is all that is being provided to the customer. However, the application must also typically be maintained and supported for the customer. The factors involved in this are:


  • help desk support

  • incident and problem management

  • regular contact with the customer

  • evaluation and analysis of changes needed by the customer

  • making the changes

  • releasing the changed application to the live environment


Furthermore, proactive monitoring of availability, capacity, security and disaster recovery must also be performed to ensure that the agreed upon uptime of the application is maintained.


All these actions together provide the overall service for the application to the customer. The application as a product itself delivered to the customer is one thing. But the application functioning as it should at the agreed upon levels for the agreed upon period of time is quite another thing.


Therefore, we see the reason for processes like availability management, capacity management etc. Simply having a department of programmers is not enough as IT now requires the ability to handle all aspects of service provision to the customer. IT departments must evaluate the processes that are required for them to provide service to the customer and then set up and manage these processes within their department. Even programmers should at least be aware of the service aspects of the application being programmed by them.

Monday, July 27, 2009

Connecting with the customer

There are typically two situations in which an IT department connects with the customer at a significant level: one when initial requirements are being thrashed out, and the other when there is a problem or issue that needs resolution. Organizations nowadays are relatively mature in their handling of the second situation, where issues and problems are reported and resolved. This has been mostly due to the implementation of incident handling and help desk processes and advances in help desk tools and applications. However, the interaction with customers during requirements is usually not handled very well. Furthermore, a more proactive approach to customer management is largely missing in most organizations.


The ITIL body of knowledge provides the Service Level Management process for exactly this purpose. The SLM process, which exists in the Design stage of the ITIL lifecycle, essentially performs two main functions:


  • To determine the level of IT service needed by the business (customer) and

  • To identify whether the required services are being met or not and if not, why not?


By the successful performance of these two functions, SLM helps to maintain and improve the IT service provided to the business. But more significantly, the SLM process and the Service Level Manager (the SLM process owner) create and maintain a relationship with the customer. It is through relationships that the ability to truly satisfy and even delight the customer can be achieved. The reason is that it is rare for the customer to truly understand what they want in technical terms; therefore, they do not put down in a detailed requirement specification document all aspects of what they want. The IT staff members following the spec document then faithfully produce a product or service that meets specifications but does not truly delight the customer. To truly understand the unstated and unspecified needs and desires of the customer, a relationship must then be established and maintained by the IT organization. That way, the customer can be guided into including, in a spec document, what they really want but are unable to articulate on their own.


The service level manager, therefore, should have both technical skills and relationship skills which include communication and negotiation skills. The service level manager should also be able to act as an emissary for both sides, the customer as well as the IT service provider.


The SLM process consists of the following high-level steps:


  • Cataloguing the services

  • Implementing Service Level Agreements

  • Monitoring and reporting actual service levels

  • Reviewing service performance and adherence to SLAs

  • Implementing service improvements as needed


Of course, each of these steps has several sub-steps of its own, with relevant inputs and outputs. However, it is by following these steps that an IT organization can ensure proper customer service in a proactive fashion.
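As a small illustration of the monitoring and review steps, the Python sketch below compares measured service levels against SLA targets and flags breaches for the improvement step. The metric names and targets are my own assumptions.

    # Illustrative sketch of SLA monitoring and review.
    sla_targets = {"availability_pct": 99.5, "max_resolution_hours": 8.0}
    measured    = {"availability_pct": 99.1, "max_resolution_hours": 6.5}

    def review_service_levels(targets, actuals):
        """Return a list of breached targets needing a service improvement plan."""
        breaches = []
        if actuals["availability_pct"] < targets["availability_pct"]:
            breaches.append("availability below SLA target")
        if actuals["max_resolution_hours"] > targets["max_resolution_hours"]:
            breaches.append("resolution time above SLA target")
        return breaches

    print(review_service_levels(sla_targets, measured))  # ['availability below SLA target']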


The old style of informal and unregulated contact and interaction with the customer is no longer the appropriate method of carrying out business. Every IT department should have a formal point of contact with the customer(s) and a proactive process of ensuring service quality and constant improvement. There is no way of avoiding or bypassing this requirement any longer.

Monday, July 20, 2009

Making it Happen

While it is possible to pontificate and theorize until the end of the millennium, at some point the rubber has to hit the road and actual process improvement steps must be performed. The situation is not unlike those who read up on working out, watch YouTube videos on working out, buy exercise DVDs and talk about working out a lot, but never actually work out. They obviously see no fitness improvements, and the same is true of process improvement efforts that never actually make the improvement.


A degree of sympathy for this situation is understandable, however. After all, there are numerous challenges in the way of implementing process improvements, which I have mentioned in previous posts. A major concern that all stakeholders have with process improvement efforts is the fear that the improvement may end up causing more trouble than benefit. This is particularly true of IT process improvement endeavors that attempt to make significant changes in a short time, the “big bang” style of improvement. I personally advocate a phased, iterative approach for most organizations. This way, the benefits, while not staggering, are visible and the risks suitably diminished. The situation is mostly psychological: when management and staff see small changes making a difference, they become more open to larger improvement efforts and a cascading effect of improvement kicks in. However, all this happens only when a start is made.


The major paradigm shift for most organizations now is to go from a department-oriented approach to a process-oriented approach. However, this involves major changes, upheavals and, most importantly, the potential disruption of everyday activities that could result in the inability to meet customer targets. The mistake usually made is that proper preparation and accommodation for process improvement changes is never adequately put in place. The erroneous expectation is that the IT department can perform and deliver in spite of all the changes and upheaval taking place all around. This is something like expecting your mood to be the same even though the in-laws have shown up to visit.


While it would be great if a large improvement could easily be made, the reality is that improvements usually have to be made in small increments. This is because of the aforementioned problems and the fact that people generally have a psychological block towards change and deviation from the status quo. Over and above this, extra staff members have to be hired, proper training provided and adjustments made to workloads and timelines. In short, the matter has to be looked at from all angles and planned properly. You can’t suddenly decide to implement a process improvement effort after getting charged up attending an ITIL or Six Sigma seminar. The motivation after these kinds of events should spark a planning effort for improvement, not the full-fledged improvement itself.


On the other hand, with competition as intense as it is, continuous improvement is no longer an option or a luxury. It is a basic necessity and should be part of the organization right from its inception. Therefore, at some point organizations have to make the jump into the water.


Sooner or later, improvement efforts will have to be made. The question is how open and prepared will organizations be to make the efforts. Those that have understood the importance of continuous improvement will reap the benefits while those who attempt to make changes at the last minute will struggle for survival.

Monday, July 13, 2009

Communication Conundrums

Perhaps the greatest challenge, and the main cause of issues and problems in IT (or anywhere else for that matter), is the lack of effective communication. This is paradoxical because, on first thought, the advances in technology and mobile devices should actually enhance communication. However, in spite of the high-tech capabilities to communicate being available, we still run into many “he didn’t tell me” or “I was never informed about such and such” scenarios. Why is this?


In my opinion, the primary cause of poor communication is a lack of emphasis on this area by senior management. Communication must be deeply embedded in the very fabric of the organization’s architecture, and the ones to drive this through are top management. Communication must be encouraged and rewarded, while a “shoot the messenger of bad news” predilection must be strictly discouraged.


Therefore, achieving successful communication can be divided into two parts: setting up the infrastructure for great communication technically and setting up the environment for communication “mentally” or “psychologically”.


The first part is relatively easy, as there exists an abundance of technology, applications and devices that create the infrastructure for effective communications. A great deal of information on this topic exists on the net and it is beyond the scope of this blog post to go into it in great detail. Furthermore, it is the other part of the problem, the psychological one that I feel deserves greater attention.


I have yet to meet a senior executive who did not believe in communication and publicly acknowledge its importance, and yet most organizations that I have interacted with suffer from poor communication. Furthermore, these were organizations that had the latest infrastructure, applications and setup to communicate effectively, and yet they faced significant shortcomings in their exchange of information, which in turn led to ineffective performance and low-quality products and services delivered to the customer. The answer was that although the environment for effective communication had been created technically and logistically, it had not been created mentally or psychologically in the staff members’ minds.


Some significant barriers to effective communication in an organization are:


  • The change in predisposition required to communicate effectively. Most IT staff members are not used to efficient communication and have to make a shift in their habits to become proficient in this capability.

  • The formation of tribes or silos that have poor communication outside of their structure. This is something we have all seen and experienced. The QA department, for example, may deal well with each other internally but are in poor communication with the requirements group or the development group.

  • Too much information. If staff members are overwhelmed by too much information, then it becomes difficult to separate the wheat from the chaff and important information can get missed or ignored.

  • Lack of standardized terminology within the organization. I have personally experienced test cases being described using three different names at an organization I worked at in the past. Needless to say, the testing was extremely poor and out of control there. This is where standardized methodologies like ITIL and Six Sigma can assist in bringing a standardized terminology throughout the organization.

  • Control issues and political games. Certain staff members might deliberately withhold information for personal gain. Tactics of scapegoating, blaming, silence and exclusion are typically used to achieve these control goals. It is imperative that management discourage this type of self-centered behavior and set a high standard themselves.


With each of these problems, management can play a crucial role in providing a solution by discouraging negative behaviors and setting themselves up as role models of positive conduct. Especially important is the avoidance of the “shoot the messenger” syndrome by management. Of course, over and above management guidance, the organization needs to foster an environment of easy and efficient communication by incorporating a planned communication strategy. The PMI body of knowledge offers the following communication management processes:

  • Communications Planning – determining the information and communication needs of project stakeholders

  • Information Distribution – making needed information available to project stakeholders in a timely fashion

  • Performance Reporting – collecting and distributing performance information. This includes status reporting, progress measurement and forecasting

  • Manage Stakeholders – managing communications to satisfy the requirements of and resolve issues with project stakeholders


Each of these processes is outlined in greater detail in the PMI Body of Knowledge publication and can be modified to suit the organization’s individual needs. Clearly, the tools and techniques are available. What seems to be lacking is the determination by all concerned to make it succeed.


Communication is the lifeblood of IT, and just as a body with poor circulation will be host to disease and degeneration, so will an organization suffer from an array of problems and inefficiencies if communication is not managed and cultivated properly. It is in the organization’s own interest that the implementation of world-class communication practices is given high priority and attention.

Monday, July 6, 2009

Portfolio Management

The products and services that an organization offers to its customers have a life span encompassing conception, development, introduction, growth, maturity, decline and termination.



It is up to the business to analyze market conditions and customer needs and determine when a product or service should be introduced and what its functionality and specifications should be. The business also determines when the product or service has run its course and should be retired from the active pipeline. IT, too, should view its connection to the business as a set of services that it provides to its customer (the business).


IT at a fundamental level is a set of services utilized by the business, typically applications and infrastructure provided by either internal IT departments or external service providers. Organizations are now less focused on IT infrastructure and applications in themselves than on coupling that infrastructure and those applications to automate end-to-end business services and to manage those business services efficiently. The challenge here is the successful matchup of business needs with IT infrastructure. Service Portfolio Management is the process that, at the strategic level, ensures that IT provides the business with what it needs presently and will need in the future. The steps involved in Service Portfolio Management are:


  • Define: what IT services exist and what would be needed in the future

  • Analyze: based on company’s goals and objectives as well as finances available

  • Approve: formal decision of stakeholders on what course to take

  • Charter: officially begin the action that has been decided on, whether to create a new service, refresh an existing service or retire an obsolete one


The point here is that it is not just the business services that are being defined, analyzed, approved and chartered, but also the relevant IT services needed to make the business services work. Companies must now think of IT as a set of services to the business and not just a group of applications and infrastructure. The apps and infrastructure make up the IT service, which then supports the business service. The business service makes the sale, which brings in the cash.
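To make the Define–Analyze–Approve–Charter flow tangible, here is a toy Python sketch of a service proposal moving through those stages. The single budget check standing in for the analysis step, and all figures, are illustrative assumptions on my part.

    # Toy model of a proposal moving through Define -> Analyze -> Approve -> Charter.
    from enum import Enum

    class Stage(Enum):
        DEFINE = 1
        ANALYZE = 2
        APPROVE = 3
        CHARTER = 4

    class ServiceProposal:
        def __init__(self, name, estimated_cost):
            self.name = name
            self.estimated_cost = estimated_cost
            self.stage = Stage.DEFINE
            self.viable = False

        def analyze(self, available_budget):
            self.stage = Stage.ANALYZE
            self.viable = self.estimated_cost <= available_budget  # stand-in for real analysis

        def approve(self):
            if self.viable:
                self.stage = Stage.APPROVE

        def charter(self):
            if self.stage is Stage.APPROVE:
                self.stage = Stage.CHARTER  # work on the service officially begins

    proposal = ServiceProposal("Application development and testing", 250_000)
    proposal.analyze(available_budget=400_000)
    proposal.approve()
    proposal.charter()
    print(proposal.stage)  # Stage.CHARTER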


Therefore, it is important that IT services are planned at the same time as business services are being planned and implemented, with appropriate communication between business and IT. For example, in the steel pipe company example from a previous post, the business portfolio would consist of steel pipe two feet long, four feet long and six feet long. The IT services, however, would include email services, laptop and desktop services, networking and internet services, CRM services and the programming of the heavy machines used in manufacturing the pipes. If the company were attempting to add a new range of steel products to its business portfolio, its IT department should understand what modifications it would need to make to the IT services being offered to the business. If this new business required IT to develop its own applications rather than purchase off-the-shelf applications, IT would have to add application development and testing to its “menu” of services. Logistically, this would entail hiring staff and purchasing desktops, servers, operating systems and so on. However, looking at IT as a service to the business, we now have a new service that needs to be set up by IT. Clearly, the earlier IT sets about setting this new service up, the better for the organization.


Very often IT is not involved in the analysis of how the services it offers to the business should be modified until far too late. It is a sign of high organizational maturity when the alignment between business and IT occurs at the strategic planning stage and not later in the game. This leads to smoother delivery of the required services with fewer defects and less chaos and inefficiency, because things have been planned early on and not at the last minute. Fewer problems, better service and higher employee morale are the result. All of this makes Portfolio Management a necessary process for any IT organization.

Monday, June 29, 2009

Financial Order

An interesting, and in my opinion welcome, trend taking place nowadays is the financial accountability that IT departments are being held to. In some cases, companies are choosing to subcontract work to outside IT firms that deliver results without going over budget, rather than rely on their own IT departments that are constantly in arrears and perpetually asking for more funds.


The old paradigm was that IT would ask the board of directors for as much as it could get and then try to deliver the promised work with the given funds. If the money ran out, it would go back to the board for more. Needless to say, this was not a solution that was going to work in the long term, and IT departments are now reaping what they have sown: namely, being pushed aside as the work is given to those that can deliver as promised.


The long and short of it is that IT, even if it only services internal customers within the organization, must now handle its finances adequately. While it does not need to go into the complexities that the accounting and finance department of the organization may go into, such as 401(k)s and IRA accounts and so on, certain basic bookkeeping and accounting is now mandatory for IT departments. Essentially, an IT department must now consider itself a separate company within its parent company, at least as far as its accounting and finances are concerned.


The solution is the application of adequate financial management within the IT department. The ITIL body of knowledge provides a framework for the application of financial management within the IT department as part of the larger framework of the overall financial department of the organization. Financial management essentially consists of three areas:


  • Budgeting

  • Accounting and

  • Charging


Budgeting consists of predicting the IT department’s spending during a budget period (usually a quarter or a year). This involves creating spending plans and estimating the work to be performed and the cost of performing it. Monitoring actual expenditure against the estimated budget and making corrections to the budget as needed is also part of the budgeting process.


Accounting consists of providing detailed information of the expenditure incurred by the IT department on a day to day basis, comparing the actual expenditure to the budgeted expenditure and taking corrective action as necessary.


Charging consists of the recovery of the cost of IT expenditures in a simple, fair and affordable way. The IT department may choose to charge or not charge and simply provide budgeting and accounting information based on the policy set by the organization.
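A bare-bones numerical illustration of the three activities, with invented figures, might look like the following Python sketch:

    # Budgeting (planned spend), accounting (actual spend) and a simple charging model.
    budget  = {"staff": 300_000, "hardware": 80_000, "software": 40_000}
    actuals = {"staff": 310_000, "hardware": 65_000, "software": 55_000}

    def variance_report(planned, actual):
        """Positive numbers mean overspend against the budget."""
        return {item: actual[item] - planned[item] for item in planned}

    def charge_per_user(total_cost, users):
        """One simple, fair charging model: spread the cost evenly over consumers."""
        return total_cost / users

    print(variance_report(budget, actuals))                   # {'staff': 10000, 'hardware': -15000, 'software': 15000}
    print(charge_per_user(sum(actuals.values()), users=430))  # 1000.0 recovered per user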


When Financial Management is successfully applied, the organization enjoys:


  • Superior financial compliance and control

  • Enhanced decision making thanks to availability of financial details

  • Superior portfolio management due to understanding of costs and benefits and the ability to accurately compute the ROI

  • Better operational visibility and control

  • Improved value capture and creation


The serious application of financial management is now a necessity in every IT department, and seeking a bailout from the board is fast disappearing as an option. IT must now consider itself accountable both for providing services and for handling its finances adequately.

Monday, June 22, 2009

Support Amnesia

The product or service lifecycle can usually be broken into four parts: the strategy for providing the service, the design and development of the service, the transition of the service to the live environment and the support of the service in the live environment. Very often, the estimate for the cost and effort required during the support stage is unrealistic and below the actual numbers that are experienced. Why is this?


Organizations too often have an inflated sense of their ability to create a product or service that meets the customer’s requirements for quality and functionality. Because of this, they assume that once the product or service is released to the live environment, there is no further work involved besides taking a few help desk calls from some “non-techie” customers who don’t understand the “jargon” of the manual. The reality, of course, is that customers will demand several requests for change, which in turn will require corresponding rework, retest and release cycles. This requires time and resources that were not planned for, which throws scheduling and budget into disarray. A great deal of trouble and anguish might have been averted if only the effort required during support had been accurately forecast.


Ideally, the product or service should match the needs of the customer so well that very little effort should be required during the support stage. Indeed, the less effort required during support, the more successful the organization has been in delivering the product to the customer. However, as organizations are generally not at this level of delivery competence, they should realistically factor in the effort that will be required during the support stage. At the same time they should also continuously improve their strategy, design and transition stages so that less support effort will be required in the future and the maturity of the organization increases with time.


Having stated the ideal case, the reality is that most organizations do not adequately evaluate the effort required during the support stage. This is caused, in my opinion, by the following factors:


  • An inability to face up to the unpleasant news of poor delivery and the subsequent requirement for further resources. Management here can play a crucial role by not shooting the messenger and dealing with the bad news in a mature and capable fashion.

  • Lack of company maturity and metrics in place to adequately understand the status of the product and its true state. A collective effort by the entire organization will have to be made to resolve this.

  • Unrealistic demands by management on the system. Management should not allow themselves to be seduced by market conditions and demand more than is possible from the capabilities of the organization. It is really their job to have properly forecasted market conditions and planned their strategy in advance with respect to the capabilities of the organization.

  • A poor understanding of the support stage itself by the stakeholders will naturally result in the inability to accurately forecast the time and resources required to implement support adequately.


When cost and schedule are thrown into disarray by the emergency allocation of resources to support, the areas from which these resources were drawn will also suffer. The breakdown of one project tends to have a “ripple” effect that then affects the entire organization’s activities to varying degrees.


Support is easily the most visible stage because this is where the customer actually makes contact with the organization, typically with an issue or problem needing resolution. Of course, in the previous stages customer representatives were in communication with the organization in order to provide requirements and evaluate and approve progress. However, this is where the customers and not just representatives actually connect with the organization for problem resolution. This is where the organization can really impress the customer despite the fact that the customer may have a problem or be in a state of dissatisfaction. Despite all efforts, there will be problems and issues that the customers will experience. It is the way the organization handles the problem and the unsatisfied customer that will determine whether the customer takes their business elsewhere or not.


Therefore, it is evident that support must be planned for with an honest look at the organization’s capabilities and adequate time and resources should be allocated in order to provide proper support. Failure to do this will only cause needless chaos and instability with disgruntled customers taking their business elsewhere.

Monday, June 15, 2009

Too Much Quality?

Quality is one of those concepts, like money, enlightenment or status, that everybody wants but never can quite get enough of. What is insidiously dangerous about organizations’ and people’s desire for quality is that they usually do not realize that it is simply a metric or measure of the customer’s expectation and must be delivered at that level: no more and no less.


Does that mean providing a lower level of quality because the customer wants it even if the organization is capable of producing a higher level? Yes! That is exactly what I am asserting. To make sense of this seemingly contradictory statement, let us first understand quality and its implied characteristics.


The definition of quality as per ISO is "the totality of features and characteristics of a product or service that bears its ability to satisfy stated or implied needs." This includes the characteristics of availability, reliability, maintainability, serviceability, performance and security as defined by the customer in their requirements.


Availability may be thought of as the ability of the product or service to perform its agreed function when required and is usually expressed as a percentage.


Reliability is the measure of how long a product or service can perform its agreed function before interruption. It is usually expressed in units of time.


Maintainability is a measure of how quickly and effectively a product or service can be restored to normal working after a failure. It is usually expressed in units of time.


Serviceability is the degree to which the servicing of an item can be accomplished with the given resources and timeframes. This is usually performed by a third party supplier and is measured in units of time.


Performance and security are specific to the customer requirements of what task the product or service should perform and what levels of security are necessary. These characteristics vary widely from product to product.
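For readers who like to see the arithmetic, the commonly used steady-state relationship between these measures can be expressed as a quick calculation; the example figures below are invented.

    # Availability from reliability (MTBF) and maintainability (MTTR):
    # availability = MTBF / (MTBF + MTTR), expressed as a percentage.
    def availability_pct(mtbf_hours, mttr_hours):
        return 100 * mtbf_hours / (mtbf_hours + mttr_hours)

    mtbf = 720.0  # mean hours of operation between failures (reliability)
    mttr = 4.0    # mean hours to restore service after a failure (maintainability)

    print(round(availability_pct(mtbf, mttr), 2))  # 99.45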


So it may be stated that the sum total of these characteristics, at levels specified by the customer, makes up the overall characteristic of quality, which must then be delivered to the customer by IT. But the question under discussion is: should an organization provide a higher than requested level of service to the customer if it is capable of doing so? While this was an accepted practice in the past, in today’s environment it is not recommended. But why not? Let us consider an example.


An internet service provider delivers two levels of internet service: one at 250 Kbps and the other at 500 Kbps. The price is $30 per month for the 250 Kbps service and $40 per month for the 500 Kbps service. Now, it might be logistically attractive for the company to provide 500 Kbps to everybody and simply charge $30 to those who signed up for the 250 Kbps service. However, once the $30-per-month customers get used to the higher level of service, they will complain of service degradation if the speed falls below the 500 Kbps they are now used to. Even though the company is technically not failing to provide the agreed level of service, as the speed is still above 250 Kbps, the customers will in all likelihood switch to a competitor, even one providing only 250 Kbps. Furthermore, the company will be shortchanging itself, because it could enjoy lower operating costs (and therefore higher profits) if it provided 250 Kbps service to the $30 customers and held them at that level.


Organizations with high impact of failure such as hospitals, military, NASA etc. may choose to pursue higher than promised quality levels simply as a buffer to shield themselves against the catastrophic cost of failure. But this is a pre-planned, thought out action and not simply a blind surge towards more quality whether the customer wants it and is willing to pay for it or not.


By and large though, quality is not a holy grail that should be pursued blindly to perfection but in reality simply a metric that should be analyzed for customer demands, cost-effectiveness and return on investment and set to levels that make sense. Then the quality levels should be achieved and delivered to the customer at exactly the stated amounts. Any other path of action will lead to a reduction in competitiveness for that organization.

Monday, June 8, 2009

Testing Tribulations

Testing is an essential component of the system development lifecycle. However, typically too much emphasis is placed on it to make up for earlier inefficiencies. I deliberated on this topic two weeks ago in the “Building Quality” post where I recommended that quality has to be built into a product or service by performing each step in the lifecycle correctly with frequent checks and balances. That being said, testing is an important part of the overall lifecycle and must be given the importance it deserves.


My goal in this post is to highlight some of the incorrect approaches to testing that I have observed during my travels. I do not propose to go into the details of proper testing methodologies here, as that is beyond the scope of a blog post and moreover is readily available on the internet.


As mentioned, one typical incorrect approach is to rely on testing to magically rectify poor requirements and design. Another common mistake is to produce inadequate documentation and information during the earlier stages, making it a challenge for testing staff to fully understand the scope of the testing required. This, then, results in poorly prepared test plans and test cases, which lead to incomplete and inadequate testing that ultimately allows defects to appear in production.


Even if an organization chooses to not incorporate frequent checks during its entire lifecycle, at the very least, care must be taken to ensure that the testing team is given the information that is required for them to complete their task successfully. This consists of involving representatives of the testing team in the requirements discussions at the requirements stage. Test personnel should also review requirements documents and provide feedback from a testing perspective. Design and development should also involve the test team in the creation of the functional specifications and obtain review and feedback from them. Testers should be involved in the display of mockups and prototypes as well, in order to adequately prepare for the testing to come. In this way, when the time for testing actually arrives, the test team will have a deep understanding of the product to be tested and have well thought out test plans and test cases produced. This, then, will result in thorough and detailed testing performed that will effectively detect defects in the product.


The second misstep commonly made is the inadequate allocation of resources to the test team. I have personally worked at a company that had 3 testers when it really needed 20. As might be imagined, not only was the testing inadequate, it was an extremely stressful and frustrating time for the testers, most of whom left the company. Management often does not have an accurate idea of how time-consuming writing and executing a test case can be. Management also often insists on operating on the best-case forecast, as if simply wishing for the best will make it happen. Adequate resources must be planned for right at the beginning, in the project charter, and made available at the correct times for testing to be performed appropriately.


The third mistake frequently made is that testing is often the first candidate for outsourcing. A very important rule of thumb for outsourcing that I advocate is that an organization should never outsource a function that is not under a high level of control and maturity at its home location. If you don’t have it under control at your own place, how on earth are you going to make a success of it when it is on the other side of the planet? Things will only go from bad to worse. However, management, taken in by a superficial analysis of reduced costs, moves testing to outsourced locations far from home, which then results in more problems.


The fourth mistake typically made is excessive trust in automation and other “tools” to bring about magical savings in cost and time. I covered this issue in my previous post titled “Automation Angst”, which explains that automation can easily result in increased time and cost if improperly implemented. Care must be taken to set up automation only for those situations that will provide a positive return on investment.


With these mistakes being made, testing usually (and naturally) performs inadequately, after which it comes under fire. I have always been fascinated by how the blame falls on testers while incompetent business analysts and programmers who are also responsible (if not more so, as they are the ones who introduced the defects into the product in the first place) get away scot-free. Perhaps readers might comment on this observation? This pressure results in test personnel leaving the organization, which only worsens the situation as new personnel then have to be hired and brought up to speed.


Truly, testing must be given realistic expectations and supported in a mature and educated way by management, as opposed to the typical “test us out of trouble” mentality. It is in the organization’s best interest to make the right choices, as the competitive nature of today’s world will not allow for this kind of immaturity.