Sunday, December 19, 2010

The Cubicle Shuffle

At my client organization’s offices, I recently witnessed the usual employee cubicle shuffle, an event both interesting and traumatic for those affected by it: people who had only recently moved in had to box up their belongings and shift elsewhere. Now, there are many reasons why such a sequence of events might take place (and sometimes they are out of human hands). Generally, however, it is due to poor resource management.


When folks hear about People Management, they generally think along the lines of motivation, psychological counseling and support. However, the staff are the most basic components of the organization, and managing them at the most basic level involves managing their shelter and place of work. A repeated cubicle shuffle simply denotes a lack of organizational planning to accommodate the most basic requirements of your most basic components.


Effective people management begins with a CMDB that records not only basic information about the employee, such as cubicle number, email address and phone number, but also their skills and capabilities, including past experience and skill sets. Much can be researched and perused on the net regarding this topic, but the magic is in the implementation. A minimal sketch of what such a record might look like follows.
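As a rough illustration only, here is a sketch in Python of a people-oriented CMDB record. The field names are hypothetical, not taken from any particular CMDB product; a real implementation would of course live in the CMDB tool itself.

    from dataclasses import dataclass, field

    # Hypothetical shape of a people-oriented CMDB entry; field names are
    # illustrative only, not from any specific CMDB product.
    @dataclass
    class EmployeeRecord:
        name: str
        cubicle: str       # physical location: the detail the shuffle ignores
        email: str
        phone: str
        skills: list = field(default_factory=list)
        past_projects: list = field(default_factory=list)

    alice = EmployeeRecord(
        name="A. Analyst",
        cubicle="3B-12",
        email="a.analyst@example.com",
        phone="x1234",
        skills=["ITIL", "SQL"],
        past_projects=["Help desk consolidation"],
    )
    print(alice.cubicle)  # a planned move updates this in one place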

Monday, November 29, 2010

The Magnificent Seven

In the pursuit of quality, it is necessary to utilize techniques to analyze and evaluate metrics in a quality-oriented fashion. There are seven basic techniques that have been utilized for many decades now in the world of quality management. Organizations using them have made great strides in both the efficiency of their quality management and the quality delivered to the customer.


The seven techniques are as follows:


  • Cause and Effect Diagram (Ishikawa Diagram): This breaks down the possible causes of a variation from specifications into six different areas – People, Methods, Machines, Materials, Measurements and Environment. These can be further subdivided into smaller components. The basic idea is to link these areas to the process in order to evaluate which area (or sub-area) could be causing problems. This can be used proactively to evaluate the process for problems before they happen, or reactively to zero in on a problem once it has manifested itself.


  • Check Sheet: This is a simple document that is used for collecting data in real-time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the desired information, which can be either quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet. There are 5 basic types of check sheets: Classification, Location, Frequency, Measurement Scale and Check List.


  • Control Charts: A control chart consists of points representing a statistic, with the mean, standard deviation and upper and lower control limits also displayed. If analysis of the control chart indicates that the process is currently under control, then data from the process can be used to predict its future performance. If the chart indicates that the process being monitored is not in control, analysis of the chart can help determine the sources of variation, which can then be eliminated to bring the process back into control.


  • Histogram: A histogram consists of tabular frequencies, shown as adjacent rectangles erected over discrete intervals (bins), each with an area equal to the frequency of the observations in the interval. Histograms are used to plot the density of data, and often for density estimation: estimating the probability density function of the underlying variable. The histogram provides insight into the problem (or potential problem) that may be related to the data being plotted.


  • Pareto Chart: A Pareto chart contains both bars and a line graph, where individual values are represented in descending order by bars and the cumulative total is represented by the line. The purpose of the Pareto chart is to highlight the most important among a (typically large) set of factors. In quality control, it often represents the most common sources of defects, the highest occurring type of defect, the most frequent reasons for customer complaints, and so on.


  • Scatter Diagram: A scatter diagram uses Cartesian co-ordinates to display values for two variables for a set of data. A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation. A line of best fit (alternatively called 'trend line') can be drawn in order to study the correlation between the variables.


  • Stratification: Stratification is a technique that separates data gathered from a variety of sources so that patterns can be observed. These patterns can then be further analyzed to zero in on the root cause of the problem.


The seven basic techniques of quality management have been staples in the toolbox of the quality professional for a long time now. It does not take a lot of effort to start utilizing these techniques in your organization quickly and efficiently; the short sketch below shows how little work two of them require. There is really no reason why every company shouldn’t be using these tools extensively.
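To make the point concrete, here is a minimal Python sketch of two of the seven tools. The defect counts and cause names are made up for illustration; nothing here is prescribed by the techniques themselves beyond the arithmetic.

    import statistics

    # Control chart limits: mean +/- 3 standard deviations of the statistic.
    daily_defects = [12, 9, 14, 11, 10, 13, 8, 15, 12, 11]
    mean = statistics.mean(daily_defects)
    sigma = statistics.stdev(daily_defects)
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    outliers = [x for x in daily_defects if not lcl <= x <= ucl]
    print(f"UCL={ucl:.1f}, LCL={lcl:.1f}, out-of-control points: {outliers}")

    # Pareto ordering: sort causes by frequency, track the cumulative share.
    causes = {"typos": 42, "bad requirements": 77, "env config": 19, "regression": 31}
    total = sum(causes.values())
    running = 0
    for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
        running += count
        print(f"{cause:18} {count:3}  cumulative {running / total:.0%}")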

Tuesday, November 16, 2010

Business’ Disappointment

At my client organization’s offices, while passing through, I overheard two ladies in the marketing team expressing their dissatisfaction with IT and the help desk in particular. I stopped and spoke with them a little bit about what was troubling them. What emerged was the usual lack of quality and support provided by IT for the applications that they use to perform their job functions.


What IT must always keep in the forefront of their minds is that they are ultimately servicing the business. If staff in the business cannot access the applications that they require to perform their duties, they will be unable to bring in new business and increase sales volume. This, in turn, will lower the organization’s competitiveness and damage the brand image. Ultimately, this will result in lower profits and fewer resources available to all departments, including IT.


I have always emphasized that IT is key in today’s times. Most other departments within the typical organization (Sales, Marketing, HR, Accounts etc.) are fairly mature. However, IT is relatively new in that processing information utilizing computers has only been going on for a few decades or so. Sales and marketing have, in their way, been occurring since the dawn of time. What this means is that, generally, IT has the greatest potential for improvement within the organization. A 10% improvement is usually quite easy to achieve in the IT department of most organizations, if not a much higher percentage. If an organization can improve their IT by that amount, it is obvious that they will surge ahead of the competition due to the efficiencies that will be inherent in this improvement to the entire organization. IT is, therefore, the most significant catalyst to an organization’s success nowadays.


It is easy for IT to pigeonhole itself into constantly putting out fires and only focusing on meeting quarterly numbers. However, this is a short-sighted strategy that will hinder the organization and ultimately hurt IT. Constant improvement is not a luxury but a necessity for all of us, especially IT.

Monday, November 1, 2010

Skills Management

With IT being more knowledge centric and requiring an ever greater array of skill sets to get things done, one of the major challenges facing organizations today is the effective management of skills.


Now, the skills can be brought on board in a number of ways. There is the option of acquiring in-house talent, a.k.a. full-time employees. One could get contractors (which is essentially the same thing nowadays). Consultants could be brought in as well. And then there is the ever-present outsourcing option.


Effective management of the organization’s skills lies in the regular evaluation and analysis of, and action on, employee and supplier skill sets. The ITIL body of knowledge refers to this as Supplier Management and outlines a strategy of classifying suppliers (which can be taken to include employees as well) as long or short term, and as strategic or commodity suppliers; a small sketch of such a classification follows. There are, of course, many different techniques and tools for managing the skills and suppliers of the organization, and these are readily available online. The focus of this post is to emphasize the need for these techniques and to warn against the trap of forever being in fire-fighting mode and never getting to perform this important task.
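As a rough sketch of the two-axis classification just mentioned, here is one possible Python representation. The supplier names, categories and review cadences are hypothetical, not prescribed by ITIL.

    from enum import Enum

    # Two illustrative axes, loosely following the ITIL Supplier Management idea.
    class Horizon(Enum):
        LONG_TERM = "long term"
        SHORT_TERM = "short term"

    class Value(Enum):
        STRATEGIC = "strategic"
        COMMODITY = "commodity"

    suppliers = {
        "In-house Java team": (Horizon.LONG_TERM, Value.STRATEGIC),
        "Contract testers": (Horizon.SHORT_TERM, Value.COMMODITY),
        "Outsourced help desk": (Horizon.LONG_TERM, Value.COMMODITY),
    }

    # The classification, not ad-hoc judgment, drives the review cadence.
    for name, (horizon, value) in suppliers.items():
        cadence = "quarterly" if value is Value.STRATEGIC else "yearly"
        print(f"{name}: {horizon.value}, {value.value} -> review {cadence}")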


There are many aspects to IT management. Some tasks are considered “essential”, such as the successful completion of a critical project. Other tasks, such as Skills Management, generally fall into the “we’ll get to them if we can” category. While the completion of the critical project will keep the lights on for tomorrow, it is the other tasks that distinguish an ordinary organization from a world-class one.

Monday, October 11, 2010

The Wisdom Hierarchy

The Wisdom Hierarchy is often mentioned in passing and, for a lot of folks, is a new topic. It is, however, a great way to gauge the depth of an organization’s (or team’s, or even individual’s) understanding of a topic.


The components of the Wisdom Hierarchy are Data, Information, Knowledge and Wisdom. The first component, Data, is simply that: usually a list of numbers that provide almost no meaning to anyone unless they are connected in some way to some context. Typically, data is unorganized and unprocessed.


Information, the next evolution in the hierarchy, is described as “organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant”.


Knowledge is then derived from information by applying experience and insights gained through true cognitive and analytical activity. If someone "memorizes" information, then they have amassed knowledge. This knowledge has useful meaning to them, but it does not provide, in and of itself, a way to further the knowledge.


Wisdom is the accumulation of the previous levels along with a deep understanding that enables the ability to increase effectiveness.


As readers may have guessed, it is the ability to operate at levels of wisdom that every individual, team and organization should aspire towards. However, how is this to be accomplished? Why, with proper process improvement techniques in place, of course.

Sunday, September 19, 2010

The Strategic Plan

The one thing most companies do not create or work out of is a strategic plan. Now, I am not talking about a project plan, which should be (and is also usually not properly) created for every project, but a master strategic plan for the IT department of the organization. This plan should cover a lot of different issues and plan for the long-term growth of the company.


The strategic plan should, of course, first and foremost define and build upon the relationship between the products and services that a company produces and their utilization by its customers. This, in the end, is the lifeblood of any business endeavor, whether it is IT or retail sales. Financial management, demand management and just good old-fashioned service strategy must be performed to achieve this.


Of course, technology must be involved to decide what should be researched and developed in the future in order for the organization to be more competitive. Constant technological improvement is the lifeblood of any technological organization or department. This is obvious.


Furthermore, supplier management should also be considered in a longer-term role within the overall strategy. Possibly, certain suppliers should be handled as longer-term partners while other suppliers have a less intimate relationship with the organization. By the same token, a plan for the staff should be thought out and monitored in order to soften the dramatic changes that shifts in employment status bring to people’s lives.


In reality, all aspects of the organization must be considered at a high level and incorporated in a strategic plan. Individual project plans and so on should be deeper dives into the overall strategic plan. It is the author’s hope that top management performs strategic planning in a well-organized and complete way.

Monday, September 6, 2010

Science is not a Democracy

This post might be one of the more meaningful and pertinent to today’s times that I have published in a while. A phenomenon that I have observed taking place is the inclusion of people of various educational backgrounds working together in the same team. This brings together engineers with sociology, psychology and English majors (with no technical knowledge or training). What generally happens next is a free-for-all, with everyone trying to come up with the answer in order to get the much-vaunted promotion.


The answer to 2+2 is 4. It is not 3. It is not 5. And it will never be anything but 4. Even if all the non-mathematics majors go on indefinite strike insisting that in their opinion it should be 3, the correct answer will be 4. If you were to throw a ball up in the air, it will eventually reach an apogee, after which it will fall back down (real estate owners, are you listening?) to where it was projected from. The world of science is not a democracy. The laws of physics are not open to debate. Yes, you might be able to circumvent the law of gravity (for example) utilizing an airplane, but even that follows certain laws of aerodynamics of its own. So what I am trying to illustrate here is that there is one right answer in science and countless wrong answers. Science is not like a philosophy paper that you handed your professor in college with the knowledge that you would at least be guaranteed a “C” grade. In science it’s either an “A” or an “F”.


Now let us go back to our scenario of many different expertise levels working together on a project. What I have often seen happen, and it is a major stumbling block to efficiency, is that people who have no clue what the right answer is will insist on speaking “their turn” and forcing their incorrect answer on everyone. If an attempt is made to quiet these people, they will instantly round on that person and accuse them of attempting to stifle them and of being a “bully”. In extreme cases, the “human rights” of these people will be claimed to have been violated. The manager often ends up playing the role of judge, and a great deal of time and effort is wasted; worse, the wrong decision is often taken because the English majors were feeling “left out”, with the consequences paid in defects and rework.


What is really to blame here is the old boys’ (or old girls’) network style of doing things, where someone with 10 years of experience in the company has to be taken care of even if they have no knowledge of the position that they are now in. Truly, management needs to handle this situation effectively, because otherwise the people with expertise will simply leave for better environments at other organizations. Then the company will be left with “human rights activists” and zero technical expertise. Management really needs to let people with low technical expertise know what the problem is and to get them to stay out of the way of people with technical expertise.

Monday, August 23, 2010

Sneaking in Improvement

One thing that I have noticed recently is the tendency of companies to move operations to other states to avoid higher taxes in certain states. This is particularly true of California, which has some of the highest taxes for businesses anywhere in the USA. Of course, a similar phenomenon has been going on for years with outsourcing to other countries. Now, I am not arguing for or against this type of action, as it varies with each organization and its specific issues. What I am suggesting, however, is that process improvement could be sneaked in during these times of upheaval, as they create the opportunity to get past the petty politics of normal times.


The great thing about times of change that involve cost cutting is that the petty power games get steamrollered over by the change coming through. This lessening of petty power politics allows the organization to employ process improvement methodologies and best practices far more easily than during normal times. Of course care must be taken to balance the implementation of process improvement along with the organizational changes that are taking place. However, in spite of the organizational changes taking place, I feel that there is an opportunity to make positive changes that would be difficult in normal times.


It is sad but true that organizations must sneak in something as important as process improvement when people’s guards are down. Until people’s attitude towards process improvement changes, it will have to be performed in whatever way it can be done.

Monday, August 9, 2010

Petty Little Power Games

If I could travel back in time, I would among other things (buying Microsoft stock at the right time etc.) ask the titans of Quality (Deming, Crosby, Juran etc.) how they navigated past the petty power struggles.


To clarify what I mean: I was asked recently why best practices are not widely implemented. My answer? “Petty power struggles”. What do I mean by that? Consider a low-maturity organization that does not implement the best practices out today. In spite of its low maturity, there is a sort of structure there. People, after years of working there, have become managers, directors and so on. They have a pecking order of sorts. Now consider that a best practice like ITIL is to be implemented at this organization. The first thing about this development that will strike terror in everyone’s heart is the potential damage it would do to the various little power structures all over. A person who was a manager may now no longer be one, and someone with ITIL certification could be in a more commanding position. This could occur at all different levels across the organization. So how do people respond to this possible threat? By not implementing the best practice and keeping the status quo. If management insists, the “threatened” staff find numerous ways to cause problems, delays and confusion that effectively bring the implementation to a grinding halt. The most common is the claim that the current project will be delayed if a best practice implementation is performed. This effectively frightens upper management into delaying the implementation until the important project is completed. At this point the game is as good as over. All the staff have to do now is threaten the well-being of other projects as they come down the pipeline, and the implementation effort is effectively history.


Over and above this, staff can be deliberately difficult, deliberately dense, and can intentionally make mistakes in the implementation effort to further undermine it. The one thing they usually do not do is study up on it and become experts at it, thus ensuring a position of power in the new way of doing things. That would be the obvious and straightforward way, but human nature being what it is, the more difficult path is generally chosen in order to preserve the present (and inefficient) status quo.


What these obstructors do not realize is that the future will involve best practices whether they like it or not. The only question is how smoothly or otherwise the best practices will be implemented, and with what fallout.

Monday, July 26, 2010

Negativity Doesn't Help

Last year this blog did quite well at the Computer Weekly IT blog awards for 2009. Out of a sense of curiosity, I went to one of the other sites that had also done well to have a look at what they were up to. I was surprised and dismayed that this other site seemed to do nothing besides ridicule and put down ITIL and other methodologies. Now, of course, if a scam of some sort exists and someone is spreading the word on that, they are doing the world a favor. However, to mindlessly put down something that has been designed to help seems quite pointless.


The interesting part of this for me is that ITIL is quite benign. You can do what you want with it. You can turn around and have nothing to do with it, you can partially implement some of it, or you can go the whole hog and implement all aspects of it to a rigorous level. The choice is up to you. So why blame ITIL? Why the negativity?


It would seem that people will do anything and everything except the right thing. There is no use in either being negative or attacking something that is there to help. Particularly if the choice is in your hands and you can use it or not as you please. My experience has been that any methodology works if implemented correctly and all methodologies fail if implemented incorrectly. So really it’s up to you.

Monday, July 12, 2010

Making Matrixed Work

The matrixed style of management is becoming more and more popular in IT projects and services nowadays. It would also seem that this style will continue to gain popularity in the future. However, like anything in this world, there are advantages and disadvantages to this style of management, and there can be problems with this approach if the potential negatives are not handled correctly.


The matrixed style can basically be summarized as the selection of staff from a function or bench to perform tasks on a project; upon the project’s completion, they return to their function or bench to await subsequent deployment. The advantages of this are:


  • Much greater agility, especially when the organization has to handle multiple projects simultaneously


  • Individuals can be chosen according to the needs of the project


  • Greater individual contribution as the staff members of a matrixed environment have each been chosen for their specific skill set


  • Project managers have greater autonomy and control over the project management


The disadvantages of the matrixed environment are:

  • Conflict between the home department and the project for staff members


  • Difficulty in managing the project if the project manager does not have enough power


  • Reduced staff morale due to the stress of having to find another project to work on


The matrixed environment is attractive because the disadvantages can be managed leaving the organization to reap the benefits of the advantages. So what can be done to ensure that the matrixed environment can work? Some suggestions are:

  • Identifying team members and ensuring the proper line of command over them is established to disallow any chance of conflicting work assignments


  • Establishing effective communication channels. This is crucial because the staff members will potentially be getting conflicting information from their “home” departments. Therefore, communication has to be extremely solid


  • Effective project information dissemination. The matrixed structure offers a higher potential of staff not getting the information that they should get regarding the project. This should be thought about and planned for right from the beginning


Of course, much more information regarding the structure and management of matrixed organizations exists, and the interested reader can research this online. It is just a shame to me that an efficient way of doing things gets a bad name simply because a few avoidable pitfalls were not planned for.

Monday, July 5, 2010

The Point of Statistics

Out and about in the world of IT, I tend to see a great deal of variety as I meet with different organizations and individuals. One thing that a lot of people (especially those without a technical background) tend to be unclear about is the basic reason for the existence of statistics. Improvement initiatives like Six Sigma rely heavily on statistics, and it is a good idea for those weak in this area to strengthen up and learn a bit more about it. The purpose of this week’s post is to get folks started with a high-level summary of the topic; those who are interested can research the topic further online.


Statistics exists mainly because you cannot measure everything. Let me illustrate this with an example. Let us assume that I own a paper clip manufacturing company. This company is manufacturing one million paper clips a day utilizing four different machines. Can I measure and test each of the million paper clips being produced every day? I would require a staff of at least 10,000 to do that, which would drive me into a loss-making state very quickly. So what do I do? I take a “sample” of the 1,000,000 clips being produced (also known as the “population”). The derivation of the sample could be performed in many thought-provoking ways. As there are four machines, perhaps a sample of 1,000 clips could be taken from each machine on the hour, every hour of an eight-hour shift, for a total of 32,000 clips to be tested for defects (4 machines × 1,000 clips × 8 hours). This way, if a particular machine is malfunctioning, it will be quickly and easily spotted. Of course, there are many permutations and combinations of deriving the sample units from the population, this being only one of many.


Astute readers will have noticed one problem with all of this, and it is the following: we produced 1,000,000 paper clips and we only tested 32,000. How do we know that this sample accurately represents the population? What if we only tested the 32,000 that were good and the remaining 968,000 are bad? This is where statistics helps us. Not only can we compute useful measures like the mean, median and standard deviation of our sample, we can use statistical techniques to tell us how accurately the sample’s data correlates with the population itself. So, in our example, we can say that the sample of 32,000 turned out to be 98% defect free, and that we are 90% sure that the remaining, untested units of the population are also about 98% defect free; a small sketch of this calculation follows. This ability to predict the quality levels of units that were never tested is the chief strength of statistics and its various techniques. Of course, there are other applications of statistics, but this is the primary one.
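Here is a minimal sketch of that calculation in Python, using the normal approximation for a confidence interval on a proportion. The 98% pass rate and 90% confidence level are taken from the example above; the exact interval width is what the formula produces, not a figure from the post.

    import math

    sample_size = 32_000
    defect_free = 31_360                  # 98% of the sample passed inspection
    p_hat = defect_free / sample_size     # sample pass rate

    z = 1.645                             # z-score for 90% confidence
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
    low, high = p_hat - z * se, p_hat + z * se

    print(f"Sample pass rate: {p_hat:.2%}")
    print(f"90% confidence interval for the population: {low:.3%} to {high:.3%}")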


I speak of statistics this week because it is about time that IT organizations start utilizing all the tools available to them through this discipline and improving their efficiency. There are organizations that utilize function points and advanced statistical techniques in a big way and they are at levels of efficiency that are going to be very hard to beat. It’s time for the others to get going.

Tuesday, June 22, 2010

The Love of Ad-Hoc

I was yet again exposed last week to an instance of an organization performing its tasks in an ad-hoc style. I see this seemingly in-built preference for ad-hoc so often that it strikes me as remarkable. While cost reduction, efficiency and optimization all deserve consideration, ad-hoc is not the path to any of them. A lot of work is required to achieve these three characteristics, and simply avoiding any effort at structuring the way things are performed is not beneficial in any way.


And yet, ad-hoc is so prevalent even today. From an ad-hoc style of gathering requirements to an ad-hoc style of designing, developing and testing, the ad-hoc way seems quite ubiquitous. Why is this? I think that it really just boils down to laziness and inertia. Sure, there is some measure of ignorance and lack of awareness of the current best practices out there, but this, too, can ultimately be attributed to laziness. Perhaps the feeling of comfort that comes from leaving things alone is also a factor. “Don’t fix it if it isn’t broken” seems to be the mantra of safety. The problem with this is that your competition isn’t leaving well enough alone. The competition is marching forward, and if you don’t, you will be left in the dust.

Monday, June 14, 2010

Hoshin Kanri

After postings of a more philosophical nature, let us delve into the nuts and bolts of IT process improvement with Hoshin Kanri. In Japanese, “hoshin” means shining metal, compass, or pointing the direction, and “kanri” means management or control. The name describes the alignment of an organization towards accomplishing a goal through effective planning.


Hoshin Kanri links the high level executive goals and objectives to increasingly lower levels of management and activities until the lowest level activity is aligned to the organization’s overall objectives.


At the beginning of the Hoshin Planning process, top management sets the overall vision and the annual high-level policies and targets for the company. At each level moving downward, managers and employees participate in the definition—derived from the overall vision and their annual targets—of the strategy and detailed action plan they will use to attain their targets. They also define the measures that will be used to demonstrate that they have successfully achieved their targets. These targets, in turn, are passed on to the next level down. Regular reviews take place to identify progress and problems, and to initiate corrective action.


The levels of activity from high to low are:


  • Corporate level objectives


  • Service Level Objectives


  • Functional Objectives


  • Team Objectives


  • Specific Activities, Goals & Resources



The advantage of this type of setup is that the organization is not only aligned towards its high-level objectives but also very well positioned to quickly adjust to changes in strategy. That is, a higher degree of agility and ability to change rapidly is introduced. Accountability is also better established with Hoshin Kanri. A toy sketch of the cascade appears below.
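As a rough illustration of how the cascade links every activity back to a corporate goal, here is a toy sketch. The objective statements and levels are invented for the example; Hoshin Kanri prescribes the cascade itself, not this particular data structure.

    from dataclasses import dataclass
    from typing import Optional

    # Each objective records which higher-level objective it supports,
    # so any activity can be traced back up to the corporate goal.
    @dataclass
    class Objective:
        level: str
        statement: str
        supports: Optional["Objective"] = None

        def trace(self) -> str:
            chain, node = [], self
            while node:
                chain.append(f"{node.level}: {node.statement}")
                node = node.supports
            return " -> ".join(chain)

    corporate = Objective("Corporate", "Reduce cost per transaction by 10%")
    service = Objective("Service", "Cut incident resolution time by 20%", corporate)
    team = Objective("Team", "Automate password resets", service)

    print(team.trace())
    # Team: ... -> Service: ... -> Corporate: ...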


Hoshin Kanri is another tactical advantage an organization can give itself. It is not the be-all and end-all, but it can help optimize things further. The interested reader can find out more about this technique online.

Tuesday, June 8, 2010

The Trust Factor

As I spend more and more time and effort promoting the implementation of best practices, it emerges more and more that the first and most important step is trust: trust in the person giving the advice and recommendations, trust in the process methodologies, and finally trust in themselves, that they can implement the best practices successfully.


Usually all three areas of trust are missing. This then requires the process of gaining trust from people and the organization. However, trust has to be earned, and typically earned over time. Consistent, reliable, dependable performance over time is usually what builds trust. This can be done, but can an organization afford the time required for this trust to fall into place? The time involved could easily be months, perhaps years. By this time, the competition could have implemented these strategies and best practices and moved far ahead in the race.


Therefore, it is necessary that organizations and people quickly gain trust in what is best for them. This, however, is something they must decide. And there we have the Catch-22. How do they decide what is right for them when they don’t trust it? The only way out of this quandary is awareness and education. Become more aware of the best practices out there and the ability to make the right choices will get a lot easier.


The conclusion is that a higher degree of awareness and knowledge is needed to have the right level of trust for the right technique. Those who do not improve continuously will pay the price for their lackadaisical attitude.

Monday, May 31, 2010

Decisive Action

There are far too many people out there, at all levels from top management to entry level, who are allergic to taking decisive action. Now, decisive action does not mean stupid action. It does not mean an improperly thought out, unplanned, immature action. I mean the type of action where a stand of some sort is made and held to, instead of the typical wishy-washy, let’s-stay-forever-safe-and-not-get-anywhere type of fence sitting that has left the IT industry so chaotic and unstructured.


Not every minute of every day is going to require decisive action. But when the need for such action is required, it is imperative that it be taken. Perhaps the greatest general ever, Napoleon Bonaparte, was famous for taking decisive actions that would alter the course of history, sometimes in the thick of battle. What he is less famous for is the huge amount of planning and preparation he went through prior to his decision making. The result? A legend, who almost took over all of Europe.


Now, we in our cubicles and offices need not worry about being a Napoleon making life or death decisions, but the principle remains the same, as does the benefit. So how does all this apply to IT Process Improvement? To cite just one example, when a decision to implement a methodology is made, it should be implemented with a decisive, do-or-die energy. This does not mean that it should not be carefully monitored and course corrections taken. It does not mean that risk analysis and mitigation should not be performed. It does mean that we don’t give up at the first obstacle that comes by, or worse, never take the decision to implement an improvement in the first place. Even an entry-level person in his cubicle can apply decisive action in small and measured ways. Of course, if the organizational culture is one of non-decisiveness, then it may be best to be discreet in this aspect, as I can testify from my own experiences. This is an ideology that is most effective when applied by senior management, whence it naturally trickles down to the lower levels as the organizational culture becomes imbued with a certain fearlessness.


Plan carefully and then take decisive action. This has always been the way of the truly successful.

Monday, May 24, 2010

The Unwanted Stepchild

Speaking with a client last week, I was yet again made to realize the lack of emphasis placed on continual improvement at all levels within the organization. The roles in their organization were built around frantic attempts to make it through the day, with essentially no emphasis on any sort of improvement initiative. It would seem that improvement is considered the stepchild in most IT organizations. And yet it is the most effective long-term strategy an organization can employ to get ahead of the competition.


Oh sure, there is a conceptual agreement that there should always be continual improvement. However, when the rubber hits the road, it all falls apart. Why is this?


Mostly, it’s a lack of commitment and follow-through at all levels within the organization. A lack of planning and of keeping up with changing techniques and methodologies is also to blame. Perhaps most telling of all is what I hear consistently: that not all new methodologies are “good”, or that just because some governing body has released a new body of knowledge, it is not necessary to implement it. Perhaps so, but it is surely necessary to be aware of it and at least consider it. On this topic, I would like to emphasize that most reputable governing bodies (PMI, ASQ, QAI etc.) have very deeply field-tested bodies of knowledge. These techniques and methodologies were created by academics and industry professionals and then utilized in the field numerous times before even being brought to the public. Most people are unaware that ITIL is over 21 years old. Things don’t stay around for 21 years in the IT industry unless there is something to them.


The fact of the matter is that there are a huge number of improvement tools, techniques and methodologies out there. But the desire to implement all this needs to be realized. Until then, continual improvement will remain an unwanted stepchild.

Monday, May 17, 2010

Sifting Through the Avalanche

With the topic of age and leadership analyzed to death in the last couple of weeks, an issue that came up repeatedly was whether there really is a necessity for constant learning by everyone. The challenging thing about this issue is that there is a huge volume of information and change coming at us all at the speed of light. So how do we sift through this vast amount of information in a way that most benefits us? Clearly, we cannot ignore all this information and do nothing. And yet it is humanly impossible for one person to learn everything. So what is the answer?


The most obvious starting point would be to specialize in one’s area of expertise. This means that a Project Manager should pay special attention to the changes in tools, techniques and methodologies occurring in the area of project management, and ensure that they learn what is important and relevant to them. However, staying current in one’s area of specialization is the bare minimum required nowadays.


It is at this point that the IT professional must choose where they wish to move towards in terms of their career and longer term goals and keep current with that. Also, one area that most IT professionals would do well to master is the use of proper terminology and an awareness of what the industry standard is even if it is not utilized at their place of work. A higher level learning of overall management techniques such as ITIL and CMMI may also be a good idea, irrespective of specialization.


Over and above this, folks will want to keep up with the culture of the times which is changing all the time as well. It was not very long ago that Facebook, Twitter and MySpace were pretty much unheard of. Now Facebook has overtaken Google for the number of hits worldwide. Clearly this is something both individuals and organizations must keep abreast of now.


It’s getting to be so that constant learning is now a mandatory part of our lives. It’s a question of how smart we can be about it. Like anything else it’s going to be a challenge of getting the most bang for the buck, or in this case, time.

Monday, May 10, 2010

Age and Leadership 2 – Stayin’ Hip

Last week’s post on age and leadership sparked some lively debate on various discussion boards. The range of issues brought forward by various people and their points of view resulted in a continuation of the topic with this week’s post. Many thanks to the people who posted their point of view and sparked debate and discussion. After all that is what a blog is all about.


Let us start at the beginning. As IT projects usually start with requirements definition, let us also start with the requirements necessary for leadership positions. After all how can we discuss the effect that age has on leadership abilities when the leadership abilities themselves are undefined? Some of the criteria for effective leadership are:


  • Communication skills


  • Problem solving skills


  • Adaptability to change


  • Time management


  • Stress management


  • Interpersonal relations and teamwork


  • Ability to set goals and articulate a vision


  • Managing conflict


This is, of course, not a comprehensive list, but it does cover most of the criteria. The interesting thing about this list is that it has been true and relatively unchanged for millennia. So how does the current age of information and technology affect these criteria? How are these issues affected by the fact that we are in the year 2010?


Adaptability to change has never been as important and that too, at all levels. Market changes, customer preference changes, technology changes, methodology changes, tools and technique changes are all occurring at incredible speed. Of course not all changes are necessary or even good. However, the ability to make the decision on which change to implement is necessary and this can only be achieved by being very well informed and in touch with what is current.


Furthermore, all the other criteria now require knowledge of the latest tools and techniques to accomplish effectively. Time management today is not possible without the latest handheld devices, email, VoIP and other technologies.


For true leadership, the ability to read the future, i.e. the visionary capability, is impossible without a thorough understanding of the market and the culture of the times, which is vastly different from just 10 years ago. I, myself, have to keep going to urbandictionary.com to look up words that did not exist a few years ago.


So what emerges is that a great deal of extra effort is required by all professionals nowadays and senior management in particular to keep up with all that is necessary to perform their duties effectively. What steps could be taken to achieve these leadership abilities relevant to our times? Some effective steps that can be undertaken are:


  • Constant learning (or at the least, awareness) of new technology and methodologies


  • Regular association with younger people and observance of their activities


  • Staying current with the "lingo" used nowadays (which is a job in and of itself)


  • Being open to change and constantly adapting to the new changes in technology AND culture.


In my opinion, the professional who performs these steps will be useful irrespective of age. Even an 18 year old who fails to perform these steps will be obsolete. Perhaps the moral of the story is that we must remain in a state of constant learning more than ever before.

Monday, May 3, 2010

Age and Leadership

Top executives tend to be on the older side, and this for the most part makes sense. After all, they have accumulated years of experience, and this must surely translate into an effective application of said experience to drive the company to new heights. Or does it? Could a 30 year old CIO perform better than a 60 year old CIO? This week’s post is more about getting your opinion and views on this issue, so please do go ahead and post your comments and opinions.


Let us consider the argument of experience: after all, that is about the only positive thing aging brings you. Everything else is negative, in that the body and (to some extent) the mind begin to degenerate, never to reach the prime levels once achieved (typically between ages 18 and 21). So, given that experience (and I include the accumulation of knowledge over the years as experience) is the prime advantage of an older person in a leadership role, does this really apply to IT leadership, and in particular the CIO role?


IT leadership, and the CIO role in particular, require constantly updating your entire paradigm of how IT functions. After all, you have to lead the way not only in understanding new techniques and methodologies but also in creating and implementing them. Furthermore, market share must be captured where the market consists of “playas” that SMS, IM, Tweet and MySpace each other. I, myself, have to work consistently at keeping in touch with the terminology utilized by the younger generation, and find Urban Dictionary (www.urbandictionary.com) an invaluable tool for doing so. But the point I would like to raise is: will a 60 year old be able to stay in tune with this new market and all its peculiarities? Will a 60 year old have the capacity to constantly learn new techniques and methodologies? I know that is what I find myself doing, and believe me, there are times when it is a lot of work and I find myself pushing beyond my limits. So there emerges a counterargument: while an older person will have experience, will he/she have the CORRECT experience that will be of value in this brave new world?


There are, of course, no clear and obvious answers to this quandary. I am in no way suggesting that the old timers be shipped to the glue factory immediately. However, I would certainly recommend that the old paradigm of only allowing older folks to reach high levels of leadership not be adhered to blindly. My prediction is that we will see more and more organizations appoint younger professionals in leadership positions. This is already happening as we blog.

Monday, April 26, 2010

Beyond Application Development

I come from an application development background myself. Moreover, I was involved in all aspects of app dev, including Business Analysis, Programming, Quality Assurance and Project Management. It was only when I was first exposed to ITIL that I realized what a tiny little well I was a part of, and the vastness of all the other parts of IT that I had tuned myself out of.


The SDLC, which for the sake of simplicity let us equate with the world of app dev, is only a part of the goings-on of an IT department. For those of us who have been involved in the SDLC for most of our careers, there is a tendency to think only in terms of the application development lifecycle. However, the shift to understanding the entire IT infrastructure is important. There is currently a paradigm shift occurring in the IT industry globally, where IT’s services to the business are being considered rather than only the software IT produces. The difference is that along with the application (or product) comes a host of related services. Consider a software application. The following will need to be considered once it has been released into operation:


  • Support for users during operation including a help desk that will provide at least first line support.


  • Continuous security management. This is particularly true for any sort of application that involves the transfer of confidential data and financial information.


  • Capacity Management to ensure that the application can support the agreed upon number of users or load.


  • Constant Availability Management checking to ensure that the application is performing as per specifications and to ensure quick follow up if it isn’t.


  • Service Continuity Management to ensure that in the event of a disaster, the application can be brought back up as soon as possible.


  • A continually evolving relationship with the customer to ensure alignment with customer needs and future needs.


  • A strategy that encompasses customer demands and financial considerations to ensure that the correct portfolio of services and applications is chosen, developed, delivered to the customer, operated and finally retired at the appropriate time.


  • A set of supporting processes that assist in providing the above services to the customer.


All this and more must be performed to ensure overall customer satisfaction, over and above the development and testing of the software. The de facto standard for the services described above is ITIL. It is a large body of knowledge that most professionals will need to invest significant time and money to master. It is recommended that most people get started immediately, if not sooner.

Monday, April 19, 2010

The Importance of a PMO

Project Managers are common across organizations all over the planet, and their work function is well understood. But what about a Project Management Office and its relevance to the organization? In most of my experiences with PMOs, there is great disparity in the way they are set up, which results in confusion and a lack of standardization across the industry.


There are two basic ways a Project Management Office can be set up in an organization. The first is to set it up as a sort of super-manager of the project managers, performing project portfolio management and task delegation for the entire project management function of the organization. The second is to set the PMO up in a consulting capacity, where it provides meaningful training, guidance and process improvement capabilities. There are pros and cons to each approach, as well as differences in investment cost and return on investment in each case.


The main tasks that a PMO is expected to perform are:


  • Project support: Provide project management guidance to project managers in the organization.


  • Project management process/methodology: Develop and implement a consistent and standardized process and ensure that it is followed by the staff in the organization.


  • Training: Conduct training programs as needed.


  • Department for project managers: Maintain a centralized office from which project managers are loaned out to work on projects. This may not be performed if the PMO operates on a consulting model.


  • Internal consulting and mentoring: Advise employees about best practices.


  • Project management software tools: Select and maintain project management tools for use by employees.


  • Portfolio management: Establish a staff of program managers who can manage multiple projects that are related and allocate resources accordingly.



The trick really is to determine at the beginning what kind of PMO would best fit the needs and culture of the organization. The next trick is for the PMO not to get involved in everything right at the beginning but to grow its role and responsibility incrementally. A major risk that PMOs face is that direct metrics to determine their effectiveness tend to be difficult to set up, leaving a grey area regarding their value to the organization. This could lead to a situation where the PMO is under-utilized by staff because there is no quantifiable proof of the benefits it provides. All this must be planned for and thought through as early as possible.


My personal view is that all organizations should have a PMO, the only difference being how involved it is in the organization’s project management activities. If nothing else, there is value in having a body that standardizes processes and methodologies for the organization.

Monday, April 12, 2010

Managing IT and Everything Else

There are two main areas in which an organization must perform a balancing act in today’s marketplace and environment. One: there must be excellent supply chain, enterprise resource planning, HR, Sales & Marketing, Accounts and Finance functions in place, managed well on both a day-to-day and long-term basis. This results in an efficient and cost-effective service to the customer. The second is the effective utilization of technology in all the forms relevant to the organization. This could include a savvy webpage, an easy-to-use online store, and email and networking services for the employees of the organization. It is imperative, however, that both these areas are implemented and managed effectively.


The first part, the “overall management”, is the traditional management style that has been in place for centuries. The danger here is that organizations strong in this area tend to neglect the IT side of things. Furthermore, they tend to neglect the IT connection to the rest of the organization and fail to seek out new ways to utilize IT effectively on an ongoing basis. Recently Wal-Mart has come under fire for failing to maintain high levels of IT capability while being quite successful in its supply chain setup.


The second part is, of course, the effective implementation of IT. The danger here, again, is that organizations strong in IT tend to be weak in the rest of their management. Furthermore, with the rapid changes in all aspects of IT, including process methodologies, this is a herculean task for any organization, even one with a core strength in IT.


The fact of the matter is that both sides of the equation must be taken care of for an organization to perform optimally, and the tendency to veer one way or the other must be kept in check. Gone are the days of the tech-savvy “whiz kid” creating a multi-billion dollar organization by sheer brilliance. Also gone are the days of the old-school management style. A strategic, all-round approach is necessary to survive nowadays, with experts called in to advise on best practices in all areas of management.

Monday, April 5, 2010

IT Investments

For all the flak the Government usually takes for being bureaucratic, slow and inefficient, in the world of IT the governments of the western world in particular have done well to adopt a lot of sensible policies and procedures to help increase efficiency. In fact, the US Dept of the Interior’s Information Technology Capital Planning and Investment Control Guide (CPIC) is one of the best investment frameworks out there for IT investments.


It actually all started with the GPRA (Government Performance and Results Act), which mandated that all federal agencies had to be results-oriented. This included defining general goals and objectives for their programs, developing Annual Performance Plans specifying measurable performance goals for all their programs, and publishing an Annual Performance Report showing actual results compared to the projected goals for each program. As a result of this, the Government’s Office of the Chief Information Officer came up with the CPIC guide to govern and manage the Government’s IT investments and to align all IT investments to the strategic goals of the Department.


The CPIC process consists of a circular flow of five phases:


  • Pre-Select Phase: In this phase, the business recommends IT services based on their requirements. A concept is created and a Business Case for the new IT service is developed, evaluated and approved. Based on these actions a final approval to move forward will then be obtained from the relevant stakeholder.


  • Select Phase: In this phase, a project plan is created with established performance goals and quantifiable performance measures. Costs, schedules, benefits and risks are identified and evaluated. With the completion of all steps in this phase, approval is obtained to proceed to the next phase.


  • Control Phase: The goal of the Control phase is to ensure that through timely oversight, quality control, and executive review, the IT initiatives are conducted in a disciplined, well-managed, and consistent manner. It is in this phase that the project is moved from the requirements definition to implementation. The project management occurs here with the project progress being monitored, reported and evaluated with course correction taken as needed. This phase is considered complete when the production deployment or implementation is completed.


  • Evaluate Phase: In this phase, the actual results after implementation are compared to the projected results and any changes or modifications needed are implemented. A Post Implementation Review (PIR) is conducted in this phase and based on the results corrective action is taken. Once this is completed, the next phase is entered.


  • Steady-State Phase: During this phase, analysis is used to determine whether mature systems are continuing to support mission and business requirements. Customer satisfaction is evaluated, and opportunities to improve performance and reduce costs are considered. The investment stays in this phase until the appropriate stakeholders decide to modify, replace, or retire the system. A major enhancement, such as a new architecture or new functionality, then starts the cycle again at the Pre-Select Phase.


The CPIC fits in nicely with ITIL and its Service Strategy phase; it also shares ITIL’s view of IT services as a portfolio. The interested reader can easily obtain more information on this and other investment management frameworks. The question isn’t which one to choose but how well we implement and evaluate the one we have chosen; a toy sketch of the phase flow follows. If IT investments are not managed under a proper investment management process, but rather in some emotional, ad-hoc fashion by top executives, then the return on investment is going to be low – guaranteed.
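For readers who think in code, here is a toy sketch of the CPIC phase flow as a tiny state machine. The phase order follows the guide as summarized above; the gate-approval logic and names are purely illustrative.

    from enum import Enum

    class Phase(Enum):
        PRE_SELECT = "Pre-Select"
        SELECT = "Select"
        CONTROL = "Control"
        EVALUATE = "Evaluate"
        STEADY_STATE = "Steady-State"

    # Circular flow: a major enhancement sends a steady-state investment
    # back to Pre-Select.
    NEXT = {
        Phase.PRE_SELECT: Phase.SELECT,
        Phase.SELECT: Phase.CONTROL,
        Phase.CONTROL: Phase.EVALUATE,
        Phase.EVALUATE: Phase.STEADY_STATE,
        Phase.STEADY_STATE: Phase.PRE_SELECT,
    }

    def advance(current: Phase, approved: bool) -> Phase:
        """Move to the next phase only if the gate review approves."""
        return NEXT[current] if approved else current

    phase = Phase.PRE_SELECT
    for gate_approved in (True, True, False, True, True):
        phase = advance(phase, gate_approved)
        print(phase.value)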

Monday, March 29, 2010

Balance Balance Balance

At a recent chapter meeting with IT and Quality professionals, the topic of documentation came up. To my surprise, a lot of folks were against documentation: not in principle, but in the extreme application of it. But why did they assume that documentation equates to extremely detailed and intensive documentation? A somewhat “light” version of documentation could be implemented which would cover the important issues without involving too much expense and effort. A medium-level documentation effort might very well be the right one for a specific situation. Why assume a super-detailed documentation effort right from the onset and crucify it immediately? This tendency to sit at one extreme (little to no documentation) or the other (super-detailed documentation) is a damaging and ultimately self-debilitating style of thinking. The same extreme-to-extreme thinking appears whenever process implementation (or improvement) or any other beneficial initiative is brought up, and it creates a significant roadblock to the effort.


In reality, any level of documentation or process implementation or Six Sigma effort can be performed. It does not have to be an ultra grand trillion dollar effort. A proper analysis of what best serves the organizational needs must first be performed. With the result of this analysis, a proper, well thought out approach should be planned and implemented. It is usually best to start with a pilot version of the effort as opposed to implementing it across the organization in one go. A phased approach is also beneficial in that any issues with the effort can be corrected and reworked smoothly, with incremental, low-risk implementations along the way. I do not wish to go into the details of an implementation but rather to emphasize the benefits of a balanced approach and the needless harm induced by an unbalanced (extreme-to-extreme) thinking approach.


To be perfectly honest, each moment of each day calls for analysis and a balanced response. I can’t slam on the accelerator of my car too hard or I’ll hit the car ahead of me. If I don’t hit the accelerator hard enough, the car behind me will be frustrated. I must analyze the traffic conditions at each moment and make the correct response. Whether it is driving my car, shopping for groceries or implementing Six Sigma improvements, each unique situation calls for a unique response. As IT professionals, it is especially necessary for us to keep this balanced approach in mind due to the enormous mix of variables in the workings of IT. Even the effort of achieving balance will bring about major positive results and harmony. And the IT work environment could use all the harmony it can get.

Monday, March 22, 2010

Those Who Help Themselves

Hindu culture places particular importance on the beginning of anything, whether it is a new life, a new TV brought into the house, a wedding or the start of a new job. In each case, Lord Ganesh (the famous “Elephant God” to Westerners) is invoked and prayed to first so that he may grant a long, smooth and trouble-free life to the person or item that is starting out. But what about projects, products and services? It is valuable to apply the same importance that Hindus do to beginnings, but in a more scientific and less mystical fashion.


The methodologies and techniques to handle project or service beginnings exist. However, it is typically the usual combination of ignorance and laziness that causes these methodologies to go largely unused. A project is typically started for a knee-jerk emotional reason or as the brainchild or dream of one individual. The same is typically true of a service. The result of an improperly planned initiation is that numerous difficulties are encountered all through the life of the project. While ROI is generally thought of nowadays and some semblance of financial planning is performed, other important questions are rarely asked, such as: Who are the key stakeholders, and are they all on board for the duration of the project? What resources will be needed, and will they be made available to the extent required? What is the end vision for the project, and will it really benefit the organization?


Some suggestions on initiating a project correctly are as follows:


  • Create a Business Case for the Project and obtain the relevant approvals.


  • Perform a feasibility study, which would include the risks associated with the project and alternative solutions.


  • Create a Project Charter and obtain the appropriate approvals.


  • Set up a Project team and obtain the required resources from key stakeholders.


  • Interface with the PMO and align with the correct processes, procedures, systems and tools.


  • Perform an initiation phase review to ensure that all initiation activities were performed, the required outputs of the Initiation phase were obtained, and the initiation goals were achieved.



With these steps performed, a lot of problems that might have occurred are proactively prevented from occurring in the first place. Project initiation is really about analyzing the project to find potential problems and address them right at the beginning. It is best not to simply pray to Ganesh for a smooth project. After all, God helps those who help themselves.
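

The initiation review in the last step can be made mechanical. Here is a small sketch in Python (the checklist item names are paraphrased from the list above, not official terms) that reports which initiation outputs are still missing before a project is allowed to proceed:

    # Sketch: verify initiation outputs before letting a project proceed.
    INITIATION_OUTPUTS = [
        "business case approved",
        "feasibility study done",
        "project charter approved",
        "team and resources secured",
        "PMO processes aligned",
    ]

    def missing_outputs(completed):
        """Return the initiation outputs not yet completed."""
        return [item for item in INITIATION_OUTPUTS if item not in completed]

    done = {"business case approved", "project charter approved"}
    gaps = missing_outputs(done)
    if gaps:
        print("Initiation review failed; missing:", ", ".join(gaps))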

Monday, March 15, 2010

The 8 Dimensions of Quality

According to David Garvin, a Harvard professor and author of the volume “Managing Quality”, quality can be divided into 8 dimensions. The division of quality into sub-dimensions provides a way to more easily design, manage, deliver and measure the product or service for the customer. Perhaps the best thing about this division of quality into 8 sub-dimensions is the better understanding of customer requirements that is gleaned from it (sometimes achieving an understanding of the customer that the customers themselves are not consciously aware of).


Let us consider the proposed dimensions of quality as suggested by Garvin. They are:


  • Performance (or the primary operating characteristics of a product or service): As might be expected, the ability of the product or service to deliver on its primary function is on the list. For a car, the torque, horsepower, brake specifications, etc. would be characteristics of performance.


  • Features (or the secondary characteristics of a product or service): The extra features available or delivered by the product or service also help determine quality. For example, leather seats and a high-end sound system in a car would be attractive features.


  • Conformance with specifications: This is the traditional understanding of quality in the old paradigm, where primary importance is given to ensuring that the product or service meets specifications accurately. However, this is useful only if the specifications are correct (i.e. the previous 2 dimensions are accurate).


  • Durability (or Product Life): How long the product or service functions before failure. This is an important characteristic even if not specifically stated by the customer. After all, who doesn’t like a product that works for a long time? This was displayed during the 80s, when the Japanese auto makers successfully penetrated the US market with superior durability as their main advantage.


  • Reliability (or the frequency with which a product or service fails): For example, Mercedes Benz automobiles require less frequent oil changes, which is attractive to most customers.


  • Serviceability (or the speed, courtesy and competence of repair): If a product or service requires a great deal of disruption and cost to repair, then even if the frequency of failure is low, it could be unattractive to the customer. Luxury sports cars, for example, are very expensive to maintain, and this cost is a factor in the customer’s decision to purchase them, over and above the purchase price.


  • Appearance/Aesthetics: A good example of the importance of aesthetics is Apple’s product line, which brings a distinctive style to customers and is definitely a part of the company’s success.


  • Image/Brand/Perceived Quality: The positive or negative feelings customers associate with the company based on previous interactions. Ford (“Quality Is Job One”) and Maytag (the “Lonely Repairman”) even used quality as a marketing slogan and positioned themselves strategically in the marketplace with this characteristic.


With the 8 dimensions of quality defined, we may observe that this breakdown provides a useful tool to assist with determining customer requirements, especially when the customers are unclear on what they want. Furthermore, the design and delivery of the product or service is simplified due to the separation of quality characteristics, which can then be separately administered. The organization’s marketing and selling strategy can also be informed by a well-defined understanding of the quality characteristics being offered. The benefits of paying attention to the 8 dimensions of quality are significant and should be emphasized in all organizations.
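

One simple way to operationalize the dimensions is to score a product against each and weight the scores by what a given customer segment cares about. Here is a hypothetical sketch in Python (the ratings, weights and segment are invented for illustration, not drawn from Garvin):

    from dataclasses import dataclass

    DIMENSIONS = ["performance", "features", "conformance", "durability",
                  "reliability", "serviceability", "aesthetics", "perceived_quality"]

    @dataclass
    class QualityProfile:
        scores: dict   # dimension -> 0..10 rating from customer research

        def weighted_score(self, weights):
            # Weights express what this customer segment values most.
            total = sum(weights.get(d, 1.0) * self.scores.get(d, 0) for d in DIMENSIONS)
            return total / sum(weights.get(d, 1.0) for d in DIMENSIONS)

    sedan = QualityProfile(scores={"performance": 7, "durability": 9,
                                   "reliability": 9, "aesthetics": 6})
    # A commuter segment that prizes reliability and durability:
    commuter = {"reliability": 3.0, "durability": 2.0}
    print(round(sedan.weighted_score(commuter), 2))

Scoring this way makes it visible when a product is strong on dimensions a segment does not value, which is exactly the kind of mismatch the 8 dimensions are meant to expose.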

Monday, March 8, 2010

The IT Business Gap

Probably the most common phrase heard nowadays is “IT / Business Alignment”. There are also a great many information sources, techniques, methodologies and consultants (myself included) that offer ways and means of making such alignment possible. But how does one go about it at a basic, high level?

One model that comes to mind (and various models exist) is the IT-Business Alignment Cycle, which basically consists of 4 stages:

  • Plan: The requisite first step in any model; the planning of what IT must provide to the business must be performed first. This involves understanding the business’s needs and planning the design and delivery of IT solutions that satisfy those needs. A high level of communication should be established and maintained between business and IT for this to be successful on an ongoing basis. The ITIL processes within the domain of Service Strategy are effective in meeting the needs of the planning stage.


  • Model: This involves the execution of the plan conceived earlier, to the extent that the required IT services are designed and released to the business’s live environment successfully. The ability to track CIs via a well-defined Configuration Management process is crucial. Moreover, the IT service’s Availability, Capacity, Security and Continuity should also be handled utilizing the corresponding processes.


  • Manage: This involves the successful operation of the IT service being provided to the business on a day-to-day basis. For this to be accomplished, the IT department must have effective Incident and Problem Management processes in place, with a capable Help Desk function at the minimum. Effective Change and Release Management processes are also very important, as is the ability to track and monitor promised service levels.


  • Measure: If you can’t measure it, you can’t manage it. This stage actually applies all across the organization and incorporates itself into the previous three stages. The basic premise here is to verify via metrics that the promised services were delivered and managed successfully. This can and should incorporate measuring at the component level, which is not visible to the business. Measuring IT performance at a functional silo level is also beneficial in order to measure and improve IT functional capability. Continual improvement is a key goal of accumulating and analyzing the metrics in any organization.


A constant iteration of these stages should provide a basic framework for keeping IT and business successfully aligned. Of course, far more information is available on this topic, and the reader is encouraged to springboard off this post and delve deeper into this extremely crucial topic.

Wednesday, March 3, 2010

The Smallness Excuse

During a conversation I had with an IT executive today, he mentioned that large organizations tend to be more process oriented while smaller organizations tend to be more ad-hoc in their activities. He then went on to say that it was too much overhead for a 50 person company to employ all the various resources, staff and tools needed to implement processes. It seemed to me, however, that he was committing the usual blooper of going from one extreme to another: that is, either we implement processes in a big way or not at all.


If an organization is small, does that mean it can be chaotic and do as it pleases? Do processes have no place in a 50 person organization? Granted, fewer staff mean fewer communication issues and less complexity in general, but does this mean that there needs to be no discipline whatsoever?


As I mentioned in the post “Pick and Choose” a while back, organizations are at liberty to implement processes to the extent that they feel is necessary and beneficial. In this week’s post, I would like to make a few suggestions on how smaller organizations can make smaller scale process implementations. First, however, I would like to highlight the importance of processes to a small organization.


First of all, a small organization is just that, and more than likely it is up against bigger rivals with access to greater funds and resources. What this translates to is that the smaller organization has to rev up its game in any way that it can simply to survive. It actually emerges, therefore, that process is more important to a smaller organization than a larger one. Kind of like how a little kid in the schoolyard has to train harder to stand up to the bigger boys. Secondly, any structure laid out while an organization is small becomes ready-made groundwork when the organization grows: processes will only have to be modified in the future as opposed to being implemented from scratch. It is therefore far more important that smaller organizations pay the appropriate respect to processes, structure and organizational discipline.


So how does a smaller organization implement processes in a cost-justifiable manner? The answer lies in a proper understanding of processes themselves. What is a process in its simplest form? Simply a grouping of related steps that achieve a common goal in a structured manner. Smaller organizations will tend to have a smaller number of steps, or may only wish to structure some key steps that are crucial. All they have to do is group a smaller number of key steps in a process structure, and they too have a process in place. Confused? Consider the Change Management process. A large organization may have many logging, analysis and authorization steps, which could be streamlined to one or two steps in a smaller organization. Likewise, some of the implementation and post-implementation steps could be streamlined as well. By keeping the main steps of Change Management within a process and implementing it, the smaller organization gains the advantage of being process oriented, as the sketch below illustrates.
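

As a hypothetical sketch of such a streamlined process, the few key Change Management steps for a small shop might look like this in Python (the state names are my own choices, not ITIL terminology):

    # Sketch: a streamlined Change Management process for a small shop.
    # Large-org steps (multi-level authorization, CAB scheduling, etc.)
    # are collapsed into a handful of states.
    STATES = ["logged", "approved", "implemented", "reviewed"]

    class Change:
        def __init__(self, summary):
            self.summary = summary
            self.state = "logged"

        def next_state(self, actor):
            i = STATES.index(self.state)
            if i == len(STATES) - 1:
                raise RuntimeError("change already closed")
            self.state = STATES[i + 1]
            print(f"{actor}: '{self.summary}' -> {self.state}")

    c = Change("Upgrade mail server")
    c.next_state("owner/approver")   # one person may wear both hats
    c.next_state("implementer")
    c.next_state("reviewer")

Even at this tiny scale, every change is logged, approved and reviewed, which is the essence of being process oriented.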


Furthermore, smaller organizations can implement a select few processes to start with and then keep adding others as they grow and can afford more resources for these tasks. Keep in mind that at a small organization, one person can perform multiple roles: a Change Manager could also serve as the Configuration Manager, so two resources are not needed where one will suffice.


It’s really a matter of how aware the folks up top are and how much they want it. Where there is a will, there is a way. Small organizations can be highly process oriented and enjoy the benefits of it. Smallness is in no way an excuse for anarchy; in reality, it is simply laziness that prevents small organizations from being structured and process oriented.

Monday, February 22, 2010

Staff Planning for Critical Tasks

I was recently reminded of the time, a few years ago, that I spent assisting my client in preparing a matrix of critical job functions and the primary and secondary staff who would perform them. This was prompted by a sequence of events where the usual staff who performed the tasks were out sick and the secondary staff were on vacation. Consequently, important work ended up not being accomplished, as it was of a sensitive security nature and not just anyone could log in and perform the task. Senior management then came in with the edict that such a situation must never occur again. Therefore, we found ourselves documenting the matrix and a plan for what should happen if both the primary and secondary staff were unavailable.


Of course, the obvious lesson in all of this is to plan ahead for such contingencies so you won’t be caught with your pants down at the crucial moment. A plan to prepare properly for critical tasks from a staffing point of view would consist of:


  • Identify the critical tasks, the resources necessary to perform them and the scheduling limitations involved.

  • Document the steps involved and the various procedures.

  • Identify the critical staff members – both primary and secondary.

  • Perform practice runs: simulate a crisis situation and invoke the plan. See if it works out and, if not, make the necessary changes to ensure it works right.

  • Plan and set up the primary and secondary staff to work from a different location (or home).

  • At a higher level, plan to reduce the necessity of performing the critical task, or create a workaround for a staff crisis situation. This would possibly necessitate the involvement of senior management and other departments but would be potentially very useful in a crisis.


These are some of the typical, basic steps to take to ensure adequate staff coverage at all times. Of course, every situation will warrant its own additional steps, and readers should plan for their own circumstances accordingly. However, the steps above should be a good starting point; the important thing is to plan before the crisis actually hits.
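

A coverage matrix of this kind is simple enough to check mechanically. Below is a minimal sketch in Python, with made-up task and staff names, that flags critical tasks left uncovered by a given day’s absences:

    # Sketch: critical task -> (primary, secondary) staff matrix,
    # plus a check for tasks left uncovered by today's absences.
    COVERAGE = {
        "rotate security keys":   ("Asha", "Bill"),
        "run payroll batch":      ("Carol", "Dev"),
        "restore nightly backup": ("Asha", "Carol"),
    }

    def uncovered_tasks(absent):
        """Return tasks where both primary and secondary staff are absent."""
        return [task for task, staff in COVERAGE.items()
                if all(person in absent for person in staff)]

    today_absent = {"Asha", "Carol"}
    for task in uncovered_tasks(today_absent):
        print("ESCALATE:", task)   # invoke the higher-level workaround plan

Running such a check against the vacation calendar each week would have caught the exact primary-sick, secondary-on-vacation situation described above before it happened.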

Monday, February 15, 2010

Modernizing Legacy Systems

Running into a co-worker from a company I worked at years ago, we fell into the routine of asking how things were with each other. When I asked him if they had finally made a move off their old AS400 system, he replied in the negative. Now, this is an old legacy system whose effect on the company is similar to swimming with a pair of 50 pound cement blocks glued to your feet. The company has made numerous attempts to modernize and move away from the legacy system in the past, but they have all been unsuccessful. So the system is still in place, incurring higher than necessary operating costs, with the company unable to upgrade or replace it effectively.


This is actually a very familiar situation for a lot of organizations. Perhaps the legacy applications are not so large and not so old, but the mechanisms that prevent them from being sent into nothingness to rest in peace are the same. First, let us look at the advantages and attractions of legacy systems:


  • Over a large span of time, they have become firmly entrenched in the organization’s way of going about things and are quite stable (even though they may be inefficient).

  • The legacy systems typically run mission critical applications that would disrupt the users/customers a great deal if they had to be replaced.

  • The legacy systems are familiar to large numbers of users who know all the special ins and outs of the system well. A new system will entail re-educating those users.


The disadvantages of legacy systems on the other hand are:

  • Enormous cost of ownership due to prehistoric technology and underlying systems. A large number of servers and staff are needed to keep it all going and to make modifications as and when necessary.

  • Built eons ago with a specific purpose in mind, which makes the system extremely inflexible and resistant to modifications. Any alterations take a large amount of resources, time and cost.

  • Typically poorly documented, with only a few crusty old-timers knowledgeable in the inner workings of the system, which translates to difficulty in modifying or replacing it. Moreover, the few who are familiar with the legacy system resist attempts to share knowledge and produce documentation, since keeping things in the dark makes them valuable and reinforces their job security.


So how do we go about replacing the legacy systems? A few guidelines are as follows:

  • Create as much documentation as possible for the existing system. Ideally, a complete set of requirements and functional specification documents should be created.

  • Establish proper risk management and mitigation strategies. Monitor risks throughout the modernization and mitigate as needed.

  • Strategize on the best way to perform the modernization. Perhaps a full scale replacement and recoding is required. Perhaps commercial off-the-shelf software will do the trick. Perhaps it can be replaced in bits and pieces?

  • Help staff understand that the disadvantages of clinging to legacy systems are enormous and that their co-operation in the matter will only be to their benefit.


Making fundamental changes to legacy systems is a hazardous task, mainly because the inner workings of the systems and their inter-dependencies are so rarely understood. Typically, a small modification can have far-reaching consequences. Therefore, it is best to approach this cautiously, but not so cautiously that it never gets accomplished.
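

For the “bits and pieces” route, one commonly described approach (sometimes called the strangler pattern) is to put a routing layer in front of the legacy system and migrate functions behind it one at a time. A hedged sketch in Python, with entirely hypothetical function names:

    # Sketch: route requests to the new system where a function has been
    # migrated, falling back to the legacy system otherwise.
    def legacy_handler(request):
        return f"legacy handled {request}"

    def new_orders_handler(request):
        return f"new system handled {request}"

    MIGRATED = {"orders": new_orders_handler}   # grows as modernization proceeds

    def route(function, request):
        handler = MIGRATED.get(function, legacy_handler)
        return handler(request)

    print(route("orders", "order #42"))     # served by the new system
    print(route("billing", "invoice #7"))   # still served by the legacy system

The appeal of this design is that each migrated function is a small, reversible, low-risk step, which suits exactly the cautious-but-steady approach recommended above.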

Monday, February 8, 2010

QA to Developer Ratio

This week, during an interaction with potential clients, I asked about their QA department: “What is your QA to developer ratio?” The answer was an embarrassed laugh followed by an explanation of how there were very few QA team members compared to the development team. This gave me a good idea not only of the immediate problems faced by the organization but also of the lack of strategic thought, the lack of executive planning, and the longer term problems that the organization will face in the future.


I did not even bother to ask why they had a low QA to developer ratio, as the guaranteed answer was going to be “lack of funding” or some variation thereof. This indicates that management does not consider quality an important part of what the organization provides to its customers. Oh sure, if I were to state this directly to them, they would deny it vehemently, but actions speak louder than words, and the true meaning of their actions is that they do not give quality the importance they claim to. Now, in certain rare cases, a low QA to developer ratio is acceptable and makes sense: in low price, commodity items where the development process is very mature and error free, a lot of QA is neither needed nor financially sensible. However, in the case of complex software produced by a not-so-strong development team, a QA to developer ratio of less than 1 to 1 is simply stating that you do not consider quality important.


There is, of course, no one specific ratio that serves all organizations. In my opinion, however, for most IT and software situations a 1 to 1 ratio of QA to developers is the minimum necessary. To really provide “Cadillac” service, a 2 to 1 ratio of QA to developers should be implemented. The 2 to 1 ratio, while expensive, takes a lot of pressure off the QA staff and makes the QA process enjoyable rather than a pressure cooker kind of environment. However, most companies are very far from the 1 to 1 ratio, so I won’t put too much emphasis on anything higher than that. Of course, in mission critical software where lives are at stake, the QA to developer ratio has been known to go as high as 4 to 1 or even more, which illustrates that organizations do spend on QA when they have to.


It really boils down to whether the goal is to squeeze out as much profit as possible for the quarter, or to truly plan for the future and be as well set up to deal with it as possible. As a former QA team member, I can assure readers that a high QA to developer ratio is very, very beneficial and ultimately cost effective for the organization.
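

The arithmetic behind the ratio is simple enough to sketch (the target ratios below are, as stated above, judgment calls rather than fixed rules):

    # Sketch: how many QA staff are needed to reach a target QA:dev ratio?
    def qa_gap(developers, qa_staff, target_qa_per_dev=1.0):
        needed = developers * target_qa_per_dev
        return max(0, round(needed - qa_staff))

    # 40 developers, 8 QA staff, aiming for the minimum 1 to 1 ratio:
    print(qa_gap(40, 8))          # 32 more QA staff needed
    # "Cadillac" service at 2 to 1:
    print(qa_gap(40, 8, 2.0))     # 72 more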

Monday, February 1, 2010

The Right Way to Reduce Cost

When organizations are faced with the task of reducing their cost, very often, they instinctively think of the removal of personnel. While this may be the correct course to take (especially in extreme market conditions such as the present), generally a great deal of cost savings can be obtained from the removal of waste.


IT waste is unique in that it generally cannot be inventoried and stored for later sale like steel pipes or copper wire. If a developer sat on the bench for a day, then the company just wasted a man-day and the equivalent dollar amount, and there is no way that this expenditure can be recovered. Therefore, a great deal of care and effort should be expended towards ensuring that such waste does not occur in the first place. A second source of waste is needless rework due to defects and misalignment with business requirements; this is particularly true for organizations that perform application development. So another great way to streamline costs is to ensure products and services are created right the first time, which minimizes the cost of testing and rework.


I am reminded of my time consulting at a large mortgage bank. The application, updated and released monthly, always had issues in production after each release. Multiple rounds of QA and user acceptance testing had to be performed, in spite of which defects would find their way to the end user. The following highlights my strategy as a consultant to resolve this situation:


  • My first step was to create a system of metrics for measuring and analyzing defects, so that we knew where we were and whether changes were improving performance or not. After all, if you can’t measure it, you can’t manage it.


  • Next, I worked with QA to re-strategize their approach and to create new test plans and test case documents. This ensured that the application was tested thoroughly and defects were found rather than missed and sent on to the customer.


  • At this stage, a great deal of pressure was taken off user acceptance testing, and those personnel could be partially taken off testing and utilized elsewhere (a cost saving already). The defects found by QA were then analyzed for their root cause by development, and this information was utilized to ensure that the errors did not occur again.


  • The result of all this was that development began to produce software that was relatively defect free; the pressure on QA was significantly reduced, while user acceptance testing only performed a cursory check of software to be released. A number of personnel were freed up to work on other tasks, and customers began to see zero defects in production.


Therefore, a great deal of cost savings was achieved, along with improvement in quality and increased customer satisfaction. The alternative, reducing headcount and therefore cost, would still leave the organization with the issues and inefficiencies it had before, but with fewer people to solve them. Clearly, the former is the better way to go.
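

The metrics system in the first step can start very simply. Here is a hypothetical sketch in Python (the release labels and counts are invented for illustration) that tracks, per release, how many defects were caught internally versus how many escaped to production:

    # Sketch: per-release defect counts and the "escape rate",
    # i.e. the share of defects that reached production.
    releases = {
        "release 1": {"caught_by_qa": 42, "escaped_to_prod": 9},
        "release 2": {"caught_by_qa": 51, "escaped_to_prod": 4},
        "release 3": {"caught_by_qa": 38, "escaped_to_prod": 1},
    }

    for name, d in releases.items():
        total = d["caught_by_qa"] + d["escaped_to_prod"]
        escape_rate = d["escaped_to_prod"] / total if total else 0.0
        print(f"{name}: {total} defects, escape rate {escape_rate:.0%}")

A falling escape rate across releases is exactly the signal that the QA re-strategizing and root cause analysis described above are working.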

Monday, January 25, 2010

IT Risk to the Organization

As IT is a department that typically services other departments within an organization, there is a set of risks that IT poses to the organization. What I am talking about is different from the risks within an IT project or the day-to-day functioning of the IT department; I am focusing on the risks that the IT department as a whole poses to the organization it services.


The risks can be divided into the following main groups:


  • Consequences of failure of services provided by IT

  • Security risks

  • Outsourcing and partner failure risks

  • Governmental and legislative risks

The IT head as well as senior management within the organization should consider these risks and work in tandem to manage them. This can be accomplished in the following ways:

  • Create a risk management strategy and monitor and act on it regularly

  • Engage outside auditors to analyze the risks from a new perspective

  • Always be on the lookout for opportunities to transfer risks (for example, through insurance or contractual terms with vendors)

  • Strengthen the quality of IT processes within the organization

In this way, organizations can get a proactive handle on potential risks and manage them before they become critical issues. It really boils down to making the effort and making it happen. There exist endless possible excuses not to do it, but in the end consider that the competition is doing it; can you take the risk of not managing your risks?
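

A minimal risk register along these lines can be sketched in a few lines of Python (the risks, probabilities and impact scores below are illustrative guesses, not real data):

    # Sketch: score risks by probability x impact and review the top ones.
    risks = [
        {"risk": "core service outage",       "prob": 0.10, "impact": 9},
        {"risk": "security breach",           "prob": 0.05, "impact": 10},
        {"risk": "outsourcing partner fails", "prob": 0.15, "impact": 6},
        {"risk": "new regulation compliance", "prob": 0.30, "impact": 4},
    ]

    for r in sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True):
        print(f'{r["prob"] * r["impact"]:.2f}  {r["risk"]}')

Ranking by expected impact keeps the monitoring effort focused on the handful of risks that matter most, which is what a regularly reviewed risk management strategy amounts to in practice.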

Wednesday, January 13, 2010

Levels of Cost Optimization

If an organization wishes to optimize its costs, there are numerous ways it can go about it. The question is which methods of optimization will bring about beneficial results in the long term and which are knee-jerk reactions that bring about a short term benefit (and even a long term loss).


Gartner provides us with a framework of cost optimization that consists of four levels, each at a higher level of maturity and benefit. The broad categorization of these four areas is:


  • IT Procurement: This consists of smarter procurement techniques, buying from cheaper and better vendors, etc. It is the least “mature” of the techniques and only provides low level benefits with little lasting impact.

  • Cost Savings within IT: This consists of identifying opportunities to reduce IT costs, which usually ends up meaning lay-offs or outsourcing. While these are valid steps to take, they are again not “high maturity” decisions that will have long term, strategic benefits to the organization.

  • Joint Business and IT Cost Savings: This is one level more strategic than the previous method; IT confers with business to come up with areas of cost optimization that will have minimal negative impact on the business.

  • Enable Innovation and Business Restructuring: This consists of encouraging innovation, implementing process improvements and restructuring business to align with customer demands. It is by far the best technique for bringing about cost optimization with long term strategic benefits.


Organizations, however, rarely take the long term, visionary route; instead they approach cost optimization with the attitude of haggling with vendors and laying off people. This kind of cost cutting will rarely result in lasting benefits.

Monday, January 4, 2010

Eight Percent

As per Gartner, the cost involved in developing an IT application and bringing it live is only 8% of the total cost of keeping it live for 15 years. And this, in a nutshell, is where most organizations do not plan properly and run into problems. The “whole life cost”, or total cost of ownership (TCO), is rarely computed in a responsible manner. Rather, a knee-jerk reaction to changing market circumstances drives the decision making process (if there is one) and a project is hastily assembled. After the project is completed and the maintenance costs start mounting, there is “surprise”, and IT has to request more funding for its operations.


This whole sequence of events can be avoided if organizations simply add up the TCO and make responsible decisions in conjunction with business. The areas of expenditure that should be taken into account are:


  • Planning

  • Design

  • Construction/acquisition

  • Operations

  • Maintenance

  • Renewal/rehabilitation

  • Financial (depreciation and cost of finance)

  • Replacement or disposal


A great many tools and techniques for TCO exist and are readily available on the web. My goal here, however, is to emphasize the importance of performing a TCO analysis and to point out the pitfalls of failing to perform this step.
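

As a back-of-the-envelope illustration of the 8% figure, here is a sketch in Python that totals the cost areas above over a 15-year horizon (all figures are hypothetical, chosen only to make the build share come out near 8%):

    # Sketch: whole-life (TCO) arithmetic over a 15-year horizon.
    # All figures are hypothetical, in thousands of dollars.
    build = {"planning": 50, "design": 80, "construction": 270}       # one-time
    annual = {"operations": 200, "maintenance": 80, "financial": 20}  # per year
    renewal = 150      # mid-life rehabilitation, one-time
    disposal = 30      # replacement/disposal at end of life
    years = 15

    build_cost = sum(build.values())
    total = build_cost + years * sum(annual.values()) + renewal + disposal
    print(f"build: {build_cost}, total: {total}, "
          f"build share: {build_cost / total:.0%}")

Even a rough table like this, filled in before the project starts, surfaces the recurring costs that dwarf the initial build and forces the funding conversation to happen up front rather than as a yearly “surprise”.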


The reason, in my opinion, that a lot of organizations suffer from poor TCO calculations (in spite of the information being easily available and not very difficult to compute) is that they often emotionally stake the next project as the “magic” deliverer from their present dilemmas. It is this emotion and lack of calculated analysis that leads organizations into the quicksand of wrong decisions and incorrect cost computation.


Organizations must make an accurate and well thought out business and financial analysis of every proposed undertaking. If they neglect this step, they will pay for it later, as the 92% of unaccounted cost is waiting to hit them where it hurts.