Monday, February 22, 2010

Staff Planning for Critical Tasks

I was recently reminded of the time I spent a few years ago assisting a client in preparing a matrix of critical job functions and the primary and secondary staff who would perform them. This was prompted by a sequence of events in which the usual staff who performed a task were out sick and the secondary staff were on vacation. Consequently, the important work ended up not being accomplished; it was of a sensitive security nature and not just anyone could log in and perform the task. Senior management then came in with the edict that such a situation must never occur again. Therefore, we found ourselves documenting the matrix and a plan for what should happen if both the primary and secondary staff were unavailable.


Of course, the obvious lesson in all of this is to plan ahead for such contingencies so you won't be caught with your pants down at the crucial moment when they do happen. A plan to properly cover critical tasks from a staffing point of view would consist of the following (a small illustrative sketch follows the list):


  • Identifying the critical tasks, the resources necessary to perform them, and any scheduling limitations involved.

  • Documenting the steps involved and the various procedures.

  • Identifying the critical staff members – both primary and secondary.

  • Performing practice runs: simulate a crisis situation, invoke the plan, see if it works out, and if not, make the necessary changes to ensure it does.

  • Planning and setting up the primary and secondary staff to be able to work from a different location (or from home).

  • At a higher level, planning to reduce the need for the critical task, or creating a workaround for when a staffing crisis occurs. This would possibly necessitate the involvement of senior management and other departments, but could be very useful in a crisis.
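
To make the matrix idea a little more concrete, here is a minimal sketch of how such a coverage matrix might be represented and checked for gaps. The task names, staff names, and the find_coverage_gaps helper are purely hypothetical illustrations, not something from the actual client engagement.

```python
# Hypothetical sketch of a critical-task coverage matrix.
# Task names and staff assignments are made up for illustration.

from dataclasses import dataclass

@dataclass
class CriticalTask:
    name: str
    primary: str              # primary staff member
    secondary: str            # secondary (backup) staff member
    procedure_doc: str = ""   # link/path to the documented steps

# Staff currently unavailable (sick, vacation, etc.)
unavailable = {"alice", "bob"}

tasks = [
    CriticalTask("Rotate security certificates", primary="alice", secondary="bob",
                 procedure_doc="wiki/cert-rotation"),
    CriticalTask("Run month-end payroll batch", primary="carol", secondary="dave",
                 procedure_doc="wiki/payroll-batch"),
]

def find_coverage_gaps(tasks, unavailable):
    """Return tasks where neither the primary nor the secondary is available."""
    return [t for t in tasks
            if t.primary in unavailable and t.secondary in unavailable]

for task in find_coverage_gaps(tasks, unavailable):
    print(f"GAP: no available staff for '{task.name}' - invoke contingency plan")
```

Even a simple list like this makes the gaps obvious before they become a crisis, which is really the whole point of the exercise.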


These are some of the typical, basic steps to take to ensure adequate staff coverage at all times. Of course, every situation will warrant its own steps, and readers should plan for their particular circumstances accordingly. However, the steps above should be a good starting point. The important thing is to plan before the crisis actually hits.

Monday, February 15, 2010

Modernizing Legacy Systems

I recently ran into a co-worker from a company I worked at years ago, and we went into the routine of asking each other how things were. When I asked him whether they had finally made a move off their old AS400 system, he replied in the negative. This is an old legacy system whose effect on the company is similar to swimming with a pair of 50-pound cement blocks glued to your feet. Of course, the company has made numerous attempts to modernize and move away from the legacy system in the past, but they have all been unsuccessful. So the system is still in place, incurring a higher than necessary operating cost, with the company unable to upgrade or replace it effectively.


This is actually a very familiar situation for a lot of organizations. Perhaps the legacy applications are not as large or as old, but the various mechanisms that prevent them from being retired to rest in peace are the same. First, let us look at the advantages and attractions of legacy systems:


  • Over a large span of time, they have become firmly entrenched in the organization's way of doing things and are quite stable (even though they may be inefficient).

  • Legacy systems typically run mission-critical applications, and replacing them would disrupt users and customers a great deal.

  • Legacy systems are familiar to large numbers of users who know all of their special ins and outs well. A new system would entail re-educating those users.


The disadvantages of legacy systems, on the other hand, are:

  • Enormous cost of ownership due to prehistoric technology and underlying systems. Large numbers of servers and staff are needed to keep it all going and to make modifications as and when necessary.

  • Built eons ago with a specific purpose in mind, which makes them extremely inflexible and resistant to change. Any alteration takes a great deal of resources, time, and money.

  • Typically poorly documented, with only a few crusty old-timers knowledgeable about the inner workings, which makes modifying or replacing the system difficult. Also, the few who are familiar with the legacy systems often resist attempts to share that knowledge and produce documentation, since keeping things in the dark makes them valuable and reinforces their job security.


So how do we go about replacing the legacy systems? A few guidelines are as follows:

  • Create as much documentation as possible for the existing system. Ideally, a complete set of requirements and functional specification documents should be created.

  • Put proper risk management and mitigation strategies in place. Monitor risks throughout the modernization and perform mitigation as needed.

  • Strategize on the best way to perform the modernization. Perhaps a full-scale replacement and recoding is required; perhaps commercial off-the-shelf software will do the trick; or perhaps the system can be replaced in bits and pieces (see the sketch after this list).

  • Educate staff that the disadvantages of clinging to legacy systems are enormous and that their cooperation in the matter will only be to their benefit.
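
On the bits-and-pieces option: one common way to do it is to put a thin routing layer in front of both the old and the new system and migrate one function at a time. Here is a minimal sketch of that idea; the function and module names (legacy_lookup_account, new_lookup_account) and the migrated-feature list are hypothetical stand-ins, not the actual systems discussed above.

```python
# Minimal sketch of piece-by-piece replacement behind a routing facade.
# Callers talk only to the facade and never know which backend served them.

def legacy_lookup_account(account_id):
    # Stand-in for a call into the old legacy system.
    return {"id": account_id, "source": "legacy"}

def new_lookup_account(account_id):
    # Stand-in for the replacement implementation.
    return {"id": account_id, "source": "new"}

# Functions already migrated to the new system; this set grows over time.
MIGRATED = {"lookup_account"}

def lookup_account(account_id):
    """Facade the rest of the organization calls."""
    if "lookup_account" in MIGRATED:
        return new_lookup_account(account_id)
    return legacy_lookup_account(account_id)

print(lookup_account("A-1001"))  # served by the new system once migrated
```

The attraction of this approach is that each migrated piece can be verified and rolled back independently, which reduces the all-or-nothing risk that has sunk so many big-bang replacement attempts.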


Making fundamental changes to legacy systems is hazardous, mainly because the inner workings of the systems and their interdependencies are so rarely understood. A small modification can have far-reaching consequences. Therefore, it is best to approach this cautiously, but not so cautiously that it never gets accomplished.

Monday, February 8, 2010

QA to Developer Ratio

This week, during an interaction with potential clients, I was speaking with them about their QA department and asked, "What is your QA to developer ratio?" The answer was an embarrassed laugh followed by an explanation of how few QA team members there were compared to the development team. This gave me a good idea not only of the immediate problems faced by the organization but also of the lack of strategic thought, the lack of executive planning, and the longer-term problems the organization will face in the future.


I did not even bother to ask why they had a low QA to developer ratio, as the guaranteed answer was going to be "lack of funding" or some variation thereof. This indicates that management does not consider quality an important part of what the organization provides to its customers. Oh sure, if I were to state this directly to them, they would deny it vehemently, but actions speak louder than words, and the true meaning of their actions is that they do not give quality the importance they claim to.

Now, in certain rare cases, a low QA to developer ratio is acceptable and makes sense. This would be for low-price commodity products where the development process is very mature and error free, so not much QA is needed nor makes financial sense to deploy. However, for complex software produced by a not-so-strong development team, a QA to developer ratio of less than 1 to 1 is simply stating that you do not consider quality important. There is, of course, no one specific ratio that serves all organizations. However, in my opinion, for most IT and software situations, a 1 to 1 ratio of QA to developers is the minimum necessary. To really provide "Cadillac" service, in my opinion, a 2 to 1 ratio of QA to developers should be implemented. The 2 to 1 ratio, while expensive, takes a lot of pressure off the QA staff and makes the QA process fun rather than a pressure cooker kind of environment. However, most companies are very far from even the 1 to 1 ratio, so I won't put too much emphasis on anything higher than that. Of course, in mission-critical software where lives are at stake, the QA to developer ratio has been known to go as high as 4 to 1 or even more, which illustrates that organizations do spend on QA when they have to.


It really boils down to whether the goal is to squeeze out as much profit as possible for the quarter, or to truly plan for the future and be as well set up to deal with it as possible. As a former QA team member, I can assure readers that a high QA to developer ratio is very, very beneficial and ultimately cost effective for the organization.

Monday, February 1, 2010

The Right Way to Reduce Cost

When organizations are faced with the task of reducing their costs, they very often instinctively think of removing personnel. While this may be the correct course to take (especially in extreme market conditions such as the present), generally a great deal of cost savings can be obtained by removing waste instead.


IT waste is unique in that it generally cannot be inventoried and stored for later sale like steel pipes or copper wire. If a developer sits on the bench for a day, the company has wasted a man-day and the equivalent dollar amount, and there is no way that expenditure can be recovered. Therefore, a great deal of care and effort should be expended on ensuring that waste does not occur in the first place. A second source of waste is needless rework due to defects and misalignment with business requirements; this is particularly true for organizations that perform application development. So another great way to streamline costs is to ensure that products and services are created right the first time, which minimizes the cost of testing and rework.


I am reminded of my time consulting at a large mortgage bank. The application, updated and released monthly, always had issues in production after each release. Multiple rounds of QA and user acceptance testing had to be performed, in spite of which defects would find their way to the end user. The following highlights my strategy as a consultant to resolve the situation:


  • My first step was to create a system of metrics for measuring and analyzing defects, so that we knew where we were and how changes were affecting performance. After all, if you can't measure it, you can't manage it (a rough sketch of such metrics appears after this list).


  • Next, I worked with QA to re-strategize their approach and to create new test plans and test case documents. This ensured that the application was tested thoroughly and that defects were found rather than missed and passed on to the customer.


  • At this stage, a great deal of pressure was taken off user acceptance testing, and those personnel could be partially taken off testing and utilized elsewhere (a cost saving already). The defects found by QA were then analyzed for their root cause by development, and this information was used to ensure that the same errors did not occur again.


  • The result of all this was that development began to produce software that was relatively defect free, the pressure on QA was significantly reduced, and user acceptance testing only needed to perform a cursory check of software to be released. A number of personnel were freed up to work on other tasks, and customers began to see zero defects in production.
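
As promised above, here is a rough sketch of the kind of defect metric that makes trends like this visible: the defect escape rate per release. The release names and counts below are made up for illustration and are not figures from the actual engagement.

```python
# Rough sketch of per-release defect metrics and escape rate.
# Release names and defect counts are hypothetical illustrations.

releases = [
    # (release, defects found by QA, by UAT, in production)
    ("2009-10", 42, 11, 9),
    ("2009-11", 55, 8, 6),
    ("2009-12", 61, 5, 2),
    ("2010-01", 48, 3, 0),
]

print(f"{'Release':<10}{'Total':>8}{'Escape rate':>14}")
for name, qa, uat, prod in releases:
    total = qa + uat + prod
    # Escape rate: share of all defects that slipped past QA and UAT into production.
    escape_rate = prod / total if total else 0.0
    print(f"{name:<10}{total:>8}{escape_rate:>14.1%}")
```

A simple table like this, reviewed after every release, is usually enough to show management whether the process changes are actually working or not.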


Therefore, a great deal of cost savings was achieved, along with improved quality and increased customer satisfaction. The alternative, reducing headcount and therefore cost, would still leave the organization with the issues and inefficiencies it had before, but with fewer people to solve them. Clearly, the former is the better way to go.