Real competitive advantage via IT is now based on service innovation, and the implications for talent management and recruiting are far reaching.
A recent white paper from the University of Cambridge (http://www.ifm.eng.cam.ac.uk/ssme/documents/080428ssi_us_letter.pdf) brings to mind a distinction I make between first-order automation and the higher-order business cases that now dominate IT.
Most IT projects are now based on second- and increasingly third-order business cases: Commodity technologies are still finding their way into first-order solutions, but mostly in late-adopter sectors, such as governmental services. In almost all businesses, the low-hanging automation business cases have already been picked and consumed, or thrown onto the technology compost heap.
Yet our education and training methods for preparing the IT workforce are still largely based on first-order automation and its constituent technologies: We still teach the design of operating systems and compilers, train students on low-level programming languages, and so on. But as a percentage of industry labor, such base-technology work is rare and, outside of the open source community, limited to a select few companies.
Successful second- and third-order business case projects require modeling, analytic, design, management and financial skills that today are acquired only through many years of hard knocks. Our Cambridge Dons suggest that,
"Service Science is emerging as a distinct field. Its vision is to discover the underlying logic of complex service systems and to establish a common language and shared frameworks for service innovation. To this end, an interdisciplinary approach should be adopted for research and education on service systems…Industry refers to these people as T-shaped professionals, who are deep problem solvers in their home discipline but also capable of interacting with and understanding specialists from a wide range of disciplines and functional areas."
Bowen & Spohrer at IBM have suggested that a hybrid degree program, half business and half computer science, is best suited to the new age.
My own education was almost entirely interdisciplinary and liberal arts; my technology training came later and on the job. Back in the eighties, that was novel; but in the future, perhaps not. I also think the trend bodes well for integrating more women into the IT workforce, women being generally more communicative and group-engaged than the "deep problem solvers" who are 90% male.
Probably not, but it is surely changing rapidly, within both web content businesses and software-as-a-service firms. No doubt corporations will also follow this trend line as experience with faster, more efficient and less costly testing techniques spreads.
Following up on my last post, here is some direct feedback from Robert Johnson at Facebook, who spoke recently about their process for software development and testing.
"Facebook developers are encouraged to push code often and quickly. Pushes are never delayed and are applied directly to parts of the infrastructure. The idea is to quickly find issues and their impacts on the rest of the system, and to quickly fix any bugs that result from these frequent small changes."
"Second, there are limited QA (quality assurance) teams at Facebook, but lots of peer review of code. Since the Facebook engineering team is relatively small, all team members are in frequent communication. The team uses various staging and deployment tools, as well as strategies such as A/B testing and gradual, targeted geographic launches. This has resulted in a site that has experienced, according to Robert, less than 3 hours of downtime in the past three years."
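Gradual, targeted launches and A/B tests of the kind Johnson describes are commonly implemented with deterministic user bucketing. Here is a minimal sketch of that general technique; the function names, feature name and hashing scheme are my own illustration, not Facebook's actual tooling:

```python
import hashlib

def rollout_bucket(user_id: str, feature: str, buckets: int = 100) -> int:
    """Deterministically map a user to one of `buckets` buckets for a feature.

    Hashing user_id together with the feature name keeps a user's assignment
    stable across sessions while de-correlating buckets between features.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_enabled(user_id: str, feature: str, percent_enabled: int) -> bool:
    """Enable the feature for roughly `percent_enabled`% of users."""
    return rollout_bucket(user_id, feature) < percent_enabled

# Ramp a hypothetical feature from 0% to 100% by raising percent_enabled
# over time, watching error rates at each step before widening the rollout.
print(is_enabled("user-42", "new-news-feed", 0))    # False at 0% rollout
print(is_enabled("user-42", "new-news-feed", 100))  # True at full rollout
```

Because the bucketing is deterministic, a bug found at the 1% stage affects the same small cohort until it is fixed, which is what makes the "push small, watch, fix" loop workable.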
Johnson certainly confirms some of Beck's observations about how testing changes when the business need for agility starts to dominate, specifically:
The argument against agile QA is usually that such techniques have no place in truly mission-critical corporate applications, and that such methods sacrifice quality by shifting the burden to the consumer.
Well, if you haven't noticed, the quality burden has been shifting to the consumer for a couple decades now. For example, permanent beta testing is now the norm at most large Internet content and ecommerce firms.
Traditional QA engineering won't change dramatically where the risks due to poor quality are too great: medical devices or aeronautics; regulated markets, such as securities exchanges; on-line banking or card payment systems; core corporate accounting; etc. But in a typical corporate application portfolio, most applications pose no such risks, and for them these new QA techniques offer more agility and faster time to market, with significant cost savings.
The logic of capitalism is such that quality is never an absolute, but just one among many factors that influence consumer demand. When properly marketed, a shift in quality techniques that results in lower costs to the consumer without significant loss of features will almost always find new customers and more demand from existing customers. I argue this will be as true for internal corporate IT as with external consumers, and once again, Internet development techniques will lead the way.
Or is it the other way around: Manage reality and perception will follow? Please follow our author as he plays tennis with himself, and arrives at a dialectical conclusion up in the umpire's chair.
Thesis: If indeed, following Peter Drucker, "the purpose of a business is to create a customer", then the primary goal of management is marketing. Managers need to control the perception of a business because "the aim of marketing is to know and understand the customer so well the product or service fits him and sells itself."
"Reality", from this marketing perspective, is the realization of sales, and has little to do with whether or not a product or service "really" works. As long as messages defining fitness to purpose can be created and transmitted at a reasonable cost, and as long as these messages are believed by customers and exploited by sales agents, then it's all good, right?
Marketing, outside of select technical sectors, doesn't need to operate in the realm of logic, only in the emotional and cultural soup of a market economy and Maslow's "hierarchy of needs". For example, marketing could be used to bring us into an informed, adult discussion of healthcare policy, but it is far easier (and clearly for some marketers, more fun) to generate tirades about "death panels" via innuendo and exploitation of ignorance.
And as anyone who lives in a corporate hierarchy can attest, one can also "solve" any management problem by careful marketing. Within most businesses, and especially in politics, it often doesn't matter whether a manager is "really" correcting a problem, as long as the perception of the manager's activities is controlled. Unfortunately, manipulating human perceptions is oftentimes simpler and less costly than changing a product, improving a service, replacing an underperforming resource, telling the truth, following lawful conduct, or even making any kind of logical argument at all.
Antithesis: Product developers, and their close relations in manufacturing, have long known that their parts of a business can only operate effectively and efficiently if problems are "really" solved, and not just considered another problem of perception. Customers are only, over the long term, enthusiastic about products or services that not only appear to work for them, but "really" work well for them, and over the entire product's lifespan, too. Otherwise, both the customer and the product manufacturer will incur additional costs that reduce competitiveness over time.
"Reality", from the product perspective, is usually objective, often physically so, but always measurable. Isn't marketing and sales easier if the product "really" works, not just that we say it does? And in the long run, isn't that a better outcome for both producer and consumer?
Synthesis: The "manage reality" camp, technocratic and politically elitist though it is, has a strong argument, because over the long run it really does cost less, and generate more revenue, if you manage reality and not just perception. The American automobile industry is surely one glaring example, where marketing always came first and product second. (Ironically, they often employed Peter Drucker, too.) Spending 17% of GNP on healthcare services is another example where realization of revenue is not consistent with long-term survival, hence our mediocre public health statistics and 15% of the population uninsured.
But the "manage perception" camp, manipulative, short-sighted and politically populist though it may be, is grounded in reality also: Consumers and markets are decidedly not careful evaluators of costs and benefits, except maybe (and it's a big maybe) in the aggregate. Humans evolved innumerable heuristics to help us survive on the savannahs of Africa, and these make us biologically directed, emotionally buffeted, and drawn to magical explanations that don't require much thought. Given the hundreds of generations it took to enable our species to survive and eventually thrive, we are not likely to shed these heuristics easily. We will remain deeply "conservative", except when threatened or forced to behave differently.
The best products or services in the world will go un-consumed if purchasers don't believe they "fit". That belief must be achieved, whether by logic, an appeal to prejudice and bigotry, or marching around in a gorilla suit.
But let's hope we have the wisdom to manage both reality and perception, each in its proper proportion. Marketing and the management of perception are surely necessary for the realization of sales, but they are not an end in themselves, much less an assurance of long-term viability.
Fine tuning IT project risk management by phase has the potential to improve the quality of project outcomes and reduce failures. But as all experienced project managers know, there is no one life cycle model that can capture all the variations we encounter in the real world.
As I reported previously, researchers measured business success factors by the phase of a new technology's deployment in its life cycle. The life cycle phases used in the study were as follows: Initiation, Adoption, Adaptation, Acceptance, Routinization, and Infusion.
The success factors were categorized at a high level as: Commitment, Knowledge, Communications, Planning, and Infrastructure.
The success factor weightings for five of the six phases are summarized in the following table:
Phase | Commitment | Knowledge | Communications | Planning | Infrastructure |
Initiation | 15 | 60 | 20 | 30 | 15 |
Adoption | 25 | 20 | 35 | 15 | 25 |
Adaptation | 20 | 5 | 20 | 35 | 20 |
Routinization & Infusion | 40 | 15 | 25 | 25 | 40 |
Let's see how we could adapt these findings to a commonly used project risk management technique. The following table includes a variety of risk factors to be evaluated periodically with mitigations assigned accordingly. A project's total risk profile should, of course, diminish towards zero as the project nears completion:
# | Project Risk Description | Risk Factor | Mitigations |
1 | Application Complexity | ||
2 | Baselines | ||
3 | Contract or SOW | ||
4 | Customer Expectations | ||
5 | Customer Involvement | ||
6 | Customer Acceptance | ||
7 | Design level of detail | ||
8 | External Dependencies | ||
9 | Hardware (new) | ||
10 | Software (new) | ||
11 | Interfaces or Integrations | ||
12 | Experience of team | ||
13 | Productivity of team | ||
14 | Project Management | ||
15 | Project planning/scheduling | ||
16 | Project Resources | ||
17 | Requirements Stability | ||
18 | Requirements Definition | ||
19 | Subcontractor involvement | ||
20 | System Performance | ||
21 | Network Performance | ||
22 | Workload on team | ||
Oct. | Totals by Month: | ||
Nov | |||
Dec | |||
Jan, etc. |
Obviously, certain categories of risk are not as significant during Initiation as during Adaptation, for example. Furthermore, during periodic risk factor reviews, irrelevant factors are distracting and a waste of time. A simple improvement would be to add a column of phase-weighted scalars, ignoring any factors when the scalar is zero:
Risk Factor | Initiation | Adoption | Adaptation | Acceptance | Routinization & Infusion |
Application Complexity | 3 | 3 | 3 | 3 | 3 |
Baselines | 0 | 1 | 2 | 2 | 2 |
Contract or SOW | 1 | 3 | 3 | 3 | 0 |
Customer Expectations | 2 | 2 | 2 | 3 | 2 |
Customer Involvement | 1 | 2 | 2 | 3 | 3 |
Customer Acceptance | 1 | 2 | 2 | 3 | 2 |
Etc. |
The resulting project risk matrix, easily implemented in a spreadsheet, looks like this when filled in. A metrics team could track such totals and then assign certain thresholds for the risk totals to help decide which projects are at low, medium or high risk:
# | Adaptation Phase | Project Specific | Phase Weighting | Factor * Weight | Mitigations |
1 | Application Complexity | 2 | 3 | 6 | Educate customer organizations beyond senior managers. |
2 | Baselines | 1 | 1 | 1 | Have vendor commit to v3.5 before evaluation. |
3 | Contract or SOW | 2 | 3 | 6 | Sourcing to review ASAP. |
4 | Customer Expectations | 3 | 2 | 6 | Underpromise and overdeliver. |
5 | Customer Involvement | 3 | 2 | 6 | Do not start Adaptation until customer employee budget is approved. |
6 | Customer Acceptance | 3 | 2 | 6 | Do not start Adaptation until customer selects test vendor. |
7 | Design level of detail | 1 | 2 | 2 | In good shape for now. |
Etc. | Etc. | ||||
Total for Oct | 120 | Within medium risk range. |
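The arithmetic behind the table above is simple enough to automate. Below is a minimal Python sketch of the weighted scoring, using the sample scores from the seven rows shown; the risk-band thresholds are illustrative placeholders, since the post leaves calibration to a metrics team:

```python
# Phase-weighted risk scoring, as sketched in the post. Factor scores,
# phase weights, and band thresholds here are illustrative only.

# (risk factor, project-specific score 0-3, phase weight 0-3)
ADAPTATION_RISKS = [
    ("Application Complexity", 2, 3),
    ("Baselines", 1, 1),
    ("Contract or SOW", 2, 3),
    ("Customer Expectations", 3, 2),
    ("Customer Involvement", 3, 2),
    ("Customer Acceptance", 3, 2),
    ("Design level of detail", 1, 2),
]

def risk_total(risks):
    """Sum factor * weight, skipping factors whose phase weight is zero."""
    return sum(score * weight for _, score, weight in risks if weight > 0)

def risk_band(total, low=10, high=40):
    """Classify a monthly total against illustrative thresholds."""
    if total < low:
        return "low"
    if total < high:
        return "medium"
    return "high"

total = risk_total(ADAPTATION_RISKS)
print(total, risk_band(total))  # the seven rows above total 33 -> "medium"
```

Zero-weight factors drop out of the sum automatically, which implements the earlier suggestion to ignore irrelevant factors during periodic reviews rather than wasting time scoring them.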
One of the dilemmas of our business is that no one life cycle model can ever capture the wide variation of the IT projects we perform. Yet an overly complex life cycle model quickly makes project management too complex, and becomes in itself a risk to project success. IT managers must use judgment based on experience to ensure their life cycle model accurately reflects an individual project, without introducing too much overhead and complexity. Hopefully, the above example strikes a happy medium: an improvement grounded in quantitative research, yet easy to implement.
A recent study (Brown et al., CACM, Volume 50, #9) provides a handy matrix of which IT business success factors are critical in each phase of a new technology's life cycle, suggesting potential improvements to our risk management techniques.
What the researchers discovered was that business success factors varied considerably depending on the phase of a new technology's deployment within an organization. The life cycle phases used in the study were as follows: Initiation, Adoption, Adaptation, Acceptance, Routinization, and Infusion.
The success factors were categorized at a high level as: Commitment, Knowledge, Communications, Planning, and Infrastructure.
The success factor weightings for five of the six phases are summarized in the following table:
Phase | Commitment | Knowledge | Communications | Planning | Infrastructure |
Initiation | 15 | 60 | 20 | 30 | 15 |
Adoption | 25 | 20 | 35 | 15 | 25 |
Adaptation | 20 | 5 | 20 | 35 | 20 |
Routinization & Infusion | 40 | 15 | 25 | 25 | 40 |
This makes intuitive sense to me:
My next post will show how we might use these findings to improve our project risk management techniques.
Virtual teams and business relationships without any physical presence can work, once the on-line techniques and tools are learned. But when misunderstandings, lapses, or failures occur, on-line tools and those who use them often lack the means to recover.
Remote, non-physical relationships are not new. Initially, however, they were conducted by human intermediaries such as diplomats -- ambassadors, emissaries, and the like -- who were selected for their sensitivity to language, culture, etiquette, and character, as well as to their mission. Diplomatic skills raise the probability that politicians or businessmen can be successful together virtually, even if they never meet in person: not only because they make it more likely that messages are accurately delivered and understood, and mutual interests advanced, but also because they lay the groundwork for recovery when a relationship goes south and disputes must be resolved.
Yet for diplomacy to work, the participants must have both the means and the time to conduct a social relationship. Today's digital communications, combined with globalization and the push for ever greater productivity via virtual teams and business relationships, make establishing, stabilizing and maintaining a relationship difficult. We are pushed to conduct social relationships in decidedly non-social time frames. It is an experiment in speed-dating on a massive scale.
Not only are we expected to speed-date our way into new business relationships, whether with colleagues or customers, but to do so with an array of real-time and near real-time technologies that are often brand new, poorly understood and difficult to control. Each new communications channel is like a shiny new toy, with some intrinsic property we may find appealing -- immediacy, asynchronicity, brevity, etc. But thrown into the mix with established technologies, without social norms developed over longer durations, communications may only worsen.
Diplomacy and politesse have to be relearned, often painfully, as every new communications technology takes hold. This was true of writing, telegraphy and telephony, and it is now true of e-mail, IM, texting, twittering and the entire spectrum of on-line social software. Every new communications technology in history has eventually adopted and evolved core diplomatic skills, but the process proceeds by trial and error, and moves forward fitfully. We do eventually evolve, adapt and adopt, but our physical, biological, cultural and genetic nature doesn't go virtual just because our social relationships do (c.f. "Blown to Bits" by Evans and Wurster, "Being Digital" by Negroponte, "The Social Life of Information" by Brown, and many others).
Immediacy combined with anonymity can create rapid negative feedback loops that quickly destroy working relationships, sometimes irreparably. The effect was first noticed with email "flame wars" and newsgroup postings, but the same behavior can be seen in blog comment threads, texting, twittering, etc. Such negative feedback loops are inherent to the technology, and only human judgment can mitigate them.
Here are some of my recommendations for diplomatic communications in an inhumane age:
Encouragingly, software is becoming better at helping individuals manage multiple channels – this is an interesting article about an IBM application that uses simple data consolidation techniques to help people within virtual meetings.
Many IT organizations compare themselves to sports teams, yet where are the coaching staff, pre-season, training camp, game-plans and regularly scheduled practices? If practice in a safe environment under the supervision of coaches is how you get better at a skill, and essential for team success, how come our IT team never practices anything at all?
To answer these questions, Part 1 will perform a quick compare & contrast, while Part 2 will focus on a root-cause analysis & make some (im)modest suggestions.
Front Office: There are, of course, executives and money guys above both professional sports teams and corporations with significant IT organizations. Each has a general manager or CEO who is (usually) an expert in their respective business, and who makes key financial decisions while trying to create & execute a winning strategy.
In professional sports, though, the general manager's decisions (made in consultation with the coaching staff and owners) are largely about whom to hire, fire, trade, draft, and pay in order to build a great team. Note that these activities are largely operational and inward-facing. In larger corporations, most CEOs have a primarily outward-facing role that leaves daily operations to the other C-level executives.
Head Coach: Sports teams have a head coach, who determines strategy, and who hires a coaching staff with expertise in particular skills or aspects of the game. Sports strategy is an attempt to optimize your wins based on the skills & strengths of your team compared with your opponents' (see, for example, the excellent Moneyball). Coaches develop game plans consistent with the strategy to optimize their chances of winning against particular opponents; unless, of course, you are the Oakland Raiders (sorry, had to get that out of my system).
IT doesn't have a head coach; IT has a CIO, who is largely concerned with how much money to invest in on-going operations vs. new projects within parameters set by the CFO. IT strategy is usually based on recommendations from a CTO, and the game plan for winning in a particular market is usually developed by the CMO. A CIO implements a strategy and participates in a competitive game plan, but is usually only directly responsible for the methodology used to build and operate products or services. Aside from referring to "my team", CIOs spend precious little time, much less investment, on human capital and organizational development issues.
A CIO, therefore, is more like a member of the coaching staff: a coach whose specialty is IT. But that should not explain or excuse the lack of teamwork. A complex software development or systems project requires sophisticated business processes to keep its many participants in sync. For example, a full-blown Rational Unified Process (RUP) project uses a business process that includes dozens of roles, capabilities, and standard activities, and that's just the generic, top-level complexity of RUP. When we incorporate the technical specifics, organizational peculiarities and iteration goals (prototype, pilot, production, etc.) that are part of any project's unique characteristics, task plans with thousands of entries are not unusual.
Yet you rarely hear of IT organizations practicing their methodology to achieve optimal performance. IT seems to be the only "sport" where we expect coaches, the newly drafted and the veteran alike to run out onto the field of play and somehow know the entire strategy, playbook, competition, and assignments, and then perform with optimal productivity, all without ever having so much as a scrimmage together.
Players: Athletes at the professional level have great natural physical ability, and are also highly skilled in their discipline (quarterback, linebacker, etc.), having honed those skills over many thousands of hours of practice and hundreds of games. But athletes are not necessarily effective at using those skills to implement a strategy or game plan, or to coordinate and communicate their tasks in real time. For all these reasons, coaching is essential to team sports.
IT professionals are usually highly intelligent; intelligence is their equivalent of physical ability. The IT labor market is also diverse, and its players do not always work for the same company, or reside in the same locality or country, even if they are playing on the same "team". As in team sports, and maybe even more so, IT professionals are not necessarily effective at using their skills to implement a game/project plan or strategy, so coordination and communication are still essential.
Recruitment & Draft: Team sport skills are highly measurable via standards such as speed of acceleration, muscle strength, endurance, ability to memorize plays, etc. A player's discipline skills are also visually apparent in game play, whether live or on video. Recruitment is largely an activity for college coaches, who identify players with natural abilities that they can subsequently develop through coaching, turning them into running backs, linebackers, etc. At the professional level, players in particular disciplines are readily scored by the teams doing the hiring, evaluated against a team's needs, and then selected in the draft.
Within IT, college recruitment is mostly on the basis of standardized test scores, not demonstrable skills. There is a seemingly eternal dialog in academic computer science about what to actually teach students, but the study of theory usually completely submerges actual practice, and what practice there is focuses on only one discipline (e.g., coding). Undergraduate test taking, not game-time, is what determines academic ranking.
The equivalent of the draft in IT is hiring, and, in comparison to sports, evaluating IT skills is actually quite difficult. The skills within IT disciplines (coding, testing, documenting, designing, etc.) are surprisingly and alarmingly unpracticed and unmeasurable except on the job during actual projects. Because job experience is the only practice we get in IT, it is not even obvious that recruiting computer science majors is the best strategy. And IT job experience tends to be highly specific to the company where it occurred.
Training: Becoming a professional athlete is largely about the training. Some skills can be developed through individual study and training, but the direction, measurement and evaluation of an experienced coach are usually essential to achieve the highest individual level within a team context. All team sports have developed extensive & focused regimens of training & practice appropriate for an entire season and for particular games.
In comparison, becoming an IT professional is largely about convincing someone to hire you in the first place, since practice is only to be had during projects, and for most practitioners, the only games being played are the ones you are paid to play in.
We all know that training in IT departments is grudgingly provided, and then only for the basic skills necessary to use some new tool. "Team training" usually means "the whole team was trained on a tool", not that anyone learned how to work together effectively.
Projects (i.e., IT games), at least, do have project managers, and IT project plans are comparable to a game plan in sports. But we never actually scrimmage or practice the game plan before we hit the field.
Coaching Staff & Game Management: Sports are generally fast, real-time events, and the coaches are in constant communication, selecting the plays, evaluating the results, providing feedback to the players, before, during and after the game.
IT projects, in comparison, rarely have someone in the role of head coach with a coaching staff. Instead, coaching responsibilities are usually distributed over project managers, development managers, test managers, operations managers, product managers, etc. Worse, within the typical corporate matrix, each of these coaches reports to a different boss with no common technique or methodology. Rather than provide feedback before, during and after a game, IT employees usually get feedback only once a year during annual performance reviews.
IT projects are not typically real-time events (outside of some operational situations), yet most IT projects have only the most rudimentary tools for evaluating results. Typically, project managers only know whether particular "plays" (i.e., tasks on the project plan) were completed, not whether they were successful. My guess is that IT lives with failure so much that post-project evaluations came to be called "post-mortems".
Summary: As we can see, IT does in fact share many of the traits of a sports team, and would logically benefit from coaching & practice. Yet today, neither is anywhere to be found.
In part 2, I'll explore why I think the IT industry behaves as it does, and also why I think we can learn a thing or two by behaving more like a professional sports team.
IT development processes and governance can only minimize the risk of doing something really stupid and damaging to your business. Even the best development process is never a game-changer; it almost always results in lower net productivity, and can even contribute to greater alienation and misalignment of IT from business operations.
After a project train-wreck, after all the blame has been assigned, miscreants fired, and bills paid, executive management will naturally ask the survivors: "How will you assure me this never happens again?" And the answer almost certainly will be: "We will improve our processes and take fewer risks." Re-engineering development processes is a natural, human response to an act of profound stupidity.
To explore this topic a bit further, let's compare with current events in global finance. (There are many good analogies between the worlds of finance and IT, in part, I would hazard, because both deal in abstractions -- money and data, respectively -- that are the drivers of a modern economy. Like finance, IT deals in portfolios of projects and operations -- analogous to investments and accounting -- with varying risk profiles, based on a symbolic model of the world. To my thinking, comparisons between software and more physical disciplines, such as civil engineering, usually result in false analogies and misleading conclusions.)
When I assess an IT project disaster, I am firmly of the President Obama persuasion:
Almost from the dawn of computing, the challenges of IT abstractions, complexity, and growth have given birth to waves of engineering process enthusiasms and literature. I am of Fred Brooks's "no silver bullet" camp (c.f. http://www.lips.utexas.edu/ee382c-15005/Readings/Readings1/05-Broo87.pdf). My summary conclusion is: There can never be one right development process, standard model or methodology for any & all businesses. Managers can only progressively improve the alignment of IT with a business through iterative projects, continuous learning, light-weight processes and, above all, better hiring.
One last thought: When dealing with the aftermath of a project disaster, managers should remind themselves that the key determinant of technical outperformance is -- and always has been -- the quality and talent of your employees. The relative productivity differentials between individuals in the technology industry remain as wide today as when they were first measured (see http://blogs.construx.com/blogs/stevemcc/default.aspx, and many others). Better hires, properly guided, will develop better processes to protect your business, resulting in more productive, less expensive and more responsive IT.
When interviewing candidates for roles that manage people, not just a process or market, always ask them about the last person they fired.
It is surprising how many managerial candidates have never fired anybody at all, and this one question is far more revealing about a manager's skills than softballs about finding and managing top talent.
Most managers don't find top candidates themselves and don't usually manage the hiring process. We generally use recruiters, who are largely dependent on job boards and proprietary personal networks, to find candidates. Once top candidates are found, the business process of recruitment is usually managed by HR in order to protect the corporation from legal risk. And while there may be some challenge in finding the best person for the money (i.e., who fits your budget), or in identifying the best future performers among entry-level graduates, the Best candidates usually stand out pretty clearly.
Next, getting top candidates to accept an offer does show an ability to sell yourself and your company, but don't flatter yourself too much: Even the most charismatic executive cannot overcome a bum business plan. Successful hiring has more to do with the underlying health of your corporation and its ability to offer professional growth and increasing compensation than a winning smile and good Irish story.
Finally, the Best are not usually that difficult to manage – not surprisingly, that's part of the reason they are the Best. One of the great pleasures of managing in the technology business is the opportunity to manage the brightest talents on the planet, which allows you to focus on accomplishing business goals, not monitoring work hours, inappropriate behavior, ethical lapses or poor hygiene.
First off, admitting you had to fire someone is a reality check on honesty: We all make hiring mistakes, and if a managerial candidate cannot admit to a hiring mistake, then he or she is probably hiding a lot more as well, or really hasn't managed very much.
If hiring mistakes are not addressed head-on, the entire organization suffers. In the human body, the immune system constantly seeks out and eliminates threats to health. In the technology business, where people are the core asset, performance management has a similar function, identifying and eliminating hiring mistakes. A manager who does not, or cannot, fire someone -- who instead dishonestly shuffles the person off via transfer or misrepresentation -- puts the entire corporation's productivity at risk. Yes, rehiring is expensive, but tolerating underperformance is even more expensive.
Second, the ability to fire tells you much more about a managerial candidate's day to day skills than hiring. Firing is much, much harder than hiring, because to avoid subsequent litigation risks, you have to prove that you were managing the fired person responsibly all along. Litigation over hiring is much less common than litigation over wrongful termination.
There are many reasons for firing someone, but all these reasons -- we'll call them firing-factors -- reflect on a manager's core skill set. Firing-factors do not begin or end with technical skills, productivity or mental power. For example, because so much of technology is team-based, sometimes personal style and team fit is the key firing-factor: If a team under a manager's direction is underperforming, and one person is the identifiable cause, that manager must act.
What are the managerial skills and knowledge that firing someone reveals?
In my own experience, when asked about actions I regret as a manager, the top of my list is always "I should have fired so-and-so sooner." By delaying, I caused myself ongoing trouble, reduced my own productivity, and allowed continuing organizational underperformance. I also did the people I eventually fired a disservice, since they were not forced to address their firing-factors quickly, or to find the right company for them faster. The delay in firing hurt their careers as well.
So fire fast, fire well, and always ask: "Who did you fire last and why?"