Perhaps you have hesitated to make a full investment in Continuous Integration, wondering if this is just a passing fad. Continuous Integration has momentum because it is firmly grounded in sound theoretical principles. Because of its fundamental correctness, Continuous Integration is destined to make a profound and permanent impact on the way software is developed.
It has long been established that the impact of defects detected and repaired early in a process flow is much less than the impact of defects that lie undetected until late in a process flow. This is partly because the work performed on defective software — the packaging, deployment, testing, and so on — must be repeated once the defect is repaired. The effort to identify a defect also increases the longer it exists in a changing code base, since it is much easier to troubleshoot a problem when the difference between working code and nonworking code is very small.
So, why is it that many organizations do not fully commit to rigorous code inspections and frequent, full regression testing? The answer ultimately is the cost, or the perception of the cost, to detect the defects. Businesses must justify the certain cost, both in dollars and in time to market, of committing to detect the defects early versus the subjective and uncertain risk of adverse market reaction to the product once it is deployed without that kind of investment. Confronted with the choice to pay certain money to mitigate uncertain risk, businesses often choose to cut costs as the lesser of two evils.
In the Agile world where Continuous Integration has evolved, the uncertainty of software development is greatly mitigated by practices that ensure a potentially releasable product at the end of every two- to four-week sprint. "Potentially releasable" by definition includes completion of successful testing of committed feature content. The stability of the code and the impact of lurking defects have become more visible, and the value of detecting defects long before the end of a sprint has become more obvious.
Continuous Integration techniques were born of the need, under this high visibility, to detect defects as early and cost-effectively as possible, so that the defects can be removed before the end of the sprint. By making the necessary investments to implement these techniques, an organization can drive down the costs of detecting and repairing defects, thus preventing their downstream impacts.
Given that cost-effective techniques have been developed to prevent the greater downstream costs that lurk below the surface by default, what could hold you back from taking the plunge?