We could look for one in the genetic technologies, or in nanotech, but their time hasn’t fully come. But I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.

Customer value and end-user experience, which are at the heart of all digital initiatives, are realized in the multi-channel ‘front office’. But they rely heavily on the dependable, safe, secure, and performant behavior of the ‘back office’, where “large and fairly complicated conversations that you’ve triggered occur entirely among things remotely talking to other things: servers, switches, routers, and other Internet and telecommunications devices, updating and shuttling information back and forth.”

In the past, these components were stand-alone applications and were considered non-mission-critical because they did not interact directly with each other: they were intermediated by people, who could control and, if necessary, bypass their behavior and modify their outcomes by hand.

Now these components interact without human intermediation, and their end-to-end interactions become mission-critical, since there is no manual alternative in an always-on world. Systematic testing of these services, architectures, and processes is a growing, fundamental challenge that is having a significant impact on the Quality Assurance (QA) and Testing market, which must respond to the demand for greater assurance that the underlying interconnected components are dependable, secure, and performant for all users. The World Quality Report 2015-2016, based on a survey of 1,560 CIOs and IT and testing leaders from 32 countries, reports a striking progression in the share of IT spend allocated to QA & Testing: 18% in 2012, 23% in 2013, 26% in 2014, 35% in 2015, and a predicted 40% in 2018.

The budget increase suggests a growing awareness of QA & Testing as a critical contributor to digital transformation. But according to field and market research, and to our direct experience, testing distributed services architectures raises enormous industrial and economic problems, and the available infrastructure is still inadequate.

Industrial problems

Testing distributed services architectures is hard, knowledge-intensive, and time-consuming. For businesses, the consequences are the obligation to build and maintain complex, expensive QA systems, the mobilization of highly skilled personnel, R&D budget overruns, and time-to-market delays that threaten their competitive position.

The scale factor boosts the technical and organizational complexity of distributed architecture testing, in particular for cross-organization integration testing of multi-stakeholder architectures with end-to-end scenarios. The expected growth of the IoT naturally amplifies this trend, even if the figures remain imprecise: forecasts of connected objects in 2020 range from 20.7 billion (Gartner) to 38.5 billion (Juniper Research), 42 billion (IDATE), and 50 billion (Cisco).

The increased complexity and size of test artifacts and data sets becomes a challenge in itself. In Europe, organizations must also cope with strict data protection rules. In practice, the EU’s new data protection regulation (the GDPR) forbids the use of copies of production data for testing. It forces businesses and administrations in all industries (e.g., healthcare) to employ synthetic data, because masking constraints are substantially unsatisfiable and fines are severe (up to €20 million or 4% of annual worldwide turnover).
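To make the synthetic-data option concrete, here is a minimal sketch of how test records can be generated from scratch rather than copied from production; all field names, value ranges, and codes are invented for illustration, and real generators must of course reproduce the schemas and distributions of the system under test:

```python
import random
import string

def synthetic_patient(rng: random.Random) -> dict:
    """Generate one synthetic patient record; no production data is involved.
    Fields and ranges are hypothetical examples."""
    return {
        "patient_id": "".join(rng.choices(string.ascii_uppercase + string.digits, k=8)),
        "age": rng.randint(0, 99),
        "blood_pressure": (rng.randint(90, 180), rng.randint(60, 110)),
        "diagnosis_code": rng.choice(["I10", "E11", "J45", "K21"]),
    }

def synthetic_dataset(n: int, seed: int = 42) -> list[dict]:
    """Seeded generator, so test runs are reproducible."""
    rng = random.Random(seed)
    return [synthetic_patient(rng) for _ in range(n)]
```

Because the generator is seeded, two runs with the same seed yield identical data sets, which keeps test campaigns repeatable while staying clear of personal-data regulations.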

Test optimization is mandatory to ensure efficacy, efficiency, and adequate coverage, but it mobilizes both in-depth business and technical knowledge about each service/component and its role in diverse distributed applications, and specialized skills in test approaches and tools. These competencies are split across business analysts, technical designers, and test experts, and are therefore rare and difficult to bring together.

Furthermore, testing tasks require sustained attention and keen observation, often beyond human capabilities, and testing is still considered a low-reward job. Moreover, human-based testing is rarely systematic, often ad hoc, frequently ineffective, inefficient, error-prone (configuration errors, false positives, false negatives), and difficult to plan and manage.

The World Quality Report 2015-2016 reveals that 61% of respondents rate time-to-market as an important part of their corporate strategy. Yet the elapsed time of test campaigns is long and unpredictable, not only for the reasons above but also as a consequence of human and technical resource allocation and contention: 36% of developers/testers wait a week or more for access to dev/test environments, and after the wait it takes an average of 14 days to configure a development environment (12 days for a QA/test environment).

Moreover, it is difficult to decide when to stop testing software of uncertain quality, carried out by scarce human resources on contended equipment. Companies face a dilemma: long, painful, costly, and often ineffective test campaigns that can squander market momentum, or “on-time” delivery of inadequately tested components with elevated business risks.
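The stopping decision can at least be made explicit rather than ad hoc. As an illustrative sketch (not a rule prescribed by the report), one common heuristic stops a campaign once the rate of newly discovered defects flattens out; the window size and threshold below are arbitrary example values:

```python
def should_stop(defects_per_session: list[int], window: int = 3, threshold: float = 1.0) -> bool:
    """Illustrative stopping rule: stop once the mean number of new defects
    found over the last `window` test sessions falls below `threshold`."""
    if len(defects_per_session) < window:
        return False  # not enough evidence yet to stop
    recent = defects_per_session[-window:]
    return sum(recent) / window < threshold
```

Even a crude rule like this turns the "when to stop" dilemma into a planning parameter that management can reason about and tune.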

Economic problems

Distributed services architecture testing is expensive. Testing costs fall into two categories, (i) labor costs and (ii) equipment costs, which translate into high OPEX and CAPEX for the business.

The testing tools available in the current QA & Testing market implement at most the basic mechanization of elementary tasks (test execution and logging) within unit testing of single components. The critical tasks, such as:

  • authoring of synthetic test cases, 
  • manual configuration of the testbed,
  • manual binding of the testbed to the service architecture components, 
  • evaluation of test results, and 
  • hand-writing of test reports from the burdensome “eyeball” analysis of bulk logs, 

remain human-based and are not only knowledge-intensive but also labor-intensive.

Higher-level tasks, such as focused test case production, optimized test case prioritization, reactive planning of test sessions, and integration of the testing steps within the software engineering cycle, require advanced testing skills. 
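Test case prioritization, for instance, is a well-studied optimization problem. A classical baseline (a sketch, not the method of any particular tool) is the greedy “additional coverage” heuristic: repeatedly schedule the test that covers the most not-yet-covered requirements:

```python
def prioritize(tests: dict[str, set[str]]) -> list[str]:
    """Greedy additional-coverage test prioritization.

    `tests` maps a test name to the set of requirements (or branches,
    services, ...) it covers; returns the test names in execution order."""
    covered: set[str] = set()
    order: list[str] = []
    remaining = dict(tests)
    while remaining:
        # Pick the test adding the most new coverage (ties broken by name).
        name = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(name)
        covered |= remaining.pop(name)
    return order
```

Encoding even this simple heuristic presupposes a machine-readable map from tests to requirements, which is exactly the business and technical knowledge described above.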

End-to-end, cross-organization integration testing of multi-owner services architectures demands the mobilization of vast human and equipment resources. 

The total cost of ownership (TCO) of the test system equipment (hardware, software licenses, maintenance, logistics, and operating expenditures) is high. The equipment expenses for the services architecture under test come on top of the production system equipment costs. On-premises, many of these expenses are CAPEX and can be substantial, because the equipment must scale to peak usage.

Problems engendered by inadequate testing

If distributed services architecture testing is problematic and expensive, inadequate testing is even more problematic and, in the long run, even more expensive. Inadequate testing causes problems that can be classified into three categories: 

  • failures after delivery due to avoidable defects, 
  • laborious defect fixing due to late discovery, and 
  • weak pre-procurement and integration testing of third party services/components. 

These problems cause both additional labor and equipment costs and adverse business consequences such as reputational damage, liabilities, customer loss, profit loss, competitive disadvantage, and time-to-market delays.

A recent Forrester survey reports that about 50% of developers admit that they practice minimal or close to zero testing. The well-known NIST report estimates the annual cost to the US economy of the lack of an adequate infrastructure for software testing at $22.2 to $59.5 billion (2001 figures). 

A 2010 survey states that inadequate testing procedures or infrastructure limitations, rather than design problems, cause 58% of late software failures. In the same study, over one-third of developers said their companies do not perform enough pre-release testing, and 56% estimated that their last significant software bug resulted in an average of $250,000 in lost revenue. 

The most important business consequences of inadequate testing are not always those we intuitively expect. According to a Parasoft study, in 2013-2014 public companies lost an average of $2.3 billion of shareholder value (a 3.75% loss of market capitalization) on the announcement of a software failure. So far in 2015, the figure has increased to $4.06 billion, a 4.12% loss of market capitalization every time news of faulty software hits the wire. These facts explain why ‘Protect corporate image’ is the top concern of the World Quality Report respondents, ahead of ‘Ensure end-user satisfaction’. 

Conclusion

All the market research studies and reports champion test automation as the obvious solution to the industrial and economic problems of testing and of inadequate testing. But test automation remains largely a buzzword: the testing tools currently available in the QA and Testing market implement at most the basic mechanization of trivial and repetitive tasks (test execution and logging) within unit testing of single services, and even that is achieved with a large amount of complex code supplied by the tester. 

The recent marketing hype around service virtualization masks the reality of the “high coding” needed to flesh out the skeletons provided by the tools. The critical test tasks (test generation; deployment, configuration, and binding of the testbed and the system under test; test arbitration; test reporting; test scheduling; test planning; and integration of the test procedures within software engineering processes) remain entirely manual or, at best, “high code.” Gray-box integration testing of multi-component architectures, with the probing of service dependencies and end-to-end scenarios, is entirely out of range of the current QA & Testing tools.