Can you afford not to use TDD?

Test-driven development (and test-driven design, behavior-driven design, acceptance test-driven development, etc.) has been around for a while. One might say that it started (or was "rediscovered") in 2003 with the publication of Kent Beck's book "Test-Driven Development: By Example."

Whilst there are purist arguments about whether strictly adopting TDD is the best approach, there is little argument about whether the general philosophy is worth adopting. Why? Because the earlier a defect is detected, the cheaper it is to fix.
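As a minimal sketch of the test-first rhythm (the discount() function and its behaviour are hypothetical, invented purely for illustration), the tests are written first, fail, and then drive the implementation:

    import unittest

    # In TDD the tests below are written first and fail until discount()
    # is implemented to satisfy them; the implementation is shown here
    # only so the example is self-contained and runnable.
    def discount(price, percent):
        """Return the price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(discount(200.0, 10), 180.0)

        def test_invalid_percentage_is_rejected(self):
            with self.assertRaises(ValueError):
                discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Defects such as the missing range check are caught the moment the code is written, rather than in system test or production.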

In Code Complete: A Practical Handbook of Software Construction, Steve McConnell provides estimates of the cost of remediating a defect in Table 3-1, "Average Cost of Fixing Defects Based on When They're Introduced and Detected" (page 29 in the second edition of the book, near the beginning of Chapter 3).

[Image: McConnell's table of the average cost of fixing defects, by the phase in which they are introduced and the phase in which they are detected]

This table makes it pretty clear that non-coding defects need to be detected before coding begins, and coding defects need to be detected before a project reaches the system-test stage. Beyond those points, the cost of defect resolution rises drastically.

So how does one ensure that defects are detected at these early stages? Continue reading

Stop mocking and use API Virtualization

API Virtualization and Mocking are NOT synonyms.

Many IT professionals confuse the two, failing to see the differences between them. But there are differences, and important differences at that.

API Virtualization, mocking (and stubbing) are all "test doubles" – techniques that allow integration testing by substituting a double for the real system with which integration is required (the "depended-on component," or DOC). The aim is to make the system under test (the SUT) "think" that it is dealing with the real DOC.

Stubbing uses very basic test doubles and is useful only for the simplest integration tests. Typically, data are hard-coded in the stub and returned to the system under test in response to specific, predetermined calls. A stub is simple enough that developers can usually write their own; tools do exist, but they are rarely cost-effective for one-off situations. Passing a stubbed integration test therefore provides little assurance that the integration has been properly designed and coded.
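As a rough illustration (the exchange-rate service and its get_rate() method are hypothetical), a hand-written stub simply hands back canned data and knows nothing about how it was called:

    import math

    class ExchangeRateServiceStub:
        """Hand-written stub for a hypothetical exchange-rate service (the DOC).

        It returns hard-coded data for a couple of known inputs, so it can
        support only the simplest integration tests.
        """
        RATES = {("USD", "EUR"): 0.92, ("USD", "GBP"): 0.79}

        def get_rate(self, from_currency, to_currency):
            return self.RATES[(from_currency, to_currency)]

    def convert(amount, from_currency, to_currency, rate_service):
        """The system under test: converts an amount using the injected service."""
        return amount * rate_service.get_rate(from_currency, to_currency)

    # The test passes, but says little about whether the real integration works.
    assert math.isclose(convert(100, "USD", "EUR", ExchangeRateServiceStub()), 92.0)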

Mocking is more sophisticated than stubbing (and the two terms should not be used interchangeably: see martinfowler.com, Mocks Aren't Stubs). Mocking tools are needed to handle that sophistication. Data are not hard-coded; instead, mock objects are pre-programmed with expectations about how they should be called, and so can report whether or not they have been called correctly. This "behavior verification" is the key factor distinguishing mocks from stubs.
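To make the behavior-verification point concrete (using Python's standard unittest.mock library, with the same hypothetical service interface as in the stub sketch above), the mock records how it was called and the test verifies that interaction rather than just the data returned:

    from unittest.mock import Mock

    def convert(amount, from_currency, to_currency, rate_service):
        """The system under test, unchanged from the stub example."""
        return amount * rate_service.get_rate(from_currency, to_currency)

    # The mock stands in for the DOC and records every call made to it.
    rate_service = Mock()
    rate_service.get_rate.return_value = 0.92

    convert(100, "USD", "EUR", rate_service)

    # Behavior verification: check that the SUT called the DOC correctly,
    # not merely what data came back.
    rate_service.get_rate.assert_called_once_with("USD", "EUR")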

But mocking is not the holy grail of test doubles. The next level of sophistication (and the most sophisticated approach in widespread use) is API virtualization.

API Virtualization employs a tool that creates a virtualized copy of an API. That copy is complete – or at least fully complete for the subset of API functionality required by the particular testing being performed. It means you can mimic production without setting up a complete server stack. Hence you can save time and money yet retain the full benefits of functional testing (non-functional testing is a different matter).

API virtualization is an improvement on mocking because it is not restricted to a context. Mocking covers just a subset of scenarios, and mock objects are coded for those scenarios only. In theory, API virtualization replicates the API's functionality completely and is therefore context-unbound. A virtualized API need only be written once and then remains in the testing arsenal forever, with rewrites/updates required only if the specification of the original API is modified.
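As a rough sketch of what such a tool produces (hand-rolled here with Python's standard library; a real virtualization tool would typically generate this from the API's specification or from recorded traffic, and the endpoints and payloads below are invented), a virtual service answers real HTTP calls for the endpoints under test without any of the production stack behind it:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Canned responses for the subset of the API needed by the current tests.
    VIRTUAL_RESPONSES = {
        "/accounts/42": {"id": 42, "name": "Acme Ltd", "status": "active"},
        "/accounts/42/balance": {"currency": "EUR", "amount": 1250.50},
    }

    class VirtualApiHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = VIRTUAL_RESPONSES.get(self.path)
            if body is None:
                self.send_error(404)  # outside the virtualized subset
                return
            payload = json.dumps(body).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # The SUT is simply pointed at http://localhost:8080 instead of production.
        HTTPServer(("localhost", 8080), VirtualApiHandler).serve_forever()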


Back to the future with Docker

A whole cult seems to have appeared to praise the virtues of Docker.

Now while Docker is useful, we old mainframe hacks get a sense of déjà vu.

First we had VM, the IBM operating system that can act as a host for other VM instances and for MVS (z/OS) and DOS/VSE. That is just like today's virtual machines, such as VMware and Parallels. This is known as full virtualization.

But we also have operating-system-level virtualization, or containers. Here the cult around Docker suggests that this is the best innovation in IT for decades. Well, that is interestingly almost true, because the principles underlying Docker have been around for decades: in IBM mainframe terminology, they are MVS address spaces.

So what does a container do? – it allows multiple applications to run under the control of a single operating system, yet remain securely separate from each other.

What does an MVS address space do? – it allows multiple applications to run under the control of a single operating system, yet remain securely separate from each other.

Notice any similarities here?

Now I am not suggesting that people cut off their noses to spite their faces. Docker is certainly a very useful technology, and its use is spreading widely. I just suggest that people realize it is not a brilliant new idea.


Frames, pages and slots

People often confuse terminology in IT. Memory is one prime area for such confusion.

  • A block of real storage is a frame
  • A block of virtual storage is a page
  • A block of auxiliary storage is a slot

Virtual storage is the technique whereby a programmer need not know whether a piece of information is actually in real memory at any point in time. The underlying operating system takes care of this, using paging to reduce the amount (and cost) of real storage (i.e. RAM) while still presenting what appears to be an almost unlimited storage capacity to each application program.

Paging is the transfer of a block of storage between auxiliary storage slots and central storage frames. Inactive frame contents are paged out to auxiliary slots. As required, the contents of slots are paged in.
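A toy sketch of the mechanism (grossly simplified, with a made-up frame count and a naive eviction policy) may help fix the terminology:

    # Toy model of paging: pages live in a limited number of real-storage
    # frames; when no frame is free, an inactive page is paged out to an
    # auxiliary-storage slot, and paged back in from that slot when referenced.
    FRAME_COUNT = 2

    frames = {}  # page number -> contents currently held in a frame
    slots = {}   # page number -> contents paged out to a slot

    def touch(page, contents=None):
        """Reference a virtual page, paging in (and out) as necessary."""
        if page in frames:                        # already backed by a frame
            return frames[page]
        if len(frames) >= FRAME_COUNT:            # no free frame: evict a page
            victim, data = frames.popitem()
            slots[victim] = data                  # page-out to a slot
            print(f"paged out page {victim}")
        frames[page] = slots.pop(page, contents)  # page-in from a slot (or first use)
        print(f"paged in page {page}")
        return frames[page]

    touch(0, "A")  # first use of page 0
    touch(1, "B")  # first use of page 1
    touch(2, "C")  # only 2 frames exist, so an inactive page is paged out
    touch(1)       # page 1 is paged back in from its auxiliary slot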

Learn more:

How z/OS uses physical and virtual storage


IT support has left the building

Or at least it should have done.

How many users struggle with remote connectivity? They face either functionality failures (why, after 20 years, are VPNs still so flaky?) or poor response times.

But does IT support understand? Well, it is sitting on a nice high-bandwidth, highly available, low-latency network.

So no it doesn’t.

It doesn’t understand the daily frustrations of users repeatedly getting timeouts, of going hours or even days without being able to establish a VPN and thereby access email and applications.

Is there a solution?

Continue reading