Sören Janstål

Data Research DPU ab
Torsvikssvängen 34
181 34 Lidingö


Tel +46 (0)70 727 67 95
Skype sjanstal

eMail to Sören Janstål

Bloor Research Group:

CAST - Software Testing Tools

CAST Tools: An Evaluation and Comparison


A Testing Time for Everyone?

Until very recently in the world of software development, software testing had always been looked down on with rather Cinderella-like disdain by developers. It was viewed as definitely not a 'macho' activity, more the domain of weaklings and weeds. In fact, it was performed by the kind of people who will watch, over and over, a video recording of Maradona scoring a penalty against their favourite football team, in the vain hope that he might miss next time round.

Well, that's how it used to be until the advent of Computer Aided Software Testing (CAST) tools, which are able to automate the more tedious aspects of software testing. Times are changing and Cinderella has finally come to the ball. Our research shows that the recently formed CAST market is booming: worldwide it is worth $100 million and is growing at a rate of 30 percent per annum.

This trend looks set to continue for the next few years, pulled along in the slipstream of users moving over to client/server computing. We expect the growth rate to level out at around 20 percent per annum.

There's no doubt that this is good news for the CAST vendors, and judging from their responses to our questionnaire, we estimate that the market for consultancy related to software testing is worth in the region of $20 million. It's good news for users too, because the increase in the use of CAST tools means that they are likely to be the recipients of better quality software.

What's the Point?


So, apart from being a tedious and unglamorous pastime, shunned by 'real' developers, what exactly is software testing all about, and why are the hearts of hardened developers warming to the idea of tools that can automate the software testing process?

 To answer these questions, let's start by looking at what software testing is. Quite simply, the whole purpose of testing software is to ensure that it comes up to a desired quality standard. Although the definition of quality is open to interpretation, in the case of an item of software, it generally means that the software does what it is supposed to, and does not generate any unpleasant side effects, like destroying the database.

For large-scale developments, this usually means that the software does the things that have been agreed upon in the requirements statements, and that it meets the acceptance criteria which were defined at the start of the project. For a shrink-wrapped product things are less formal: it typically means that the software should do exactly what it says on the box. The challenge for developers is to test sufficiently in a realistic time frame, while staying within the usual commercial constraints of costs and resources.

This is where CAST tools come in. When managed correctly, they provide a double benefit: first by finding bugs early in the development cycle, and second by reducing the cost of the overall testing process itself. Automation means that fewer people are needed to test systems, and often testing can be carried out unattended overnight, or with one person managing the emulation of hundreds of terminals on a network.
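
For readers who have never seen one, the sketch below gives the flavour of the kind of unattended regression run that a CAST tool automates. It is written in Python, and the apply_transaction routine and its recorded cases are hypothetical stand-ins for a system under test, not taken from any of the products reviewed in the report.

    # Minimal sketch of an unattended regression run (hypothetical example).
    # A CAST tool would normally capture the recorded cases from a live
    # session rather than have a tester type them in by hand.
    import unittest

    def apply_transaction(balance, amount):
        """Hypothetical system under test: post an amount to an account."""
        if balance + amount < 0:
            raise ValueError("insufficient funds")
        return balance + amount

    class RegressionSuite(unittest.TestCase):
        # Recorded (input, expected output) pairs replayed on every run.
        CASES = [((100, 50), 150), ((0, 0), 0), ((20, -20), 0)]

        def test_recorded_cases(self):
            for (balance, amount), expected in self.CASES:
                self.assertEqual(apply_transaction(balance, amount), expected)

        def test_rejects_overdraft(self):
            with self.assertRaises(ValueError):
                apply_transaction(10, -50)

    if __name__ == "__main__":
        # Runs non-interactively, so the suite can be left to execute overnight.
        unittest.main()

Once a suite like this exists, the marginal cost of re-running it after every change is close to zero, which is where the double benefit described above comes from.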

 

Why Worry?


The mere fact that a system is likely to be too complex to test manually is not a good enough reason to make developers test their software using CAST tools. Why bother with the expense? Why not just sell dud software and get your users to test it for you in action and report any bugs? And if they're really desperate to do some work, why not charge them for the bug fixes too?

This was the kind of philosophy that was prevalent among software vendors in the 1980s, when unreliable software was viewed as inevitable by suppliers and customers alike, and at times even as something of a joke. In the United Kingdom and the United States, software disasters were viewed with fatalism, and disasters in this vein are still being reported today. What is more, during the 1980s there was a commercial advantage to the software supplier if the maintenance load was large: the supplier could extract additional revenues for fixing software which should have been fault-free in the first place!

However, in the 1990s attitudes are changing, on the part of both suppliers and customers. End-users, newly empowered with the choice of buying their software from numerous vendors through opting for open systems, are beginning to flex their purchasing muscle. They are choosing to buy reliable software, rather than being dazzled by over-featured products which have the in-built capability to crash a whole network if you choose the wrong option.

But of more significance is the threat of legal action against vendors of shoddy software. Recent legislation in favour of the customer means that software vendors have a greater responsibility under law to produce software which is fit for the purpose for which it is intended. This is forcing them to find bugs earlier in the development cycle, and to fix them before the product is handed over to the customer. The most cost-effective way to do this is through the use of CAST tools.

Resistance to Testing


There are still, however, one or two sources of inertia in the market-place. One of the main problems is that many organisations that develop software do not have a consistent method of software testing. A survey reported by the Quality Assurance Institute in the US in 1994 found that more than half of user organisations which developed software did not follow a consistent testing methodology, and in many cases testers received inadequate training. Furthermore, testing was stopped when project time ran out.

This is also typical of the situation in the UK, where managers of software companies do not see a correlation between the final quality of their product and the profitability of their business, despite the widely publicised ISO 9001 and TQM initiatives.

Hence the testing process is neglected, usually crammed in as an afterthought at the end of a development project, and is often carried out by recruiting part-timers, students or novices with little planning or co-ordination. The use of CAST tools in such circumstances may prove to be of dubious benefit. If you automate the mad panic at the end of a development project, when most of the testing is carried out, all you end up with is automated chaos, rather than a software product of improved quality.

 

The CAST Products

In comparing the testing tools, we have divided them into four groups according to functionality.


 * Dynamic Testing - Client/Server
 * Dynamic Testing - Character Based
 * Dynamic Testing - GUI Tools
 * Static Testing Tools

 In some ways, the groupings are slightly artificial in that some tools can fit into more than one category. Therefore, where there was considered to be a significant overlap in functionality, a product was rated in more than one group.
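
To make the distinction between dynamic and static testing concrete, the short Python sketch below first exercises a sample routine and checks its output (dynamic testing), and then derives a rough McCabe-style decision count from the source text without running it (static testing). The sample routine and the complexity heuristic are illustrative assumptions only, not the behaviour of any product listed below.

    # Illustration of the dynamic/static distinction (hypothetical example).
    import ast
    import inspect

    def classify(value):
        """Sample code under examination."""
        if value < 0:
            return "negative"
        if value == 0:
            return "zero"
        return "positive"

    # Dynamic check: execute the code and compare observed with expected output.
    assert classify(-3) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"

    # Static check: a crude McCabe-style count of decision points, made by
    # parsing the source without executing classify() at all.
    tree = ast.parse(inspect.getsource(classify))
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
                    for node in ast.walk(tree))
    print("approximate cyclomatic complexity:", decisions + 1)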

  •  Automated Test Facility (Softbridge Inc)
  •  Automator QA Center (Direct Technology Ltd)
  •  AutoTester (AutoTester Inc)
  •  CA-TRAPS (Computer Associates)
  •  CA-Verify (Computer Associates)
  •  Cantata (Information Processing Ltd)
  •  Enterprise Quality Architecture (Mercury Interactive Corp)
  •  EVALUATOR (Elverex International)
  •  LDRA Testbed (LDRA Ltd)
  •  The McCabe Tool Set (McCabe & Associates)
  •  Microsoft Test (Microsoft)
  •  PLAYBACK/Hiperstation (Compuware Corporation)
  •  PreVue (Performance Awareness Corporation)
  •  QA Partner (Segue Software Inc)
  •  QA C++ (Programming Research Ltd)
  •  Software TestWorks (Software Research Inc)
  •  SQA TeamTest (Software Quality Automation Inc)
  •  V-TEST (Performance Software Ltd)
  •  VISION:Testpro (Sterling Software)

CAST Tools: An Evaluation and Comparison


 Authors: Carl Potter and Dr Therese Cory
 Edited by: Tom Jowitt
 Length: 476 pages
 Published by: Bloor Research Group
 ISBN: 1-874160-16-3



Related reports:

The ERP market, right now

See how users rate their systems!

Give your rating of any system and get a market analysis report free of charge!

Give your rating of any IT vendor and get a market analysis report free of charge!

Data Research DPU
for Evaluation of Information Technology and Computing




Data Research DPU ab - Torsvikssvängen 34, SE-181 34 Lidingö, Sweden - Tel +46 70 727 67 95 - Skype: sjanstal, SkypeIN: +46 8 559 25 900 - Contact (email)




Changed September 7, 1999