Abstract:
Developers need to evaluate reusable components before deciding to adopt them. When developers evaluate a component, they need to understand how it can be used and the behaviour it will exhibit. Existing evaluation techniques use formal analysis or sophisticated classification/search functionality, or rely on the presence of extensive component documentation or evaluation versions of components.
We first present a model describing how developers might gain first-hand experience of a component's runtime behaviour by 'test driving' it, that is, by directly invoking and monitoring that behaviour. We then analyse the issues that the model raises.
Next, we propose that test driving should be done not at the developer's end, as it is currently, but at the marketplace where the component was initially found. We analyse the issues raised by shifting test driving to the marketplace, and propose an architecture and data formats to support it.
Finally, we present Spider, a proof-of-concept prototype for marketplace test driving. Developers can use Spider to test drive reusable components through a standard web browser; information extracted from the server-side runtime environment is stored and can later be presented back to the developer as software visualisations. We argue that this approach gives developers a new way of evaluating reusable components and, in turn, supports them in making correct reuse decisions.