Extreme Web Architectures - Testing Web sites in Seconds
This article is written by Dave Chaplin, an IT consultant with 10 years' experience. He leads and mentors teams of developers on agile .NET projects and has extensive experience in agile development techniques, particularly Test-Driven Development.
Sunday, June 8, 2003
In the agile process Extreme Programming (XP), a key practice is automated testing. It underpins Test-Driven Development (TDD), Refactoring and Continuous Integration. In XP it is important that systems are highly testable and that the tests run fast. This article discusses the concept of Extreme Web Architectures, which are designed to maximise testability and to ensure that a full regression test can be run in under one minute. It also describes a real-life architecture built with those goals in mind. The architecture enables the developer to test all HTML pages very quickly, without having to load the pages into a browser.
Motivation - Benefits of Quality Assurance
It is claimed that the Quality Assurance (QA) activities in XP flatten the cost curve of a project in terms of the cost of defects over time. On the traditional cost curve, the cost of fixing a defect rises exponentially the later it is found. A defect could range from a low-level bug to something more costly, such as getting the requirements wrong.
The practices in XP attack this cost risk from requirements through to development and testing. XP iterations can range from 10 minutes to a month. The key is that there is lots of QA activity going on all the time. So instead of one long exponential curve, you get lots of smaller exponential curves, because the iterations are so small. With small iterations, lots of feedback, and continuous testing and integration, you are effectively resetting the cost curve before it gets out of control. Hence the flatter curve when you compare it to the traditional one.
Keeping The Tests Running Fast
The tests must be quick to ensure they are run continuously. If they are slow, the continuous integration process becomes less frequent and quality suffers as developers fight to get the application running again. With slow tests the refactoring process becomes more time consuming and tends not to happen as much, which results in increased code rot. Once quality suffers, productivity drops with it.
In a nutshell, you have got to get those tests running as fast as you can.
What is an Extreme Web Architecture?
An Extreme Web Architecture is one designed to be fully testable within seconds, which can then be used to build web applications that can themselves be tested in under one minute. The full set of proposed qualifications is below:
Qualifications For An Extreme Web Architecture
Built 100% test first
Exhaustive set of unit tests
Massive decoupling, resulting in the ability to apply a unit test at any point
within the architecture.
Ability to substitute any component in the architecture with a mock object.
The whole architecture can be fully regression tested in under 5 seconds.
Any web application built using the architecture can be tested in under one minute.
Extreme Web Architecture Example – using .NET
In this section I describe an Extreme Web Architecture that was built using .NET. The Design Patterns quoted can be found in Design Patterns [Gamma et al.] and
Patterns of Enterprise Application Architecture [Fowler].
Designing For Fast Tests
In order to get the tests running fast we need to attack a number of areas
where testing is commonly slow. By using mock objects we can mock out areas of
the application that are typically slow.
In a web application, the slowness of the front end makes it very difficult to automate fast testing. To get the tests quick we must avoid the need to fire up a browser to run them. Granted, automating tests using IE is quicker than manual testing, but it will not be fast enough to pass the Extreme Web Application requirements.
Although not web specific, another area to apply mock objects is the persistence layer. Connecting to persistent storage (e.g. SQL Server) is slow when done repeatedly in automated tests. Solutions to these problems will be discussed in detail. First, an overview of the architecture.
The architecture is built using some of the following patterns: Model View
Controller, Two-Phase View, Application Controller, Intercepting Filter, Chain
of Responsibility, Logical View, Composite, Singleton, Strategy, Factory
Method, Page Controller, and Builder.
The application is written in C#, using ASP.NET for session persistence, and
XML/XSL for the HTML rendering mechanism.
To describe the way it hangs together, let's first look at the filter chain (Intercepting Filter) that sits in front of the Controllers and Views in the architecture.
The Filter Chain
The filter chain consists of the following filters:
Get Session Filter: This retrieves the user's strongly typed session from the httpSession so it is available for the length of the request.
View Identification Filter: This determines the logical view that originated
the event from the front end.
Synchronisation Filter: This takes the httpRequest and synchronises the logical views held in the user's session with the values from the httpContext that are submitted as part of the request.
Event Filter: This identifies the event that the user requested and invokes the event on the relevant controller. The controller then builds a new logical view.
Render Filter: This takes the view built by the controller, serialises it to
XML, transforms it to HTML using XSL, then writes the HTML to the httpResponse.
Save Session Filter: This saves the user's strongly typed session into the httpSession so it is available for retrieval on the user's next request.
Response End Filter: This simply ends the httpResponse and returns control to ASP.NET.
There are some potential filters not included. An Authentication Filter could check whether the user is allowed to use the system, and a Permissions Filter could check whether the user has permission to execute a particular request.
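The filter chain above can be sketched as a simple Intercepting Filter arrangement. This is an illustrative reconstruction, not the original code; the names `IFilter`, `FilterChain`, `RequestContext` and `GetSessionFilter` are assumptions:

```csharp
// Hypothetical sketch of the Intercepting Filter chain described above.
using System.Collections.Generic;

public class RequestContext
{
    // Stand-in for the per-request state the real filters shared.
    public Dictionary<string, string> Items = new Dictionary<string, string>();
}

public interface IFilter
{
    void Execute(RequestContext context);
}

public class FilterChain
{
    private readonly List<IFilter> filters = new List<IFilter>();

    public FilterChain Add(IFilter filter)
    {
        filters.Add(filter);
        return this;  // fluent, so filters can be chained at setup time
    }

    // Each filter runs in order; in the real system the last filter
    // ended the httpResponse.
    public void Run(RequestContext context)
    {
        foreach (IFilter filter in filters)
            filter.Execute(context);
    }
}

// One example filter: loads the user's session into the request context.
public class GetSessionFilter : IFilter
{
    public void Execute(RequestContext context)
    {
        context.Items["session"] = "loaded";
    }
}
```

The Application Controller would build one chain at start-up and run every incoming request through it.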
Events are declared as delegate methods using .NET Attributes on the
Controllers. At start up an Event Manager uses Reflection to register all the
events. An event delegate takes an Abstract View as its argument.
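A rough sketch of how attribute-based event registration might look. The attribute, delegate and registry names here are assumptions, not the original implementation:

```csharp
// Illustrative sketch: controller events declared via a .NET Attribute and
// registered at start-up using Reflection, as described above.
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class WebEventAttribute : Attribute
{
    public string Name { get; private set; }
    public WebEventAttribute(string name) { Name = name; }
}

public abstract class AbstractView { }
public class LoginView : AbstractView { }

// Event delegates take an Abstract View as their argument.
public delegate void WebEventHandler(AbstractView view);

public class EventManager
{
    private readonly Dictionary<string, WebEventHandler> registry =
        new Dictionary<string, WebEventHandler>();

    // Scan a controller for [WebEvent] methods and register delegates.
    public void Register(object controller)
    {
        foreach (MethodInfo method in controller.GetType().GetMethods())
            foreach (WebEventAttribute attr in
                     method.GetCustomAttributes(typeof(WebEventAttribute), false))
                registry[attr.Name] = (WebEventHandler)Delegate.CreateDelegate(
                    typeof(WebEventHandler), controller, method);
    }

    public void Raise(string name, AbstractView view)
    {
        registry[name](view);
    }
}

public class LoginController
{
    public AbstractView LastView;

    [WebEvent("login.submit")]
    public void OnSubmit(AbstractView view) { LastView = view; }
}
```

At start-up each controller is passed to `Register`, and the Event Filter later raises events by name.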
Getting Data To The Screen
An event gets raised (either at application start, or by a user request). Controllers have the responsibility of reacting to events. They build strongly typed Logical Views (C# objects) of the response and park the result in the CurrentView property of the Session. Logical Views can be combined as per the Composite pattern.
The Render Filter takes the current Logical View and then uses a series of Builders to turn the Logical View into XML. Some of the properties of the Views do not need to be put into the XML, which the Builders recognise by the use of a .NET Attribute.
After the XML is built, the Render Filter transforms the XML into HTML using the XSL for that view. The HTML is then written to the httpResponse.
Any information written to the user's strongly typed Session is persisted to the user's httpSession at the end of the request by the Save Session Filter.
Finally, the Response End Filter is called to end the httpResponse.
Reacting to a User Event
When the httpRequest is sent in from the front end, the Application Controller
intercepts the request and invokes the first filter in the chain, namely the
Get Session Filter. The Get Session Filter retrieves the user's strongly typed session from the httpSession, ready for use during the request.
Next, the Synchronisation Filter synchronises the values from the request with the Logical Views that were previously rendered and saved in the CurrentView property of the user's Session. The synchronisation uses the Builder pattern and reflection to take values from the HttpRequest.Params collection and populate the views with any changes. Like the XML Builder process, there are some values that do not need to be synchronised. These are identified by the synchroniser's IgnoreOnSynchronisation Attribute.
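A minimal sketch of the reflection-based synchronisation step. The attribute name comes from the text; the view class, property names and `Synchroniser` helper are illustrative, and the real synchronisers also handled nested views and unique element ids:

```csharp
// Hedged sketch: copy posted form values onto matching view properties,
// skipping anything marked [IgnoreOnSynchronisation].
using System;
using System.Collections.Specialized;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class IgnoreOnSynchronisationAttribute : Attribute { }

public class CustomerView
{
    public string Name { get; set; }

    [IgnoreOnSynchronisation]
    public string Title { get; set; }  // render-only; never posted back
}

public static class Synchroniser
{
    public static void Synchronise(object view, NameValueCollection form)
    {
        foreach (PropertyInfo prop in view.GetType().GetProperties())
        {
            bool ignored = prop.IsDefined(
                typeof(IgnoreOnSynchronisationAttribute), false);
            string posted = form[prop.Name];
            if (!ignored && posted != null && prop.CanWrite)
                prop.SetValue(view, posted, null);
        }
    }
}
```

In a test, a `NameValueCollection` simulates the posted form, so the synchronisers can be exercised without any httpRequest at all.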
After synchronisation the Event Filter gets the name of the event that was
invoked from the browser and pulls the delegate from the Event Registry. The
delegate is then invoked which results in the correct method being called
on the appropriate controller.
The controller then deals with the request, and builds a new Logical View and
we are back to where we started with getting data to the client.
We can now fully automate the testing of the system without having to fire up a browser. We can build views, call the Controller, and check that the correct response was returned. [This technique is nothing particularly new. In many other articles you will see developers using the common Model View Controller pattern with a Separated Interface for the Logical View and the subsequent use of a Mock Object for testing the controller. It is a standard and very effective technique.]
We can test the XSL by creating a Logical View, calling the Render Filter, and
then asserting that the HTML created has the correct elements with the correct
values. If they have the correct values we know that when we synchronise a
request the Views will be populated correctly. Thus, no need to fire up the
browser. [This part of the architecture is not such a common technique,
but as we will see later it is a very powerful one.]
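A minimal sketch of how such an XSL assertion test might look. It uses `XslCompiledTransform` from later versions of .NET; the helper name is illustrative, and in the real system the view XML would come from the Builders rather than an inline string:

```csharp
// Hedged sketch: transform view XML through a stylesheet entirely
// in memory, so the HTML output can be asserted on without a browser.
using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class XslRenderTest
{
    public static string Render(string viewXml, string stylesheet)
    {
        var xsl = new XslCompiledTransform();
        xsl.Load(XmlReader.Create(new StringReader(stylesheet)));

        var output = new StringWriter();
        xsl.Transform(XmlReader.Create(new StringReader(viewXml)), null,
                      output);
        return output.ToString();
    }
}
```

A test then asserts that the returned string contains the expected elements and values, e.g. that `<view name="Dave"/>` rendered an input whose `value` attribute is `Dave`.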
We have complete separation of the presentational aspects using XSL. If we want
to we can enhance the render filter to choose a different XSL page to render
the same logical view in a different format. Suffice to say, it is very easy to
make changes to the front end. It is also much quicker and easier to convert HTML provided by a designer into XSL than it is to convert it to an ASP.NET Web Form.
Because of the separation of concerns we can also build parts of the system
independently. We can stub out (Service Stub) or use Mock Objects on any area
of the system.
We now have strong typing since the views and synchronisers are responsible for
type validation. By the time the response hits the Controller all type checks
have been done.
Building and Synchronisation Of Logical Views
This part is key to the testability of the system. If we can trust the building and synchronisation, then we have removed the need to fire up a browser whilst still being able to test that the HTML actually works.
The render filter uses a series of XML builders to create the XML using
Reflection. The builders iterate through the views and then build a single XML
string of the views, and the embedded views. Each view has a unique id. When
the HTML is rendered via XSL each HTML element is also given a unique id.
During synchronisation, the synchronisers use reflection to iterate through the views again. The values of the views are updated based on any changes in the HTML elements, matched via those ids.
This is all done using reflection and attribute programming.
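A rough sketch of the builder direction of this round trip: a view becomes an XML element carrying a unique id and its property values. The class and attribute names are assumptions; the real builders also recursed into embedded views:

```csharp
// Illustrative sketch of a reflection-driven XML Builder.
using System.Reflection;
using System.Text;

public class NameView
{
    public string Name { get; set; }
}

public class XmlViewBuilder
{
    // Serialise a view to an XML element; the unique id lets the
    // synchronisers match rendered HTML elements back to view properties.
    public string Build(object view, string id)
    {
        var xml = new StringBuilder();
        xml.Append("<view id=\"").Append(id).Append("\"");
        foreach (PropertyInfo prop in view.GetType().GetProperties())
        {
            object value = prop.GetValue(view, null);
            xml.Append(" ").Append(prop.Name).Append("=\"")
               .Append(value).Append("\"");
        }
        xml.Append(" />");
        return xml.ToString();
    }
}
```

The XSL then carries the same ids through to the HTML elements, which is what makes the later synchronisation step possible.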
Mocking out the HttpContext
The application uses the httpContext to persist Application and Session data.
In order to get the tests running in NUnit we need to remove the coupling to
the httpContext. For example, to test the synchronisers we needed to simulate them receiving a Named Value Collection in the same way they would from an httpRequest.
To do this we created a class called the AbstractContext with the same interface as the httpContext. We then created two implementations of AbstractContext: one that wrapped the real httpContext, and another that simulated it using standard .NET collections. In the real application the Application Controller set AbstractContext.Current = new WebContext(), and during testing we set AbstractContext.Current = new TestContext(). All the code referred to the AbstractContext, so we could test the application without being bound to Http. Thus we 'mocked' out the httpContext.
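A sketch of that substitution. The member names are assumptions based on the text; only the collection-backed test double is shown here, since the `WebContext` would wrap the real httpContext:

```csharp
// Hedged sketch of the AbstractContext substitution described above.
using System.Collections.Generic;

public abstract class AbstractContext
{
    // Production code sets this to a WebContext; tests set a TestContext.
    public static AbstractContext Current;

    public abstract object GetSessionValue(string key);
    public abstract void SetSessionValue(string key, object value);
}

// Test double: simulates session state with a plain dictionary,
// so NUnit tests never touch Http at all.
public class TestContext : AbstractContext
{
    private readonly Dictionary<string, object> session =
        new Dictionary<string, object>();

    public override object GetSessionValue(string key)
    {
        object value;
        return session.TryGetValue(key, out value) ? value : null;
    }

    public override void SetSessionValue(string key, object value)
    {
        session[key] = value;
    }
}
```

Because everything goes through `AbstractContext.Current`, swapping the implementation is a one-line change in the test fixture's set-up.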
Practicalities of Testing The Front End
To test the front end we take the HTML response from the Render Filter. This is then used to create an XHtmlDocument. The constructor takes the original HTML, makes it well formed, and then forms a wrapper around a normal .NET XmlDocument. We can then query the document for certain types of HTML element to see if the rendering worked correctly.
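An illustrative cut-down version of that wrapper. Here the markup is assumed to be well formed already; the real XHtmlDocument first tidied the raw response, and the query method shown is an assumption:

```csharp
// Hedged sketch: query rendered HTML as XML to assert on elements.
using System.Xml;

public class XHtmlDocument
{
    private readonly XmlDocument doc = new XmlDocument();

    public XHtmlDocument(string html)
    {
        doc.LoadXml(html);  // assumes the html is already well formed
    }

    // Find the value attribute of an <input> with a given id.
    public string InputValue(string id)
    {
        XmlNode node = doc.SelectSingleNode(
            "//input[@id='" + id + "']/@value");
        return node == null ? null : node.Value;
    }
}
```

A front-end test then renders a view, wraps the HTML, and asserts that the expected inputs exist with the expected values.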
Mocking out the Persistence Layers
Below the Controller level there is a Command layer, a Domain Model, and a whole bunch of layers that do the mapping of the Domain to persist the data in the database.
To keep the tests quick we need to remove the need to talk to the database. To do this we use mock objects. When domain objects make use of Mappers, we simply substitute a Mock Mapper that returns hard-coded data. This removes the time-consuming database calls and also allows us to substitute different Mappers that behave differently. For example, we can use a Mock Mapper that raises an exception indicating the DB connection is down. This removes the need to call the back end whilst running tests.
It is a bad idea to rely on test data in databases. It is time consuming to set the data up, and it also makes the system time consuming to debug. Also, with test data the team would need to be very careful about running tests concurrently.
Golden Rule: Don't talk to the database during testing.
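A sketch of the Mock Mapper substitution. The `ICustomerMapper` interface and domain types are illustrative names, not the original code:

```csharp
// Hedged sketch: swap the database-backed Mapper for mocks in tests.
using System;

public class Customer
{
    public string Name;
    public Customer(string name) { Name = name; }
}

public interface ICustomerMapper
{
    Customer FindByName(string name);
}

// Returns hard-coded data instantly; no database round trip.
public class MockCustomerMapper : ICustomerMapper
{
    public Customer FindByName(string name)
    {
        return new Customer(name);
    }
}

// A mock that simulates the database being down, for failure-path tests.
public class BrokenConnectionMapper : ICustomerMapper
{
    public Customer FindByName(string name)
    {
        throw new InvalidOperationException("DB connection is down");
    }
}
```

Domain objects depend only on the interface, so a test can hand them either mock and exercise both the happy path and the failure path in milliseconds.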
The Extreme Web Architecture described here has been used to build a real commercial application. When the initial architecture was built during the development of the first user story, it took 4 developers 2.5 weeks to build using a test-first approach.
The first iteration of the application was quite small and had 8,500 lines of production code and 7,500 lines of test code. The tests for the architecture itself take about 2 seconds to run, and the full regression test suite takes 18 seconds.
The key to getting the tests running fast was removing the need to fire up the browser to test the rendered HTML, whilst ensuring we could get full test coverage. Further gains came from mocking out the persistence layer.
The architecture described herein is not solely my own work, but the result of the combined expertise of the 4 people on the team. Those people were: Jason Gorman, with his excellent knowledge of application architecture and software engineering; Duncan Green, with his passion for superb OO design; Jason Hales, with his ability to balance complex design and pragmatism; and myself, with an almost unhealthy obsession with testing.
In this article I've discussed the need to ensure that the suite of tests runs fast in an XP environment. I've qualified the concept of an Extreme Web Architecture and an Extreme Web Application.
I've also described the design of a real application that has been built using them.