Archive for the 'Agile' Category

Landed It!

[Video still: my first circuit of the day]

Today was my second day of circuits, and I went up with a camera to record my progress! If you click the picture above you’ll get a YouTube video of my first circuit of the day. I landed it really nicely, and was very happy (and surprised!) about it. Whether it was a fluke or not, I don’t know, because it was the best of the day. I landed a couple of others nicely, but then I bounced and ballooned the rest.

Still, my instructor is happy and I’m making progress, so I’m happy too. And I did all the radio calls today and kept up with all the traffic situations, including a helicopter flying 300 feet above the ground right across the circuit (they were cleared to do that, and were on their way to Calgary International).


mike on August 10th 2013 in Agile

Serious Sabbaticating

[Photo: yesterday morning’s weather at Springbank]

This week I’ve been continuing with the flying here at Springbank airport as part of my ThoughtWorks sabbatical. I chose Springbank Air Training College after trying various outfits in the area, and picked them because, frankly, I felt safest there. There’s a high level of professionalism amongst the instruction staff and quite a lot of young-looking students. The picture above is yesterday morning’s weather: not a cloud in the sky, still, calm air, and miles of visibility. Decoding the weather was really easy compared to Monday’s yucky rainclouds and bad visibility.


My instructor and I have spent a bunch of time reviewing theory from the Canada Flight Training manual, and also spent time in the simulator. The school has a Redbird FMX full-motion simulator which is really fun, much cheaper than renting a real aircraft, and still counts towards my total flight time. The biggest benefit is that all the controls match the real aircraft and I can practice the exact control adjustments I’d need in real life. This is more efficient than sitting on the tarmac burning fuel and doing it in a real plane.

I’ll be starting an online ground school this weekend and just got all my pilot gear, including a flight “computer” (a crazy slide-rule thing for doing navigation). This is one of those hobbies where you can spend infinite money on gear, which I’m trying to avoid. I don’t think it’ll be hard — I’m converting all amounts to “hours in a real plane” and that makes being frugal much easier.


mike on July 17th 2013 in Agile

Data Visualization Showcase

As part of an internal event here at ThoughtWorks, I’ve been putting together various Big Data and Visualization related stuff. Some of it’s worth sharing outside ThoughtWorks, so I’ll post a few things here.

First up is a Visualization Showcase. Look at all this cool stuff you can do with data!

This image is a map of New York City, showing the geo-location of Flickr photographs and of Tweets. The Flickr photos are orange dots, the Tweets are blue dots. You can see that people are likely to photograph and tweet from different locations. Source: IBTimes Picture This – Daily Photo News

The next image is a representation of tweets between Twitter’s first 140 employees. Source: Connections among Twitter employees, by Jason Stirman (stirman) on Twitter

The next image shows media fear-mongering for various scares, such as bird flu, SARS, and the Year 2000 Bug. What’s even more interesting are the patterns: there is a twice-yearly peak for scares about violent video games, once in December (makes sense, lots of new games are coming out) and once in April. Why April? It’s the anniversary of the Columbine Massacre, and we see that event echo throughout the media each year. Source: Mountains Out of Molehills

Do women rule the Internet? The next visualization shows the female, or male, dominance of users on various websites. Source: Chicks Rule?

Are corporate fines for lawbreaking really punishing the corporations? A visualization of fines vs. annual revenue for major lawbreakers. Note that there are an awful lot of Big Pharma fines over the years. Source: Punytive Damages? World’s Biggest Corporate Fines

Stephen Wolfram has been quietly collecting every scrap of information he can about himself. He’s been recording every keystroke, phone call and email for over two decades. The result is an interesting set of personal analytics. Source: Stephen Wolfram Blog : The Personal Analytics of My Life

Videos

Visual.ly is a community for sharing and collaborating on visualizations. Their intro video is excellent. Source: http://visual.ly/

Visual.ly intro video

David McCandless’ TED talk on visualization. He contributes regularly to the Guardian Datablog and runs Information Is Beautiful.

David McCandless: Data is Beautiful

Hans Rosling’s visualization of the development of the world since the 1800s, showing how countries have moved from “poor and sick” to “rich and healthy” but that there is still much inequity.

Hans Rosling: 200 Countries, 200 Years in 4 Minutes


mike on October 16th 2012 in Agile

Screencast: Subversion Server on Windows

To celebrate the publishing of Pragmatic Guide to Subversion I put together an eight-minute screencast that shows you how to install a Subversion server on Windows. The video includes importing an existing codebase and checking out a working copy, and demonstrates how to make simple edits, view your changes, and check your work back into the repository.

Software used in the video includes TortoiseSVN and VisualSVN Server.

As an editorial note, it is not easy to do a screencast like this. I went through about ten different pieces of software before the stars finally aligned and I got everything to work. I used a Windows 7 VM and CamStudio to record the video at 720p resolution. I then used WinFF to convert the CamStudio lossless video to H264. Finally I used iMovie to add a voiceover track. Now that I have everything set up I think I should do more screencasts—please let me know what you’d like to see and I’ll work on it.


mike on November 17th 2010 in Agile

Compressing Viewstate

This article is written by guest author John Finlay. Thanks John!

Perhaps the primary statistic by which clients measure web applications is response time. Obviously, web pages that take too long to appear, or at least start to appear, frustrate end-users.

One of the many reasons a web page can be slow is merely the quantity of information that must be sent to a user’s browser; pictures, video, maps, and huge quantities of text can lead to poor response times, even though the server and all processes behind it are running efficiently.

ASP.NET has a page state management mechanism known as “ViewState”, by which changes to page markup (altered text, colors, dynamic formatting, etc.) are retained. On a postback, that encoded ViewState data is made available to the server, and the .NET environment automatically updates the state of web control properties to represent the combination of the original markup and the ViewState-retained changes. By default, ASP.NET persists ViewState as base-64 encoded text in a hidden input field in the HTML delivered to the browser. ASP.NET also provides an optional SessionPageStatePersister to retain the ViewState in the user’s Session data, and other persisters can be written whereby the ViewState may be stored anywhere you desire.
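
For instance, moving a page’s ViewState into Session is just a matter of overriding the page’s PageStatePersister property. A minimal sketch (the page class name is illustrative):

using System.Web.UI;

public partial class CustomerSearch : Page
{
    private PageStatePersister statePersister;

    // Keep ViewState in the user's Session rather than in the hidden
    // __VIEWSTATE field; only a small token travels to the browser.
    protected override PageStatePersister PageStatePersister
    {
        get
        {
            if (statePersister == null)
            {
                statePersister = new SessionPageStatePersister(this);
            }
            return statePersister;
        }
    }
}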

Developers need to understand how ViewState works, and what it does (and does not do), to suss out how their web pages actually function. For an excellent and detailed writeup on ViewState, see Dave Reed’s article Truly Understanding ViewState.

One issue we experienced on a current project in Calgary was excessive ViewState size. For example, we frequently use a custom control to generate a customized scrollable table of data on a web page, and these tables frequently contain hundreds of rows and many columns of data. Unless switched off, ViewState is maintained automatically for all this generated data, in effect causing the same data to be transmitted twice to the user’s browser. Often, however, we want the ViewState retained as a necessary evil, to support client-side sorting of the data rows.

Sometimes these large tables are inside an UpdatePanel or an AJAX TabContainer, both of which retain their own ViewState plus all the ViewState of all the controls inside them. Now we have even more ViewState to cope with, and for tables contained in multiple tabs in a TabContainer, the actual ViewState size can get unexpectedly large; we have seen a relatively benign popup page with almost 6MB of ViewState! Response time was rather unpleasant for locally connected users, and downright unacceptable for those at remote sites.

Rather than refactoring individual pages, we took a more global approach: compressing the ViewState before delivery. This approach was very successful, primarily because of what the ViewState contains. Remember that it represents all data not explicitly coded in the HTML markup, which includes all dynamic formatting and placement specifications. For a large table, there is massive repetition of various property settings, ideal data for compression algorithms.

Our implementation was simplified by the fact that all our code-behind partial classes inherit a project-wide BasePage, which is where we placed the compression and decompression logic. In ASP.NET, the System.Web.UI.Page class contains two overridable methods that were ideal for our compression logic: SavePageStateToPersistenceMedium() and LoadPageStateFromPersistenceMedium().

Sample compression code is shown below, and requires the SharpZipLib compression library.

using System;
using System.IO;
using System.Web.UI;
using ICSharpCode.SharpZipLib.Zip.Compression;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;
...
private const int BUFFER_SIZE = 65536;
private int viewStateCompression = Deflater.NO_COMPRESSION;

public int ViewStateCompression
{
    get { return viewStateCompression; }
    set { viewStateCompression = value; }
}

protected override void SavePageStateToPersistenceMedium(Object state)
{
    if (ViewStateCompression == Deflater.NO_COMPRESSION)
    {
        base.SavePageStateToPersistenceMedium(state);
        return;
    }

    Object viewState = state;
    if (state is Pair)
    {
        Pair statePair = (Pair) state;
        PageStatePersister.ControlState = statePair.First;
        viewState = statePair.Second;
    }

    using (StringWriter writer = new StringWriter())
    {
        new LosFormatter().Serialize(writer, viewState);
        string base64 = writer.ToString();
        byte[] compressed = Compress(Convert.FromBase64String(base64));
        PageStatePersister.ViewState = Convert.ToBase64String(compressed);
    }
    PageStatePersister.Save();
}

private byte[] Compress(byte[] bytes)
{
    using (MemoryStream memoryStream = new MemoryStream(BUFFER_SIZE))
    {
        Deflater deflater = new Deflater(ViewStateCompression);
        using (Stream stream = new DeflaterOutputStream(memoryStream, deflater, BUFFER_SIZE))
        {
            stream.Write(bytes, 0, bytes.Length);
        }
        return memoryStream.ToArray();
    }
}

Looking at the “Save” logic, you’ll see that the default setting is to do no compression at all. Other deflater settings include DEFAULT_COMPRESSION, BEST_SPEED and BEST_COMPRESSION. The ViewStateCompression property must be set before ViewState is retrieved, in the Page_PreInit() or Page_Init() method. ViewState is available as the Second part of a Pair object; the First part references ControlState, which is different from the page’s ViewState, and which we decided not to compress due to its limited use and small size (see Bean Software’s ControlState Property Demystified). We grab the ViewState object hierarchy, serialize it using the same System.Web.UI.LosFormatter that ASP.NET uses to serialize ViewState, compress it with SharpZipLib, System.Convert it back to a base-64 string, and hand it to the PageStatePersister to be written out. Use the PageStatePersister to write to the normal __VIEWSTATE hidden field; the AJAX toolkit gets upset if you write it manually to any other field.
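
For example, an individual page could opt in during Page_PreInit (the page class name is illustrative):

using System;
using ICSharpCode.SharpZipLib.Zip.Compression;

public partial class TransactionListPage : BasePage
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Must run before ViewState is loaded or saved.
        ViewStateCompression = Deflater.BEST_COMPRESSION;
    }
}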

The reverse is done on a PostBack:

protected override Object LoadPageStateFromPersistenceMedium()
{
    if (viewStateCompression == Deflater.NO_COMPRESSION)
        return base.LoadPageStateFromPersistenceMedium();

    PageStatePersister.Load();
    string base64 = PageStatePersister.ViewState.ToString();
    byte[] state = Decompress(Convert.FromBase64String(base64));
    string serializedState = Convert.ToBase64String(state);

    object viewState = new LosFormatter().Deserialize(serializedState);
    return new Pair(PageStatePersister.ControlState, viewState);
}

private byte[] Decompress(byte[] bytes)
{
    using (MemoryStream byteStream = new MemoryStream(bytes))
    {
        using (Stream stream = new InflaterInputStream(byteStream))
        {
            using (MemoryStream memory = new MemoryStream(BUFFER_SIZE))
            {
                byte[] buffer = new byte[BUFFER_SIZE];
                while (true)
                {
                    int size = stream.Read(buffer, 0, BUFFER_SIZE);
                    if (size <= 0)
                        break;

                    memory.Write(buffer, 0, size);
                }
                return memory.ToArray();
            }
        }
    }
}

If no compression was originally applied, we call the base method to do its thing. Otherwise, the state information is retrieved from the hidden HTML field, and the ViewState portion of it is converted from base-64, decompressed, reconverted to base-64, and deserialized into its original object hierarchy.

Some experimentation should be done to determine the optimal sizes of the various buffers; here we used our elite programming skills to pick a workable size (we guessed). Likewise, no single type of compression (default, max or fast) is likely to be optimal in all circumstances.

So how does a developer determine whether ViewState compression is required at all? One could view the page source, copy the value of the __VIEWSTATE hidden input field, paste it into an editor and check its length. A better approach is to display the size of the ViewState (during development) as part of a FooterInfoControl on the page itself. Our MasterPage.Master displays the footer control, which contains other controls, one of which is the ViewstateSizeControl itself:

public class ViewstateSizeControl : Label
{
	// NOTE: the $(...) lookup assumes a Prototype-style helper is available on the page.
	private const string SCRIPT = "$('{0}').innerText = document.forms[0].__VIEWSTATE.value.length;";

	protected override void OnLoad(EventArgs e)
	{
		if (Visible)
		{
			Page.ClientScript.RegisterStartupScript(
				typeof(Page),
				UniqueID,
				string.Format(SCRIPT, ClientID),
				true);
		}
		base.OnLoad(e);
	}
}

This allows the developers and testers to see how big the ViewState is, both before and after compression, with no extra effort. Quite handy! And how well does this compression mechanism work? In most cases, you should expect at least a 90% reduction in ViewState size. For example, the aforementioned 6MB of data actually compressed to less than 60K. Quite effective!

Demo source code is available to accompany this article: ViewStateCompressionDemo.zip (Visual Studio 2005, C# 2.0 web application)


mike on November 4th 2008 in Agile

Subversion 1.5 is Ready

Subversion 1.5 has just been released and has a whole raft of new features that developers have been asking for. To take full advantage of all the new features you’ll need to upgrade both your server and clients, but the two will interoperate so you can upgrade gradually if you wish. A server upgrade does not require a dump and reload, but as usual with any major upgrade you should back up your repositories first. Some of the new features in 1.5 require a repository upgrade—after installing the 1.5 server software, use svnadmin upgrade to bring your repositories up to the newest format; Subversion won’t do this automatically.

The killer new feature in Subversion 1.5 is merge tracking. This is a feature that Perforce has had for years and that I always missed in Subversion. It’s a major change and the Subversion developers have been working on it for several years—there’s literally been a design document in the Subversion repository since the 1.0 days.

Usually when creating a release, you’ll create a production release branch of your code. This branch will be where you get your software ready for final release, fix the last few bugs, that sort of thing. You’re also likely to use this branch for production support, fixing production issues when they arise. Using this strategy, detailed in Pragmatic Version Control using Subversion, you will frequently need to merge release branch changes back down to the trunk. Until now, you had to manually track which changes you had already merged, and ask Subversion to only merge new changes. This meant a fair amount of manual bookkeeping, writing down revision numbers, looking at log entries, and so on.

Subversion’s new merge tracking removes the need to manually figure out which changes should be merged between branches. Instead you just tell Subversion you want the branch changes merged to the trunk, and it figures out what to merge. You can run the same command every week to merge changes, no revision numbers required. Subversion 1.5 also makes it easier to merge entire branches back down to the trunk, for example when merging an experimental-changes branch.
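
In practice the bookkeeping collapses into a single repeatable command. A sketch, with illustrative repository URLs:

# From a trunk working copy, merge everything eligible from the release
# branch; Subversion records what it merged in the svn:mergeinfo property.
svn merge http://svn.example.com/repos/branches/RB-1.0 .

# Run the same command next week and only new, unmerged revisions come across.
svn merge http://svn.example.com/repos/branches/RB-1.0 .

# Merge a completed feature branch back to the trunk in one step.
svn merge --reintegrate http://svn.example.com/repos/branches/experiment .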

For merge tracking to work, you need to upgrade both your server and client to 1.5, and upgrade your repository with svnadmin upgrade. Merge tracking isn’t quite finished in Subversion—1.5.1 will address performance issues and some edge cases such as cyclic merges.

Another great feature in Subversion 1.5 is change list support. As you are working, you can organize your changes into named change lists and check them in independently. This is really useful when you’re working on a feature but then someone asks you to fix a bug, and you want to do a quick fix and check it in. You can now just fix the bug, keep the bug changes in a different change list from the feature changes, and then commit the bug fix without committing the unfinished feature. This is a client-only feature that many people previously managed by hand—I know I’ve deliberately done half-commits so someone else could see my work sooner. Unlike the Perforce version of change lists, no-one else can see your in-progress change lists; they’re stored on the client only.
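
A quick sketch of that workflow (file names and messages are illustrative):

# Group the bug-fix files into their own change list...
svn changelist fix-crash src/Parser.cs src/ParserTest.cs

# ...then commit only that change list, leaving the unfinished feature
# work sitting in the working copy.
svn commit --changelist fix-crash -m "Fix crash when parsing empty input"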

As usual TortoiseSVN, everyone’s favourite Windows Subversion client, has full support for all the new Subversion features.


mike on June 23rd 2008 in Agile

Domain Driven Reporting

ThoughtWorks projects are built using best practices that include layered architectures, at the heart of which is often a domain model. The model contains concepts and business logic to support the application, and is often persisted using an object-relational mapping scheme with either custom code or a mapping tool such as Hibernate. Our domain model is used to drive screens and processes within the application. But when it comes time to write those pesky reports that the business desperately needs, we tend to revert to plain old SQL stored procedures. This article shows how we can leverage a .NET C# domain model to create traditional Reporting Services reports.

The Bad Old Days

Microsoft Reporting Services is a standard part of the .NET stack. When faced with a reporting requirement on a .NET project, we’d need to do a lot of explaining to choose a different reporting tool, and we’d be taking some chances. For this article we’ll assume the political or technical environment doesn’t allow us to choose another tool or to avoid reports entirely. It is usually still worth asking the business whether an interactive screen will serve their needs instead of a report—on a current project in Calgary the business champion understands that a screen is faster to develop and easier to test than a Reporting Services report. He will often choose a screen instead of a report where functionality such as PDF export or printability is not required.

The Reporting Services authoring tool is designed around SQL and stored procedure development. The default data source is a SQL data source, and the tool works well alongside a SQL editing environment where we can develop our queries. Reporting Services uses SQL-based data sources for report parameters too. This leads many of us to conclude that the right way to use Reporting Services is to develop custom SQL for report parameters and data sets. SQL cannot leverage our domain model, so we end up repeating many business rules and concepts within our reporting SQL. Since SQL, or even T-SQL, is less expressive than C# code, and since most programmers are less adept at database programming than at application code, our reporting SQL gets unwieldy, complicated, and bug-ridden. Maybe we add some views or functions to attempt to clarify things. Then we tune our SQL so the reports perform well. The result is often a really nasty section of our application that no-one wants to work on. Reports are usually important for running the business, so the numbers must be right.

Web Services to the Rescue!

A little-advertised feature within Reporting Services is its ability to consume Web Services as a data source. One reason this feature is poorly advertised might be Microsoft’s lack of expertise with applications that have a real domain model—most of their examples get as far as Data Access Objects but no further. Another reason is that the tool support for using web services isn’t as good as the support for SQL and stored procedures. It’s still quite usable, however.

The basic idea is to expose our existing domain model using web services, with methods specifically tailored to our reports. The SOAP response from a web service call is an XML document and is “flattened” into a dataset by reporting services. We can tune the way the flattening works to avoid extraneous elements (for example, we’re probably not interested in the SOAP envelope). Once we have the dataset we can write a report against it in the usual way.
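
As a sketch, a report-specific web method might look like the following. All of the names here are hypothetical, and ReportModel stands in for whatever facade fronts your domain model:

using System;
using System.Web.Services;

[WebService(Namespace = "http://example.com/reports/")]
public class ReportService : WebService
{
    // One method per report, returning flat rows that Reporting Services
    // can collapse into a dataset.
    [WebMethod]
    public TransactionSummaryRow[] GetTransactionSummary(DateTime? fromDate, DateTime? toDate)
    {
        // The report designer probes the service with null parameters;
        // answer with an empty result rather than throwing (see below).
        if (fromDate == null || toDate == null)
            return new TransactionSummaryRow[0];

        // Delegate to the domain model rather than hand-written SQL.
        return ReportModel.SummarizeTransactions(fromDate.Value, toDate.Value);
    }
}

// A flat, serializable row type: no domain behaviour, just report fields.
public class TransactionSummaryRow
{
    public DateTime Date;
    public string TransactionType;
    public decimal Amount;
}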

A Web Services Driven Report

When creating a report you first need to define an XML data source. For the connection string, enter the URL of your web service.

[Screenshot: Reporting Services data source]

Now, add a new data set for the data source. The command type should be “text” and the query string an XML fragment including the method you wish to invoke on the web service. You should include parameters in your XML query for each parameter the web service method expects. In this example we have four parameters. We mapped the query parameters to report parameters using the “parameters” tab.

[Screenshot: Reporting Services dataset]

Beware that whilst you’re designing your report and setting up your data source, Reporting Services will often call your web service method using null for each parameter. Code your web service so that it checks for null and returns an empty response (empty list, zero value, etc.) for one of these invalid calls; otherwise you’ll find the designer loses your query/parameter settings. This can be really annoying.

In our example we’re using ElementPath to collapse the web service XML response into a data set. The syntax is fairly arcane, but you can try starting with a * as your element path, which does a default conversion. You can read more about the syntax at Microsoft TechNet. We seem to need the Response {} and Result {} at the start of the path, even though those elements don’t actually occur in the web service SOAP response. In this example, we’re also forcing various fields to Reporting Services data types such as Date, Boolean and Decimal, rather than treating them as simple strings.
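
Putting that together for the hypothetical GetTransactionSummary method above, the dataset query might look something like this. The element names must match your service’s WSDL, so treat this as a sketch rather than copy-and-paste truth:

<Query>
  <Method Name="GetTransactionSummary" Namespace="http://example.com/reports/" />
  <ElementPath IgnoreNamespaces="true">
    GetTransactionSummaryResponse{}/GetTransactionSummaryResult{}/TransactionSummaryRow{Date(Date),TransactionType,Amount(Decimal)}
  </ElementPath>
</Query>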

You should now be able to exercise your data source and see results come back. Now you can build the rest of your report in the designer as usual.

Performance Considerations

One common reason for using SQL or stored procedures for reporting is a worry about performance. Traditionally, a domain object approach is slower than using “raw” SQL. In fact this is always likely to be true—getting the database to do all the work will always be faster than loading objects into memory and processing them. But how fast do we really need a report to perform? And can we truly write database procedures that duplicate our business logic and get them to perform well?

Our experience in Calgary has shown that a report that took over two weeks to develop using stored procedures could be built in an afternoon using our domain model. Tuning the original report took considerable time, but we got to the point where we could generate an 850 page report in under 20 seconds. Initially the domain model version of the report took minutes to run, but with a small amount of tuning (only a few hours) we had it running in just ten seconds—twice as fast as the SQL based report. As an added benefit, the performance tuning we did on the domain model helps the entire application run faster. Tuning a domain model isn’t that hard; simply avoid loading too many objects into memory. Instead of loading 30,000 transaction entries and then processing them, we get the database to aggregate them by transaction type and date, something that we’d need to do in the domain model before displaying the results anyhow.


mike on June 9th 2008 in .NET, Agile

Fire your dev teams (reprise)

No, of course I don’t really think you can fire your entire development team and throw away your existing successful code base. Just because a blog post is spell-checked and doesn’t contain swear words does not mean its author is being entirely serious. I’m really disappointed in the anonymous Internet hate comments that some of you have left, and surprised that people claiming to be professional programmers are acting like teenage forum trolls.

Many people did leave constructive comments and their points are worth some discussion.

jimmyD pointed out that the Internet runs on “hacker built” systems such as Apache, BIND, Sendmail and many more. They’re built by loose collections of programmers, sure, but in no way are those tools “hacks”. Apache has a rigorous process for defining each project, managing and running it, and testing and releasing software. There’s a massive difference between “hacker built” and “hacked together” systems. The programmers who develop successful open-source tools are not your average computer programmer; many are hired by large enterprise companies or successful startups in order to apply their skills to other software. Open-source is some of the most successful “enterprise” software you’ll find.

While there was disagreement about how best to move from “startup” to “enterprise”, some people conceded that there is indeed a time when a company needs to move from a code-slinger mentality to something a little more formal. (Thanks to Adam Ierymenko for contributing the term “code slinger”.)

Business people who start companies are usually not the same people who take those companies from startup to enterprise. Many successful startup founders immediately hire business execs with years of experience running large businesses (Google, WestJet, RackSpace, Facebook). Why do we assume that technical people will be able to span the transition when business people are happy to admit that someone else has more appropriate skills? IT is not a commodity; developers often do not span these multiple roles. Dennis Martinez blames IT management for having the wrong developers, but based on the responses to my first post I suggest we also blame developers for failing to admit they cannot or should not span styles of development. (By the way Dennis, you should re-read my post before grabbing sentences out of context and claiming every line of text is advice.)

The terms “enterprise,” “enterprise quality” and “enterprise developer” drew a lot of flak. Many people assumed that since I had used the E word I was talking about massive engineering projects, slow application servers, or processes involving lots of overhead. I didn’t mean that, and just like you I’ve been burned by heavy shrinkwrapped stuff labelled “Enterprise”. What I mean when I say “enterprise” is that a thing is suitable for use and extension over the long term. That means, firstly, that it’s well tested. It means the architecture is coherent and consistent. It means the software is extensible, and maintainable by people who (often) did not write it in the first place. It means the system is performant, scalable, and easily deployed.

There will always be tradeoffs when writing software. There’s no one “right way” to create an application. A startup, under significant pressure to deliver, will usually make tradeoffs that produce worse code, and that’s the right decision at the time. But please, startup IT people, make sure you and your team are aware that you are making compromises. Discuss the problems you might face in future and have a plan in place for fixing the problems when you become the next Facebook.


mike on March 9th 2008 in Agile, Web 2.0

Card Infected

This whole Agile thing has messed me right up.

[Photo: our wedding card wall]

It’s not the most original idea—Joe Walnes had a card wall for renovating his house, where estimates were in money rather than time—but it works well for me. I find myself getting a little stressed out when a personal project, even just “stuff I should do this week,” doesn’t have a card wall. Now that our wedding project has been successfully deployed I’ll be starting a card wall for happily ever after…


mike on March 21st 2007 in Agile

Model, View, Presenter with ASP.NET 2.0

Most people are familiar with the Model, View, Controller pattern (MVC) for separating business logic and presentation logic within an application. MVC is implemented in a number of Java web frameworks, such as Struts and Spring. A more recent pattern–Model, View, Presenter–can be applied in contexts where there is no central “controller” for the application. One such context is ASP.NET.

My team recently built an MVP-based application on ASP.NET 2.0 and had great success with highly testable presenters and a highly adaptable presentation layer. In fact, we chose to switch from creating custom web controls to using simple .aspx pages and didn’t have to change our presenters at all–it’s always nice to validate those previously abstract design decisions!

The MVP pattern separates three elements. The model is one or more domain-specific objects representing the current state of the system, the information we’re trying to display, etc. The view is how we present that information to the user and handle input, usually a particular screen or web page. The presenter is the logic that ties together the model and the view, handling navigation, business logic requests, and model updates.

For MVP in ASP.NET 2.0, we use an .aspx page–or more precisely, the code-behind partial class–as the view, custom domain objects as the model, and a Plain Old C# Object (can I steal the term POCO?) as the presenter. Let’s assume we’re creating a page to list customers in our application. Our ListCustomers.aspx page might look like this:

<asp:Content ContentPlaceHolderID="Main" Runat="Server">
    <h1>Customer List</h1>
    <asp:GridView ID="customerGridView" AutoGenerateColumns="false" runat="server" SkinID="CustomerGrid">
        <Columns>
            <asp:BoundField HeaderText="Customer" DataField="Name" />
        </Columns>
    </asp:GridView>
</asp:Content>

Nothing special here: a title and a GridView that will list the customers. What’s interesting is the code-behind:

public partial class ListCustomers : System.Web.UI.Page, IListCustomersView
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ICustomerService customerService = ServiceRegistry.GetService<ICustomerService>();
        ListCustomersPresenter presenter = new ListCustomersPresenter(this, customerService);
        presenter.PageLoad();
    }

    public List<Customer> Customers
    {
        set
        {
            customerGridView.DataSource = value;
            customerGridView.DataBind();
        }
    }
}

The code-behind partial class implements IListCustomersView, which we’ll see in a moment. When the page is loaded we create a new ListCustomersPresenter, passing it the view (this) and anything else it requires (in this case, an ICustomerService). We then call the presenter’s PageLoad() method. What’s happening is that the code-behind makes no decisions about what to display on the page; it simply delegates to the presenter for any non-display-related business logic.

The IListCustomersView interface defines how the presenter can interact with the ASP page. There’s just one settable property, Customers:

public interface IListCustomersView
{
    List<Customer> Customers { set; }
}

Looking back up at the code-behind, you can see that the implementation of the Customers property sets the grid view’s datasource to the list of customers and then calls DataBind() to populate the grid.

Let’s take a look at the final piece of the puzzle, the presenter:

public class ListCustomersPresenter
{
    private readonly IListCustomersView view;
    private readonly ICustomerService customerService;

    public ListCustomersPresenter(IListCustomersView view, ICustomerService customerService)
    {
        this.view = view;
        this.customerService = customerService;
    }

    public void PageLoad()
    {
        List<Customer> customers = customerService.GetAllCustomers();
        view.Customers = customers;
    }
}

When our presenter is constructed the two things it depends on, the view and the customer service, are passed to it (in this case, by the code-behind). This is known as constructor dependency injection. In the PageLoad() method the presenter simply asks the customer service for the customers and sets them on the view. This simple example can be extended to include input (the view can have read-only properties that correspond to text boxes, etc.) and actions (add a button to your web page and, in the code-behind for its click event, call an action method on the presenter, like AddCustomerClicked()), as sketched below.
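
For example, reworking the code-behind to keep the presenter in a field lets a click handler forward straight to it. AddCustomerClicked() and the button name are illustrative:

public partial class ListCustomers : System.Web.UI.Page, IListCustomersView
{
    private ListCustomersPresenter presenter;

    protected void Page_Load(object sender, EventArgs e)
    {
        presenter = new ListCustomersPresenter(
            this, ServiceRegistry.GetService<ICustomerService>());
        presenter.PageLoad();
    }

    // The click handler makes no decisions; it just forwards to the presenter.
    protected void addCustomerButton_Click(object sender, EventArgs e)
    {
        presenter.AddCustomerClicked();
    }

    // ...Customers property as before...
}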

So why is this useful? Why not just have the code-behind access the CustomerService to load the customers? One of the main benefits is that the difficult business logic is captured in the presenter and can be more easily tested. Because we’re using dependency injection we can instantiate the presenter in an NUnit test, mock out the view and service, and check that the presenter does the right thing. The “load customers” example isn’t very hard, but you can imagine logic that needs to take a set of user input and perform something more complex, such as placing an order. A second benefit is that it’s very easy to see how the presenter can interact with the view–it can only use methods and properties on the IListCustomersView interface, which means it’s much easier to see the logical interface between the UI and the business layer. Finally, it’s possible to test-drive your presenters and views, which tends to lead to a simpler, more modular design for the system.
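
A minimal NUnit test might look like this. The stubs are hand-rolled (a mocking library works just as well), and we assume ICustomerService exposes only the GetAllCustomers() method the presenter uses, and that Customer has a default constructor:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ListCustomersPresenterTest
{
    // A stub view that simply records what the presenter gives it.
    private class StubView : IListCustomersView
    {
        public List<Customer> ReceivedCustomers;
        public List<Customer> Customers
        {
            set { ReceivedCustomers = value; }
        }
    }

    // A stub service that returns a canned customer list.
    private class StubCustomerService : ICustomerService
    {
        public List<Customer> CannedCustomers = new List<Customer>();
        public List<Customer> GetAllCustomers()
        {
            return CannedCustomers;
        }
    }

    [Test]
    public void PageLoadPushesAllCustomersToTheView()
    {
        StubView view = new StubView();
        StubCustomerService service = new StubCustomerService();
        service.CannedCustomers.Add(new Customer());

        new ListCustomersPresenter(view, service).PageLoad();

        Assert.AreEqual(1, view.ReceivedCustomers.Count);
    }
}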

I’ve simplified some of the other stuff you’ll need to do in a real application; screen flow and input validation are good examples. We solved the flow issue by having presenters return a “presenter result” object indicating whether the ASP.NET framework should redirect to another page, stay on the current page, go to a login screen, etc. For input validation and error conditions, we add properties on the view so that the presenter can instruct the view to show a particular error message. You can still use an ASP.NET validator for client-side validation, but the presenter needs to be able to toggle its visibility too.

We started our application using MVP, implementing the view using custom controls coded in C#. We did this mostly because we envisaged our application as a series of reusable controls, some of which (for example a Wiki control) should be embeddable in other applications. Ultimately we went a bit too far with this and coded some simple screen flow, which wasn’t really reusable, with the same C# controls. We found that layout and other tweaks became fairly onerous and decided to switch to implementing the view using traditional .aspx pages and code-behind. Our presenters required no changes at all to accommodate the new view–excellent validation of the MVP design pattern.


mike on February 15th 2006 in .NET, Agile