Screencast: Subversion Server on Windows

To celebrate the publishing of Pragmatic Guide to Subversion I put together an eight-minute screencast that shows you how to install a Subversion server on Windows. The video covers importing an existing codebase and checking out a working copy, and demonstrates how to make simple edits, view your changes, and check back into the repository.

Software used in the video includes TortoiseSVN and VisualSVN Server.

As an editorial note, it is not easy to do a screencast like this. I went through about ten different pieces of software before the stars finally aligned and I got everything to work. I used a Windows 7 VM and CamStudio to record the video at 720p resolution. I then used WinFF to convert the CamStudio lossless video to H.264, and finally used iMovie to add a voiceover track. Now that I have everything set up I think I should do more screencasts—please let me know what you’d like to see and I’ll work on it.

No Comments »

mike on November 17th 2010 in Agile

Pragmatic Guide to Subversion: Now in Beta

Things have been a bit quiet on my blog recently, so I think I’d better explain myself. Two things have been happening in my life over the last couple of years. Firstly, I am now a proud dad with two lovely kids, Ben and Natalie, who are two-and-a-bit and seven months, respectively. To say that having kids is life changing is an obvious understatement. I won’t gush too much, but it has been a wonderful experience. It’s also been exhausting, so my capacity for extra stuff has been much reduced, hence the lack of activity on the blog.

Secondly, and probably more interesting to those of you who follow my blog, I have been working on a new Subversion book. I’m very happy to announce that Pragmatic Guide to Subversion is now in beta. The new Pragmatic Guide series takes various topics and condenses them down into easily digestible guides and quick references. The original idea was a kind of “pocket guide” series. We have both a Subversion and a Git guide, each designed to get someone up to speed quickly on that version control tool. If you have experience with another version control tool (and most developers do) the guide allows you to quickly transition to Subversion. Each section of the book starts with an introduction that sets the scene, so those of you who are new to version control will be able to understand what the book is talking about. Those of you more interested in a complete guide to why we use version control tools should take a look at the original Pragmatic Version Control using Subversion, as that book spends more time on the basics of source control.

I’m especially excited because this is the first multi-platform book I’ve done, covering command-line Subversion, the Tortoise Windows GUI, and the Cornerstone Mac GUI. A lot of people still use the powerful command-line interface, but for some tasks a GUI is just way better—history browsing and viewing diffs, for example, are naturally suited to a graphical tool. All of the tasks in the book contain instructions for the command line, Tortoise, and Cornerstone. I’ve also been able to incorporate all the latest Subversion features and techniques, the most important of which is probably merge tracking. This was introduced in Subversion 1.5 and can save you a ton of time if you’re working with a branched code base.

The book is a Pragmatic Bookshelf title and has been released as a beta. The beta process means that readers can get an electronic copy of the book before it goes to the paper presses, as well as getting a good deal on an eventual paper copy should you want one. The beta helps me fix the final few errata in the book and get feedback on other changes that might need to be made. The book is complete and has been through two review cycles so I think you’ll get a lot out of it even though it’s technically not finished yet.

I’m excited about the new Pragmatic Guide series. They’re designed to be fairly short, to-the-point books that get you quickly up to speed on a topic. If you have teammates who need a Subversion primer, my new Pragmatic Guide to Subversion is exactly the book you want. If you know all about Subversion and want to start using Git, check out the Pragmatic Guide to Git. There’s also a Pragmatic Guide to JavaScript, which is a killer reference for all those JavaScript and jQuery tricks we all need when developing web UIs. All three books have been released as betas and you can download excerpts from the Pragmatic Bookshelf website.

2 Comments »

mike on September 28th 2010 in Version Control

Compressing ViewState

This article is written by guest author John Finlay. Thanks John!

Perhaps the primary statistic by which clients measure web applications is response time. Obviously, web pages that take too long to appear, or at least start to appear, frustrate end-users.

One of the many reasons a web page can be slow is merely the quantity of information that must be sent to a user’s browser; pictures, video, maps, and huge quantities of text can lead to poor response times, even though the server and all processes behind it are running efficiently.

ASP.NET has a page state management mechanism known as “ViewState”, by which changes to page markup (altered text, colors, dynamic formatting, etc.) are retained. On a postback, that encoded ViewState data is made available to the server, and the .NET environment automatically updates the state of web control properties to represent the combination of the original markup and the ViewState-retained changes. ASP.NET provides an optional SessionPageStatePersister to retain the ViewState in the user’s Session data, and other persisters can be written so that the ViewState may be stored anywhere you desire. By default, ASP.NET persists ViewState as base-64 encoded text in a hidden input field in the HTML delivered to the browser.
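In the page source, that default persistence shows up as a hidden field along these lines (value truncated):

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUKMTY3NzI5..." />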

Developers need to understand how ViewState works, and what it does (and does not do), to suss out how their web pages actually function. For an excellent and detailed writeup on ViewState, see Dave Reed’s article Truly Understanding ViewState.

One issue we experienced on a current project in Calgary was with excessive ViewState size. For example, we frequently use a custom control to generate a customized scrollable table of data on a web page, and these tables frequently contain hundreds of rows and many columns of data. Unless switched off, ViewState is maintained automatically for all this generated data, in effect causing the same data to be transmitted twice to the user’s browser. Often, however, we want the ViewState retained as a necessary evil to support client-side sorting of the data rows.

Sometimes these large tables are inside an UpdatePanel or an AJAX TabContainer, both of which retain their own ViewState plus all the ViewState of all the controls inside them. Now we have even more ViewState to cope with, and for tables contained in multiple tabs in a TabContainer, the actual ViewState size can get unexpectedly large; we have seen a relatively benign popup page with almost 6MB of ViewState! Response time was rather unpleasant for locally connected users, and downright unacceptable for those at remote sites.

Rather than refactoring individual pages, we took a more global approach: compressing the ViewState before delivery. This approach was very successful, primarily because of what the ViewState contains. Remember that it represents all data not explicitly coded in the HTML markup, which includes all dynamic formatting and placement specifications. For a large table, there is massive repetition of various property settings, ideal data for compression algorithms.

Our implementation was simplified by the fact that all our code-behind partial classes inherit a project-wide BasePage, which is where we placed the compression and decompression logic. In ASP.NET, the System.Web.UI.Page class contains two overridable methods that were ideal for our compression logic: SavePageStateToPersistenceMedium() and LoadPageStateFromPersistenceMedium().

Sample compression code is shown below, and requires the SharpZipLib compression library.

using System;
using System.IO;
using System.Web.UI;
using ICSharpCode.SharpZipLib.Zip.Compression;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;
...
private const int BUFFER_SIZE = 65536;
private int viewStateCompression = Deflater.NO_COMPRESSION;

public int ViewStateCompression
{
    get { return viewStateCompression; }
    set { viewStateCompression = value; }
}

protected override void SavePageStateToPersistenceMedium(Object state)
{
    if (ViewStateCompression == Deflater.NO_COMPRESSION)
    {
        base.SavePageStateToPersistenceMedium(state);
        return;
    }

    // ControlState travels in the First half of the Pair; we leave it
    // uncompressed and compress only the ViewState half.
    Object viewState = state;
    if (state is Pair)
    {
        Pair statePair = (Pair) state;
        PageStatePersister.ControlState = statePair.First;
        viewState = statePair.Second;
    }

    using (StringWriter writer = new StringWriter())
    {
        new LosFormatter().Serialize(writer, viewState);
        string base64 = writer.ToString();
        byte[] compressed = Compress(Convert.FromBase64String(base64));
        PageStatePersister.ViewState = Convert.ToBase64String(compressed);
    }
    PageStatePersister.Save();
}

private byte[] Compress(byte[] bytes)
{
    using (MemoryStream memoryStream = new MemoryStream(BUFFER_SIZE))
    {
        Deflater deflater = new Deflater(ViewStateCompression);
        using (Stream stream = new DeflaterOutputStream(memoryStream, deflater, BUFFER_SIZE))
        {
            stream.Write(bytes, 0, bytes.Length);
        }
        return memoryStream.ToArray();
    }
}

Looking at the “Save” logic, you’ll see that the default setting is to do no compression at all. Other deflater settings include DEFAULT_COMPRESSION, BEST_SPEED and BEST_COMPRESSION. The ViewStateCompression property must be set before ViewState is retrieved, i.e. in the Page_PreInit() or Page_Init() method. ViewState arrives as the Second part of a Pair object; the First part references ControlState, which is different from the page’s ViewState, and which we decided not to compress due to its limited use and small size (see Bean Software’s ControlState Property Demystified). We grab the ViewState object hierarchy, serialize it using the same System.Web.UI.LosFormatter that ASP.NET uses to serialize ViewState, compress it using SharpZipLib, convert it back to a base-64 string with System.Convert, and hand it to the PageStatePersister to be written out. Use the PageStatePersister to write to the normal __VIEWSTATE hidden field; the AJAX toolkit gets upset if you manually write it to any other field.
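For example, a page that generates large tables could opt in during Page_PreInit (a minimal sketch; the page class name is hypothetical, and BasePage is the project-wide base class described above):

using System;
using ICSharpCode.SharpZipLib.Zip.Compression;

public partial class LargeTablePage : BasePage
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Must run before ViewState is loaded or saved; BEST_COMPRESSION
        // trades a little CPU for the smallest hidden-field payload.
        ViewStateCompression = Deflater.BEST_COMPRESSION;
    }
}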

The reverse is done on a PostBack:

protected override Object LoadPageStateFromPersistenceMedium()
{
    if (ViewStateCompression == Deflater.NO_COMPRESSION)
        return base.LoadPageStateFromPersistenceMedium();

    PageStatePersister.Load();
    string base64 = PageStatePersister.ViewState.ToString();
    byte[] state = Decompress(Convert.FromBase64String(base64));
    string serializedState = Convert.ToBase64String(state);

    object viewState = new LosFormatter().Deserialize(serializedState);
    return new Pair(PageStatePersister.ControlState, viewState);
}

private byte[] Decompress(byte[] bytes)
{
    using (MemoryStream byteStream = new MemoryStream(bytes))
    {
        using (Stream stream = new InflaterInputStream(byteStream))
        {
            using (MemoryStream memory = new MemoryStream(BUFFER_SIZE))
            {
                byte[] buffer = new byte[BUFFER_SIZE];
                while (true)
                {
                    int size = stream.Read(buffer, 0, BUFFER_SIZE);
                    if (size <= 0)
                        break;

                    memory.Write(buffer, 0, size);
                }
                return memory.ToArray();
            }
        }
    }
}

If no compression was originally applied, we call the base method to do its thing. Otherwise, the state information is retrieved from the hidden HTML field, and the ViewState portion of it is converted from base-64, decompressed, reconverted to base-64, and deserialized into its original object hierarchy.

Some experimentation should be done to determine the optimal sizes of the various buffers; here we used our elite programming skills to pick a workable size (we guessed). Likewise, no single type of compression (default, max or fast) is likely to be optimal in all circumstances.

So how does a developer determine whether ViewState compression is required at all? One could view the page source, copy the value of the __VIEWSTATE hidden input field, paste it into an editor and check its length. A better approach is to display the size of the ViewState (during development) as part of a FooterInfoControl on the page itself. Our MasterPage.Master displays the footer control, which contains other controls, one of which is the ViewstateSizeControl itself:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ViewstateSizeControl : Label
{
	// Emits startup script that writes the length of the __VIEWSTATE
	// hidden field into this label, so each page reports its own ViewState size.
	private const string SCRIPT = "$('{0}').innerText = document.forms[0].__VIEWSTATE.value.length;";

	protected override void OnLoad(EventArgs e)
	{
		if (Visible)
		{
			Page.ClientScript.RegisterStartupScript(
				typeof(Page),
				UniqueID,
				string.Format(SCRIPT, ClientID),
				true);
		}
		base.OnLoad(e);
	}
}

This allows the developers and testers to see how big the ViewState is, both before and after compression, with no extra effort. Quite handy! And how well does this compression mechanism work? In most cases, you should expect at least a 90% reduction in ViewState size. For example, the aforementioned 6MB of data actually compressed to less than 60K. Quite effective!

Demo source code is available to accompany this article: ViewStateCompressionDemo.zip (Visual Studio 2005, C# 2.0 web application)

5 Comments »

mike on November 4th 2008 in Agile

Subversion 1.5 is Ready

Subversion 1.5 has just been released and has a whole raft of new features that developers have been asking for. To take full advantage of all the new features you’ll need to upgrade both your server and clients, but the two will inter-operate so you can upgrade gradually if you wish. The server upgrade does not require a dump and reload, but as usual with any major upgrade you should back up your repositories first. Some of the new features in 1.5 require a repository upgrade—after installing the 1.5 server software use svnadmin upgrade to bring your repositories up to the newest format; Subversion won’t do this automatically.
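For example, to upgrade a repository stored at /var/svn/myrepo (the path here is hypothetical):

svnadmin upgrade /var/svn/myrepo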

The killer new feature in Subversion 1.5 is merge tracking. This is a feature that Perforce has had for years and that I always missed in Subversion. It’s a major change and the Subversion developers have been working on it for several years—there’s literally been a design document in the Subversion repository since the 1.0 days.

Usually when creating a release, you’ll create a production release branch of your code. This branch will be where you get your software ready for final release, fix the last few bugs, that sort of thing. You’re also likely to use this branch for production support, fixing production issues when they arise. Using this strategy, detailed in Pragmatic Version Control using Subversion, you will frequently need to merge release branch changes back down to the trunk. Until now, you had to manually track which changes you had already merged, and ask Subversion to only merge new changes. This meant a fair amount of manual bookkeeping, writing down revision numbers, looking at log entries, and so on.

Subversion’s new merge tracking removes the need to manually figure out which changes need to be merged between branches. Instead you just tell Subversion you want the branch changes merged to the trunk, and it figures out what to merge. You can run the same command every week to merge changes, no revision numbers required. Subversion 1.5 also makes it easier to merge entire branches back down to the trunk, for example when merging an experimental changes branch to the trunk.
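With merge tracking, that weekly merge becomes a single command run from an up-to-date trunk working copy (repository URL hypothetical); Subversion merges only the eligible revisions it hasn’t already merged:

svn merge http://svn.example.com/repos/myproject/branches/release-1.0

Merging an entire branch back down uses the new --reintegrate option:

svn merge --reintegrate http://svn.example.com/repos/myproject/branches/experiment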

For merge tracking to work, you need to upgrade both your server and client to 1.5, and upgrade your repository with svnadmin upgrade. Merge tracking isn’t quite finished in Subversion—1.5.1 will address performance issues and some edge cases such as cyclic merges.

Another great feature in Subversion 1.5 is change list support. As you are working on a change, you can organize your changes into named change lists, and check them in independently. This is really useful in cases where you’re working on a feature but then someone asks you to fix a bug, and you want to do a quick fix on the bug and check it in. You can now just fix the bug, keep the bug changes in a different change list from the feature changes, and then commit the bug fix without committing the unfinished feature. This is a client-only feature which many people approximated manually before—I know I’ve deliberately done half-commits so someone else could see my work sooner. Unlike the Perforce version of change lists, no one else can see your in-progress change lists; they’re stored on the client only.
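The command-line workflow looks like this (file name hypothetical); only the files in the quick-bugfix change list are committed, and everything else stays as uncommitted local changes:

svn changelist quick-bugfix src/OrderValidator.cs
svn commit --changelist quick-bugfix -m "Fix order validation bug"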

As usual TortoiseSVN, everyone’s favourite Windows Subversion client, has full support for all the new Subversion features.

7 Comments »

mike on June 23rd 2008 in Agile

Domain Driven Reporting

ThoughtWorks projects are built using best practices that include layered architectures, at the heart of which is often a domain model. The model contains concepts and business logic to support the application. The model is often persisted using an object relational mapping scheme, with either custom code or a mapping tool such as Hibernate. Our domain model is used to drive screens and processes within the application. But when it comes time to write those pesky reports that the business desperately needs, we tend to revert to plain old SQL stored procedures. This article shows how we can leverage a .NET C# domain model to create traditional Reporting Services reports.

The Bad Old Days

Microsoft Reporting Services is a standard part of the .NET stack. When faced with a reporting requirement on a .NET project, we’d need to do a lot of explaining to choose a different reporting tool, and we’d be taking some chances. For this article we’ll assume the political or technical environment doesn’t allow us to choose another tool or to avoid reports entirely. It is usually still worth asking the business whether an interactive screen will serve their needs instead of a report—on a current project in Calgary the business champion understands that a screen is faster to develop and easier to test than a Reporting Services report. He will often choose a screen instead of a report where functionality such as PDF export or printability is not required.

The Reporting Services authoring tool is designed around SQL or stored procedure development. The default data source is a SQL data source, and the tool works well alongside a SQL editing environment where we can develop our queries. Reporting Services uses SQL-based data sources for report parameters too. This leads many of us to conclude that the right way to use Reporting Services is to develop custom SQL for report parameters and data sets. SQL cannot leverage our domain model, so we end up repeating many business rules and concepts within our reporting SQL. Since SQL (or even T-SQL) is less expressive than C# code, and since most programmers are less adept at database programming, our reporting SQL gets unwieldy, complicated, and bug ridden. Maybe we add some views or functions to attempt to clarify things. Then we tune our SQL so the reports perform well. The result is often a really nasty section of our application that no one wants to work on and that frequently contains bugs. Reports are usually important for running the business, so the numbers must be right.

Web Services to the Rescue!

A little-advertised feature within Reporting Services is its ability to consume Web Services as a data source. One reason this feature is poorly advertised might be Microsoft’s lack of expertise with applications that have a real domain model—most of their examples get as far as Data Access Objects but no further. Another reason is that the tool support for using web services isn’t as good as the support for SQL and stored procedures. It’s still quite usable, however.

The basic idea is to expose our existing domain model using web services, with methods specifically tailored to our reports. The SOAP response from a web service call is an XML document and is “flattened” into a dataset by Reporting Services. We can tune the way the flattening works to avoid extraneous elements (for example, we’re probably not interested in the SOAP envelope). Once we have the dataset we can write a report against it in the usual way.

A Web Services Driven Report

When creating a report you first need to define an XML data source. For the connection string, enter the URL of your web service.

[Figure: Reporting Services data source dialog]

Now, add a new data set for the data source. The command type should be “text” and the query string an XML fragment naming the method you wish to invoke on the web service. You should include parameters in your XML query for each parameter the web service method expects; in this example we have four parameters. We mapped the query parameters to report parameters using the “parameters” tab.
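A query of roughly this shape should work (a sketch; the method name and namespace are hypothetical and come from your own web service):

<Query>
  <Method Name="GetReportData" Namespace="http://example.com/reports" />
  <SoapAction>http://example.com/reports/GetReportData</SoapAction>
</Query>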

[Figure: Reporting Services data set dialog]

Beware that whilst you’re designing your report and setting up your data source, Reporting Services will often call your web service method using null for each parameter. Code your web service so that it checks for null and returns an empty response (empty list, zero value, etc.) for one of these invalid calls; otherwise you’ll find the designer loses your query/parameter settings. This can be really annoying.
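A guard along these lines keeps the designer happy (a minimal sketch; all class, method and field names here are illustrative, not from the original project):

using System;
using System.Web.Services;

public class ReportService : WebService
{
    [WebMethod]
    public ReportRow[] GetReportData(string region, string fromDate,
                                     string toDate, string accountType)
    {
        // The report designer probes the service with null parameters;
        // answer with an empty result rather than throwing.
        if (region == null || fromDate == null
            || toDate == null || accountType == null)
            return new ReportRow[0];

        // A real implementation would delegate to the domain model here.
        return new ReportRow[] { new ReportRow() };
    }
}

public class ReportRow
{
    public DateTime Date;
    public string AccountType;
    public decimal Total;
}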

In our example we’re using ElementPath to collapse the web service XML response into a data set. The syntax is fairly arcane, but you can try starting with a * as your element path, which performs a default conversion. You can read more about the syntax at Microsoft TechNet. We seem to need the Response {} and Result {} at the start of the path, even though those elements don’t actually occur in the web service SOAP response. In this example, we’re also forcing various fields to Reporting Services data types such as Date, Boolean and Decimal, rather than treating them as simple strings.
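For the hypothetical GetReportData method above, an element path might look something like this (field names illustrative; the parenthesized names are Reporting Services type casts):

Response {}/Result {}/ReportRow {Date (Date), AccountType, Total (Decimal)}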

You should now be able to exercise your data source and see results come back. Now you can build the rest of your report in the designer as usual.

Performance Considerations

One common reason for using SQL or stored procedures for reporting is a worry about performance. Traditionally, a domain object approach is slower than using “raw” SQL. In fact this is always likely to be true—getting the database to do all the work will always be faster than loading objects into memory and processing them. But how fast do we really need a report to perform? And can we truly write database procedures that duplicate our business logic and get them to perform well?

Our experience in Calgary has shown that a report that took over two weeks to develop using stored procedures could be built in an afternoon using our domain model. Tuning the original report took considerable time, but we got to the point where we could generate an 850-page report in under 20 seconds. Initially the domain model version of the report took minutes to run, but with a small amount of tuning (only a few hours) we had it running in just ten seconds—twice as fast as the SQL-based report. As an added benefit, the performance tuning we did on the domain model helps the entire application run faster. Tuning a domain model isn’t that hard; simply avoid loading too many objects into memory. Instead of loading 30,000 transaction entries and then processing them, we get the database to aggregate them by transaction type and date, something we’d need to do in the domain model before displaying the results anyhow.
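As a rough sketch of that aggregation step (assuming NHibernate; the entity and property names are hypothetical), the query hands the grouping work to the database and returns one row per type and date instead of thousands of objects:

using System;
using System.Collections;
using NHibernate;

public static class TransactionSummaries
{
    // Returns object[] rows of (TransactionType, Date, sum of Amount)
    // rather than loading every TransactionEntry into memory.
    public static IList ByTypeAndDate(ISession session, DateTime from, DateTime to)
    {
        return session.CreateQuery(
                "select t.TransactionType, t.Date, sum(t.Amount) " +
                "from TransactionEntry t " +
                "where t.Date between :from and :to " +
                "group by t.TransactionType, t.Date")
            .SetDateTime("from", from)
            .SetDateTime("to", to)
            .List();
    }
}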

1 Comment »

mike on June 9th 2008 in .NET, Agile

Fire your dev teams (reprise)

No, of course I don’t really think you can fire your entire development team and throw away your existing successful code base. Just because a blog post is spell-checked and doesn’t have swear words all over it does not mean someone is being entirely serious. I’m really disappointed by the anonymous Internet hate comments that some of you have left, and surprised that people claiming to be professional programmers are acting like teenage forum trolls.

Many people did leave constructive comments and their points are worth some discussion.

jimmyD pointed out that the Internet runs on “hacker built” systems such as Apache, BIND, Sendmail and many more. They’re built by loose collections of programmers, sure, but in no way are those tools “hacks”. Apache has a rigorous process for defining its projects, managing and running each project, and testing and releasing software. There’s a massive difference between “hacker built” and “hacked together” systems. The programmers who develop successful open-source tools are not your average computer programmers. Many successful open-source programmers are hired by large enterprise companies or successful startups in order to apply their skills to other software. Open-source is some of the most successful “enterprise” software you’ll find.

While there was disagreement about how best to move from “startup” to “enterprise”, some people conceded that there is indeed a time when a company needs to move from a code slinger mentality to something a little more formal. (Thanks to Adam Ierymenko for contributing the term “code slinger”.)

Business people who start companies are usually not the same people who take those companies from startup to enterprise. Many successful startup founders immediately hire business execs with years of experience running large businesses (Google, WestJet, RackSpace, Facebook). Why do we assume that technical people will be able to span the transition when business people are happy to admit that someone else has more appropriate skills? IT is not a commodity; developers often do not span these multiple roles. Dennis Martinez blames IT management for having the wrong developers, but based on the responses to my first post I suggest we also blame developers for failing to admit they cannot or should not span styles of development. (By the way Dennis, you should re-read my post before grabbing sentences out of context and claiming every line of text is advice.)

The terms “enterprise,” “enterprise quality” and “enterprise developer” drew a lot of flak. Many people assumed that since I had used the E word I was talking about massive engineering projects, slow application servers, or processes involving lots of overhead. I didn’t mean that, and just like you I’ve been burned by heavy shrinkwrapped stuff labelled “Enterprise”. What I mean when I say “enterprise” is that a thing is suitable for use and extension over the long term. Firstly, that means it’s well tested. It means the architecture is coherent and consistent. It means the software is extensible, and maintainable by people who (often) did not write it in the first place. It means the system is performant, scalable, and easily deployed.

There will always be tradeoffs when writing software. There’s no one “right way” to create an application. A startup, under significant pressure to deliver, will usually make tradeoffs that produce worse code, and that’s the right decision at the time. But please, startup IT people, make sure you and your team are aware that you are making compromises. Discuss the problems you might face in future and have a plan in place for fixing the problems when you become the next Facebook.

No Comments »

mike on March 9th 2008 in Agile, Web 2.0

How to Post Comments

My last post seems to have generated some commentary, as was intended. I moderate all comments, and if you haven’t posted before it might take me a while to approve your comment. Once you’ve had one comment approved, any future comments should appear immediately. I’m doing this so I can filter out Viagra spam; it’s really in everyone’s best interests.

Internet script kiddy “you fail, loser” comments do not further the discussion and I will delete them. I’ve let everything through so far, since I didn’t set any ground rules, but I’ll be less lenient in future. Please limit yourself to mature responses that contribute something other readers might find interesting. Anything else will be deleted—you’re wasting your time posting that kind of thing here.

Please use your real name when you post (not that I can actually check who anyone really is). Anonymous posts will be consigned to the bit-bucket in the sky. Feel free to enter a fake email address; I don’t think WordPress actually uses them for anything. Thanks!

2 Comments »

mike on January 30th 2008 in Blog

Startups: Fire Your Dev Teams

Facebook should have fired their development team the minute they became successful. If you’re a startup and you just made it big you should take a long hard look at your software team and make some changes.

I’ve worked for two startups and consulted at several more. I’m not just talking about software startups; this applies to many industries—transportation, web 2.0, shopping, digital media, government—where the guys who built v1 of your web site, order tracking software, customer service tool, batch processing system, whatever, are unequivocally not the people you should rely on to take you from Startup to Enterprise.

When you’re a startup, time-to-market and user base are absolutely the most important success factors. For an aviation startup, flying your planes, selling your tickets and generating a loyal customer base are the most important things. As a computer manufacturer, you hire ex-military types to run your logistics and can custom build a PC and get it to a customer’s doorstep in four days. In general, an IT/programming competency is not required for a startup to be successful. In fact, “enterprise” engineering practices can actually slow you down. As someone who’s written software at startups I know the pressures to get the next demo ready, to release the next feature for the web site. At that stage in the life of a startup we don’t even know if the company will survive another month. Heavyweight testing, deployment and release processes just slow you down. I’d argue that most development teams actually do the right thing at this point. Get the software done, fast, and make it mostly work.

The problem I’ve found when working at startups-turned-enterprises is that the guys who built v1 of that web site are now running the IT department. The operations guy who originally managed the company’s one web site is now in charge of your entire data centre. The v1 codebase has been hacked and patched into v2 and v3, and everything is in a mess. Let’s talk about the codebase. When trying to get stuff out the door as fast as possible developers will often find out that a design decision they made is no longer appropriate. Given enough time they could fix that problem, but they don’t because the demo is happening tomorrow. And this is the right decision! But letting that patched, hacked, copy-and-pasted codebase live any longer than it takes for the company to declare itself stable is a really bad idea.

Most software developers don’t know how to engineer large enterprise systems. Most large enterprise systems are engineered badly. I know, I’ve seen a lot of them. Even with a resume showing years of experience on appropriate-sounding systems, most developers won’t know a well engineered app from a hole in the head. Now give these guys a couple of years hacking some code together for your startup and promote them to Chief Architect. Hire an expensive consulting company to help you build your systems better, and allow Chief Architect dude to ignore their recommendations. What?

Seriously. The minute you’re successful, plan to rewrite your software from scratch. Plan to hire enterprise quality developers. Move anyone with decent business knowledge (like Chief Architect dude) into business-related roles. They’ll be much more valuable there and cause you fewer problems. Hire a new development manager with proven experience engineering enterprise software. Pay them market rate +30% (you’re successful, remember?). Keep your developers but explain to them that you’re no longer a startup and they’re going to have to raise their game. At the same time you’ll be putting that old codebase in maintenance mode and starting with a clean slate. This will require planning with your business—you won’t be giving them new features for a while until you can rewrite your software. Believe me, you’ll end up with much better engineered software that can respond to change faster than your existing stuff.

Don’t believe me? As a successful startup looking to become an enterprise, try the following exercises:

  • Add a column to an existing database table. Make it required. How many files did you need to alter to get that field to show up on screen? How many people were required to make the change? How did you migrate old data to the new format? How did you test the change, and are you sure you didn’t break anything else? How long did it take to deploy? What about rollback?
  • Run a cut-and-paste detector on your codebase. Simian works for Java, C#, C++, Ruby, HTML and other code and is a great way to scare the crap out of your dev team (assuming you’ve admitted that duplicate code is a bad thing).
  • Find out your automated test coverage percentage. If you’re not yet tracking this, now would be a good time. 80% coverage on a real world project is extremely good, most projects run much below this. If an automated test isn’t testing your code, how do you know it works?

30 Comments »

mike on January 29th 2008 in Web 2.0

Blog upgraded to WordPress

I’ve upgraded to WordPress after several happy years using Blosxom. Whilst Blosxom is a neat blogging engine and has served my needs well, it doesn’t have all the spanky new “community” features that I’d really like. A colleague told me recently that I’d gone “dark” and that’s true—hopefully easy drafting of blog posts and features like comments and trackbacks will help me reconnect.

The older archived mikemason.ca is still available, and anything indexed by a search engine will continue to work. If you see something awry, please let me know.

2 Comments »

mike on December 28th 2007 in Blog

Disabling Export Formats in Reporting Services

Reporting Services 2005 doesn’t allow you to selectively disable export formats for your reports. You can disable export formats for the whole server, but this is a bit useless in a shared environment. For my current client, we wanted to disable XML, Tiff and “web archive” export on all of our reports, but not affect any other reporting applications deployed on the server.

Our reports consist of custom parameter pages followed by a report view page onto which we’ve dropped a ReportViewer control. When the page loads, we programmatically set the ReportViewer’s server URL, report path, and parameters. We’re doing this to ensure users are requesting reports for data that they’re allowed to see, and not tricking us into revealing other users’ data.

Whilst there’s no official way to disable export formats on an individual report, a little bit of exploration using Reflector reveals that the ReportViewer contains a ServerReport that acts as a proxy between the control and the Reporting Services web service. The ServerReport allows you to list rendering extensions—these guys do the hard work of actually exporting in each format—but it doesn’t allow you to set the visibility of an extension. Reflection to the rescue!

The following code sample turns off XML, Tiff, and “web archive” export for a particular ReportViewer or ServerReport:

using System.Reflection;
using Microsoft.Reporting.WebForms;
using Microsoft.SqlServer.ReportingServices2005.Execution;

namespace MyProject
{
    public class ServerReportDecorator
    {
        private readonly ServerReport serverReport;

        public ServerReportDecorator(ReportViewer reportViewer)
        {
            this.serverReport = reportViewer.ServerReport;
        }

        public ServerReportDecorator(ServerReport serverReport)
        {
            this.serverReport = serverReport;
        }

        public void DisableUnwantedExportFormats()
        {
            // "IMAGE" is the TIFF renderer and "MHTML" the "web archive" format.
            foreach (RenderingExtension extension in serverReport.ListRenderingExtensions())
            {
                if (extension.Name == "XML" || extension.Name == "IMAGE"
                                            || extension.Name == "MHTML")
                    ReflectivelySetVisibilityFalse(extension);
            }
        }

        private void ReflectivelySetVisibilityFalse(RenderingExtension extension)
        {
            // RenderingExtension offers no public way to change visibility, so
            // we flip the private server extension's Visible flag via reflection.
            FieldInfo info = extension.GetType().GetField("m_serverExtension",
                                                          BindingFlags.NonPublic | BindingFlags.Instance);
            if (info != null)
            {
                Extension rsExtension = info.GetValue(extension) as Extension;
                if (rsExtension != null)
                {
                    rsExtension.Visible = false;
                }
            }
        }
    }
}
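Using the decorator from the report view page is then a one-liner once the ReportViewer has been configured (a sketch; the hypothetical reportViewer field is the control dropped onto the page):

protected void Page_Load(object sender, EventArgs e)
{
    // Set the server URL, report path and parameters first, then trim the formats.
    new ServerReportDecorator(reportViewer).DisableUnwantedExportFormats();
}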

I really hope Microsoft includes this functionality in the next release of Reporting Services. It would be really quite simple to include it in the graphical report designer and avoid this nasty reflection hack entirely. At the very least, the sealed RenderingExtension class could have a mutable “Visible” property, rather than the read-only property it has now.

33 Comments »

mike on April 30th 2007 in .NET