Sunday, 9 February 2014

AForge, FFmpeg and H.264 Codec - Default Settings Problems

If you've read my previous blog post you'd be aware that I'm currently in the process of creating mp4 videos encoded with the H.264 codec, and to do that I'm using AForge.NET. Unfortunately, the functionality to encode in H.264 isn't readily available, and I went through the steps to enable it in that previous blog post.

However, nothing is ever as easy as it should be and as soon as I had enabled it, I got the following error message:
"broken ffmpeg default settings detected"

After a bit of research I found the cause of the problem and, unsurprisingly, it's exactly what it says on the tin: the default settings being sent to the codec are broken. In actual fact, the default settings set by the FFmpeg library (the library that AForge.NET wraps) are a load of rubbish. If we want to get this working then we're going to need to set some sensible defaults.

If you open up the Video.FFMPEG project from the AForge.NET solution (the one found here), open VideoFileWriter.cpp and find the add_video_stream method, you should see an if statement that looks like this:


if (codecContex->codec_id == libffmpeg::CODEC_ID_MPEG1VIDEO)
{
    codecContex->mb_decision = 2;
}


We can now add to this if statement and set up some default values which will work like so:


if (codecContex->codec_id == libffmpeg::CODEC_ID_MPEG1VIDEO)
{
    codecContex->mb_decision = 2;
}
else if(codecContex->codec_id == libffmpeg::CODEC_ID_H264)
{
    codecContex->bit_rate_tolerance = 0;
    codecContex->rc_max_rate = 0;
    codecContex->rc_buffer_size = 0;
    codecContex->gop_size = 40;
    codecContex->max_b_frames = 3;
    codecContex->b_frame_strategy = 1;
    codecContex->coder_type = 1;
    codecContex->me_cmp = 1;
    codecContex->me_range = 16;
    codecContex->qmin = 10;
    codecContex->qmax = 51;
    codecContex->scenechange_threshold = 40;
    codecContex->flags |= CODEC_FLAG_LOOP_FILTER;
    codecContex->me_subpel_quality = 5;
    codecContex->i_quant_factor = 0.71;
    codecContex->qcompress = 0.6;
    codecContex->max_qdiff = 4;
    codecContex->directpred = 1;
    codecContex->flags2 |= CODEC_FLAG2_FASTPSKIP;
}


If you now compile that and use the resulting DLL in your project, you'll see the error has gone!

But... as always, it's not that simple! I got to this stage and, when I was just using a simple bitmap image to create a very simple (and very short) video, I'd get the following warning for every frame that I sent to be encoded:
"non-strictly-monotonic PTS"

However, it didn't seem to have any effect; my video file was still created and played, so I thought it wouldn't really matter. I was wrong.

When I put the DLL into my final project, which involves creating much larger movies, the program would just randomly crash. I say randomly because there was no real consistency to it: at different times during the writing process the WriteVideoFrame method would throw an exception, and that'd be the end of that.

On that basis, I thought it best to resolve this "PTS" warning and see if that solves the problem. But what on earth is a "non-strictly-monotonic PTS"? That's a good question, which I hope to answer in my next blog post after I've fully understood it myself!
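
In the meantime, the name itself gives a clue: a presentation timestamp (PTS) tells the player when each frame should be shown, and "strictly monotonic" just means each one must be greater than the last. Here's a toy sketch of a generator that satisfies that property by construction (my illustration, not the AForge code):

```cpp
#include <cassert>
#include <cstdint>

// "Strictly monotonic" just means every value is strictly greater than
// the one before it. A per-stream frame counter satisfies that by
// construction; real code would then scale it by the stream's time base.
struct PtsGenerator {
    int64_t next = 0;
    int64_t take() { return next++; } // 0, 1, 2, ... never repeats or rewinds
};
```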

Monday, 3 February 2014

HTML5 - Video and Encoding

I recently thought I'd dive into the world of showing video online and, being an up-to-date web developer, I don't want to be using any of that Flash stuff... I want to use the latest and greatest HTML5 video tag. After all, it's meant to be easy, right?

Wrong. Well, kind of.

If you have a video that is in the right format and encoded with the correct codec (take a look at w3schools for a list of them), then it is actually very simple: you can use the HTML5 video tag like so:


<video width="320" height="240" controls>
  <source src="movie.mp4" type="video/mp4">
  <source src="movie.ogg" type="video/ogg">
Your browser does not support the video tag.
</video>


The multiple sources allow you to define different formats of the same video. The browser will go down the list until it finds a format it can play. If it finds a playable format then it'll do just that.
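
That selection is essentially a first-match search over the list. A rough JavaScript model of the logic (`supported` here stands in for the browser's real canPlayType() checks):

```javascript
// Models the <source> fallback: walk the list in document order and
// return the first src whose MIME type the browser claims to support.
// `supported` stands in for the browser's real canPlayType() checks.
function pickSource(sources, supported) {
  for (const s of sources) {
    if (supported.includes(s.type)) {
      return s.src;
    }
  }
  return null; // nothing playable: the fallback text is shown instead
}
```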

However, what if you don't have a video in the correct format? What if you're trying to generate your own content, on the fly using a simple web cam on your laptop? Surely saving a video in the format you want is pretty straight forward?

Wrong.

Let me take you through the dark and nasty world of videos in managed code but first, let me give you some idea of what I'm trying to achieve.
I've just been given a Raspberry Pi with a camera module (a great Christmas present, by the way), so I thought I'd go about setting up a little home CCTV system. To go a step further, I want the system to be able to detect movement and, at that point, start uploading a live feed to a website, where I can then log on and view this live feed. I've also got a couple of laptops around the house equipped with web cameras, so my plan is to use them as extra cameras for the system when they're turned on.

That's the simple brief. I say simple; when you scratch beneath the surface, it gets complicated. The laptops are on various versions of Windows (Windows 7 and Windows Vista) with various versions of the .NET Framework installed. The Pi runs Raspbian, which is a port of Debian wheezy, which is of course a version of Linux. So we've got different OS versions with different architectures. Because of these complexities, I want to build this little system with managed code using the .NET Framework. There are quite a few challenges to overcome here and I don't want the fundamentals of a language I don't really know getting in the way, so I'm going to play it safe and stick with what I know.

Now at this point, I should say this is a work in progress, this project isn't completed by a long shot but I thought I'd blog about the problems I encounter as and when I encounter them.

So, for the time being at least, I'm going to ignore the Raspberry Pi camera module; I'll come back to that later. I haven't done the necessary research, but I suspect Mono (the cross-platform, open source .NET development framework) won't support the libraries I need to capture video feeds. I have a cunning plan for that... but that's for a separate blog post. For now, I just want to be able to capture a video feed from one of my laptops.

So, where to start?

I said this system should detect movement. To do that I need to compare a frame from one moment in time with a frame from another; if there's a difference, something has moved. Fortunately, there are some great blog posts around movement detection algorithms, and I implemented the one shown here: http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms

As you go through the above post you'll notice it has the option of writing to file. Great!
You'll then notice it writes it as an AVI file. Bad!

The sample writes its AVI files using the Windows Media Video 9 VCM codec. The word "Windows" in there should give you a pretty good indication that browser vendors like Google aren't going to support it, and you'd be right. It's not a supported codec for HTML5 video, and browsers like Chrome and Safari won't play it.

So how do we go about saving this thing in a format that is supported by most browsers? In particular, how do we save it as an mp4 encoded with H.264?

Well, the motion detection algorithm uses a framework called the AForge.NET Framework. This is a very powerful framework and as their website states, it's a "C# framework designed for developers and researchers in the fields of Computer Vision and Artificial Intelligence - image processing, neural networks, genetic algorithms, machine learning, robotics, etc.". I'm particularly interested in the "image processing" part of that.

As it turns out, AForge has a library called AForge.Video.FFMPEG, a managed code wrapper around the FFMPEG library. It has a class called "VideoFileWriter", so it seems like we're on to something here. That class has an Open method with the following signature:


public void Open(string fileName, int width, int height, int frameRate, VideoCodec codec);


That last parameter allows you to define a VideoCodec to encode with. Great! Now we're getting somewhere. Surely all we need to do is set that to H264 and we're there! VideoCodec is an enum, so let's check out its definition.


public enum VideoCodec {
    Default = -1,
    MPEG4 = 0,
    WMV1 = 1,
    WMV2 = 2,
    MSMPEG4v2 = 3,
    MSMPEG4v3 = 4,
    H263P = 5,
    FLV1 = 6,
    MPEG2 = 7,
    Raw = 8
}


What?! No H264? To make matters worse, none of those codecs are supported by the major browser vendors. You've got to be kidding, right? I'm so close!
Surely the FFMPEG library has an encoder for H.264? It's meant to be the "future of the web" after all...

Let's check the FFMPEG documentation. After a bit of searching you'll find that yes, it does. Why on god's green earth can we not use it then?! Unfortunately, that's not a question I can answer. However, with AForge being open source, we have access to the source code, and with us being software developers, we can solve such problems! After all, we know the AForge.Video.FFMPEG library is just a wrapper around FFMPEG. Come on, we can do this!

If you open up the AForge.Video.FFMPEG solution after downloading the AForge source code, the first thing that will hit you is that this isn't C# we're looking at... this is Visual C++. Now, I haven't touched C++ since university but not to worry, we're only making a few modifications and I'm sure it'll all come flooding back once we get stuck into it.

Now where on earth do we start? We've got a library written in an unfamiliar language which is wrapped around another library that we have absolutely no knowledge of. I could download the source code for FFMPEG but let's cross that bridge if and only if I have to.

First off, we know we need an H264 option under the VideoCodecs enum, so let's add that. Open up VideoCodec.h and you'll see the enum definition. Add H264 to the bottom so it looks something like this:


public enum class VideoCodec {
    Default = -1,
    MPEG4 = 0,
    WMV1 = 1,
    WMV2 = 2,
    MSMPEG4v2 = 3,
    MSMPEG4v3 = 4,
    H263P = 5,
    FLV1 = 6,
    MPEG2 = 7,
    Raw = 8,
    H264 = 9
};


Unsurprisingly, we can't just add an extra option and expect it to work. At some point that enum will be used to actually do something. The first thing it's used for is selecting the actual codec and pixel format for encoding your video. It does that by looking up the codec and the format in two arrays, using the enum value as the index into each array.
These arrays are defined in VideoCodec.cpp. Open that up and you'll see the definitions of the video_codecs and pixel_formats arrays. We just need to add our options here, like so:


int video_codecs[] = 
{
    libffmpeg::CODEC_ID_MPEG4,
    libffmpeg::CODEC_ID_WMV1,
    libffmpeg::CODEC_ID_WMV2,
    libffmpeg::CODEC_ID_MSMPEG4V2,
    libffmpeg::CODEC_ID_MSMPEG4V3,
    libffmpeg::CODEC_ID_H263P,
    libffmpeg::CODEC_ID_FLV1,
    libffmpeg::CODEC_ID_MPEG2VIDEO,
    libffmpeg::CODEC_ID_RAWVIDEO,
    libffmpeg::CODEC_ID_H264
};

int pixel_formats[] =
{
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_BGR24,
    libffmpeg::PIX_FMT_YUV420P
};
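
Why must both arrays grow together? Because the enum's integer value is used directly as the index into each of them. Here's a compilable sketch of that contract, with stand-in strings in place of the libffmpeg constants (the Default = -1 case is handled separately and omitted here):

```cpp
#include <cassert>
#include <cstring>

// The wrapper indexes two parallel arrays with the VideoCodec value, so
// the enum and both arrays must keep the same length and order. The
// strings below are stand-ins for the libffmpeg::CODEC_ID_* constants.
enum VideoCodec { MPEG4, WMV1, WMV2, MSMPEG4v2, MSMPEG4v3,
                  H263P, FLV1, MPEG2, Raw, H264 };

static const char* video_codecs[] = {
    "mpeg4", "wmv1", "wmv2", "msmpeg4v2", "msmpeg4v3",
    "h263p", "flv1", "mpeg2video", "rawvideo", "h264"
};

const char* codec_for(VideoCodec codec) {
    return video_codecs[static_cast<int>(codec)];
}
```

Forget the tenth entry in either array and an H264 lookup walks off the end, which is why the enum and the arrays have to be changed in lockstep.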


Now we're getting somewhere. When we compile this, add it to our project and open up a VideoFileWriter using VideoCodec.H264 as the final parameter, the system finds our codec and tries to encode the video with it. Yes! We're there.

Wrong.

What's the red error appearing in our console window?
"broken ffmpeg default settings detected"

Damn. So close. What's going wrong now? As it turns out, the default settings that FFMPEG sets for the H264 codec are a load of rubbish. Nothing is ever easy, eh?

More on that in the next blog post...

Tuesday, 31 December 2013

Book Review - Secrets of the JavaScript Ninja

I was watching a talk given by Angelina Fabbro on YouTube called "JavaScript Masterclass". It's all about trying to become an expert in a particular field. It's a great talk and I suggest you give it a watch. In it, she mentions a book, "Secrets of the JavaScript Ninja", and being what I'd consider an intermediate JavaScript developer myself, I thought it deserved a look.

Just to give you an idea of my level of JavaScript expertise: I've never been "taught" JavaScript, I've never attended any courses, and I didn't cover it at university. My general method has been to look up pieces of code online as and when I've needed them. After doing this for a while you get a general feel for the language and, after 10-odd years (on and off), I feel that I'm pretty knowledgeable in the area. However, due to this learning methodology, there are undoubtedly going to be gaps in my knowledge, so I bought this book in the hope of filling those gaps.

I'm very happy to report that it does fill in those gaps and more! It does so in a clear and concise way. Every new concept is backed up with code that's written in such a way that it's easy to follow, and virtually all of the code is broken down into small snippets, so if there is a difficult concept you can quickly stick the code into JSFiddle and have a play around.

The book covers the core JavaScript language, with topics ranging from the importance of functions (they're far more powerful than I ever imagined) to regular expressions, runtime code evaluation and with statements. These are all areas you can get by without knowing in detail, but once you do know them, you'll realize there are far simpler ways of doing the things you've been doing for the past 10 years. As frustrating as that is, it is enlightening.

It also covers some of the problematic areas of programming in the browser and the cross-browser problems that come hand in hand with it. From event handling to manipulating the DOM and CSS selectors, it covers them all and offers some inventive solutions to problems you've probably come across yourself.

The really good thing about the book is that, throughout, it introduces you to patterns of programming JavaScript that you probably don't already use and really wish you did. If you're anything like me, you'll find yourself thinking "I wish I had programmed x like this" or "I wish I'd known about this feature before I programmed x, y, z".

The book is co-authored by John Resig, the creator of the most popular JavaScript library, jQuery, and it often uses methodologies and solutions from within that library. That, to me, really gives this book substance: you're learning methods that are out there in the real world and that work so well they've led to the immense popularity of jQuery.

If you're an intermediate JavaScript developer like me then this book is a must. Some of it you'll already know but some of it you won't and having that extra knowledge at your disposal will give you the tools to write far more elegant code.

If you're new to JavaScript development then I'd suggest holding off on this book for now, as it assumes a certain amount of knowledge of the language. You could probably work your way through it and pick things up as you go along, but it would take you a significant amount of time (OK, you'd be learning a good portion of a language, so that's to be expected) and I think that process would take something away from the book. If you are in this category, I'd suggest going away, learning the basics and then picking this book up in a month or two's time.

Saturday, 23 November 2013

HTML5 - Prefetching

Once upon a time I blogged about the new features included in the HTML5 spec and I was slowly making my way through the big new additions.

That pretty much died out due to a lack of time, but I recently attended WebPerfDays and a feature mentioned there jumped out at me: prefetch. It has some fantastic implications for web performance.

What is Prefetch?


Prefetching is the ability to request a page in the background even though the user isn't on it. Sounds odd, right? Why would you want to do that? Well, it means the browser can download pretty much all the content of a particular page before the user has asked to see it, so when the user does click a link to that page, the content is shown immediately. There's no download time required; it's already been done.

To enable this, all you have to do is add a link tag like so.

<link rel="prefetch" href="http://clementscode.blogspot.com/somepage.html" />

And that's it. When the browser comes across that tag, it'll initiate a web request in the background to go and grab that page. It will not affect the load time of your original page.

The implications of this for web performance are obvious. Having the content of a page available before it's even requested by the user can only speed up your website, but it has to be used properly. Adding prefetching to every link on your website will cause unnecessary load on your web server, so this functionality needs to be thought about before being used. A good example of this is Google: if you search for a term, the first result is prefetched (feel free to check the source to prove that I'm not lying!). The other results are not. That's because Google knows that in the vast majority of cases the user clicks the first result, and prefetching allows Google to present that page as quickly as possible.
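
You can apply the same trick from script. Once your own logic has decided which page the user is most likely to visit next, a small helper like this (the function name is mine) can inject the hint at runtime:

```javascript
// Injects <link rel="prefetch" href="..."> into the document head.
// Passing `doc` in (rather than using the global document) keeps the
// helper easy to test.
function addPrefetch(doc, href) {
  const link = doc.createElement('link');
  link.rel = 'prefetch';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}
```

In a real page you'd call `addPrefetch(document, url)` only after deciding the prefetch is worth the extra server load.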

Are There Any Other Benefits?


That depends on your point of view... I primarily work on ASP.NET WebForms applications, most of which are not pre-compiled... not ideal, but we have our reasons. Prefetching enables us to request pages before they're hit which, if it's the first time a page has been hit, forces it to be compiled. So we're improving performance twofold: that initial compilation time has been taken away from the user, and we get the usual benefit of prefetching, so users are presented with a page almost instantly after clicking.

That Sounds Awesome But What Are The Downsides?


Well, you're requesting additional pages. As long as the user actually goes to those pages then that's great but, if they don't, you're placing additional load on your server that serves no purpose.

Also, if you're gathering website statistics, such as the number of page hits, then this will throw those stats off: technically, the user may never actually view a page even though it's been requested.

Finally, this obviously uses client resources. Whereas that may not be a problem on a nice big powerful desktop, it may be a problem on a small mobile device.

And that's about it. Another great addition to the HTML5 spec. As with most things in our world, you need to think about its use rather than just blindly prefetching everything without any thought of the disadvantages of doing so.

Enjoy!

Monday, 2 September 2013

Improving Build/Start Up Time

In fairness, there's nothing better to do while waiting for compilations

I've recently had the pleasure of upgrading to Visual Studio 2012 and, for the most part, I love it. However, I've noticed that debugging my web application has become a very time-consuming event. From hitting F5 to getting to my start-up page was taking over 3 minutes, which, needless to say, was driving me crazy and seriously affecting my productivity. The xkcd comic on your right may just point to why...

For the sake of my sanity, I set out to find out how to improve this and I've now got that time down to 20-30 seconds. Here's what I've done...

Web.config Changes

There's a couple of changes you can make in order to speed things up. I should say these should only be applied to your local development environment. They're not changes that should be applied to a production environment as they'll have a direct impact on your application's performance.

Firstly, the compilation element has two attributes you should make use of: the batch attribute and the optimizeCompilations attribute. Set batch to false and optimizeCompilations to true, so your tag should look something like this:

<compilation debug="true" batch="false" optimizeCompilations="true" />

Let me explain what this does. First off, the batch attribute. By default this is set to true, which means that when your web application starts up, ASP.NET will pre-compile all your un-compiled files (aspx and ascx files, for example) in one big batch. Depending on the size of your application, this can significantly increase your load time. Setting it to false means this no longer occurs; instead, each file is compiled as and when you access it. That means the first visit to a particular page will be a little slower, as it'll need to be compiled then, but the chances are that if you're debugging a particular problem you'll only visit a very small subset of the files that would have been compiled in the batch, so overall you'll save yourself a significant amount of time. For more info, check out the MSDN documentation.

The optimizeCompilations attribute is a bit of an odd one. I can't seem to find any documentation about it apart from this blog post, so I don't know if it's valid in .NET 4 (although I use it in my applications and it seems to do the job). Anyway, the reason it's helpful is that, by default, every time you change a file in the bin directory, the global.asax file or anything under the App_Code directory, the application is re-compiled at start-up (the same re-compile process we spoke about above). Setting this attribute to true means that re-compilation no longer occurs. This can cause problems, which is why it's not turned on by default (more info can be found in the blog post mentioned above), but in the majority of cases, if you're like me, you're usually changing method implementations rather than creating or changing method signatures. So turning it on means no more re-compilations, again saving time on start-up.

Fusion Log

For those of you that don't know, Fusion is the part of the .NET runtime that locates and binds assemblies, and its logging enables you to record DLL binds to disk (more info on this can be found here). This is particularly helpful when, at runtime, you're getting errors about DLL versions or particular DLLs not being found and you can't work out why. The log will tell you where ASP.NET is looking for those DLLs, what it's finding and whether or not it fails. This can be a very handy tool.
In .NET 3.5, I found it was sometimes a little unreliable: I'd turn on logging and the logs wouldn't be generated. It was a tad frustrating, it must be admitted. In .NET 4, I don't have this problem and logs seem to be generated as you'd expect. The catch is that creating these logs takes time, and if you happen to have ticked the "Log All Binds" option and then forgotten about it, you'll notice a significant performance decrease.
Long story short: only log binds if you absolutely need to, and make sure logging is turned off when you're not using it.

Application_Start

Application_Start is a method in the global.asax file that fires when the application domain starts up. This is great, but if you're running code in there then it'll obviously affect load-up time. So, if you don't need to run that code, don't. I know that in our application we have features that are initialised in that method; if I'm not testing one of those features then I don't need it enabled, which helps speed up application start-up.
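
As a sketch of what that gating can look like (the setting name "EnableScheduledJobs" and the StartScheduledJobs method are invented for illustration):

```csharp
// Global.asax.cs sketch: only pay for optional start-up work when a
// local app setting asks for it. "EnableScheduledJobs" and
// StartScheduledJobs() are made-up names for illustration.
protected void Application_Start(object sender, EventArgs e)
{
    var enabled = System.Configuration.ConfigurationManager
                        .AppSettings["EnableScheduledJobs"];

    if (string.Equals(enabled, "true", StringComparison.OrdinalIgnoreCase))
    {
        StartScheduledJobs(); // the expensive bit, skipped while debugging elsewhere
    }
}
```

Flipping a setting in your local web.config then decides whether that start-up cost is paid at all.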

Solution Changes

Finally, you can make a few changes to your solution to ensure only what you need to be built, is built. 

Firstly, do a full re-build of your application so that all the necessary DLLs are generated. Then, unload any projects you're not working on. If you change a class or method in one project, Visual Studio will compile all projects that have references to that project; unloading those projects ensures they're not rebuilt, saving you some precious seconds on your build time.

Secondly, if you right-click on the solution at the top of Solution Explorer within Visual Studio, you can select "Configuration Manager". In here you can un-tick the "Build" option on any projects you're not working on. As with the option above, this ensures unnecessary projects aren't built. What it does mean is that when you change code in a project you've marked not to be built, you have to build that project explicitly, otherwise your changes won't be picked up.

A warning: these solution changes do have downsides. If you change a method signature within a project, and projects that reference it are not rebuilt, then any errors caused by that change will not be picked up at compile time. With that in mind, it's well worth compiling everything before committing/pushing/releasing any code, just to ensure everything compiles as you'd expect.

And that's all I've got. If you've got any other tips or tricks for improving application start-up time then I'd love to hear them. Very few things frustrate me as much as waiting for a web application's start-up screen to actually load.

Happy coding!

Wednesday, 31 July 2013

JavaScript - Memory Leak Diagnostics

Memory leaks in JavaScript seem to be becoming an ever-increasing problem. This is no surprise with JavaScript being used more and more, but have you ever tried to solve a memory leak in JavaScript? It's no simple task; the tools to help determine which objects are leaking simply haven't existed. You're essentially trying to find a needle in a haystack while blindfolded. Not cool!

Until now.

There are three tools I would like to talk about. Sieve, Google Chrome's Heap Snapshot and the new boy on the block, Internet Explorer 11 Developer Tools.

In most cases, as a developer, you just need a nudge in the right direction. Once you have an idea of where the problem may lie, we're pretty intelligent, we can usually work it out. Sieve gives you that nudge.
It's a memory leak detector for Internet Explorer. While running, it'll show you all the DOM elements that are currently in memory. It then goes one step further and shows you the DOM nodes that are currently leaking, IDs and all. You can then use that information to find out why each node is leaking. Usually it's because some piece of JavaScript somewhere still references an element that has since been removed from the DOM.
sIEve - Memory Leak Detector

I must admit, I used this on a complex web application which had popup windows with iframes inside iframes, and when it didn't crash it reported some nodes as leaking when they weren't. But it did at least give me that nudge to look at a particular screen.
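
For reference, the kind of leak sIEve is hunting is easy to reproduce deliberately. A minimal sketch (my code, nothing to do with sIEve's internals):

```javascript
// The classic leak sIEve flags: a node is removed from the DOM but a
// script still holds a reference to it, so it can never be collected.
const cache = [];

function removeButLeak(doc, id) {
  const element = doc.getElementById(id);
  cache.push(element);                     // the forgotten reference
  element.parentNode.removeChild(element); // detached, but not collectable
  return element;
}
```

As long as `cache` is reachable, every node pushed into it stays in memory, which is exactly the pattern the tool reports.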

Chrome Heap Snapshot
Now we're talking! This is, to my knowledge, the first proper way of determining exactly which objects are leaking. It allows you to take a snapshot of the objects in memory at a given point in time, and then compare these snapshots.

Chrome Heap Snapshot
This is rather handy. It means you can see which objects were created in the first snapshot and still exist in the second snapshot, i.e. the ones that are causing your problem!

The good thing about these snapshots is that they also show you the "retaining tree". This is essentially the path from the root objects to the object in question, which means you can trace the path and work out why your object isn't being garbage collected.

The tool has a few other ways of helping you find your leak if comparing snapshots isn't quite cutting it. There is a "containment" view and a "dominator" view. I haven't had much use for the containment view (see here for more details) but the dominator view essentially lists the objects with the biggest memory consumption which can be helpful if you've got leaking global objects.

And a late entry... Internet Explorer 11 Heap Snapshot
A developer preview has just been released on Windows 7 and so far, so good. It's much the same as Chrome's version, if a little easier to read.

Internet Explorer 11 Developer Tools
There are two differences. Firstly, on a positive note, it has search functionality, which Chrome doesn't have; this allows you to find objects whose IDs you know. On the negative side, it seems you can only compare sequential snapshots. You could not, for example, compare your first and third snapshots, which means you have to really think about when to take a snapshot.

I haven't had much time to really play around with this, and it is only a developer preview, but so far it looks like it could be a very useful tool. In actual fact, the whole new developer tools suite has real potential, but that's another blog post for another day.

For more info on the memory tab within the developer tools check out the MSDN documentation.

Conclusion...
As always, use the best tool for the job. For simple leaks, sieve is very good at finding the problem. For more complex problems, the heap snapshots are the way to go.

The work Google and Microsoft have done in this area recently shows how big JavaScript has become, and these tools are a great addition to any web developer's tool kit.

If you do ever have to look for a memory leak, my thoughts are with you.

Good luck!

Sunday, 26 May 2013

Web App Upgrade From .NET 3.5 to .NET 4.5

We've recently gone about upgrading our web application from .NET 3.5 to .NET 4.5 and as you could probably guess, it didn't quite go as smoothly as one would hope.

As we go through this process I'm going to blog about the difficulties and what we did to overcome them.

So, here we go...

System.Web.UI.HtmlControls.HtmlIframe


This is a whole new type in .NET 4.5 and oddly, it can cause a few problems.

Take this line of code for example:

<iframe src="about:blank" id="myFrame" runat="server" />

If you wanted to refer to this control in C# code, in 3.5 you'd write something like this (preferably in a designer.cs file):

HtmlGenericControl myFrame;

In .NET 4.5, however, an iframe is no longer an HtmlGenericControl; it's an HtmlIframe, which does not inherit from HtmlGenericControl. This means you need to change the above line of code to:

HtmlIframe myFrame;

Creating this HtmlIframe class makes sense and means that iframes have their own type, much like the HtmlTable class, but it does seem odd that it doesn't inherit from HtmlGenericControl. Unfortunately, this design decision has knock-on effects for upgrades: any iframe that has been declared as an HtmlGenericControl now needs to be changed to an HtmlIframe. To make matters worse, if you've declared these controls manually and they're not wired up via an auto-generated designer file, the problem won't be picked up at compile time; you'll need to actually run the application and wait for it to fall over to find it.

The joys of upgrades eh?