Saturday, 22 November 2014

Microsoft Future Decoded 2014

On a pretty miserable Wednesday, I set off for the ExCeL centre in London to attend the Microsoft Future Decoded conference. My aim for the day was to learn what Microsoft is planning for the future of the .NET Framework and, hopefully, to generate a few ideas that I can use in future projects.

The event consisted of three separate sections: in the morning there were the keynote speeches, in the afternoon there were various technical tracks on a range of subjects, and throughout the day there was the expo.

The keynote speeches were, as a whole, very interesting. They covered quite a wide range of topics, from open data (that's data that is publicly available in a machine-readable format) to the technology advancements in the Formula One world. The speakers were all very good public speakers, and the final speaker was Professor Brian Cox; he's on TV a lot, which should give you an idea of the calibre of speaker at the event. The problem with all of this is that very little of the content actually related to the development world, and I was left wondering what exactly I, as a developer, was meant to take away from the talks. Don't get me wrong, it was very interesting learning about the theories behind how the universe began, but I didn't see how that was going to help me in the near future.

As for the technical tracks, there were a variety of topics to go and listen to, ranging from "Big Data" to the "Future of the .NET Framework". Each track was made up of three separate presentations, each one separated by a quick coffee break. Now, I have an interest in various areas and it would have been good if I could have sat in on one talk from one track, then another talk in another track... mixing and matching topics throughout the day. The problem was that the talks in each track lasted different amounts of time, so there was no way to do that without turning up to a talk late.

With that in mind, I ended up going to the "Future of the .NET Framework" track. It seemed a reasonable choice considering it will affect pretty much every project I work on in the foreseeable future, and it seemed the vast majority of attendees thought the same: the session was massively oversubscribed, with a good portion of people having to sit on the floor or stand at the back. To make matters worse, I didn't get much out of it, if I'm honest. I follow the inner workings of the .NET Framework in quite a bit of detail, making sure I'm at the cutting edge of the technology and watching for anything on the horizon I can take advantage of. Sadly, the vast majority of what was shown I had already read about in various other announcements. The one big announcement of the day was that the .NET Framework was being made completely open source. Don't get me wrong, this is a massive step and one in the right direction, but I didn't gain any value from being there in person for it; I'd have taken just as much from reading about it the following day. None of the talks inspired me or generated any new ideas, which I found pretty disappointing.

Finally, there was the exhibition. I was hoping this would be the saving grace of the day but alas, it wasn't to be. The vast majority of the exhibition was people trying to sell services, services that were completely irrelevant to me as a developer. The only real highlight of the exhibition... the Xbox stand fully loaded with FIFA. Who could resist a go on that?

Overall then, I left the conference quite underwhelmed. The future definitely didn't seem decoded to me. With that said, would I go next year? I certainly would. The keynotes, although they had little relevance to my day job, were interesting. As for the technical tracks, I think the key to getting the most out of them may be to pick a subject that I don't follow in such detail, big data perhaps. That way I suspect I'd leave the track knowing more than when I went in, which would make it feel like more of a success. And as for the expo... more time on FIFA would certainly be required!

Saturday, 26 July 2014

Oracle Data Provider 11.2.0.4 - Connection Leaks

Are you using the Oracle Data Provider supplied with Oracle Client v11.2.0.4? Noticed that after a period of use, the database hits its maximum number of processes and all of them have been generated by your application? You've only got a few users, how can this be? It worked fine on previous versions...

Essentially, you've got a connection leak. Connections to the database are being created but they're not being closed correctly, and over time these build up until the database starts rejecting connections with "ORA-12518: TNS:listener could not hand off client connection" or "ORA-00020: maximum number of processes (%s) exceeded".

How can we fix this?

This problem is almost always caused by not disposing of Oracle objects, so you need to ensure that you've closed and disposed of all of them. OracleConnection, OracleCommand, OracleDataReader, OracleParameter... they're all disposable, so make sure they're closed and disposed once you've finished using them. This ensures that the connection is put back into the connection pool accordingly (assuming you're using connection pooling; if you're not, it'll simply close the connection to the database).

But we didn't have this problem with 11.2.0.3 or any other previous version. What's changed?

In 11.2.0.4, the OracleConnection object must be disposed of when you're finished with it. In previous versions you would have got away with just calling the Close method. Not anymore. Dispose must be called. This isn't documented anywhere, so things will just start breaking in 11.2.0.4. Cool, huh?

The best way of ensuring this is to use the using statement (http://msdn.microsoft.com/en-GB/library/yh598w02.aspx), e.g.


using(OracleConnection conn = new OracleConnection()){
     // Do stuff with our connection.
}
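
The same goes for the rest of the Oracle objects. Here's a minimal sketch of the full pattern (the connection string, query and table name are invented for illustration):


using (OracleConnection conn = new OracleConnection(connectionString))
using (OracleCommand cmd = new OracleCommand("SELECT id, name FROM example_table", conn))
{
    conn.Open();

    using (OracleDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Process each row here.
        }
    }
} // Connection, command and reader are all disposed here, even if an exception is thrown.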


Unfortunately for me, the application I'm working on is rather complex and the above pattern can't be used. It's a web application and, breaking it down to a very basic level, a connection is created at the beginning of the page life cycle, used throughout it, and then closed at the end. The code that closed and disposed of the OracleConnection looked a lot like this:


if (conn != null && conn.State != ConnectionState.Closed)
{
    // Only tidy up if we actually opened the connection.
    conn.Close();
    conn.Dispose();
}


If we haven't opened the connection then we don't try to close it. There's no point, after all; it hasn't done anything. This is fine, apart from when the Fill method of the OracleDataAdapter is called.

The documentation of this method states:
"The connection object associated with the SELECT statement must be valid, but it does not need to be open. If the connection is closed before Fill is called, it is opened to retrieve data, then closed. If the connection is open before Fill is called, it remains open."

It opens and closes the connection if it's not already open. That's great, it saves us the hassle of having to open the connection ourselves. It also means that check on the connection state is going to cause us problems: the Fill method will have opened a database connection behind the scenes, meaning we still have to call the Dispose method in order to put that connection back into the connection pool.
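
For reference, the offending path boiled down to something like this (the query and table name are invented for illustration):


DataTable results = new DataTable();

using (OracleCommand cmd = new OracleCommand("SELECT * FROM example_table", conn))
using (OracleDataAdapter adapter = new OracleDataAdapter(cmd))
{
    // conn was never opened by us, so Fill opens it, retrieves
    // the data, then closes it again. Our state check above now
    // sees a closed connection and skips Dispose entirely.
    adapter.Fill(results);
}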

The solution? Move the Dispose call outside the if statement, and the problem is solved.
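
In our case the teardown ended up looking something like this (a sketch of the change, not the exact production code):


if (conn != null)
{
    // Only close if the connection is actually open.
    if (conn.State != ConnectionState.Closed)
    {
        conn.Close();
    }

    // Always dispose, even if we never opened the connection ourselves.
    // This is what returns a connection opened behind the scenes
    // (e.g. by OracleDataAdapter.Fill) to the connection pool in 11.2.0.4.
    conn.Dispose();
}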

This is one of those frustrating problems that would have been solved in pretty much no time at all if the change Oracle made had been documented. Instead, it took the best part of a week: identifying the issue (complicated by the big black box that is connection pooling), reproducing the problem, ensuring all objects were disposed of as they should have been, and finally finding the offending code path. It just shows how important documenting changes is, especially changes that have the potential to break your application!

Monday, 21 April 2014

Is 'DevOps' Killing the Developer?

This post is inspired by a blog post by Jeff Knupp entitled "How 'DevOps' is Killing the Developer". If you haven't read it, go and do so. It's a very good read and can be found here.

For those of you that don't want to spend the time reading it, as a quick summary: Knupp argues that the 'DevOps' movement is changing what the world considers a "developer". Before, a developer would be the person who pumps out lines of code. Nowadays, due to the start-up mentality, developers are expected not only to produce code but also to act as a system tester, a sysadmin, a DBA and a systems analyst, all rolled into one. Knupp argues that this is wrong, that it stops developers doing what they enjoy and what, ultimately, they're good at: producing code.

I think Knupp is wrong, to a degree, and this is why...


To take a phrase Knupp uses, a 'full stack' developer is not only wanted by companies, they're needed, and saying a developer should just code will affect the quality of the system they're building. I don't deny that most developers should be developers in the traditional sense but, depending on the size of your team, you need at least a few that have the skills and knowledge to be a 'full stack' developer or, in other words, a DevOps guy.

Why?

'Full stack' developers have two major plus points:

1.  They can predict problems that traditional developers cannot.
Because they understand the entire process, they can anticipate problems from external systems before they actually happen, which leads to a more stable system. For example, with a good understanding of the database they can look at a system's design, anticipate where bottlenecks will exist at the database level, and change the design accordingly.

2.  They can take ownership of any bug.
This is a huge benefit. They have the ability to find the cause of bugs that other teams can't: bugs that require knowledge of how everything works together.
Here's a typical scenario. A customer logs a problem with their live system. The customer support team refer it to the DBAs because the problem looks database related. The DBAs look at it, can't find any immediate problem with the database and forward it to the developers. The developers look at it and find it works fine in the customer's development environment. As the development environment and the live environment are, from a code standpoint, exactly the same, they forward it back to the DBAs. This process repeats. No single technical person can take ownership of the bug because neither believes their area is the cause. The bug takes a significant period of time to resolve, at the expense of the customer's happiness.
Enter the 'full stack' developer... the bug gets put on their desk. They take ownership of it; there's no need to bounce it from department to department. They investigate and find the problem to actually be some proxy server doing something it shouldn't. For the developers this is hard to find, as the tools they use to debug problems don't surface that kind of issue. The DBAs can't find it because the problem had nothing to do with the database anyway. The network guys could have found it, but why ask them? The problem wasn't originally attributed to a proxy server. The 'full stack' developer knows all of these systems well enough to find these odd, but sometimes critical, problems, and when your customer's happiness is directly related to how quickly that issue gets resolved, this knowledge can prove invaluable.

One point that Knupp makes which I do agree with is that these DevOps guys find themselves doing things other than coding. But is this a bad thing?

Generally speaking, you want your staff doing what they do best, and in this case 'full stack' developers are best at coding. You wouldn't hire a developer and put them on reception, for example. However, let's analyse the nature of these DevOps guys. They're inquisitive; they want to know more. So when faced with a problem they don't just pass it on, they take the opportunity to learn a little more and find the solution themselves. They do this because they find the field of I.T./Computer Science/Development/whatever you want to call it interesting and want to learn more. Over time they've investigated enough problems and found enough solutions to be considered a 'full stack' developer. Should people like this be confined to "just coding"? Would that give them the ultimate job satisfaction? Is that enough to keep them at your company? After all, that's what you want to do... any developer that wants to carry on learning is one you'll want to keep.

In conclusion then, is DevOps killing the developer? I'd argue no. Being a DevOps guy takes a certain attitude, and there are plenty of developers out there that are more than willing to just write code. And that's fine. That's better than fine, it's fantastic! But should we be shunning DevOps then? Again, I'd argue no. There's a place for them in any software house, an important place at that. The real problem here is how to get the best out of your DevOps guys: how to balance coding time against the other duties they end up performing. It's a fine line, but one that, if you get it right, brings some great rewards.

Sunday, 9 February 2014

AForge, FFmpeg and H.264 Codec - Default Settings Problems

If you've read my previous blog post you'd be aware that I'm currently in the process of creating mp4 videos encoded with the H.264 codec and, to do that, I'm using AForge.NET. Unfortunately, the functionality to encode in H.264 isn't readily available, and I went through the steps to enable it in that previous blog post.

However, nothing is ever as easy as it should be and as soon as I had enabled it, I got the following error message:
"broken ffmpeg default settings detected"

After a bit of research I found the cause of the problem and, unsurprisingly, it's exactly what it says on the tin: the default settings being sent to the codec are broken. In actual fact, the defaults set by the FFmpeg library (that's the library which AForge.NET wraps) for H.264 are a load of rubbish. If we want to get this working then we're going to need to set some sensible defaults ourselves.

If you open the Video.FFMPEG project from the AForge.NET solution (the one found here), open VideoFileWriter.cpp and find the add_video_stream method, you should see an if statement that looks like this:


if (codecContex->codec_id == libffmpeg::CODEC_ID_MPEG1VIDEO)
{
    codecContex->mb_decision = 2;
}


We can now extend this if statement and set up some default values that do work, like so:


if (codecContex->codec_id == libffmpeg::CODEC_ID_MPEG1VIDEO)
{
    codecContex->mb_decision = 2;
}
else if (codecContex->codec_id == libffmpeg::CODEC_ID_H264)
{
    // Rate control: no bitrate tolerance, cap or buffer constraint.
    codecContex->bit_rate_tolerance = 0;
    codecContex->rc_max_rate = 0;
    codecContex->rc_buffer_size = 0;

    // Keyframe interval and B-frame usage.
    codecContex->gop_size = 40;
    codecContex->max_b_frames = 3;
    codecContex->b_frame_strategy = 1;

    // Entropy coding (1 = CABAC).
    codecContex->coder_type = 1;

    // Motion estimation.
    codecContex->me_cmp = 1;
    codecContex->me_range = 16;
    codecContex->me_subpel_quality = 5;

    // Quantiser behaviour and scene-change detection.
    codecContex->qmin = 10;
    codecContex->qmax = 51;
    codecContex->qcompress = 0.6;
    codecContex->max_qdiff = 4;
    codecContex->i_quant_factor = 0.71;
    codecContex->scenechange_threshold = 40;

    // In-loop deblocking filter, direct prediction and fast P-skip.
    codecContex->flags |= CODEC_FLAG_LOOP_FILTER;
    codecContex->directpred = 1;
    codecContex->flags2 |= CODEC_FLAG2_FASTPSKIP;
}


If you now compile that and use the resulting DLL in your project, you'll see the error has gone!

But... as always, it's not that simple! I got to this stage and, when I was just using a simple bitmap image to create a very simple (and very short) video, I'd get the following warning for every frame that I sent to be encoded:
"non-strictly-monotonic PTS"

However, it didn't seem to have any effect; my video file was still created and played, so I thought it wouldn't really matter. I was wrong.

When I put the DLL into my final project, which involves creating much larger movies, the program would just randomly crash. I say randomly because there was no real consistency to it: at different times during the writing process the WriteVideoFrame method would throw an exception, and that'd be the end of that.

On that basis, I thought it'd be best to resolve this "PTS" warning and see if that solves the problem. But what on earth is a "non-strictly-monotonic PTS"? That's a good question, which I hope to answer in my next blog post once I've fully understood it myself!

Monday, 3 February 2014

HTML5 - Video and Encoding

I've recently thought I'd dive into the world of showing video online and, being an up-to-date web developer, I don't want to be using any of that Flash stuff... I want to use the latest and greatest HTML5 video tag. After all, it's meant to be easy, right?

Wrong. Well, kind of.

If you have a video that is in the right format and encoded with the correct codec (take a look at w3schools for a list of them), then it is actually very simple; you can use the HTML5 video tag like so:


<video width="320" height="240" controls>
  <source src="movie.mp4" type="video/mp4">
  <source src="movie.ogg" type="video/ogg">
  Your browser does not support the video tag.
</video>


The multiple sources allow you to define different formats of the same video. The browser will go down the list until it finds a format it can play. If it finds a playable format then it'll do just that.

However, what if you don't have a video in the correct format? What if you're trying to generate your own content on the fly, using a simple web cam on your laptop? Surely saving a video in the format you want is pretty straightforward?

Wrong.

Let me take you through the dark and nasty world of videos in managed code but first, some idea of what I'm trying to achieve.
I've just been given a Raspberry Pi with a camera module (a great Christmas present, by the way) so I thought I'd set up a little home CCTV system. To go a step further, I want the system to be able to detect movement and, at that point, start uploading a live feed to a website where I can log on and view it. I've also got a couple of laptops around the house equipped with web cameras, so my plan is to use them as extra cameras for the system when they're turned on.

That's the simple brief. I say simple; when you scratch beneath the surface, it gets complicated. The laptops are on various versions of Windows (Windows 7 and Windows Vista) with various versions of the .NET Framework installed. The Pi runs Raspbian, which is a port of Debian wheezy, which is of course a version of Linux. So we've got different OS versions on different architectures. Because of these complexities, I want to build this little system in managed code using the .NET Framework. There are quite a few challenges to overcome here and I don't want the fundamentals of a language I don't really know getting in the way, so I'm going to play it safe and stick with what I know.

Now, at this point I should say this is a work in progress; the project isn't completed by a long shot, but I thought I'd blog about the problems as and when I encounter them.

So, for the time being at least, I'm going to ignore the Raspberry Pi camera module; I'll come back to that later. I haven't done the necessary research, but I suspect Mono (the cross-platform, open source .NET development framework) won't support the libraries I need to capture video feeds. I have a cunning plan for that... but that's for a separate blog post. For now I just want to be able to capture a video feed from one of my laptops.

So, where to start?

I said this system should detect movement. To do that I need to compare a frame from one moment in time to a frame from another; if there's a difference, then something has moved. Fortunately, there are some great blog posts around movement detection algorithms and I implemented one that's shown here: http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
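
Boiled right down, the idea is something like this naive sketch (the thresholds are invented for illustration; the algorithms in the article above are considerably more refined):


// Compares two grayscale frames (raw byte arrays of equal length) and
// reports motion if enough pixels have changed by more than a threshold.
private static bool MotionDetected(byte[] previousFrame, byte[] currentFrame)
{
    const int pixelThreshold = 25;     // Minimum per-pixel intensity change.
    const int changedPixelLimit = 500; // How many changed pixels count as motion.

    int changedPixels = 0;
    for (int i = 0; i < currentFrame.Length; i++)
    {
        if (Math.Abs(currentFrame[i] - previousFrame[i]) > pixelThreshold)
        {
            changedPixels++;
        }
    }

    return changedPixels > changedPixelLimit;
}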

As you go through the above post you'll notice it has the option of writing to file. Great!
You'll then notice it writes it as an AVI file. Bad!

That AVI file uses the Windows Media Video 9 VCM codec. The word "Windows" in there should give you a pretty good indication that browser vendors like Google aren't going to support it, and you'd be right. It's not a supported codec for HTML5 video, and browsers like Chrome and Safari won't play it.

So how do we go about saving this thing in a format that is supported by most browsers? In particular, how do we save it as an mp4 encoded with H.264?

Well, the motion detection algorithm uses a framework called the AForge.NET Framework. This is a very powerful framework and as their website states, it's a "C# framework designed for developers and researchers in the fields of Computer Vision and Artificial Intelligence - image processing, neural networks, genetic algorithms, machine learning, robotics, etc.". I'm particularly interested in the "image processing" part of that.

As it turns out, AForge has a library called AForge.Video.FFMPEG, a managed code wrapper around the FFmpeg library. It has a class called "VideoFileWriter", and it seems like we're on to something here: it has an Open method with the following signature:


public void Open(string fileName, int width, int height, int frameRate, VideoCodec codec);


That last parameter allows you to define a VideoCodec to encode with. Great! Now we're getting somewhere. Surely all we need to do is set that to H264 and we're there! VideoCodec is an enum, so let's check out its definition.


public enum VideoCodec {
    Default = -1,
    MPEG4 = 0,
    WMV1 = 1,
    WMV2 = 2,
    MSMPEG4v2 = 3,
    MSMPEG4v3 = 4,
    H263P = 5,
    FLV1 = 6,
    MPEG2 = 7,
    Raw = 8
}


What?! No H264? To make matters worse, none of those codecs are supported by the major browser vendors. You've got to be kidding, right? I'm so close!
Surely the FFmpeg library has an encoder for H.264? It's meant to be the "future of the web" after all...

Let's check the FFmpeg documentation. After a bit of searching you'll find that yes, it does. Why on god's green earth can we not use it then?! Unfortunately, that's not a question I can answer. However, with AForge being open source, we have access to the source code and, with us being software developers, we can solve such problems! After all, we know the AForge.Video.FFMPEG library is just a wrapper around FFmpeg. Come on, we can do this!

If you open up the AForge.Video.FFMPEG solution after downloading the AForge source code, the first thing that will hit you is that this isn't C# we're looking at... this is Visual C++. Now, I haven't touched C++ since university but not to worry, we're only making a few modifications and I'm sure it'll all come flooding back once we get stuck in.

Now where on earth do we start? We've got a library written in an unfamiliar language which is wrapped around another library that we have absolutely no knowledge of. I could download the source code for FFMPEG but let's cross that bridge if and only if I have to.

First off, we know we need an H264 option in the VideoCodec enum, so let's add it. Open up VideoCodec.h and you'll see the enum definition. Add H264 to the bottom so it looks something like this:


public enum class VideoCodec {
    Default = -1,
    MPEG4 = 0,
    WMV1 = 1,
    WMV2 = 2,
    MSMPEG4v2 = 3,
    MSMPEG4v3 = 4,
    H263P = 5,
    FLV1 = 6,
    MPEG2 = 7,
    Raw = 8,
    H264 = 9
};


Unsurprisingly, we can't just add an extra option and expect it to work. At some point that enum is used to actually do something. The first thing it's used for is selecting the actual codec and pixel format for encoding your video. It does that by looking up the codec and the format in two arrays, using the enum value as the position of the item in the array.
These arrays are stored in VideoCodec.cpp. Open that up and you'll see the definitions of the video_codecs and pixel_formats arrays. We just need to add our options in here, like so:


int video_codecs[] =
{
    libffmpeg::CODEC_ID_MPEG4,
    libffmpeg::CODEC_ID_WMV1,
    libffmpeg::CODEC_ID_WMV2,
    libffmpeg::CODEC_ID_MSMPEG4V2,
    libffmpeg::CODEC_ID_MSMPEG4V3,
    libffmpeg::CODEC_ID_H263P,
    libffmpeg::CODEC_ID_FLV1,
    libffmpeg::CODEC_ID_MPEG2VIDEO,
    libffmpeg::CODEC_ID_RAWVIDEO,
    libffmpeg::CODEC_ID_H264 // Our new entry.
};

int pixel_formats[] =
{
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_YUV420P,
    libffmpeg::PIX_FMT_BGR24,
    libffmpeg::PIX_FMT_YUV420P // Our new entry; H.264 wants YUV420P.
};


Now we're getting somewhere. When we compile this, add the resulting DLL to our project and open up a VideoFileWriter with VideoCodec.H264 as the final parameter, the system finds our codec and tries to encode the video with it. Yes! We're there.
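
The call in question looks something like this (the file name, dimensions and frame rate are invented for the example, and frames is assumed to be a collection of System.Drawing.Bitmap objects):


using (VideoFileWriter writer = new VideoFileWriter())
{
    // Open a 640x480, 25fps mp4 using our shiny new H.264 option.
    writer.Open("test.mp4", 640, 480, 25, VideoCodec.H264);

    foreach (Bitmap frame in frames)
    {
        writer.WriteVideoFrame(frame);
    }

    writer.Close();
}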

Wrong.

What's the red error appearing in our console window?
"broken ffmpeg default settings detected"

Damn. So close. What's going wrong now? As it turns out, the default settings that FFmpeg sets for the H264 codec are a load of rubbish. Nothing is ever easy, eh?

More on that in the next blog post...

Tuesday, 31 December 2013

Book Review - Secrets of the JavaScript Ninja

I was watching a talk given by Angelina Fabbro on YouTube named "JavaScript masterclass". It's all about trying to become an expert in a particular field. It's a great talk and I suggest you give it a watch. In that talk she mentions a book, "Secrets of the JavaScript Ninja", and being what I'd consider an intermediate JavaScript developer myself, I thought it deserved a look.

Just to give you an idea of the level of my JavaScript expertise: I've never been "taught" JavaScript, I've never attended any courses, and I didn't cover it at university. My general method has been to look up pieces of code online as and when I've needed them. After doing this for a while you get a general feel for the language and, after 10-odd years (on and off), I feel that I'm pretty knowledgeable in the area. However, with this learning methodology there are undoubtedly going to be gaps in my knowledge, so I bought this book in the hope of filling them.

I'm very happy to report that it does fill in those gaps, and more! It does so in a clear and concise way. Every new concept is backed up with code that's written in such a way that it's easy to follow and, where there is a difficult concept, virtually all of the code has been broken down into small snippets so you can quickly stick it into JSFiddle and have a play around.

The book covers the core JavaScript language, with topics ranging from the importance of functions (they're far more powerful than I ever imagined) to regular expressions, runtime code evaluation and with statements. These are all areas you can get by without knowing in detail, but when you do know them, you'll realize there are far simpler ways of doing the things you've been doing for the past 10 years. As frustrating as that is, it's also enlightening.

It also covers some of the problematic areas of programming in the browser and the cross-browser problems that come hand in hand with it. From event handling to manipulating the DOM and CSS selectors, it covers them all and offers some inventive solutions to problems you've probably come across yourself.

The really good thing about the book is that, throughout, it introduces you to patterns of programming JavaScript that you probably don't already use and really wish you did. If you're anything like me, you'll find yourself thinking "I wish I had programmed x like this" or "I wish I knew about this feature before I programmed x, y, z".

The book is co-authored by John Resig, the creator of the most popular JavaScript library, jQuery, and it often uses methodologies and solutions from within that library. That, to me, really gives this book substance: you're learning methods that are out there in the real world and that work so well they've led to the immense popularity of jQuery.

If you're an intermediate JavaScript developer like me then this book is a must. Some of it you'll already know but some of it you won't and having that extra knowledge at your disposal will give you the tools to write far more elegant code.

If you're new to JavaScript development then I'd suggest holding off on this book for now; it assumes a certain amount of knowledge of the language. You probably could work your way through it and pick things up as you go along, but it would take you a significant amount of time (ok, you'd be learning a good portion of a language, so that's to be expected) and I think that process would take something away from the book. If you are in this category, I'd suggest going away, learning the basics and then picking this book up in a month or two's time.

Saturday, 23 November 2013

HTML5 - Prefetching

Once upon a time I blogged about the new features included in the HTML5 spec and I was slowly making my way through the big new additions.

That pretty much died out due to a lack of time, but I recently attended WebPerfDays and a feature mentioned there jumped out at me. That feature is prefetching and it has some fantastic implications for web performance.

What is Prefetch?


Prefetching is the ability for the browser to request a page in the background even though you're not on it. Sounds odd, right? Why would you want to do that? Well, requesting a page in advance means the browser can download all the content of that page before the user has asked to see it, so when the user does click a link to go to that page, the content is shown immediately. There's no download time required; it's already been done.

To enable this, all you have to do is add a link tag like so:

<link rel="prefetch" href="http://clementscode.blogspot.com/somepage.html" />

And that's it. When the browser comes across that tag, it'll initiate a web request in the background to go and grab that page. It will not affect the load time of your original page.

The implications of this for web performance are obvious. Having the content of a page available before the user even requests it can only speed up your website, but it has to be used properly. Adding prefetching to every link on your website will cause unnecessary load on your web server, so this functionality needs some thought before being used. A good example of this is Google. If you search for a term on Google, the first result brought back will be prefetched (feel free to check the source to prove that I'm not lying!). The other results are not. That's because Google knows that in the vast majority of cases the user clicks on the first result, and this functionality allows Google to provide you with that page as quickly as possible.

Are There Any Other Benefits?


That depends on your point of view... I primarily work on ASP.NET WebForms applications, most of which are not pre-compiled... not ideal, but we have our reasons. Prefetching enables us to request pages before they're hit which, if it's the first time a page has been hit, forces it to be compiled. So we're improving performance two-fold: the initial compilation time has been taken away from the user, and we get the usual benefit of prefetching, so users are presented with a page almost instantly after clicking.

That Sounds Awesome But What Are The Downsides?


Well, you're requesting additional pages. As long as the user actually goes to that page then that's great but, if they don't, you're placing additional load on your server that serves no purpose.

Also, if you're gathering website statistics such as the number of page hits, this will throw those stats off as, technically, the user may never actually view a page even though it's been requested.

Finally, this obviously uses client resources. Whereas this may not be a problem on a nice big powerful desktop, it may be a problem on a small mobile device.

And that's about it. Another great addition to the HTML5 spec. As with most things in our world, you need to think about its use rather than just blindly prefetching everything without any thought for the disadvantages of doing so.

Enjoy!