Tuesday 31 December 2013

Book Review - Secrets of a JavaScript Ninja

I was watching a talk given by Angelina Fabbro on YouTube called "JavaScript masterclass". It's all about trying to become an expert in a particular field. It's a great talk and I suggest you give it a watch. In that talk, she mentions a book, "Secrets of a JavaScript Ninja", and being what I'd consider an intermediate JavaScript developer myself, I thought it deserved a look.

Just to give you an idea of the level of my JavaScript expertise: I've never been "taught" JavaScript, I've never attended any courses and I didn't cover it at university. My general method has been to look up pieces of code online as and when I've needed them. After doing this for a while you get a general feel for the language, and after 10-odd years (on and off), I feel that I'm pretty knowledgeable in the area. However, due to this learning method, there are undoubtedly gaps in my knowledge, so I bought this book in the hope of filling them.

I'm very happy to report that it does fill in those gaps and more! It does so in a clear and concise way. Any new concept is backed up with code that's been written in such a way that it's easy to follow, and virtually all of the code has been broken down into small snippets, so if there is a difficult concept you can quickly stick it into JSFiddle and have a play around.

The book covers the core JavaScript language, with topics ranging from the importance of functions (and they're far more powerful than I ever imagined) to regular expressions, runtime code evaluation and with statements. These are all areas you can get by without knowing in detail, but once you do know them, you'll realize there are far simpler ways of doing the things you've been doing for the past 10 years. As frustrating as that is, it is enlightening.

It also covers some of the problematic areas of programming in the browser and the cross-browser problems that come hand in hand with this. From event handling to DOM manipulation and CSS selectors, it covers them all and offers some inventive solutions to problems you've probably come across yourself.

The really good thing about the book is that throughout, it introduces you to patterns of programming JavaScript that you probably don't already use and really wish you did. If you're anything like me, you'll find yourself thinking "I wish I had programmed x like this" or "I wish I knew about this feature before I programmed x, y, z".

The book is co-authored by John Resig, the creator of the most popular JavaScript library, jQuery, and it often uses methodologies and solutions that are used within that library. That, to me, really gives this book substance: you're learning methods that are out there in the real world and that work so well they've led to the immense popularity of jQuery.

If you're an intermediate JavaScript developer like me then this book is a must. Some of it you'll already know but some of it you won't and having that extra knowledge at your disposal will give you the tools to write far more elegant code.

If you're new to JavaScript development then I'd suggest holding off on this book for now, as it assumes a certain amount of knowledge about the language. You probably could work your way through it and pick things up as you go along, but it would take you a significant amount of time (OK, you'll be learning a good portion of a language, so that's to be expected) and I think that process would take something away from the book. If you are in this category, I'd suggest going away, learning the basics and then picking this book up in a month or two's time.

Saturday 23 November 2013

HTML5 - Prefetching

Once upon a time I blogged about the new features included in the HTML5 spec and I was slowly making my way through the big new additions.

That pretty much died out due to a lack of time, but I recently attended WebPerfDays and a feature mentioned there jumped out at me. That feature is prefetching, and it has some fantastic implications for web performance.

What is Prefetch?


Prefetching is the ability to request a page in the background even though you're not on it. Sounds odd, right? Why would you want to do that? Well, requesting a page means the browser can download pretty much all the content of a particular page before the user has asked to see it, so when the user does click on a link to that page, the content is shown immediately. There's no download time required; it's already been done.

To enable this, all you have to do is add a link tag like so.

<link rel="prefetch" href="http://clementscode.blogspot.com/somepage.html" />

And that's it. When the browser comes across that tag, it'll initiate a web request in the background to go and grab that page. It will not affect the load time of your original page.

The implications of this for web performance are obvious. Having the content of a page available before it's even requested by the user can only speed up your website, but it has to be used properly. Adding prefetching to every link on your website will cause unnecessary load on your web server, so this functionality needs to be thought about before being used. A good example of this is Google. If you search for a term on Google, the first link brought back will be prefetched (feel free to check the source to prove that I'm not lying!). The other links brought back are not prefetched. That's because Google know that in the vast majority of cases the user clicks on the first link brought back, and this functionality allows Google to provide you with that page as quickly as possible.
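
If you want to decide at runtime which page to prefetch, you can add the tag from script instead of hard-coding it. Here's a minimal sketch of the Google-style approach of only prefetching the most likely next page. The function name and URLs are my own for illustration, and the tag is built as a string here so the logic is easy to follow; in a real page you'd create a link element and append it to document.head instead.

```javascript
// Sketch: given a ranked list of candidate URLs, build a prefetch
// tag for the most likely next page only. Prefetching every link
// would just put unnecessary load on your server.
function prefetchTopResult(resultUrls) {
  if (!resultUrls || resultUrls.length === 0) {
    return null; // nothing worth prefetching
  }
  return '<link rel="prefetch" href="' + resultUrls[0] + '" />';
}

console.log(prefetchTopResult([
  "http://example.com/most-likely.html",
  "http://example.com/less-likely.html"
]));
```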

Are There Any Other Benefits?


That depends on your point of view... I primarily work on ASP.NET WebForms applications, most of which are not pre-compiled... not ideal, but we have our reasons. Prefetching enables us to request pages before the user hits them which, if it's the first time a page has been hit, forces it to be compiled. So we're improving performance two-fold: that initial compilation time has been taken away from the user, and we're getting the usual benefit of prefetching, so users are presented with a page almost instantly after clicking.

That Sounds Awesome But What Are The Downsides?


Well, you're requesting additional pages. As long as the user actually goes to a prefetched page then that's great but, if they don't, you're placing an additional load on your server that serves no purpose.

Also, if you're gathering website statistics such as the number of page hits then prefetching will throw those stats off as, technically, the user may never actually view a page even though it's been requested.

Finally, this obviously uses client resources. Whereas that may not be a problem on a nice big powerful desktop, it may well be on a small mobile device.

And that's about it. Another great addition to the HTML5 spec. As with most things in our world, you need to think about its use rather than just blindly prefetching everything without any thought of the disadvantages of doing so.

Enjoy!

Monday 2 September 2013

Improving Build/Start Up Time

In fairness, there's nothing better to do while waiting for compilations
I've recently had the pleasure of upgrading to Visual Studio 2012 and for the most part, I love it. However, I've noticed that debugging my web application has become a very time-consuming event. From hitting the F5 button to getting to my start-up page was taking over 3 minutes and, needless to say, it was driving me crazy and seriously affecting my productivity. The xkcd comic on your right may just point to why...

For the sake of my sanity, I set out to find out how to improve this and I've now got that time down to 20-30 seconds. Here's what I've done...

Web.config Changes

There are a couple of changes you can make in order to speed things up. I should say these should only be applied to your local development environment. They're not changes that should be applied to a production environment as they'll have a direct impact on your application's performance.

Firstly, the compilation tag has two attributes you should make use of: the batch attribute and the optimizeCompilations attribute. You should set the batch attribute to false and the optimizeCompilations attribute to true, so your tag should look something like this:

<compilation debug="true" batch="false" optimizeCompilations="true" />

Let me explain what this does. First off, the batch attribute. By default, this is set to true, which means that when your web application starts up, ASP.NET will pre-compile all your un-compiled files (aspx and ascx files, for example) in one big batch. Depending on the size of your application, this can significantly increase your load time. By setting it to false, this no longer occurs; instead, each file is compiled as and when you access it. That means the first time you visit a particular page it will load a little slower, as it'll need to be compiled, but the chances are that if you're debugging a particular problem, you're only going to be visiting a very small subset of the files that get compiled when the batch attribute is set to true, so overall you'll be saving yourself a significant portion of time. For more info, check out the MSDN documentation.

The optimizeCompilations attribute is a bit of an odd one. I can't seem to find any documentation about it apart from this blog post, so I don't know if it's valid in .NET 4 (although I use it in my applications and it seems to do the job). Anyway, the reason it's helpful is that, by default, every time you change a file in the bin directory, the global.asax file or anything under the App_Code directory, the application is re-compiled during start up (the same re-compile process we spoke about above). By setting this attribute to true, that re-compilation no longer occurs. Now, this can cause problems, which is why it's not turned on by default (more info about that can be found in the blog post mentioned above), but in the majority of cases, if you're like me, you're usually changing method implementations rather than creating or changing method signatures. So, turning it on means no more re-compilations, again saving more time on start up.

Fusion Log

For those of you that don't know, Fusion is a tool that enables you to log DLL binds to disk (more info on this tool can be found here). This is particularly helpful when you're getting runtime errors about DLL versions or particular DLLs not being found and you can't work out why. The log will tell you where ASP.NET is looking for those DLLs, what it's finding and whether or not it fails. This can be a very handy tool.
In .NET 3.5, I found that it was sometimes a little unreliable. I'd turn on logging and the logs wouldn't be generated; it was a tad frustrating, it must be admitted. In .NET 4, I don't have this problem and logs seem to be generated as you'd expect. The catch is that creating these logs takes time, and if you happen to have hit the "Log All Binds" option and then forgotten about it, you'll notice a significant performance decrease.
Long story short, only log binds if you absolutely need to and make sure it's turned off when you're not using it.

Application_Start

Application_Start is a method in the global.asax file and it fires when the application domain starts up. This is great, but if you're running code in there then it'll obviously affect load-up time. So, if you don't need to run that code, don't. I know in our application we have features that are initiated in that method. If I'm not testing one of those features then I don't need it enabled, which helps speed up the application start-up time.

Solution Changes

Finally, you can make a few changes to your solution to ensure only what needs to be built is built.

Firstly, do a full re-build of your application so that all necessary DLLs are generated. Then, unload any projects you're not working on. If you change a class or method in one project, Visual Studio will compile all projects that have references to that project. Unloading those projects will ensure they're not rebuilt saving you some precious seconds on your build time.

Secondly, if you right-click on the solution at the top of the Solution Explorer within Visual Studio, you can select "Configuration Manager". In here you can un-tick the "Build" option on all projects that you're not working on. As with the option above, this will ensure unnecessary projects aren't built. What it does mean is that when you change code in a project that you've marked not to be built, you have to explicitly build that project, otherwise your changes won't be picked up.

Warning: these solution changes do have downsides. If you change a method signature within a project and the projects that reference it are not built, then any errors caused by that change will not be picked up at compile time. With that in mind, it's well worth compiling everything before committing/pushing/releasing any code, just to ensure everything compiles as you'd expect.

And that's all I've got. If you've got any other tips or tricks regarding application start-up time then I'd love to hear them. Very few things frustrate me as much as waiting for a web application's start-up screen to actually load.

Happy coding!

Wednesday 31 July 2013

JavaScript - Memory Leak Diagnostics

Memory leaks in JavaScript seem to be becoming an ever increasing problem. This is no surprise with JavaScript being used more and more, but have you ever tried to solve a memory leak in JavaScript? It's no simple task; the tools to help determine which objects are leaking simply haven't existed. You're essentially trying to find a needle in a haystack while blindfolded. Not cool!

Until now.

There are three tools I would like to talk about: sIEve, Google Chrome's heap snapshots and the new boy on the block, the Internet Explorer 11 developer tools.

In most cases, as a developer, you need a bit of a nudge in the right direction. Once you have an idea of where the problem may lie, we're pretty intelligent; we can usually work it out. sIEve gives you that nudge.
It's a memory leak detector for Internet Explorer. When running, it'll show you all the DOM elements that are currently in memory. It'll then go one step further and show you the DOM nodes that are currently leaking, with an ID and everything. You can then use that information to try and find out why a node is leaking. Usually it's because some piece of JavaScript somewhere references an element which has since been removed from the DOM.
sIEve - Memory Leak Detector

I must admit, I used this on a complex web application which had popup windows with iframes inside iframes, and when it didn't crash, it reported some nodes as leaks when they weren't. But it did at least give me that nudge to look at a particular screen.
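
To make the "JavaScript somewhere still references the element" cause concrete, here's a runnable sketch of the pattern. A plain object stands in for the DOM element so it runs anywhere; in a real page it would be a node you've since removed with removeChild.

```javascript
// Leak pattern: a closure keeps hold of a "node" even after it's
// been removed from the page, so it can never be garbage collected
// while the handler is still alive.
function attachHandler() {
  var node = { id: "popup", payload: new Array(1000).join("x") };
  // ...imagine the node is later removed from the DOM...
  return function () {
    return node.id; // the closure still references the removed node
  };
}

var handler = attachHandler();
console.log(handler()); // the "removed" node is still reachable
```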

Chrome Heap Snapshot
Now we're talking! This is, to my knowledge, the first proper way of determining exactly which objects are leaking. It allows you to take a snapshot of the objects in memory at a given point in time, and you can then compare these snapshots.

Chrome Heap Snapshot
This is rather handy. It means you can see which objects existed in the first snapshot and still exist in the second, i.e. the ones that are causing your problem!
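
Stripped right down, the comparison amounts to something like this sketch (snapshots reduced to arrays of object ids here; real heap snapshots obviously carry far more detail):

```javascript
// Objects present in an earlier snapshot that still survive in a
// later one are your leak candidates.
function survivors(earlier, later) {
  return earlier.filter(function (id) {
    return later.indexOf(id) !== -1;
  });
}

console.log(survivors([1, 2, 3], [2, 3, 4])); // [ 2, 3 ]
```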

The good thing about these snapshots is that they also tell you the "retaining tree". This is essentially the path from the root objects to the object in question, which means you can trace that path and work out why your object isn't being garbage collected.

The tool has a few other ways of helping you find your leak if comparing snapshots isn't quite cutting it. There is a "containment" view and a "dominator" view. I haven't had much use for the containment view (see here for more details) but the dominator view essentially lists the objects with the biggest memory consumption, which can be helpful if you've got leaking global objects.

And a late entry... Internet Explorer 11 Heap Snapshot
A developer preview has just been released on Windows 7 and so far, so good. It's much the same as Chrome's version, if a little easier to read.

Internet Explorer 11 Developer Tools
There are two notable differences. Firstly, on a positive note, it has search functionality, which Chrome doesn't have. This allows you to find objects whose id you know. On the negative side, it seems you can only compare sequential snapshots. You couldn't, for example, compare your first and third snapshots, which means you have to really think about when to take a snapshot.

I haven't had much time to really play around with this and it is only a developer preview, but so far it looks like it could be a very useful tool. In actual fact, the whole new set of developer tools has real potential, but that's another blog post for another day.

For more info on the memory tab within the developer tools check out the MSDN documentation.

Conclusion...
As always, use the best tool for the job. For simple leaks, sIEve is very good at finding the problem. For more complex problems, the heap snapshots are the way to go.

The work Google and Microsoft have done in this area recently shows how big JavaScript has now become, and these tools are a great addition to any web developer's toolkit.

If you do ever have to look for a memory leak, my thoughts are with you.

Good luck!

Sunday 26 May 2013

Web App Upgrade From .NET 3.5 to .NET 4.5

We've recently gone about upgrading our web application from .NET 3.5 to .NET 4.5 and as you could probably guess, it didn't quite go as smoothly as one would hope.

As we go through this process I'm going to blog about the difficulties and what we did to overcome them.

So, here we go...

System.Web.UI.HtmlControls.HtmlIframe


This is a whole new type in .NET 4.5 and oddly, it can cause a few problems.

Take this line of code for example:

<iframe src="about:blank" id="myFrame" runat="server" />

If you wanted to refer to this control in C# code, in 3.5 you'd write something like this (preferably in a designer.cs file):

HtmlGenericControl myFrame;

In .NET 4.5 however, an iframe is no longer an HtmlGenericControl; it's an HtmlIframe, which does not inherit from HtmlGenericControl. This means you need to change the above line of code to something that looks like:

HtmlIframe myFrame;

Creating this HtmlIframe class makes sense and means that iframes have their own object, much like the HtmlTable class, but it does seem odd that it doesn't inherit from HtmlGenericControl. Unfortunately, this design decision has knock-on effects for upgrades. Any iframe which has been defined as an HtmlGenericControl now needs to be changed to an HtmlIframe. To make matters worse, if you've manually defined these controls and they're not wired up via an auto-generated designer file, then the problem won't be picked up at compile time. You'll need to actually run the application and wait for it to fall over to find the problem.

The joys of upgrades eh?



Saturday 23 March 2013

Stack Overflow - Much more than just answers

I'm guessing everyone who is reading this knows what Stack Overflow is but I bet most people aren't getting the most out of what really is a very useful tool for developers.

For those of you that don't know, Stack Overflow is, in its simplest form, a forum. Developers post problems or questions that they can't find the solutions to, and other developers answer them. Those questions and answers are then stored so that anyone with a similar question can find the answer. After years of this, Stack Overflow has built up a pretty comprehensive archive of common problems that developers have faced and the solutions to those problems. It's one of the reasons that if you search for a development-related problem on the internet then, nine times out of ten, Stack Overflow is the first hit. It's a great idea and it's been well executed.

So, why am I writing a blog post about it? You already know all that. Well, up until recently that's all I knew about Stack Overflow as well. Until I actually decided to give something back.

I registered for an account and thought I'd try and answer a few questions. I may not be the greatest programmer in the land, but I do have a fair amount of experience with various technologies/frameworks, so I should be able to answer the odd question or two. Turns out I was correct: I can answer the odd question. What's more, it's addictive.

Just about every interaction within the Stack Overflow community gives the rest of the community the chance to give you reputation points. Someone likes your answer? They'll vote it up. That's 10 points. Your answer gets accepted as the correct answer? That's 15 points. And the same works in reverse: if you post a load of rubbish, you'll get voted down and lose points. Why is this important? Well, the entire website is moderated by the community, and these reputation points gauge what you can and can't do in order to help with that moderation. I suppose, in essence, it runs a bit like humanity does in Star Trek; in the words of Picard, "We work to better ourselves and the rest of humanity". I wouldn't class it as work but, you get the idea.

As I said, answering questions becomes addictive because the more you answer, the more reputation points you get. This leads to you reading a lot of questions, many of which you won't be able to answer. This is good. Very good. Why? Because you learn a lot. Just by reading questions and their corresponding answers I've learnt all sorts of things; in fact, there are so many good questions posted that I wish Stack Overflow had the ability to "notify you via e-mail when an answer is posted". I've found better ways of solving problems I solved years ago, and I've read questions about problems I haven't even come across yet. It really is a great tool for learning.

In conclusion, it's a good tool for learning to communicate accurately with fellow developers, it gives you the ability to give back to the development community and it is a great tool for learning. So, if you're waiting for something to compile or you've just got a spare 5 minutes, head over there. Try and answer a few questions, read other questions and just learn.


Tuesday 19 February 2013

IE8, Filters and IFrames

Everyone loves supporting old versions of Internet Explorer right?

Well, I came across an odd "quirk" with IE8 and it took me a little bit of time to track it down.

The problem occurs when you use a DropShadow IE filter, an Alpha filter and an iframe. When you stick them all together, the iframe becomes totally transparent. Very odd.

Let me walk you through it.

So, the set-up is that we have a normal page consisting of a div with a DropShadow, and that div contains an iframe which loads a new page. That page contains an overlay with 100% transparency set. When you strip out all the complexity, you end up with two HTML pages that look a little like the below. I've given each page a background colour just to make the problem a little more obvious.

Main.html

<html>
<head>
<title>Top Window</title>
</head>
<body style="background-color: Green;">
<center>
The top window
<br />
<br />
<div style="position: absolute; top: 55px; left: 25px; z-index: 1">Some general text in the top level window</div>
<div style="border: 1px solid black; z-index: 2; position: absolute; top: 50px; left: 20px; filter: progid:DXImageTransform.Microsoft.DropShadow(OffX=5, OffY=5, Color=#888); width: 400px; height: 400px;" >
  <iframe src="IFrame.html" style="width: 400px; height: 400px;" frameborder="0"></iframe>
</div>
</center>
</body>
</html>


IFrame.html

<html>
<head>
<title>Inner Frame</title>
</head>
<body style="background-color: blue;">
Text within the iframe

<div style="position: absolute; left: 0px; top: 0px; width:400px; height: 100%; filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0); background-color: black;">
  This is an overlay within the iframe.
</div>
</body>
</html>


All pretty straightforward so far right?

Wrong.

Here are two images of what the above produces. One is produced by IE8 and the other by IE9.

Internet Explorer 8 / Internet Explorer 9

See the problem? The entire content of the iframe has become transparent. That's not what we wanted at all! IE9 on the other hand, renders it correctly.

The solution? Remove the DropShadow on the div. OK, so it doesn't look as good, but at least it gives a consistent look across IE browsers. You can always reproduce the box-shadow effect using a different method, perhaps a second div with a greyscale colour, placed underneath the div containing the iframe but with a bit of an offset. I'd imagine that'd have the same effect, although I haven't actually tried it.

Oh the joys of old versions of Internet Explorer.

Tuesday 8 January 2013

HttpHandlers and Session State

By default, if you create a new HttpHandler, it does not have access to the session object. Take the following as a very simple example:


public class MyHandler : IHttpHandler
{
    #region IHttpHandler Members

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string s = (string)context.Session["MySessionObject"];
        context.Response.Write(s);
    }

    #endregion
}


Do you see the problem? HttpContext.Current.Session will be null and an exception will be thrown.

So, how do you access the Session object from within an HttpHandler? I've tried all sorts of magical workarounds, some worked, some didn't, but by far the easiest is to simply add the IReadOnlySessionState interface to your handler, so it'll look like this:


public class MyHandler : IHttpHandler, IReadOnlySessionState
{
    #region IHttpHandler Members

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string s = (string)context.Session["MySessionObject"];
        context.Response.Write(s);
    }

    #endregion
}


And as if by magic, your session object is populated and you can access your session objects like you usually would. Fantastic news! You can't write to the session, by the way (if you do need write access, the IRequiresSessionState interface is the one to look at), but I've not come across a scenario where I've needed to yet.

Thanks to Scott Hanselman's blog for the solution to that little problem!

Saturday 5 January 2013

League Predictor

For those of you that don't know, I play a fair amount of football and I run the website for the Sunday league team for which I play, Sumners Athletic. For many years I've used that site, and the server it's hosted on, to test new technologies and new methodologies and to improve my understanding of other web technologies.

Every now and again, when I'm playing around, I create a control that I can share with the world. It's usually not the polished article but it does a job. Five or six years ago I created one of these controls: a "league predictor" in the form of a Java applet (who remembers those?).

What is a league predictor? Well, for a league like the one my team plays in, where each team plays each other twice (home and away), it takes all the results of the league to date and works out the remaining fixtures. Those fixtures are then presented to the user so that they can make predictions about those games. At the end, the user hits a button and the final league standing is re-drawn based on the predictions that the user has made.

Well, unless you've been out of the web application loop for the past 5 years, you'll probably know that Java applets are all but dead. JavaScript and HTML5 are the way forward, so I thought I'd re-write that original control using those technologies. The re-write is now done, so I thought I'd make the code available to all. At some point I'd like to add animations to the league re-draw, but that's another post for another day.

Anyway, the JavaScript file and corresponding CSS and HTML files can be found in this zip file.

If you want to see a working example, check this out: Sumners Athletic League Predictor
(Please ignore the fact that my team, Sumners Athletic, are currently bottom of the league!)


Just a few notes about the "control"...

Firstly, the JavaScript requires some initial data to be able to work out the league and what fixtures need to be played. To do this, it needs what I call a results matrix, in the form of a CSV file.

What's a results matrix? Well, it's essentially a grid of all the games played, with the team names forming the first row and the first column. The result of each game (assuming the teams play each other home and away once) then fits in the corresponding cell. If you open the CSV file included in the above zip file within a spreadsheet, you'll see what I mean.

This initial data is then requested by the JavaScript and processed. It converts the CSV file into a 2D array and constructs team objects from that array. A team object consists of the name of the team and the number of games played, won, drawn and lost. From this, a league can be constructed.
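
To give you a feel for that processing, here's a minimal sketch (the function and field names are mine, not the ones in the zip file, and I've assumed the standard three points for a win, one for a draw). Each cell of the matrix holds the home team's result as a "homeGoals-awayGoals" string, or an empty string for an unplayed game.

```javascript
// Build a sorted league table from a results matrix.
// matrix[i][j] is team i's home result against team j.
function buildLeague(teams, matrix) {
  var table = teams.map(function (name) {
    return { name: name, played: 0, won: 0, drawn: 0, lost: 0, points: 0 };
  });
  for (var i = 0; i < teams.length; i++) {
    for (var j = 0; j < teams.length; j++) {
      if (i === j || !matrix[i][j]) continue; // diagonal or unplayed game
      var goals = matrix[i][j].split("-").map(Number);
      var home = table[i], away = table[j];
      home.played++; away.played++;
      if (goals[0] > goals[1]) { home.won++; away.lost++; home.points += 3; }
      else if (goals[0] < goals[1]) { away.won++; home.lost++; away.points += 3; }
      else { home.drawn++; away.drawn++; home.points++; away.points++; }
    }
  }
  // Highest points first.
  return table.sort(function (a, b) { return b.points - a.points; });
}
```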

Then, from the results matrix, we work out which fixtures are remaining and display them to the user. Once they hit the "Predict" button, their predictions are fed back into the results matrix and the league is re-drawn based on the new results.
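
Working out the remaining fixtures from the same matrix is just a matter of finding the empty cells. Again, this is a sketch with my own names rather than the code in the zip:

```javascript
// Any empty cell off the diagonal is a game still to be played.
function remainingFixtures(teams, matrix) {
  var fixtures = [];
  for (var i = 0; i < teams.length; i++) {
    for (var j = 0; j < teams.length; j++) {
      if (i !== j && !matrix[i][j]) {
        fixtures.push({ home: teams[i], away: teams[j] });
      }
    }
  }
  return fixtures;
}
```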

And that's it.

Nothing too complex, but I thought I'd share the code. I remember when I first started building my first football team website, I looked around for something that would do just this and couldn't find anything. Now there's an option out there!