Tuesday, 15 May 2012
IE, JavaScript and the Story of the Weeping Angels
I came across a very odd problem the other day in the way in which Internet Explorer handles DOM items with an ID.
Take the following piece of HTML for an example.
<html>
<head><title></title></head>
<body>
<div id="testElement" style="width: 100px; height: 100px; border: 1px solid black;"></div>
</body>
</html>
You can't get much simpler than that. Now say you want to access testElement and change the width of the element. You'd probably do that using the following piece of JavaScript code:
document.getElementById('testElement').style.width = '200px';
All very straightforward so far. There is another way of doing this though, one which isn't recommended but is supported by all the major browsers. You can simply write:
testElement.style.width = '200px';
If an element in your HTML has an ID, the browser will automatically put it in the window scope so you can access it directly. No need for document.getElementById. Cool eh?
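If you want to convince yourself, a quick throwaway check (run against the first example page) is:
alert(testElement === document.getElementById('testElement')); // shows 'true' - both refer to the same DOM element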
Well, it turns out Internet Explorer supports this little feature in a bit of an odd way. Take the following HTML page:
<html>
<head><title></title>
<script language="JavaScript" type="text/javascript">
<!--
TestObject = function() {
this.id = 'TestObject';
}
//-->
</script>
</head>
<body>
<div id="testElement"></div>
<script language="JavaScript" type="text/javascript">
// alert(testElement.id); // We'll uncomment this line a bit later.
window.testElement = new TestObject();
alert(testElement.id);
alert(window.testElement.id);
</script>
</body>
</html>
What you've done here is create a DOM element with an id of testElement. So, the browser should have created a window.testElement variable that'll give you the appropriate DOM element when accessed. You've then explicitly defined the testElement variable to be a new TestObject. So in theory, when the first and second alerts are shown, the testElement variable should be pointing at our TestObject. The id should therefore be 'TestObject'. In both alert boxes, 'TestObject' should be displayed.
When you run the above, that's exactly what happens. No big surprise there.
Ok, now uncomment the commented line. What I'd expect here is that the first alert box should display "testElement", as that's the id of the DOM element. You then assign the TestObject to testElement so, when the second and third alert boxes are shown, you'd expect to see "TestObject".
When you run the above, the first alert box displays 'testElement'. Good so far. The second alert box displays 'testElement'. Eh? That's surely wrong. The third alert box displays 'TestObject'. What? How can window.testElement and testElement be pointing at different things? They're the same variable! Comment the line again and everything goes back to normal. How can this be?!
Weeping Angels!
Doctor Who fans will know what I'm talking about when I mention Weeping Angels but, for those who have no idea, a weeping angel is a creature that, when looked at, automatically turns to stone. When not being viewed, it goes about its usual business. It's a good analogy for this behaviour because, after a bit of experimenting, I found that as soon as you look at the testElement variable, that's the point at which the browser actually points the variable at the DOM element and makes it read-only. This means that if you reference the variable anywhere, it'll affect what your code is actually doing. Even if you're debugging and place a watch on the variable, it'll have the same effect. These kinds of variables, in my book, are about as ugly as a weeping angel.
I should say, only Internet Explorer (I tested on IE9) seems to handle DOM variables like this. The above code behaves exactly as you'd expect in both Chrome and Firefox.
So, how to avoid this? As most JavaScript programmers know, programming in the global (window) scope is just bad practice for a variety of reasons, the main one being that it can easily lead to naming conflicts, especially if you're using third party libraries. This problem reaffirms that. It is a naming conflict, just not in the traditional sense, as the browser is doing some of the work for you. Anyway, if you avoid programming in the global scope then you won't come across this problem. Unfortunately, from time to time, it's unavoidable, especially if the problem is actually caused by a third party library, as in my case. In these cases, as you saw before, if you reference the variable using window.variableName then it seems that it will always point to your object, not the DOM item, which should hopefully give the behaviour that you want.
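To make that concrete, here's a minimal sketch of both options, reusing the TestObject and testElement names from the example above:

// Option 1: stay out of the global scope entirely.
(function () {
    var testElement = new TestObject(); // local variable, so no clash with the DOM id
    alert(testElement.id);              // 'TestObject' in every browser
}());

// Option 2: if a global really is unavoidable, always qualify it with window.
window.testElement = new TestObject();
alert(window.testElement.id);           // 'TestObject', even in IE9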
Enjoy!
Tuesday, 8 May 2012
C# - Method Overload Resolution
Some of even the most basic of concepts catch you out from time to time. I was working the other day and came across a problem where what was happening didn't immediately make much sense. So, I thought I'd post it up here as a reminder that even basic concepts of computer programming can leave you a little confused.
So, here's your overview of the problem... I had a piece of code that allowed me to run queries against a database. To add parameters to the query I had to write something like:
cursor.AddParameter("@ParameterName", "value");
All very straightforward so far. The problem arose because that piece of code can be run against either an Oracle database or a SQL Server database. These databases handle empty strings differently: in Oracle an empty string is treated exactly the same as a null value; in SQL Server an empty string is an empty string.
Anyway, I found a piece of code that read:
cursor.AddParameter("@ParameterName", string.Empty);
In this instance, the developer had only tested against Oracle and didn't actually mean string.Empty; they meant null. But the code ran perfectly. The application was then hooked up to a SQL Server database and the query brought back the wrong records (actually, it didn't bring back any records at all). This was down to the difference in how Oracle and SQL Server handle empty strings.
So, an easy fix then. Change string.Empty to null and we're done. After all, that's what the original developer meant in the first place. Both Oracle and SQL Server will handle it in the same manner and we're good to go. Or so I thought.
Here's the interface definition of ICursor (well, a stripped down definition at least) which is what our cursor variable in the above example is defined as:
public interface ICursor
{
    void AddParameter(string name, object value);
    void AddParameter(string name, Type type);
}
Who can see the problem?
The rules for resolving method overloads state that the overload with the most specific type match should be used. This makes perfect sense: if you had two methods defined, one that accepts an object and another that accepts a string, and you passed in a string, then you'd expect the method defined with a string to be used.
However, null is a little special. It can match any reference type. Is the problem becoming more apparent now?
When we put string.Empty in as the second parameter, the string matches the first method, where the second parameter is an object. However, when we change the call to this:
cursor.AddParameter("@ParameterName", null);
the second method is now matched. The call does match the first method, as null is a valid value for an object variable, but it also matches the second method, as Type is a reference type. That is the more specific match and so that method is invoked.
Unfortunately, that second method does something entirely different and so my parameters weren't being mapped in my SQL query correctly and the application was falling over.
If I change the call so that I explicitly cast null to the less specific type, as below, then we have our solution.
cursor.AddParameter("@ParameterName", (object)null);
So, just a friendly reminder that even the fundamentals can catch you out from time to time!
As always, if you want to actually see this in action, I've knocked up a little demo solution which can be found here.
Have fun and happy coding!
Wednesday, 2 May 2012
HTML5 - Web Workers
It's been a while since my last blog but, here's the next chapter of my HTML5 overview, Web Workers.
So, what are Web Workers? Let's start off with some background information about JavaScript. JavaScript was originally developed by Netscape back in 1995. Its primary use was to allow developers to manipulate web pages, which, as you can imagine, were very basic back in 1995. In order to do this JavaScript was designed as a single-threaded language. Unlike its namesake Java (which is a completely unrelated language by the way) and many other languages, JavaScript does not support threads. The reason for this, I imagine, was very simple. How would you go about designing a multi-threaded language whose primary aim was to modify something (the Document Object Model (DOM)) that was shared between threads, without incurring deadlock problems? This problem remains unsolved. And so JavaScript runs on one single thread.
One Single Thread - A one-trick pony?
Is it really a bad thing? I suppose it can be argued that, in itself, it's not. The design decision not to support threading in JavaScript was a good one. It makes the language simpler to learn; it avoids some potentially horrific problems; and, as the web has thrived since then, so has JavaScript. It can't be that bad, right? Well, yes and no. When everything runs on one thread it can lead to a very poor user experience. The User Interface (UI) can become non-responsive if not programmed correctly. In order to address this problem two functions were built into JavaScript: setTimeout and setInterval. These allow a piece of code to run after a pre-defined amount of time. The idea being that you could schedule long-running code to run when the UI wasn't busy and the thread was free, essentially "hiding" the fact that JavaScript all runs on a single thread. These little hacks have allowed developers to get pretty inventive and have allowed JavaScript to flourish.
Ok, all is good then. What's the problem?
As I said, these are basically "hacks". What happens when the user starts clicking but you've already started to execute a long-running piece of code? You have a problem! The system will not be able to respond to the user's action until the code has completed its execution. And after all, some code, especially data-centric code, just takes a long time to run. When this occurs, you'll see the browser's "unresponsive script" warning.
There's not a whole lot you can do about that. If you do have code that'll take a long time to run then you're a little stuck.
So, where do Web Workers come in?
Simple. Web workers bring multi-threading to JavaScript. They come with a few restrictions though, and one is quite a biggy: web workers cannot access the DOM. Allowing multiple threads access to a non-thread-safe resource (the DOM) would cause all sorts of problems, so the same design decision was made as in 1995. What they do allow you to do is process and return data in a separate thread from the UI, so the days of seeing those pesky "unresponsive script" errors should now be gone forever!
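If you want to see that restriction for yourself, a throwaway sketch for a worker script is:

// inside a worker script - there is no document, so this posts the string 'undefined' back
self.postMessage(typeof document);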
Multi-threading eh? Woo! Where do I start?
Well, first you need to make sure you're using a web browser that actually supports web workers. To find that out you can simply visit caniuse.com and look it up. I should mention here that if you're using Chrome and the JavaScript file you're testing is stored locally and isn't running on a web server such as IIS, then you need to enable a flag on Chrome for everything to work. Simply start up Chrome with this command: chrome.exe --allow-file-access-from-files. This problem does not exist with Firefox. For more information, check out this Stack Overflow post.
Now that you are using a web worker enabled browser, you need to define your web worker. As the worker is in an entirely different thread, it has no access to loaded scripts, so you need to tell the worker which script to load. To do this we can use the following line:
var workerOne = new Worker('worker.js');
where worker.js is the name of your script.
Web workers communicate with the main UI thread in the form of messages. When a message is sent to a web worker, it causes the message event to fire within the thread. To hook in to this, your worker.js file needs to have the following content:
self.addEventListener('message', function(e) {
    var message = e.data;
    // Do something with the message
    self.postMessage(message.sort());
}, false);
To give you a quick overview of what's happening here, when the web worker is sent a message, the message event will be fired and the inner function defined above will run. It'll get the sent message by fetching it from the event object. Then, in a useful scenario some action would be performed based on that message. You'd then post a message back to the caller (usually the main UI thread). This could just be to notify the thread that it's completed or if you've done some data manipulation, you could post back the modified data. In the above example, the message is sorted and sent straight back.
So, that's the web worker defined. How do you now post messages to that worker thread so you can use it effectively? Well, you've defined your worker object earlier, you just need to:
a) Define what happens when the UI thread receives a message from the web worker and;
b) Send a message to the web worker which will start the whole process.
In much the same way that you need to hook into the message event within the web worker, you also need to hook into the message event on the web worker object itself, within the UI thread. Something like the following should do the job:
workerOne.addEventListener('message', function(e) {
    var numbersOne = e.data; // Do something with this data
}, false);
This will fire when a message is posted from the web worker to the UI thread. In the previous example, e.data will now contain your sorted data!
Ok, now all you need to do is send your original data to the web worker for processing. You use the same method as when you posted the message from the web worker to the UI thread but this time you perform it on the worker object within the UI thread, so you'll have something like this:
workerOne.postMessage([1,4,2,7,9,2,4,7,6,9,4]);
Now you have something that's a working demo. The array of integers (1,4,2,7,9,2,4,7,6,9,4) is sent to the web worker. The web worker starts up in its own thread; picks that message up; sorts it and then sends the data back to the UI thread. The UI thread now has a sorted array of data but it hasn't actually done any processing to get that information, which leaves it free, so to the user the system seems responsive. Ok, in this particular example with 10 or so integers there isn't going to be much of a difference, but when you're playing with millions of objects, this can have a significant impact.
Performance
While I was looking at this, I wondered whether Web Workers could give some significant performance gains, especially in terms of data processing. If web workers behave like standard threads then this should be fairly straightforward to test.
Here's my very simple test case:
How quickly can I sort three arrays containing two million integers each?
I'm going to test in three ways:
1. Use standard JavaScript. Sort each array, one after the other, and time how long it takes.
2. Use a single web worker. The sorting of all of the arrays will occur in one web worker.
3. Use a separate web worker for each array sort.
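For reference, here's a rough sketch of how the third approach might be wired up (the sortWorker.js name and the timing code are illustrative rather than the exact demo files):

// main.js - one worker per array, all three sorts running in parallel
var arrays = [arrayOne, arrayTwo, arrayThree]; // each assumed to hold two million integers
var finished = 0;
var start = new Date().getTime();

arrays.forEach(function (data, i) {
    var worker = new Worker('sortWorker.js');  // same shape as the worker.js example above
    worker.addEventListener('message', function (e) {
        arrays[i] = e.data;                    // the sorted array posted back by the worker
        finished++;
        if (finished === arrays.length) {
            alert('All sorted in ' + (new Date().getTime() - start) + 'ms');
        }
    }, false);
    worker.postMessage(data);
});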
With what I knew about threads and web workers, I thought I'd find the following...
- The first and second test case would be comparatively similar in terms of time taken.
- The first test case would freeze the web browser until all data had been sorted. The other methods would not.
- The third test case would be the fastest, with all three sorting algorithms occurring in parallel. In theory, the time it takes for the third test case should be roughly 66% quicker than that of the first test case.
Each test case was repeated 10 times and an average time was taken, here are the results:
Test Case One: 11.24 seconds
Test Case Two: 13.75 seconds
Test Case Three: 7.21 seconds
(If you wish to actually repeat the demo yourself, you can pick up the files from here)
Interesting! Ok, I wasn't quite right about Test Case Three being 66% quicker, but it is around 33% quicker which isn't too bad. What is interesting is that test case two is almost 2.5 seconds slower than test case one. Just to open up a new web worker and to send/receive the massive arrays adds an extra 2.5 seconds to the processing time, that's almost a 22% time increase. That seems rather high to me but, it's good to know at least.
It's around about this time that I should mention just how the UI thread and worker threads post messages to each other, as it can have an impact on performance. You're transferring data across threads, so you can't just pass a variable by reference. Instead, a full copy of the variable is made. How this occurs depends on what you're doing and how you're doing it. If you're passing across a string then the data will be serialized into JSON, sent to the worker thread and de-serialized at the other end. If, however, you're using a complex data type, File or Blob for example, then an algorithm called structured cloning is used. This effectively copies the contents of the variable, which for a variable containing megabytes worth of data can be slow. There is, however, another way! Google have promoted a concept called "transferable objects". This allows you to transfer ownership of an object from one thread to another as a zero-copy operation, which is significantly faster. There is one downside: once you've transferred the object, you can't then use it in the thread you transferred it from. It can only be accessed by the thread that has ownership. For more information on this, check out this page on HTML5 Rocks.
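Here's a minimal sketch of the transferable form of postMessage (in browsers that support it, and assuming workerOne's script is written to expect a raw ArrayBuffer):

var buffer = new ArrayBuffer(32 * 1024 * 1024);   // 32MB of data to hand over
workerOne.postMessage(buffer, [buffer]);          // the second argument lists the objects to transfer
// From this point, buffer is unusable in this thread (byteLength is 0) - the worker now owns it.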
Ok, now I've got that covered, just out of interest I thought I'd run the same tests as before, but this time on already-sorted data, making the sort function significantly faster (as it won't do anything meaningful). I was expecting to find the same sort of patterns as above, just with smaller numbers. Here are the actual results:
Test Case One: 1.91 seconds
Test Case Two: 4.10 seconds
Test Case Three: 3.22 seconds
Two interesting things are highlighted here:
- Test Case Two is slower than Test Case Three. Why? I haven't managed to find an answer to that yet. I can only assume that the overhead of sending all three arrays at once, which I wrap up into one object, performs badly when using the structured cloning algorithm to post messages to the worker thread.
- Test Case One is the fastest. This case doesn't use any fancy web workers, it's just plain old JavaScript executing each sort function one after another. So, by adding web workers, we've actually slowed down the data processing, which is the exact opposite of what we were trying to achieve. The reason for this is that the overhead of creating a web worker and communicating with it outweighs the benefit we get from running the processing in parallel.
So, does that mean web workers are a waste of time? Well, no. First, slower or not, the UI thread is always responsive when using web workers so, to your user, the system will seem faster than taking the traditional approach. Secondly, although using web workers performed worse than the traditional approach in the last test, that won't be the case in all scenarios, as shown by the first experiment. If the overhead of creating a web worker and passing messages to and from it outweighs the amount of time saved by performing calculations in parallel on different web workers then, yes, the overall performance will be worse; but if you're performing a vast amount of data manipulation on a great many records, then you should see a big performance gain. Like always though, it's best to see how it would perform with your actual data (or something similar). Only then will you be able to gauge just how much quicker Web Workers will make your web application. They are, however, a tool that you should definitely be aware of as we approach the oncoming HTML5 world!
Finally, if you want to follow this blog post up with further reading about HTML5 Web Workers, the best tutorial I found was posted on the Mozilla website, here.
Enjoy!
Friday, 30 September 2011
HTML5 - Offline Web Applications
As I'm sure some of you are aware, one of the more highly anticipated features of the HTML5 spec is the ability to make websites available offline. This is becoming more and more useful with the explosion of the mobile/tablet market where internet connectivity may just not be available.
I've now got a bit of experience in dealing with this part of the spec so, I thought I'd share a few things with you. For the most part, making your site available offline is pretty simple, but before we start, let me make one thing clear, mainly because this caught me out a bit...
HTML5 offline web applications only truly work with static content.
When you think about it, this makes perfect sense. Usually dynamic content will require some sort of connection to a server and if you're offline then this isn't possible but, what caught me out is that even if there is a connection to a server (i.e. you do have your internet connection), then it's still not possible to update your content, well, not easily anyway.
So, why is this? Essentially, offline support works by the developer specifying which files should be loaded into a cache (the application cache, more about this later). Your users will hit the site for the first time, download all the files asked of them, and the files specified by the web developer will be put into the browser's application cache. From this point on, every time the user visits that website, their web browser will check its application cache for each and every file required by the website; if it finds the file within the application cache then it'll load it from there, if not, it'll go and fetch it from the web server. So, if you're offline and the files required are in the browser's application cache then they'll be loaded from there, the web server will never be hit and there you have it: your website is available offline. However, this process happens regardless of whether you're offline or not. This causes problems for dynamic content. Take this situation for example:
- User A goes to a website, and the file that contains the latest news story is put into the users application cache.
- User A re-visits that site a few minutes later, the latest news story is loaded from the application cache but as the latest news story hasn't changed, everything looks fine.
- User A visits the site a week later. The latest news story is loaded from the application cache, the web server still isn't hit. Now, the latest news story is thoroughly out of date, your user is effectively looking at a snapshot of your website which was taken the first time they visited. This obviously isn't what you wanted.
Ok, now I've got that warning out of the way, let's go into detail about how to actually implement this.
The whole of HTML5 offline support revolves around getting files into the browser's application cache. To do this, you need to create a manifest file. What's a manifest file? Essentially, it's just a normal text file with a specific format that defines which files go into the application cache and which should be fetched from the web server (if available). A few details about the manifest file:
- This file is defined within the <html> tag of your web page, so, for example:
<html manifest="/cache.manifest">
<head>
...
</head>
<body>
...
</body>
</html>
- The file must be served with a content type of text/cache-manifest. How you do this depends on what web server you're running. Personally, when using ASP.NET, I set up a new HTTP Handler to handle .manifest files and set the ContentType on the Response object to be text/cache-manifest.
- The first line of a manifest file must be CACHE MANIFEST
- There are three different sections to manifest file:
- CACHE - This section defines files that will be added to the browsers application cache and therefore, will be available offline.
- NETWORK - This section defines files that will ALWAYS be loaded from the web server. If no network connection is available then these will error.
- FALLBACK - If a resource can't be cached for whatever reason then this specifies the resource to use instead.
Let's see an example of a valid manifest file now:
CACHE MANIFEST
CACHE:
/picture.jpg
/mystyle.css
NETWORK:
*
So, what's going on here? Well, the files picture.jpg and mystyle.css are both added to the application cache (note that the HTML page you're currently viewing is added to the cache by default). Under the NETWORK section there's a * symbol. This is a special wildcard which effectively says "whatever isn't cached, go and fetch from the web server".
And that's it, you've now got an offline web application.
But.... when are things ever that simple to develop? There's a few more things you should know about developing offline web applications. I'm going to put to you a couple of scenarios and offer a solution to each:
Scenario 1: You've added a new file to your website and need it to be added to the application cache. How do you go about doing this?
Well, logic suggests you'd update your manifest file to include your new file and, hey presto, it should be added. Well, you're half right. The problem is that, as with all HTTP requests, browsers will try to cache the files they retrieve, and this is no different for manifest files. So, you'll update your manifest file but the user won't ever retrieve the new version because the browser has cached the old one.
To solve this, I made sure that the manifest file is never cached by the browser and as I use an HTTP Handler to deliver the manifest file, that's easily accomplished by using something like this:
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.MinValue);
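For context, a minimal sketch of what such a handler might look like (the class and file names here are illustrative, not a drop-in implementation):

using System;
using System.Web;

public class ManifestHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Serve the manifest with the required content type and make sure the browser never caches it.
        context.Response.ContentType = "text/cache-manifest";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.MinValue);
        context.Response.WriteFile(context.Server.MapPath("~/cache.manifest"));
    }
}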
Scenario 2: The content of one of the cached files has changed. How do I force the user to re-download the new file?
A web browser will only re-fetch cached files when it detects a change with the manifest file. In this particular case, there is no change with the manifest file so how do you get around this? I simply use comments within the manifest file. So, taking our previous example:
CACHE MANIFEST
#Version 1
CACHE:
/picture.jpg
/mystyle.css
NETWORK:
*
You'll see I've added a version comment. Now, when the content of one of the cached files changes, I increment the version comment and, hey presto, the browser will detect the change and re-fetch all of the files to be cached. Be warned, you'll still have the problem of scenario 1 though!
And finally...
Just a few more things to bear in mind while you're developing:
- If for some reason, one of the files you wish to cache cannot be downloaded then the whole caching process fails. This can be a bit of a pain when you're trying to track down problems.
- There are JavaScript events you can hook into to see what's going on. There's an applicationCache object on the window object that exposes useful methods and events (see here for more details and examples, and the small sketch after this list).
- To maximize the benefits of offline support, you could use local data storage to store data that could then be used offline and/or uploaded to a server when an internet connection is available. See Dive Into HTML5 - Storage for more information.
- While developing, I suggest you use Google Chrome as your browser. It provides some very useful tools that a developer can utilize for offline web application development, here's a couple I found particularly useful:
- If you hit F12 to bring up the developer tools then, go to the Resources tab, at the bottom there's an Application Cache option. This will list all the files currently stored in the application cache for the site you're currently viewing. It should help you track down problems when downloading particular files for the application cache. (If they're not listed then something's gone wrong!).
- Within the address bar, if you type chrome://appcache-internals then Chrome will list all the applications it has stored within its application cache. It then gives you the very handy option of deleting them, meaning you can be sure that the next time you visit the site, new content will be fetched from the web server.
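Picking up on the applicationCache events mentioned in the list above, here's a minimal sketch of hooking into the update cycle (the event names come from the spec; the handling itself is just illustrative):

var appCache = window.applicationCache;

appCache.addEventListener('updateready', function () {
    // A new version of the manifest (and the files it lists) has been downloaded.
    appCache.swapCache();   // switch over to the new cache
    location.reload();      // reload so the page actually uses the new files
}, false);

appCache.addEventListener('error', function () {
    // Fired when the manifest can't be fetched or one of the listed files fails to download.
    alert('Application cache error - check the manifest and the files it lists.');
}, false);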
I've covered a fair amount here, but, if you want further resources, I've found that the Dive Into HTML5 website to be a great resource for all things HTML5-esque. For their article on Offline Web Applications, try here.
And that's it from me for the time being.
Good luck!
Monday, 1 August 2011
MS Office 2010, ActiveX and Microsoft.Office.Interop
Ok, so you want to create some sort of plug-in to your website that enables some sort of integration with Microsoft Office. Maybe you want to export some data into Excel or perform a mail merge with Word.
Microsoft Internet Explorer is the only browser you need to support, the only version of Office you need to support is 2010, and it only ever needs to run on 32-bit systems (ok, I know these conditions are unlikely, but stay with me...), so you decide that the best way of doing this is to create an ActiveX control using the Microsoft.Office.Interop DLLs. You run and test it on your system and everything works fantastically well; you run and test it on other machines, all running different versions of IE and different operating systems, and still everything works fine. Fantastic.
You release this shining light of coding to the great wide world and within five minutes one of your users logs a bug, "Export to excel doesn't work! I get an error!".
How can this possibly be? You've tested it, it works fine on your machine. You get the user to take a screenshot of the error, you have a look and the following error is reported:
System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80040154
What on earth is that? That doesn't happen on any of your test machines. After putting in a few debug statements and with help from the user in question, you track down the line causing the problem...
MSExcel.ApplicationClass excelApp = new MSExcel.ApplicationClass();
At this point, I suspect you've little hair left and still have no clue what's causing the problem. It's at this point, during my investigation, purely by accident, I came across something odd. I ran the ActiveX control on a system that didn't have MS Office installed and hey presto, I reproduced the error! But that doesn't make much sense, my user clearly has MS Office 2010 installed, why then can my ActiveX control not find it?
The answer is that in MS Office 2010, Microsoft have introduced a new "software delivery mechanism" called "Click-to-Run". I've only read the marketing blurb (found here) but, essentially, it virtualizes the program. How exactly Microsoft have implemented this, I have no idea; what I do know is that, because of this virtualization, none of the DCOM components that the Microsoft.Office.Interop.Excel DLL uses have been installed, hence the error and why it can't be found.
For this to work, MS Office has to be installed in the standard way, not with Click-to-Run.
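If you can't guarantee that, one defensive option is to catch the failure and tell the user what's wrong rather than letting the control fall over. A rough sketch, reusing the MSExcel alias from above (the message and handling are illustrative):

MSExcel.ApplicationClass excelApp = null;
try
{
    excelApp = new MSExcel.ApplicationClass();
}
catch (System.Runtime.InteropServices.COMException)
{
    // The Excel COM class isn't registered - typically a Click-to-Run (or missing) Office installation.
    MessageBox.Show("Excel integration requires a standard (MSI) installation of Microsoft Office.");
}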
I had many fun filled hours tracking this down so I hope this may prove helpful for some others of you out there.
Have Fun!
Thursday, 23 June 2011
System.Web.HttpException - Maximum request length exceeded
If you're using ASP.NET WebForms and you want to allow a user to upload files to your web server, then I'm guessing you've used a FileUpload server control. The problem with the whole concept of "uploading files" is that if a user decides they want to be a pain, they could upload gigabytes worth of files which eat up your server's hard drive, finally causing it to crash in a big heap.
Well, Microsoft aren't stupid; they realise this is a pretty big security implication and as such have put safeguards in place to prevent it. By default, the maximum upload size of a file is 4MB; any bigger and your application will throw the following exception:
System.Web.HttpException: Maximum request length exceeded.
Now that's all fair and good but there's a couple of problems with this which I'll address now.
Note: This is for use with Internet Information Service 6 (IIS 6). In IIS 7, Microsoft have changed how you set the maximum upload size. You can use the rest of this article but, if you're using IIS 7, remember to change the relevant tags within the web.config file as defined in this article.
Firstly, and most obviously, what do you do if you want the user to be able to upload more than 4MB? Well, that's pretty simple, you can override the default!
Within the web.config file, you can add/find the httpRuntime tag as follows...
<system.web>
  ...
  <httpRuntime maxRequestLength="4096" />
  ...
</system.web>
The maxRequestLength is the maximum upload size in kilobytes. So, if you wanted to up it to 6MB, you'd enter the value 6144. If you are going to increase the size, be careful: do not increase it to a very large number. If you do, you'll be leaving your website vulnerable; it'll only take one careless (or malicious) user to upload a few massive files and your web server will come crashing down.
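As the earlier note says, on IIS 7 the limit is also governed by request filtering under system.webServer, and that value is specified in bytes rather than kilobytes. Roughly, the equivalent of the 6MB example would be the following (you'll generally still want the httpRuntime setting above as well, since the ASP.NET check applies on top):

<system.webServer>
  <security>
    <requestFiltering>
      <!-- 6MB, in bytes - the IIS 7 counterpart to maxRequestLength -->
      <requestLimits maxAllowedContentLength="6291456" />
    </requestFiltering>
  </security>
</system.webServer>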
Ok, well, so far so good. We've increased our maximum file upload size to 6MB, but what happens if a user does, accidentally or unknowingly, try to upload a file greater than 6MB? Well, currently you'll get a 404 error (I know, weird eh?). The HTTP runtime will throw an exception, which prevents the server from sending a response. The browser, expecting a response, won't receive one, so it will assume the page has magically vanished, hence the 404. So, how can we get around this?
There's a few ways, I'm only going to discuss two, one server side solution, one client side. Ideally, they should be used together.
1. You can catch the error and show a custom made error page. To do this, within the Application_Error method in your global.asax, you can have something that looks like this:
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex is HttpUnhandledException)
    {
        ex = ex.InnerException;
        if (ex != null && ex.Message.Contains("Maximum request length exceeded."))
        {
            this.Server.ClearError();
            this.Server.Transfer("~/MaxUploadError.aspx");
        }
    }
}
Where MaxUploadError.aspx is an error page you've set up describing the problem.
Note: This doesn't work for the development server found in Visual Studio so, when testing, you'll still get your 404 error. It will work when you deploy to IIS, or, if you have Visual Studio hooked into IIS.
2. You can use HTML 5! Unfortunately, JavaScript before HTML 5 was unable to interrogate files on the user's computer, for obvious security reasons. With HTML 5, you can now query information about a file once the user has selected it for upload. Ok, so this is only going to work with the latest and greatest browsers that support the File API within the HTML 5 specification but, where available, it should be used. It'll save your server having to deal with an extra, possibly time-consuming request, and it'll give your user an immediate response. They won't be directed to some error page and then have to go back and re-submit; they can just change the file there and then.
Ok, so to demo this, I'm going to assume you have a FileUpload control defined within your aspx like so:
<asp:FileUpload runat="server" ID="fileUpload" />
Now, within the Page_Load method in your code-behind, you can add the following:
protected void Page_Load(object sender, EventArgs e)
{
    // Needs using System.Configuration; and using System.Web.Configuration; at the top of the file.
    HttpRuntimeSection section = ConfigurationManager.GetSection("system.web/httpRuntime") as HttpRuntimeSection;
    // The stray quotes below deliberately break out of the JavaScript string so the actual
    // file size gets concatenated into the message at run time.
    string errorMessage = "Sorry, you cannot select this file for upload.\\r\\n\\r\\nMaximum file size " + section.MaxRequestLength + "Kb, your file is ' + (event.target.files[0].size/1024) + 'Kb.";
    // Inline handlers expose the event object as 'event'; only run the check if the
    // browser also exposes the HTML 5 File API (event.target.files).
    string script = "if(event.target && typeof(event.target.files) != 'undefined'){ if(event.target.files.length == 1 && (event.target.files[0].size/1024) > " + section.MaxRequestLength + "){ alert('" + errorMessage + "'); event.target.value = ''; } }";
    this.fileUpload.Attributes.Add("onchange", script);
}
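Just to make that a little clearer, with a 6144Kb limit the onchange handler that ends up being rendered looks roughly like this (reformatted here for readability):
if (event.target && typeof(event.target.files) != 'undefined') {
    if (event.target.files.length == 1 && (event.target.files[0].size / 1024) > 6144) {
        alert('Sorry, you cannot select this file for upload.\r\n\r\nMaximum file size 6144Kb, your file is ' + (event.target.files[0].size / 1024) + 'Kb.');
        event.target.value = '';
    }
}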
And that's it, basic HTML 5 support with minimal fuss. A small, but hopefully effective, way of validating the size of your users' uploads! We simply check to make sure the File API is available, grab the file size in kilobytes, compare it to the known maximum value and, if it's larger, show an alert to the user and reset the upload control.
If you want to view the source of all this, then I've set up a simple project that you can download here.
Just as a warning, there is a maximum upload size that you will not be able to override, although it'll depend on your setup. Essentially, when uploading, IIS will put the file you're uploading into memory before writing it to the hard disk, which means you can only use the amount of memory that the IIS worker process has available (usually about 1GB). For more information regarding this, have a look at this knowledge base article provided by Microsoft.
Happy Coding!
Wednesday, 25 May 2011
SSRS 2008 - Logged In User within Data Extension
For those of you that don't know what a Data Extension is, it essentially allows the developer to define how to retrieve data from various different data sources. Microsoft provide some of the core extensions; for example, extensions exist for Oracle and MS SQL Server databases. Because Microsoft have taken this modular approach, you, the developer, can build your own extension that defines how to connect to and retrieve data from a different type of data source, plug it straight into SSRS and you're good to go. To create a data extension, your classes need to implement specific interfaces and there's a bit of configuration file tinkering required; for more information regarding this, you can read this article.
In order to maintain this modular design, all the details about a particular instance of a data source are defined outside the extension and are then passed to the extension when the report is run. For example, the extension may require a username, password and/or connection string. These pieces of information are set up when the data source is first created within SSRS, or, if it's credential based, the user may be prompted just before the report is run.
However, I've recently come across the need to find out, from within a Data Extension, who the user currently running the report is. There are a variety of reasons you might want to do this. In our particular example, we have database-level security, so all our users have their own database user. We also use SSRS with Forms Authentication connecting to an Active Directory user store; the user logs in to SSRS using their Active Directory name, but the database user they connect as is different. The user is never aware of this and so does not know their database credentials. We needed a way of finding out which user was connected so that, using that information, we could look up their database credentials on the fly and run the report as that user.
At first, this seems a pretty simple problem. First of all, we create a new data extension for our data source type using the tutorial in the above link. Then, we just need to grab the user that's logged in. That should be pretty simple, right? After all, the whole of SSRS seems to run as a web application, so surely we can just use:
string user = HttpContext.Current.User.Identity.Name;
And in the majority of cases you would be correct: this works. However, when you actually go to run the report, HttpContext.Current is magically null and you start getting NullReferenceExceptions.
So, why is this?
After searching through many a DLL, I eventually found that, perhaps unsurprisingly, the application uses a separate thread to run the report. That separate thread obviously doesn't have access to HttpContext.Current. But, fortunately for us, threads also have a user associated with them, and that can be found with this piece of code:
string user = System.Threading.Thread.CurrentPrincipal.Identity.Name;
So, if you stick these two pieces of code together, you'll have a reliable way of getting the user that's currently running the report. Your final bit of code should look something like this:
string name;
if (HttpContext.Current != null)
    name = HttpContext.Current.User.Identity.Name;
else
    name = System.Threading.Thread.CurrentPrincipal.Identity.Name;
Now with this information, we can query a separate database, grab the database credentials and use them as the username and password of the data extension. Hey presto, everything works seamlessly without the user ever knowing.
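To give a rough idea of how that might hang together inside your extension, here's a minimal sketch. ReportUserCredentials and LookupDatabaseCredentials are hypothetical placeholders for however you map the SSRS user to their real database login; they're not part of SSRS itself.
using System.Data.SqlClient;
using System.Web;

internal static class ReportUserCredentials
{
    public static string BuildConnectionString(string baseConnectionString)
    {
        // Works both on the web front end (HttpContext available) and on the
        // background thread SSRS uses to actually execute the report.
        string reportUser = HttpContext.Current != null
            ? HttpContext.Current.User.Identity.Name
            : System.Threading.Thread.CurrentPrincipal.Identity.Name;

        string dbUser, dbPassword;
        LookupDatabaseCredentials(reportUser, out dbUser, out dbPassword);

        // Plug the real database login into the data source's connection string.
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(baseConnectionString);
        builder.UserID = dbUser;
        builder.Password = dbPassword;
        return builder.ConnectionString;
    }

    private static void LookupDatabaseCredentials(string reportUser, out string dbUser, out string dbPassword)
    {
        // Hypothetical lookup: in reality this would query your own credentials store.
        dbUser = "exampleDbUser";
        dbPassword = "examplePassword";
    }
}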
Yes, this does break some of the modular design of extensions but, in this particular scenario, it seems like the best, and only option.