Friday 30 September 2011

HTML5 - Offline Web Applications

As I'm sure some of you are aware, one of the more highly anticipated features of the HTML5 spec is the ability to make websites available offline. This is becoming more and more useful with the explosion of the mobile/tablet market where internet connectivity may just not be available.

I've now got a bit of experience in dealing with this part of the spec, so I thought I'd share a few things with you. For the most part, making your site available offline is pretty simple but, before we start, let me make one thing clear, mainly because this caught me out a bit...

HTML5 offline web applications only truly work with static content.

When you think about it, this makes perfect sense. Dynamic content usually requires a connection to a server, and if you're offline then that's not possible. What caught me out, though, is that even when there is a connection to the server (i.e. you do have your internet connection), it's still not possible to update your content. Well, not easily anyway.

So, why is this? Essentially, offline support works by the developer specifying which files should be loaded into a cache (the application cache, more about this later). Users hit the site for the first time, download all the files asked of them, and the files specified by the web developer are put into the browser's application cache. From that point on, every time the user visits the website, the browser checks its application cache for each and every file the site requires. If it finds a file in the application cache, it loads it from there; if not, it fetches it from the web server. So, if you're offline and the files required are in the browser's application cache, they'll be loaded from there, the web server will never be hit and there you have it: your website is available offline. However, this process happens regardless of whether you're offline or not, and that causes problems for dynamic content. Take this situation for example:
  1. User A goes to a website, and the file that contains the latest news story is put into the user's application cache.
  2. User A re-visits the site a few minutes later. The latest news story is loaded from the application cache but, as it hasn't changed, everything looks fine.
  3. User A visits the site a week later. The latest news story is loaded from the application cache; the web server still isn't hit. By now the story is thoroughly out of date, and your user is effectively looking at a snapshot of your website taken the first time they visited. This obviously isn't what you wanted.
There are ways to force the application cache to refresh (again, more about that in a bit), but it's not straightforward and requires the user to visit the website twice, which is less than ideal. So, only use this for content that will very rarely change.

Ok, now I've got that warning out of the way, let's go into detail about how to actually implement this.

The whole of HTML5 offline support revolves around getting files into the browser's application cache. To do this, you need to create a manifest file. What's a manifest file? Essentially, it's just a plain text file with a specific format that defines which files go into the application cache and which should be fetched from the web server (if available). A few details about the manifest file:
  • The manifest is referenced from the <html> tag of your web page, so, for example:


<html manifest="/cache.manifest">
<head>
...
</head>
<body>
...
</body>
</html>


  • The file must be served with a content type of text/cache-manifest. How you do this depends on what web server you're running. Personally, when using ASP.NET, I set up a new HTTP Handler to handle .manifest files and set the ContentType on the Response object to text/cache-manifest (there's a sketch of such a handler under scenario 1 below).
  • The first line of a manifest file must be CACHE MANIFEST
  • There are three different sections to a manifest file:
    • CACHE - This section defines files that will be added to the browser's application cache and will therefore be available offline.
    • NETWORK - This section defines files that will ALWAYS be loaded from the web server. If no network connection is available then these will error.
    • FALLBACK - If a resource can't be reached (you're offline and it isn't cached), this section specifies a fallback resource to use instead (there's a small example after the next code block).
Let's see an example of a valid manifest file now:


CACHE MANIFEST
CACHE:
/picture.jpg
/mystyle.css

NETWORK:
*


So, what's going on here? Well, the files picture.jpg and mystyle.css are both added to the application cache (note that the HTML page you're currently viewing is, by default, added to the cache too). Under the NETWORK section there's a * symbol. This is a special wildcard which effectively says "whatever isn't cached, go and fetch from the web server".
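The example above doesn't include a FALLBACK section, so here's a minimal sketch of one. It assumes you've created an /offline.html page to show in place of anything that can't be fetched:

CACHE MANIFEST
FALLBACK:
/ /offline.html

The first URL is a namespace prefix (here, the whole site) and the second is the resource to serve when anything under that prefix can't be fetched; fallback resources are cached automatically.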
And that's it, you've now got an offline web application.

But... when are things ever that simple to develop? There are a few more things you should know about developing offline web applications. I'm going to put a couple of scenarios to you and offer a solution to each:

Scenario 1: You've added a new file to your website and need it to be added to the application cache. How do you go about doing this?

Well, logic suggests you'd update your manifest file to include the new file and hey presto, it should be added. You're half right. The problem is that, as with all HTTP requests, browsers will try to cache the files they retrieve, and this is no different for manifest files. So, you'll update your manifest file but the user will never retrieve the new version because the browser has cached the old one.

To solve this, I made sure that the manifest file is never cached by the browser and, as I use an HTTP Handler to deliver the manifest file, that's easily accomplished with something like this:

// Mark the response as already expired so the browser always re-fetches the manifest.
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.MinValue);
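
For completeness, here's a minimal sketch of the kind of handler I mean. The class name and the physical file name are placeholders for your own setup (you'd also need to register the handler for .manifest requests in web.config):

using System.Web;

public class ManifestHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Serve the manifest with the content type the spec requires.
        context.Response.ContentType = "text/cache-manifest";

        // Make sure the browser never caches the manifest itself.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(System.DateTime.MinValue);

        // Write out the manifest; here it's just a file on disk.
        context.Response.WriteFile(context.Server.MapPath("~/cache.manifest.txt"));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}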

Scenario 2: The content of one of the cached files has changed. How do I force the user to re-download the new file?

A web browser will only re-fetch cached files when it detects a change in the manifest file. In this particular case, the manifest file hasn't changed, so how do you get around that? I simply use comments within the manifest file. So, taking our previous example:


CACHE MANIFEST
#Version 1
CACHE:
/picture.jpg
/mystyle.css

NETWORK:
*


You'll see I've added a version comment. Now, when the content of one of the cached files changes, I increment the version comment and hey presto: the browser detects the change and re-fetches all the files to be cached. Be warned, you'll still have the problem from scenario 1 though!
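
As an aside, if you're generating the manifest from an HTTP Handler as described in scenario 1, you can write the version comment in code, so bumping it becomes a one-line config change. A rough sketch, assuming a hypothetical ManifestVersion entry in your appSettings:

// Inside the handler's ProcessRequest, instead of writing a static file:
context.Response.Write("CACHE MANIFEST\n");
context.Response.Write("#Version " + System.Configuration.ConfigurationManager.AppSettings["ManifestVersion"] + "\n");
context.Response.Write("CACHE:\n/picture.jpg\n/mystyle.css\n\nNETWORK:\n*\n");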

And finally...
Just a few more things to bear in mind while you're developing:
  1. If, for some reason, one of the files you wish to cache cannot be downloaded then the whole caching process fails. This can be a bit of a pain when you're trying to track down problems.
  2. There are JavaScript events you can hook into to see what's going on. There's an applicationCache object on the window object that exposes useful methods and events (see here for more details and examples).
  3. To maximize the benefits of offline support, you could use local data storage to store data that could then be used offline and/or uploaded to a server when an internet connection is available. See the following for more information: Dive Into HTML5 - Storage for more information.
  4. While developing, I suggest you use Google Chrome as your browser. It provides some very useful tools that a developer can utilize for offline web application development, here's a couple I found particularly useful:
    1. If you hit F12 to bring up the developer tools and go to the Resources tab, at the bottom there's an Application Cache option. This lists all the files currently stored in the application cache for the site you're viewing, which should help you track down problems when downloading particular files for the application cache. (If they're not listed then something's gone wrong!)
    2. Within the address bar, if you type chrome://appcache-internals then Chrome will list all the applications it has stored within its application cache. It then gives you the very handy option of deleting an entry, meaning you can be sure that the next time you visit the site, new content will be fetched from the web server.
I've covered a fair amount here but, if you want further resources, I've found the Dive Into HTML5 website to be great for all things HTML5-esque. For their article on Offline Web Applications, try here.

And that's it from me for the time being.
Good luck!


Monday 1 August 2011

MS Office 2010, ActiveX and Microsoft.Office.Interop

Ok, so you want to create some sort of plug-in for your website that enables integration with Microsoft Office. Maybe you want to export some data into Excel or perform a mail merge with Word.

Microsoft Internet Explorer is the only browser you need to support, the only version of Office you need to support is 2010, and it only ever needs to run on 32-bit systems (ok, I know these conditions are unlikely, but stay with me...). So, you decide that the best way of doing this is to create an ActiveX control using the Microsoft.Office.Interop DLLs. You run and test it on your system and everything works fantastically well. You run and test it on other machines, all running different versions of IE and different operating systems, and still, everything works fine. Fantastic.

You release this shining light of coding to the great wide world and within five minutes one of your users logs a bug, "Export to excel doesn't work! I get an error!".

How can this possibly be? You've tested it, it works fine on your machine. You get the user to take a screenshot of the error, you have a look and the following error is reported:


System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80040154


What on earth is that? That doesn't happen on any of your test machines. After putting in a few debug statements and with help from the user in question, you track down the line causing the problem...


MSExcel.ApplicationClass excelApp = new MSExcel.ApplicationClass();


At this point, I suspect you've little hair left and still have no clue what's causing the problem. It was at this point in my investigation that, purely by accident, I came across something odd. I ran the ActiveX control on a system that didn't have MS Office installed and hey presto, I reproduced the error! But that doesn't make much sense; my user clearly has MS Office 2010 installed, so why can't my ActiveX control find it?

The answer is that in MS Office 2010, Microsoft have introduced a new "software delivery mechanism" called "Click-to-Run". I've only read the marketing blurb (found here) but, essentially, it virtualizes the program. How exactly Microsoft have implemented this, I have no idea. What I do know is that because of this virtualization, none of the DCOM components that the Microsoft.Office.Interop.Excel DLL uses have been installed, hence the error and why Excel can't be found.

For this to work, MS Office has to be installed in the standard way, not with Click-to-Run.
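
If you want the control to fail a little more gracefully, one option is to probe for Excel's COM registration before touching the interop types. This is just a defensive sketch, not the original control's code:

using System;

static class ExcelProbe
{
    // Returns an Excel Application instance, or throws a clearer error when
    // the COM class isn't registered (e.g. a Click-to-Run install of Office).
    public static object CreateExcel()
    {
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        if (excelType == null)
        {
            throw new InvalidOperationException(
                "Excel's COM components aren't registered; " +
                "Office may be a Click-to-Run installation.");
        }
        return Activator.CreateInstance(excelType);
    }
}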

I had many fun-filled hours tracking this down, so I hope this proves helpful for some of you out there.

Have Fun!


Thursday 23 June 2011

System.Web.HttpException - Maximum request length exceeded

If you're using ASP.NET WebForms and you want to allow a user to upload files to your web server, then I'm guessing you've used a FileUpload server control. The problem with the whole concept of "uploading files" is that if a user decides they want to be a pain, they could upload gigabytes' worth of files, eating up your server's hard drive and finally causing it to crash in a big heap.

Well, Microsoft aren't stupid; they realise this is a pretty big security implication and as such have put safeguards in place to prevent it. By default, the maximum size of an uploaded file is 4MB; any bigger and your application will throw the following exception:

System.Web.HttpException: Maximum request length exceeded.

Now that's all well and good, but there are a couple of problems with this which I'll address now.

Note: This is for use with Internet Information Services 6 (IIS 6). In IIS 7, Microsoft have changed how you set the maximum upload size. You can still use the rest of this article but, if you're using IIS 7, remember to change the relevant tags within the web.config file as described in this article.
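
For reference, in IIS 7 the limit also involves request filtering, configured under system.webServer rather than system.web. A minimal example (note the value is in bytes, so this is 6MB):

<system.webServer>
    <security>
        <requestFiltering>
            <requestLimits maxAllowedContentLength="6291456" />
        </requestFiltering>
    </security>
</system.webServer>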

Firstly, and most obviously, what do you do if you want the user to be able to upload more than 4MB? Well, that's pretty simple: you can override the default!
Within the web.config file, you can add/find the httpRuntime tag as follows...


<system.web>
        ...
        <httpRuntime
             maxRequestLength="4096" />
        ...
</system.web>


The maxRequestLength is the maximum upload size, in kilobytes. So, if you wanted to up it to 6MB, you'd enter the value 6144. If you are going to increase the limit, be careful: don't increase it to a very large number. If you do, you'll be leaving your website vulnerable; it'll only take one careless (or malicious) user to upload a few massive files and your web server will come crashing down.

Ok, so far so good. We've increased our maximum file upload size to 6MB but, what happens if a user does, accidentally or unknowingly, try to upload a file greater than 6MB? Well, currently, you'll get a 404 error (I know, weird eh?). The HTTP Runtime throws an exception, which prevents the server from sending a response. The browser, expecting a response, won't receive one, so it assumes the page has magically vanished, hence the 404. So, how can we get around this?

There are a few ways; I'm only going to discuss two, one server-side solution and one client-side. Ideally, they should be used together.

1. You can catch the error and show a custom made error page. To do this, within the Application_Error method in your global.asax, you can have something that looks like this:


protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex is HttpUnhandledException)
    {
        ex = ex.InnerException;
        if (ex != null && ex.Message.Contains("Maximum request length exceeded."))
        {
            this.Server.ClearError();
            this.Server.Transfer("~/MaxUploadError.aspx");
        }
    }
}

Where MaxUploadError.aspx is an error page you've set up describing the problem. 

Note: This doesn't work with the development server built into Visual Studio so, when testing, you'll still get your 404 error. It will work when you deploy to IIS, or if you have Visual Studio hooked into IIS.

2. You can use HTML 5! Before HTML 5, JavaScript was unable to interrogate files on the user's computer, for obvious security reasons. With HTML 5, you can now query information about a file, given that the user has selected it for upload. Ok, so this is only going to work in the latest and greatest browsers that support the File API part of the HTML 5 specification but, where available, it should be used. It'll save your server from having to deal with an extra, possibly time-consuming request and it'll give your user an immediate response. They won't be directed to some error page where they'll then have to go back and re-submit a file; they can just change the file there and then and re-submit.

Ok, so to demo this, I'm going to assume you have a FileUpload control defined within your aspx like so:


<asp:FileUpload runat="server" ID="fileUpload" />


Now, within your Page_Load method within your code behind, you can add the following:


protected void Page_Load(object sender, EventArgs e)
{
    // Read the configured limit so the client-side check always matches web.config.
    // (Requires using System.Configuration and System.Web.Configuration.)
    HttpRuntimeSection section = ConfigurationManager.GetSection("system.web/httpRuntime") as HttpRuntimeSection;

    // The single-quoted section breaks out of the JavaScript string so the
    // user's actual file size is concatenated in at run time.
    string errorMessage = "Sorry, you cannot select this file for upload.\\r\\n\\r\\nMaximum file size " + section.MaxRequestLength + "Kb, your file is ' + (event.target.files[0].size/1024) + 'Kb.";

    // Only run the check when the browser supports the HTML 5 File API
    // (event.target.files is undefined otherwise).
    string script = "if(event.target.files){ if(event.target.files.length == 1 && (event.target.files[0].size/1024) > " + section.MaxRequestLength + "){ alert('" + errorMessage + "'); event.target.value = ''; } }";

    this.fileUpload.Attributes.Add("onchange", script);
}


And that's it: basic HTML 5 support with minimum fuss. A small, but hopefully effective, method of validating the size of your users' uploads! We check that the browser supports the File API, grab the file size in kilobytes, compare it to the configured maximum and, if it's bigger, show an alert to the user and reset the upload control.

If you want to view the source of all this, I've set up a simple project that you can download here.

Just as a warning, there is a maximum upload size that you will not be able to override, though it'll depend on your setup. Essentially, when uploading, IIS will put the file you're uploading into memory before writing it to the hard disk, which means you can only use the amount of memory that the IIS worker process has available (usually about 1GB). For more information, have a look at this knowledge base article provided by Microsoft.

Happy Coding!

Wednesday 25 May 2011

SSRS 2008 - Logged In User within Data Extension

For those of you that don't know what a Data Extension is, it essentially allows the developer to define how to retrieve data from different types of data source. Microsoft provide some core extensions; for example, extensions exist for Oracle and MS SQL Server databases. Because Microsoft have taken this modular approach, you, the developer, can build your own extension that defines how to connect to and retrieve data from a different type of data source, plug it straight into SSRS and you're good to go. To create a data extension, your classes need to implement specific interfaces, and there's a bit of configuration file tinkering required; for more information, you can read this article.

In order to maintain this modular design, all the details about a particular instance of a data source are defined outside the extension and are then passed to the extension when the report is run. For example, the extension may require a username, password and/or connection string. These pieces of information are set up when the data source is first created within SSRS, or, if it's credential based, the user may be prompted just before the report is run.

However, I've recently come across the need to find out, from within a Data Extension, who the user currently running the report is. There are a variety of reasons you might want to do this. In our particular case, we have database-level security, so all our users have their own database user. We also use SSRS with Forms Authentication connecting to an Active Directory user store; the user logs in to SSRS with their Active Directory name, but the database user they connect as is different. The user is never aware of this and so doesn't know their database credentials. We needed a way of finding out which user was connected so that, using that information, we could look up their database credentials on the fly and run the report as that user.

At first, this seems a pretty simple problem. First of all, we create a new data extension for our data source type using the tutorial in the above link. Then, we just need to grab the user that's logged in. That should be pretty simple, right? After all, the whole of SSRS seems to run as a web application, so surely we can just use


string user = HttpContext.Current.User.Identity.Name;


And in the majority of cases you'd be correct; this works. However, when you actually go to run the report, HttpContext.Current is magically set to null and you start getting NullReferenceExceptions.

So, why is this?

After searching through many a DLL, I eventually found that, perhaps unsurprisingly, the application uses a separate thread to run the report. This separate thread obviously doesn't have access to HttpContext.Current. Fortunately for us, threads also have a user associated with them, and that can be found with this piece of code:


string user = System.Threading.Thread.CurrentPrincipal.Identity.Name;


So, if you stick these two pieces of code together, you'll have a reliable way of getting the user that's currently running the report. Your final bit of code should look something like this:


string name;
if (HttpContext.Current != null)
    name = HttpContext.Current.User.Identity.Name;
else
    name = System.Threading.Thread.CurrentPrincipal.Identity.Name;


Now, with this information, we can query a separate database, grab the user's database credentials and use them as the username and password of the data extension. Hey presto, everything works seamlessly without the user ever knowing.
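
To make that concrete, here's a rough sketch of the lookup. The table, column names and connection string are all hypothetical stand-ins for our setup; the point is just that the name resolved above drives the query:

using System.Data.SqlClient;

static class CredentialStore
{
    // Hypothetical lookup: maps the SSRS login name to database credentials.
    public static void GetDatabaseCredentials(string reportUser, out string dbUser, out string dbPassword)
    {
        using (SqlConnection conn = new SqlConnection("<credentials db connection string>"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT DbUser, DbPassword FROM UserCredentials WHERE LoginName = @login", conn))
        {
            cmd.Parameters.AddWithValue("@login", reportUser);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                dbUser = reader.GetString(0);
                dbPassword = reader.GetString(1);
            }
        }
    }
}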

Yes, this does break some of the modular design of extensions but, in this particular scenario, it seems like the best, and only option.

Thursday 28 April 2011

SSRS 2008 Release 2 Update 5

I said in my last post that after updating my SSRS R2 version to update 5, I started getting problems when running reports from the Reports Manager. In particular, I got the following JavaScript error:

"Type Microsoft.Reporting.WebFormsClient.ReportViewer has already been registered. The type may be defined multiple times or the script file that defines it may have already been loaded."

Once this error has occurred, there's a knock-on effect which causes a few more JavaScript errors, and the final result is that any parameters that need to be entered can't be. All the parameter fields are disabled.

After a bit of research, I found the cause of this problem and a workaround. I'm now going to go through how I tracked down the cause and the fix I've implemented. If you want to skip the investigation, the source code can be found here.

So, going by the error, it seems that the JavaScript type Microsoft.Reporting.WebFormsClient.ReportViewer has been defined twice, so, it makes sense that that's where we should start. Let's try and find the script(s) that define that type.

I'm using Microsoft Internet Explorer 9, which includes a "Developer Tools" feature. This now includes a "Network" tab which, when capturing, shows which files are requested by the browser, how those files are retrieved (POST, GET etc.) and the header and body of each individual request and subsequent response. Using this feature when we click on the report, we get something that looks like this:

[Screenshot from the original post: the IE 9 Network tab while running a report, with two requests for Reserved.ReportViewerWebControl.axd highlighted.]

I've highlighted the two rows that look interesting, just by looking at the name of the resource, it seems like these two could be useful. So, if we look at the response body of those two requests, we find the following: ReportViewer.js and ViewerScript.

If you open those and search for "Microsoft.Reporting.WebFormsClient.ReportViewer", you'll notice that they both create that type. As ReportViewer.js is the first to load, it creates the type first; then ViewerScript runs and tries to create it again. An exception is thrown and then we start getting all our problems.

So, how do we fix this? Well, if we look closely at the address, we'll see that it requests the file Reserved.ReportViewerWebControl.axd. Axd? That seems like an odd file extension. Well, it is; it's normally used for ASP.NET HTTP Handlers, and this is no different. If you check the web.config file for the Report Manager, you'll find this:


<httpHandlers>
      <add verb="*" path="Reserved.ReportViewerWebControl.axd" type="Microsoft.Reporting.WebForms.HttpHandler, ReportingServicesWebUserInterface, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />      
      <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" validate="false" />
</httpHandlers>


The first add tag clearly maps the path Reserved.ReportViewerWebControl.axd to the type Microsoft.Reporting.WebForms.HttpHandler. For those of you that haven't come across HttpHandlers before, they're a nifty way of serving resources that have been compiled into a DLL. The class, in this case Microsoft.Reporting.WebForms.HttpHandler, has to implement the IHttpHandler interface, which defines a ProcessRequest method. This method allows you to return anything you want back to the browser; it could be a resource from a DLL or some text that's generated programmatically on the fly. As long as there's a mapping for it within the httpHandlers section of your web.config, every time a request is made to the path defined, instead of looking for a physical file on your web server, the class gets called instead. I've skimmed over HTTP handlers in the hope that you understand their purpose; for more information, see here: http://msdn.microsoft.com/en-us/library/bb398986.aspx#Features

So, how can we use this knowledge to fix our little problem? Well, we know the contents of the JavaScript files that are causing the problem and, in particular, we know it's the type defined in ViewerScript that's being problematic. So, the plan is this: make a copy of that file with the Microsoft.Reporting.WebFormsClient.ReportViewer type removed, then change the http handler mapping to point to a handler of our own that, when asked for ViewerScript, returns our modified copy rather than the original. Do that, and we might just have a fix.

It's not quite that easy though. The path defined within the web.config can only contain just that, the path; it cannot contain any of the query string. So, we'll need our http handler to function just as the original did except when the ViewerScript is requested. We can do this pretty easily by wrapping the original inside our own. To very quickly walk you through this: create a new class library project, then add a new Resource.resx file to the project. Next, create a new JavaScript file and call it ViewerScript.js, copy into it the contents of the original ViewerScript minus the Microsoft.Reporting.WebFormsClient.ReportViewer type, and add the file to Resource.resx. Finally, you need to add a reference to the ReportingServicesWebUserInterface DLL, which you can pull directly out of the Report Manager bin directory. This will give you access to the original HttpHandler that Microsoft used.

If all that's been set up correctly, then you can create a new class and copy and paste the following:


using System.Web;

namespace SSRSR2
{
    public class HttpHandler : IHttpHandler
    {
        private Microsoft.Reporting.WebForms.HttpHandler handler;
        public HttpHandler()
        {
            handler = new Microsoft.Reporting.WebForms.HttpHandler();
        }

        public void ProcessRequest(HttpContext context)
        {
            if (context.Request.QueryString["Name"] != null && context.Request.QueryString["Name"] == "ViewerScript")
            {
                context.Response.Write(Resource.ViewerScript); // the modified script from our Resource.resx
                context.Response.End();
            }
            else
            {
                handler.ProcessRequest(context);
            }
        }

        public bool IsReusable
        {
            get { return handler.IsReusable; }
        }
    }
}


Essentially, this just creates a new instance of the original http handler. If the name of the resource requested isn't "ViewerScript", it passes the request on to the original handler and lets that deal with it, ensuring that the functionality between the original handler and ours is exactly the same. When "ViewerScript" is requested, we grab our modified ViewerScript from our Resource file and return that instead.

With all that done, compile everything and place the resulting DLL into the bin directory of the Reports Manager. Then all we have to do is change the http handler mapping within the web.config and we should be good to go.

Assuming the name of your DLL is "ViewerFix" then your http handler tag within the web.config file should look like this:


<httpHandlers>
      <add verb="*" path="Reserved.ReportViewerWebControl.axd" type="SSRSR2.HttpHandler, ViewerFix" />
      <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" validate="false" />
</httpHandlers>


And that's it. Restart your report server service and you should now be able to enter parameters within the Reports Manager and actually run reports.

Just in case you didn't follow the above, all the source code can be found here; feel free to use it as you please.

Now, I imagine this error isn't something that'll be around for long; it's too big an error for Microsoft not to address quickly. But if you need a solution now and can't hang around, here you go.

Enjoy!

Sunday 24 April 2011

SSRS 2008 - Enabling Forms Authentication with ActiveDirectoryMembershipProvider

SQL Server Reporting Services is a very powerful reporting tool; however, by default, it uses Windows Authentication (NTLM) to authenticate users. This is fine if you're running over an intranet but, if you're not, you need to do a little bit of work to enable forms authentication.

Fortunately, SSRS does support forms authentication, in the form of a Security Extension. To give you some background, SSRS allows you, the developer, to extend its functionality by means of extensions. There are four main extension types:

  1. Data Processing Extension - This allows you to define how to access data from a specific data source type not currently supported by SSRS.
  2. Delivery Extension - Once a report has been generated it can be "delivered" to various locations, for example, it can be sent to someone via an e-mail address. This allows you to code for locations not currently supported by SSRS.
  3. Rendering Extension - Currently, you can render an SSRS report in a wide array of formats; PDF and HTML are just two examples. A rendering extension allows you to extend this to support formats that aren't currently supported by SSRS.
  4. Security Extension - This is the one we're interested in here. Security extensions allow you to precisely define how a user is authenticated and what permissions that user has. By default, this is set up to work with Windows authentication.
With the ability to write our own security extension, we can set up SSRS to use forms authentication rather than Windows authentication. Fortunately, Microsoft provide a very good example of how to do this, which can be found here: http://msdn.microsoft.com/en-us/library/aa902691(SQL.80).aspx

This sample, however, does not tell you how to use a MembershipProvider. I imagine the one most people will want to use is the ActiveDirectoryMembershipProvider, which validates the username and password provided by a user against an Active Directory membership store. In my particular case, I want to validate against an Active Directory store but also perform a little extra validation, so I'm going to extend the ActiveDirectoryMembershipProvider to achieve this.

Fortunately, this is all very simple to set up. As all authentication is done via a web service, you, as a developer, can treat it as its own web application. So, with a couple of entries in the report server's web.config file, a couple of lines of code in your security extension and a class that extends the ActiveDirectoryMembershipProvider, you're good to go. Here are the changes required.

First off, let's change the web.config file. We need to create a connection string that points to our Active Directory store. To do this, just above the system.web tag, we add the following:


<connectionStrings>
    <add name="ADConnectionString" connectionString="LDAP://SERVERNAME:389" />
</connectionStrings>


Following on from the Microsoft sample, you should have changed the authentication tag to look something like this:


<authentication mode="Forms" >
<forms loginUrl="logon.aspx" name="sqlAuthCookie" timeout="60" slidingExpiration="true" path="/" />
</authentication>
<authorization> 
    <deny users="?" />
</authorization>


Under this tag, you'll need to add the following:


<membership defaultProvider="MembershipADProvider">
    <providers>
        <add
          name="MembershipADProvider"
          type="MyNamespace.CustomADMembershipProvider, CustomADMembershipProvider"
          connectionStringName="ADConnectionString"
          connectionUsername="DOMAIN\admin"
          connectionPassword="Password"
          enableSearchMethods="true"
          attributeMapUsername="sAMAccountName"
          connectionProtection="None"/>
    </providers>
</membership>


Obviously, you'll need to change the connection username and password to an admin account with the correct permissions to read from the membership store.
That registers the membership provider with the web application so that we can then access it from code.

You'll notice I've changed the type to refer to our custom membership provider, which we've yet to write. Let's do that now...


using System.Web.Security;

namespace MyNamespace
{
    public class CustomADMembershipProvider : ActiveDirectoryMembershipProvider
    {
        public override bool ValidateUser(string username, string password)
        {
            // First, check the credentials against the Active Directory store.
            bool isValid = base.ValidateUser(username, password);
            if (isValid)
            {
                // Extra validation; for example, maybe we don't want anyone with
                // the username "BadUser" to have access.
                if (username.ToUpper() == "BADUSER")
                {
                    isValid = false;
                }
            }
            return isValid;
        }
    }
}



Just to give you a quick run-down of what's going on: we're extending the ActiveDirectoryMembershipProvider so we can use the base implementation to validate against the Active Directory store we defined in the web.config file, while overriding the ValidateUser method so we can add our own custom validation code. The first line of ValidateUser just ensures that the user is a valid Active Directory user. If they are, we perform our custom validation; in this code I've given a very simple example which says that if the username is some form of "BadUser" then we should not allow them access.

If we then compile that class and put the resulting DLL file into the bin directory of the ReportServer directory, it will now be accessible from the ReportServer.

Finally, we need to modify the security extension to use this membership provider. By this stage, I'm assuming you've at least read over the Microsoft sample on how to enable forms authentication. If so, you'll know what I mean when I say we need to modify the LogonUser method of the authentication extension. This is the method that all logons go through; it doesn't matter how you're logging on to the ReportServer, be it through a web service or through Report Builder, this method will always be hit. We need to modify it so it uses our CustomADMembershipProvider. Now that we've modified the web.config file and placed our CustomADMembershipProvider DLL in the bin directory of the ReportServer, this is very simple; in fact, it's so simple it only requires a single line of code, as shown below.


public bool LogonUser(string userName, string password, string authority)
{
    return Membership.ValidateUser(userName, password);
}


And that's it; in theory, you're good to go. With SSRS 2008 Release 1, this worked first time. However, in Release 2 it didn't, and I had to install an update (I installed Cumulative Update 5). Doing this opened up a few more problems which I had to overcome before I could actually run reports from the Reports Manager. More about that in my next blog!

Sunday 17 April 2011

HTML 5 - Drag and Drop


So, every developer loves looking into new things, right? Well, I'm no different, so with HTML 5 predicted to be the hot new technology on the block, I thought it only right to take a bit of time to look into it. Every now and again I'll be posting information about the new options HTML 5 gives us, and today I'm going to start with the drag and drop specification.

At the time of writing, I have three browsers installed on my computer: Internet Explorer 9, Firefox 4 and Google Chrome 11. Drag and drop is currently only supported by two of these, Firefox and Chrome, so if you're not using one of those, none of the demos in this post will work.

So, the HTML 5 specification provides us with seven new JavaScript events to listen for:
  • dragstart
  • drag
  • dragenter
  • dragleave
  • dragover
  • drop
  • dragend
There's also a new attribute for HTML elements called draggable; just setting it to true will make an element draggable, for example:

[Live demo from the original post: a draggable element reading "Try dragging me, you should be able to move me around, although you can't drop me anywhere."]

So, now we've made an element draggable, we need to be able to drop it somewhere. Using the events above, we can do just that. But first, let me quickly describe what each event is used for.

dragstart
As with most of these events, it does exactly what it says on the tin. This event fires when you first attempt to drag the element to which the event is attached. Returning true will enable the drag; returning false won't.

drag
Fires while you're dragging something. Essentially the same as onmousemove but, obviously, it only fires while you're dragging something.

dragenter
Fires when you first drag an element over the target element to which this event is attached. Return false if the target element is a drop zone.

dragleave
Fires when your mouse leaves the element to which this event is attached while dragging another element.

dragover
Fires as you drag an element over the target element to which this event is attached. A little oddly, you need to return false if the target element is a drop zone.

drop
Fires when the user releases the mouse button while dragging over the target element that has this event attached, effectively dropping the dragged element.

dragend
Essentially the same as the drop event, in that it fires when the user releases the mouse button while dragging, except this event is usually attached to the element being dragged rather than to the drop zone element.

Now we know what all the events are used for, we can put together a clever combination and come up with a simple demo.

[Live demo from the original post: a red box reading "I'm draggable between the two grey boxes." sits in one of two grey drop zones; the markup for it is below.]

So, let's look at the HTML for this...

<table border="0" cellpadding="10" cellspacing="10" style="width: 100%;">
  <tbody>
     <tr>
       <td style="text-align: center;" width="50%">
          <div id="zoneOne" ondragenter="return dragEnter(event);" ondragover="return dragOver(event);" ondrop="return dragDrop(event);" style="background-color: grey; height: 100px; padding: 5px; text-align: center; width: 100%;">
              <div id="dragObj" draggable="true" ondragend="return dragEnd(event);" ondragstart="return dragStart(event);" style="background-color: red; margin: 5px; padding: 10px; width: 50%;">
I'm draggable between the two grey boxes.
              </div>
          </div>
       </td>      
       <td width="50%">
           <div id="zoneTwo" ondragenter="return dragEnter(event);" ondragover="return dragOver(event);" ondrop="return dragDrop(event);" style="background-color: grey; height: 100px; padding: 5px; text-align: center; width: 100%;">
           </div>
       </td>   
    </tr>
  </tbody>
</table>

The drag and drop related markup (originally highlighted in red) is the draggable attribute and the ondragstart/ondragend/ondragenter/ondragover/ondrop handlers. There's nothing special here: we mark our draggable element by setting its draggable attribute to true, and the rest is just event mapping. We map the dragstart and dragend events on the element that we're going to be dragging around the screen, then map the dragenter, dragover and drop events on the drop zone elements. So, what do those mappings do?

Here's the code for them:

  function dragEnter(ev){
    return false;
  }
  function dragDrop(ev){
    var idelt = ev.dataTransfer.getData("Text");
    var elem = document.getElementById(idelt);
    ev.target.appendChild(elem);
    ev.stopPropagation();
    return false;
  }
  function dragOver(ev){
    return false;
  }
  function dragStart(ev){
    ev.dataTransfer.effectAllowed='move';
    var id = ev.target.getAttribute('id');
    ev.dataTransfer.setData("Text", id);
    return true;
  }
  function dragEnd(ev){
    ev.dataTransfer.clearData("Text");
    return true;
  }

Now, a quick walkthrough of each function...

  • dragEnter always returns false. There are no conditions under which I don't want the draggable item to be droppable within the area defined.
  • The dragOver function does the same as dragEnter, for the same reason.
  • The dragStart function sets the effectAllowed property of the dataTransfer object. This defines what the drag and drop operation is actually allowed to do; in this case we say we can move the element. Then we set the data we want to carry with the drag, in this case just text: the ID of the element we're dragging around.
  • The drop function (dragDrop) then grabs the ID stored in dragStart, finds the element with that ID and appends it to our drop zone, effectively moving it from one zone to another.
  • The dragEnd function just clears out the ID we were storing so it doesn't interfere with any future drag and drop operations.

As you've seen, I've made use of the dataTransfer object. This object only exists on the event object when we're dealing with a drag and drop operation, and it essentially stores information about the operation as it happens. You can set the effectAllowed property, which defines what effects are allowed within the drag and drop operation, and the getData and setData methods let us store information about the operation, essentially saving us from having to define extra global variables that all the functions need access to. For more information on the dataTransfer object and its members, take a look here.

Ok, well, that's it for drag and drop. The next HTML 5 feature I'll be looking at, which is related, is the HTML 5 File API, which will effectively allow users to drag files from their desktop straight onto your web application, uploading the file to your web server in the process. I believe Gmail now supports this for uploading file attachments, and it's all very clever stuff. More on that at a later date!

Sunday 27 March 2011

More Web-Optimization

Ok, so we've covered the basics of making the page size as small as possible.

Now on to the more obscure time saving methods!

CSS Placement
Always ensure your CSS styles, be they inline or external references, are placed inside the HEAD tag of your HTML page. If a web browser encounters an element whose style information it hasn't loaded yet, it may delay rendering that element until it's parsed the entire HTML page, to be sure it won't have to re-draw the element later. If you've referenced your styles inside the HEAD tag, the browser has the required information up front; if you haven't, it doesn't.

CSS @Import 
This statement in CSS allows you to reference another stylesheet from your original stylesheet. While that sounds great, it unfortunately has the side effect of essentially behaving like a stylesheet reference at the bottom of your HTML page, which, as we've just covered, is a bad thing. Instead, just add another stylesheet reference directly within the HEAD of the HTML or, as we'll cover in a second, combine the two stylesheets.

JavaScript Placement
Try to ensure all your JavaScript files are referenced at the bottom of your HTML page. Unfortunately, scripts block parallel downloads, so placing them at the end of the page, after everything else has been downloaded, prevents them from blocking anything useful.

Make CSS and JavaScript External
If you make CSS and JavaScript external, the web browser can cache the relevant files so, on the next page load, it can read each file straight from disk rather than going off to the web server to fetch it. Not only will this lower the load on your web server, it'll also save time; loading from a local disk is a lot faster than fetching a file across the internet! Be warned though... if your website runs over HTTPS, some browsers won't cache the files to disk!

Reduce HTTP Requests
Every time you request a CSS file, a JavaScript file, an image, or just about anything that isn't within the plain HTML, an HTTP request has to be made for that file. There's a performance overhead with each one, so reducing them should speed things up. Try combining all of your CSS files into one and all of your JavaScript files into one. As for images, try the Sprite and Image Optimization Framework produced by Microsoft. Essentially, it combines all of your images into one large image and then, using CSS, displays only portions of that large image, so to your users each image still seems separate. Pretty fancy stuff and again, it reduces the number of HTTP requests!
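
If you'd rather not combine the script files on disk, one way to do it on the fly in ASP.NET is a small HTTP handler. Everything here (the class name, the file paths, the registered path) is a hypothetical sketch, not a specific library:

using System.Web;

// Serves several script files as a single response, so the page only
// makes one HTTP request instead of one per file.
public class CombinedScriptHandler : IHttpHandler
{
    private static readonly string[] Scripts =
        { "~/scripts/one.js", "~/scripts/two.js" }; // placeholder file names

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";
        foreach (string script in Scripts)
        {
            context.Response.WriteFile(context.Server.MapPath(script));
            context.Response.Write("\n");
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}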

Reduce DNS Lookups
If your resources are spread across different servers, then a DNS lookup needs to be performed for each different server you fetch from. For those of you that don't know what that is, it's essentially the process of finding out the IP address for a given domain name (e.g. microsoft.com -> 65.55.12.249). There's an overhead with this lookup, so reducing the number of lookups will again improve performance. With that said, however, a web browser can only download a certain number of files in parallel from a given server (in IE 7 this is limited to 2 files at any one time; I think in IE 8 it's been increased to 6), so putting resources on different servers enables the user's web browser to download more files at a given time. Obviously there's a trade-off here: the more servers you spread your resources over, the more you can download at any one time, but the larger the DNS lookup time penalty.

Reduce 404 Errors
There's really no need to be getting a 404 error for any resource you may, or may not, require. It might not even break anything but, with a 404, you've paid the cost of an HTTP request that achieved absolutely nothing, and as I covered earlier, the fewer HTTP requests, the better.

Turn Debugging Off
This is an ASP.NET specific performance improvement. Within your web.config file, there will be something like this line:

<compilation defaultLanguage="c#" debug="true">

Make sure debug="false". When it's set to true, several things happen. Firstly, extra debug information is produced for each aspx page compiled, which will slow down your website. However, I've found that the bigger performance problem is the extra JavaScript validation that runs, especially if you're using the Microsoft AJAX framework. In one instance, just by turning debugging off, a page that was taking 18+ seconds to load was reduced to 2.

Ok, and that's about all I can think of for the time being. Website performance optimization is a huge subject with many a web page devoted to it. Personally, I find Yahoo's research on this invaluable, so if I were you I'd check out the guide they've produced. It covers everything above and more. Yahoo also make some pretty awesome tools to help with this; specifically, I've used the .NET port of their compressor, which is one of the best I've come across. If you've got any other tips that aren't covered here or in Yahoo's guide, feel free to let me know, I'd love to hear them!

Sunday 13 March 2011

HTTP Compression

Ok, so in my last post I said that minimizing the amount of data sent across the wire is a sure way of improving performance.

Well, there's a very simple way of doing this which I haven't discussed yet and that's by enabling HTTP compression.

HTTP compression is a completely lossless way of making your data take up less space. There are two main forms of HTTP compression: GZip and Deflate. Both are supported by virtually all of the main browsers nowadays, so which one you choose is completely up to you but, from my research, GZip seems to be the more popular.

So, how do you enable HTTP compression? Well, there's two ways:
  1. You can do it within IIS (See here for instructions on how to do that: MSDN)
  2. If you don't have access to IIS then you can do it in code using our friend Response.Filter. To do this, just place the following code within your Application_BeginRequest method in your global.asax class:


void Application_BeginRequest(object sender, EventArgs e)
{
    if (Request.Headers["Accept-encoding"] != null
        && Request.Headers["Accept-encoding"].Contains("gzip"))
    {
        Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress, true);
        Response.AppendHeader("Content-encoding", "gzip");
    }
    else if (Request.Headers["Accept-encoding"] != null
             && Request.Headers["Accept-encoding"].Contains("deflate"))
    {
        Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress, true);
        Response.AppendHeader("Content-encoding", "deflate");
    }
}


So, what we're doing here is checking whether the web browser supports GZip compression; if so, we set up a new GZipStream, which compresses our output before sending it to the client. If the browser doesn't support GZip, we check for Deflate support and, if that's there, we use that instead. If neither is supported, we just send the data back uncompressed.
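
One small refinement worth considering (my addition, not part of the original snippet): if proxies sit between you and your users, it's good practice to declare that the response varies by the Accept-encoding header, so a cached compressed copy is never served to a browser that can't decompress it. Inside each branch, after appending the Content-encoding header:

Response.AppendHeader("Vary", "Accept-Encoding");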


All very simple so there's no excuse not to use it!

My next post will continue in the same web-optimizing vein, where I'll discuss other, lesser known methods of speeding up performance of web pages.