Wednesday, October 24, 2007

Custom Authentication with Community Server

You may have scrolled down to the bottom of one of our blog posts over at Infragistics at some point and noticed the little Community Server logo in the page footer.  We've been using Community Server to host our blogs for some time now, and it's such a great platform that we're looking to leverage it a bit more.  To do what we're thinking of, you guys will need to be able to log in...  and what kind of user experience is it for you to have two accounts?  Enter Single Sign On (SSO), and lucky for us, Telligent sells a custom module that does just that.  Setting it up was a cinch: just drop the assembly in, tell Community Server which cookie to use, and make sure that your main site (the authenticating site) is setting the cookie correctly.  And, it really was that simple!  Well, almost...

The out-of-the-box solution did work great for almost all scenarios.  There were, however, a few problems we had:

Cookie Timeouts

Unfortunately, it doesn't seem like the custom cookie authentication module can be configured to provide any type of sliding expiration behavior to the cookie.  That means that after a user logs in on the main site and the cookie is created, they only have as long as the cookie lives to be "singlely signed on" and once it expires, they'd be redirected back to the main site to re-authenticate themselves.  The most obvious and easiest fix would be to just set no expiration date on the cookie.  Sure - that'd work, but I prefer the security of having users automatically signed out after a period of inactivity, at least by default. 

So, I decided to write a module to check for the cookie on each request, and update the expiration date accordingly.  BAM - sliding expiration.  Below is the code I used to do it:

Cookie Manager

public class CookieManager : IHttpModule
{
    // Lifetime of sliding expiration (in minutes)
    private const int COOKIE_LIFETIME = 20;

    #region CookieDomain
    private static string _CookieDomain;

    /// <summary>
    /// Gets or sets the cookie domain.
    /// </summary>
    /// <value>The cookie domain.</value>
    public static string CookieDomain
    {
        get
        {
            if (String.IsNullOrEmpty(_CookieDomain))
            {
                try
                {
                    string cookieDomain = null;
                    // If no domain name was specified, try to guess one
                    string hostname = HttpContext.Current.Request.Url.Host;
                    // Try to get the domain name (the last two parts of the hostname)
                    string[] hostnameParts = hostname.Split('.');
                    if (hostnameParts.Length >= 2)
                        cookieDomain = String.Format("{0}.{1}",
                            hostnameParts[hostnameParts.Length - 2],
                            hostnameParts[hostnameParts.Length - 1]);
                    _CookieDomain = cookieDomain ?? hostname;
                }
                catch { /* We don't really care if this doesn't work... */ }
            }

            // Don't allow an empty domain
            if (String.IsNullOrEmpty(_CookieDomain)) _CookieDomain = "localhost";

            return _CookieDomain;
        }
        set { _CookieDomain = value; }
    }
    #endregion

    #region SSOCookieName
    private static string _SSOCookieName;
    /// <summary>
    /// Gets the name of the SSO cookie that the 
    /// Custom Authentication module is using.
    /// </summary>
    /// <value>The name of the SSO cookie.</value>
    public static string SSOCookieName
    {
        get
        {
            if (_SSOCookieName == null)
            {
                // NOTE: the receiver of .Extensions was lost from the original
                // listing; CSConfiguration.GetConfig() is a best guess at the
                // Community Server configuration API for your version
                Telligent.Components.Provider authProvider =
                    CSConfiguration.GetConfig()
                        .Extensions["CustomAuthentication"] as Telligent.Components.Provider;

                if (authProvider != null)
                    _SSOCookieName = authProvider.Attributes["authenticatedUserCookieName"];
                else
                    throw new ApplicationException(
                        "Could not find Community Server provider 'CustomAuthentication'.");
            }
            return _SSOCookieName;
        }
    }
    #endregion

    private void KeepSSOCookieAlive(HttpContext context)
    {
        // See if we've got an SSO cookie
        HttpCookie cookie = context.Request.Cookies[SSOCookieName];
        if (cookie != null)
        {
            // Make sure the domain name is set
            cookie.Domain = CookieDomain;
            // Update the cookie's expiration (use sliding expiration)
            cookie.Expires = DateTime.Now.AddMinutes(COOKIE_LIFETIME);
            // Send the cookie back
            context.Response.Cookies.Set(cookie);
        }
    }

    #region IHttpModule Members
    /// <summary>
    /// Disposes of the resources (other than memory) used 
    /// by the module that implements <see cref="T:System.Web.IHttpModule"></see>.
    /// </summary>
    public void Dispose() { }

    /// <summary>
    /// Inits the specified application.
    /// </summary>
    /// <param name="application">The application.</param>
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(application_BeginRequest);
    }

    void application_BeginRequest(object sender, EventArgs e)
    {
        KeepSSOCookieAlive((sender as HttpApplication).Context);
    }
    #endregion
}

There are a few things in this class I'd like to point out:

  • The most important method, obviously, is KeepSSOCookieAlive().  It's the heart of the module, but pretty straightforward.
  • I decided to try to determine the CookieDomain programmatically, but didn't get into too complex of an operation to find the correct domain.  If for whatever reason the guess didn't work in production, you could easily override it by setting the value in AppSettings.
  • I'm getting the actual cookie name used by the provider - by retrieving the provider itself and asking it which cookie it's looking for - so that there won't be any confusion.
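To make that AppSettings override idea concrete, here's a minimal sketch of the domain-resolution logic pulled out on its own (the names here are mine, not part of the module above): it prefers an explicitly configured value - e.g. something you read from ConfigurationManager.AppSettings - and only falls back to guessing from the hostname.

```csharp
using System;

public static class DomainResolver
{
    // Resolve the cookie domain: use the configured value if one was
    // provided, otherwise guess the last two labels of the hostname,
    // and finally fall back to the raw hostname (or "localhost")
    public static string Resolve(string configured, string hostname)
    {
        if (!String.IsNullOrEmpty(configured))
            return configured;

        string[] parts = (hostname ?? "").Split('.');
        if (parts.Length >= 2)
            return parts[parts.Length - 2] + "." + parts[parts.Length - 1];

        return String.IsNullOrEmpty(hostname) ? "localhost" : hostname;
    }
}
```

Wiring it up would just mean calling Resolve(ConfigurationManager.AppSettings["SSOCookieDomain"], request.Url.Host) in the CookieDomain getter - where "SSOCookieDomain" is whatever key you decide to define.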

News Gateway Authentication

Ok, great - we've got the SSO for the main Community Server instance working like a charm, so I move on to getting Telligent's News Gateway service into place.  After reading through the docs for a bit, I see a very ominous line saying something to the effect of "Custom (cookie-based) Authentication won't work with the News Gateway."  Ugh!  I mean, it makes sense, but... Ugh! 

Of course, the first thing I do is try it and hope it works.  Unsurprisingly, it doesn't.  The next thing I do is take a step back, look through what's available, and spot Telligent's Form-based membership provider, which (I think) is the default provider in the Gateway.  The Cookie provider works by creating a Community Server account for a user that has logged in to the main site.  The problem with Forms (username/password combination) Authentication is that the Community Server account that is set up knows nothing about your password from the originating site, nor can it ask the originating site - that it knows nothing about - to authenticate you...  unless you help it.

The solution was obvious: override the ValidateUser(string username, string password) method of the Community Server Forms Membership Provider.  I wasn't going to paste the code because it's just a simple override, but here's an example snippet:

Custom Membership Provider

public class CustomMembershipProvider
    : CommunityServer.ASPNet20MemberRole.CSMembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        // TODO: Insert your custom validation logic here
        return (username == "superdude" && password == "wickedcool");
    }
}

That's right - just a regular ol' override of the membership provider.  Then you can follow it up by copying the Membership provider section in the configuration file (CommunityServer.NntpServer.Service.exe.config or CommunityServer.NntpServer.Console.exe.config, depending on which one you're using) and replace the CSMembershipProvider with your own.  Then, BAM - you've got username/password authentication against whatever data source you like.
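For reference, the provider swap in that config file might look roughly like this. The section follows the standard ASP.NET membership provider schema; the type and assembly names below are illustrative (copy the real entry from your existing config and change only the type):

```xml
<membership defaultProvider="CSMembershipProvider">
  <providers>
    <clear />
    <!-- Replace the stock CSMembershipProvider entry with your own type.
         "MyCompany.CustomMembershipProvider, MyCompany.Web" is hypothetical. -->
    <add name="CSMembershipProvider"
         type="MyCompany.CustomMembershipProvider, MyCompany.Web" />
  </providers>
</membership>
```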


I hope this post can help you out if you were coming across some of the same problems we were!

What You Want vs. What I Want

You know what really grinds my gears? Bad UX!

I came across this login prompt today on a local county college library's website and it was just one more example of horrible UX that I see all too often and makes me physically sick:

Horrible Login

What's wrong with this? Oh geeze... where do I start!?

  • What's with the horizontal alignment?? Stack these fields vertically, please! Users are more than willing to scroll up/down almost indefinitely, but if you make them scroll horizontally, they may never come back again. Granted, this particular example isn't too wide, but why deviate from the de-facto standard?
  • "College ID number (ex G12345678)" This has always bothered me. Why can't I choose my own login name? Sure, I understand that it has to be linked to my asinine college ID number at some point - and that's fine - but let me choose a username I can remember. Some people defend this practice, citing "security concerns." What: security through obscurity? Give me a break - if I can log into my PayPal account with just a username and password, a county library system shouldn't need any more than that.
  • Status!?! What the heck!? Why do you need to know my status? And, what exactly is my status? This may sound a little philosophical, but what makes a Student or "Faculty/Staff", anyway? What if I am a student worker, employed by the college?
  • The only action on the page is "Access Databases." What databases? This is the prompt to log into an online library card catalogue. I don't want to access "databases", I want to search the card catalogue! Actually, my true goal is to find and request the book I'm looking for... but I've already accepted that I have to use the online catalogue to do that.

It's the second and third items that particularly bother me, and the issues that I'm going to attempt to tackle in this post.

The Problem

You may be saying, "calm down, it's just a library login..." But that's the point - it's not. You run into these things on a daily basis, and they need to stop! UI developers need to make their interfaces much more polite; they need to cater to their customers instead of insisting their customers cater to them. They ask you for things just so they don't have to figure those things out themselves. They can (or at least should be able to) take your ID number, cross-reference it against some database somewhere, and determine what your status is, but - for whatever reason - they don't. Of course, these atrocities aren't limited to login pages... they crop up any time you're soliciting something from the user - asking them to spend some of their priceless time and energy to give you something you want.

It becomes an epic struggle between what you (the customer) and I (the developer) actually want out of this interaction. You usually want a few specific things - in the above example, it happens that you'd like to locate and possibly reserve a book without leaving the comfort of your home. I am here to write applications that give you what you need, but I also have concerns. They may be security-, performance-, or business-related, or a myriad of other things... but whose needs should take precedence? I'll give you a hint - if your needs didn't exist, I wouldn't have a job.

And the most annoying thing about it all is that the solution is very easy...

Put the Customer First

A lot of lip service is given to this concept, but somehow the sentiment often gets lost somewhere between the requirements phase and the UI design (if it was ever there to begin with)... But, there's a solution: design with the customer in mind. Obviously you're going to need some things - such as a username and password pair in order to authenticate a user, or billing information to complete an order - but before you ask them to spend their valuable time giving you something, be sure you really need what you're asking for! Not only that, ask for it without wasting their time. Here are a few tips:

  • For every field you're requiring the user to enter, ask yourself if there is any way you can figure it out yourself using some other piece of information they've already provided. If that's the case, don't waste their time asking for it in the first place. For instance: in the above example, I can probably determine whether the user is Staff or Student once they've given me their ID.
  • Ask them for something they know (or can remember easily)! Don't require them to give you their randomly-assigned personal ID (e.g. college id, employee id, etc) every time! It may be very important to you, but you can help them out by asking them once (the first time) and then let them cross-reference it to something that they can remember, such as a username that they've picked. Let the system do the memorizing of obscure data!
  • It often helps to explain to customers - on a per-item basis - why it is you think this information is important. Not only is this likely to decrease their annoyance at having to answer your questions, it will also help them understand what, exactly, you're asking for and may even increase their chances of being honest!
    • I know, I know - you're thinking "My users don't lie!" Well, I guarantee that if you force them to give you information they don't want to give, they will! Tell me - honestly - how many times have you filled out a form and entered your email address as ""? I thought so... (Effeafdsfefe must be getting a LOT of junk mail, whoever he or she is!)
  • Ask yourself if you're going to - realistically - do something with this data, or if it's just going to hang out in your database/data warehouse not being put to any good use. Wasting your users' time asking them for information you don't really care about is the biggest insult you can give them. If you determine this is true - that you don't really care about a particular item - but you decide not to remove it anyway (e.g. some business group somewhere thinks it's really important), here are a few ideas that you might implement to ease the added burden you are now placing on your users:
    • Make it optional, please! If you're only going to use it at the rare times you're interested in it, let me decide whether or not I want to enter it. I'll enter it during the rare times that I feel like wasting my time entering this extraneous data...
    • Try grouping all of these optional items away from the required ones so they don't detract from the fields that really matter. Then, you may even want to go so far as hiding these optional fields entirely, allowing the user to choose whether or not they even want to see them.
    • Pre-populate it with sane default data that the user can simply accept as opposed to entering themselves
    • If none of these works - and you're dealing with a known set of values - use the technique that's all the rage lately: auto-complete. This is where the user begins entering data and you try to guess what they're trying to say before they're finished saying it... saving them a bunch of energy. If you're going to make them work, make them work as little as possible.

The common idea shared between those last two suggestions bears repeating: pre-populate whenever you can! If you know something, by all means be polite and tell the user you know it already. This can make all the difference between a smart, friendly interface and a waste of time. After all, filling out a form doesn't really have to be a tedious, laborious task. Not if it was created right!

Take Action!

So, now I've made you aware of some of the worst things you can do to your users (along with some helpful tips to mitigate them). Well... we're the developers who propagate these atrocities! And, that means that we hold the power to end them. So, I want you to try something: every morning when you wake up, look in the mirror and say "Put the Customer First" three times. One of two things will happen: either you'll start changing the way you think about designing interfaces, or... you'll feel incredibly silly and stop doing it very quickly. It will probably be the latter (that's what happened to me), but at least you gave it a shot, and maybe just by trying you'll start thinking about it just a bit more; and that's a step in the right direction. Believe me: your users will thank you. I know I will.


Tuesday, October 9, 2007

Awesome Article on Usability

A buddy of mine, Alex (AKA Usability Suspect) sent me a link to the most concise article on usability I've seen in a while. There may not be a slew of brand new ideas in here, but it is a great checklist of standard and common practices and ideas. It starts with age-old, simplistic axioms such as the "7±2 rule", the "2-second rule", and the "3-click rule", then delves into deeper topics, even finishing with a glossary!

Click here to check out the article. Do it now!

I had to laugh when I came across the "Baby-Duck-Syndrome," since I was employing this theory just the other day during a discussion with Ed Blankenship. I was arguing that his tendency toward excluding parentheses when they're optional (e.g. preferring Fig. B over Fig. A, shown below) was due almost solely to the fact that he was raised as a Visual Basic programmer and has only recently seen the light at the end of the tunnel, switching over to C#.

[Fig. A and Fig. B: code samples with and without the optional parentheses; images not preserved]

Whoops, did I just alienate all of my VB readers? Please don't let my C# elitism get in the way of our relationship... :) Remember, we're all running the CLR underneath...

Thursday, October 4, 2007

Be Careful of What You Cache... and How You Cache It!

This post is about a wonderfully embarrassing experience I had recently from which you can (hopefully) learn something. Or - at the very least - this can serve as a reminder of what you already know.

A few weeks ago, a couple of guys from the Infragistics IS department and I decided to get to the bottom of some of the performance issues we had been living with for a while on our main website. So, we locked ourselves in a room for a few days and scoured through the code for the entire website. The biggest thing we found ended up being a few stupid lines of code I had written...

Introducing the Bug

It all started with "hey, we've gotta add RSS feeds to our site."  The idea was to be able to add a whole bunch of feeds to a page, and then have a summary control (shown on the left) to organize them.  Since our site makes heavy use of master pages and user controls, I wanted the ability for any of the items in the page lifecycle to be able to add an RSS feed to the page and have it show up in this summary control.  And, oh yeah - I wanted to cache them so future requests to the page didn't have to load them again.  So, I created a static RssFeedManager class that pages and controls could register their feeds with and, on PreRender, my Summary control could get the list of feeds that had been registered and display them.

Finding the Bug

The weirdest/most difficult thing about this bug was the fact that it seemed to rarely ever occur. That is, on most requests throughout the site, everything went fine and nothing seemed out of the ordinary. And, on top of that, even though the culprit ended up involving a (supposedly) managed object, it never really showed up in our memory profiling. We made the final breakthrough when one of the guys from IS, Martin, noticed that we could reproduce the problem every time we hit the login page... Since this is one of the few pages on our site that is not cached via the Output Cache we immediately started hitting some of the other pages that are not cached and got the same result: the server memory grew and grew until it reached the maximum allocated memory and the application pool recycled. Cool - we were able to reliably reproduce it!

For the next step - actually figuring out the specific bug - we kicked up our trusty profiler. It didn't take long for Martin to locate the problem: the controls created and added to support our RSS feeds. Eventually, I ended up narrowing the problem down to a function in the RssFeedManager, listed below. Can you spot the problem?

private static Dictionary<Page, List<RssChannel>> RegisteredFeeds =
    new Dictionary<Page, List<RssChannel>>();

public static void RegisterPageFeed(Page page, RssChannel channel)
{
    // If we don't have a page or a channel, bail out now
    if (page == null || channel == null)
        return;

    // Get the page feeds
    List<RssChannel> pageFeeds = null;

    if (RegisteredFeeds.ContainsKey(page))
    {
        // Get the existing feeds
        pageFeeds = RegisteredFeeds[page];

        // Add the feed to the existing feeds if it isn't there already
        if (!pageFeeds.Contains(channel))
            pageFeeds.Add(channel);
    }
    else
    {
        pageFeeds = new List<RssChannel>(new RssChannel[] { channel });
    }

    // Update the registered feeds list
    RegisteredFeeds[page] = pageFeeds;
}
Yeah - that's right... I'm caching the list of registered feeds using the Page object.  At first glance, that seems like it might work - associating the RssChannel with the current Page that it belongs on.  It might work, that is, until you realize that the Page object is unique.  That is to say, the Page object created for the first request to /default.aspx will not be the same as (nor Equals()) the one created for processing the second request to the same page, /default.aspx.  The result of the preceding code is that the current Page object (referencing and referenced by literally thousands of objects) is placed into the static dictionary, never to be properly picked up by the garbage collector.  The result of that is pure, asinine memory leakage: every request adds another Page object to the dictionary until the Application Pool finally reaches its limits and explodes... er, recycles.
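Here's a tiny, self-contained illustration of the leak (FakePage and the other names are hypothetical stand-ins, not the real ASP.NET types): because the key type doesn't override Equals()/GetHashCode(), the dictionary compares keys by reference, so every request's brand-new object gets its own entry and the dictionary grows forever.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for System.Web.UI.Page: a reference type with no
// Equals()/GetHashCode() override, so keys compare by reference
class FakePage
{
    public string Url;
}

static class LeakDemo
{
    static readonly Dictionary<FakePage, List<string>> Feeds =
        new Dictionary<FakePage, List<string>>();

    // Simulates one page request registering a feed;
    // returns how many entries the "cache" now holds
    public static int Register(string url, string feed)
    {
        var page = new FakePage { Url = url };  // a fresh object per "request"
        Feeds[page] = new List<string> { feed };
        return Feeds.Count;
    }
}
```

Two "requests" for the same URL produce two distinct keys - the count climbs to 2 instead of staying at 1, which is exactly the growth we watched eat the server's memory.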

The Fix

As usual, the fix for this one was pretty simple.  I updated the above function to store the registered feeds in the HttpContext.Current.Items collection.  That way, I could track things on a per-request basis and everything was nicely cleaned up for me after the HttpContext was out of scope (after the request had been fully processed).  The final, fixed method is shown below:

// Key used to store the feed list in HttpContext.Items
// (the key's value is arbitrary; the original wasn't shown)
private const string RSS_FEEDS_KEY = "RegisteredRssFeeds";

public static void RegisterRssFeed(HttpContext context, RssChannel channel)
{
    // If we don't have a context or a channel, bail out now
    if (context == null || channel == null)
        return;

    // Get the existing feeds
    List<RssChannel> currentFeeds =
        (context.Items[RSS_FEEDS_KEY] as List<RssChannel>) ??
        new List<RssChannel>();

    // Add the feed to the existing feeds if it isn't there already
    if (!currentFeeds.Contains(channel))
        currentFeeds.Add(channel);

    // Update the registered feeds list
    context.Items[RSS_FEEDS_KEY] = currentFeeds;
}

Lately, I've been using the HttpContext.Items collection more and more.  It's readily available to practically anything (modules, handlers, user controls, pages, etc.) and it comes in real handy for situations like this.

Exactly How I Screwed Up

Here's a checklist of things that I knew to watch out for, but didn't:

  1. The most obvious - and the main point of this post - is to be incredibly careful about what you use as your Key in any collection. Also, be especially wary of the context in which you're doing it, which brings us to...
  2. Think hard about exactly what you're putting in static variables, since it keeps them away from the Trash Man (garbage collector), which could be a very bad thing.
  3. Reinforcing and expanding on #2: when you've got something like a static Dictionary<[foo],[bar]>, your goal may be to keep the Values alive, but keep in mind that you're also keeping the Key objects alive as well, which may not be exactly what you're shooting for. For example, instead of using an entire User object as a key, consider using the User's Username value instead (assuming it's unique, of course).

    • At the risk of being redundant, I'll just go ahead and get real specific here, since it's the catalyst for this post: in the average page request, the Page object is, like, the worst thing to cache and/or keep in a static collection... Think about all of the objects that are latched on to it and are subsequently kept alive due to their association with this monster of an object. The thought of everything I was keeping alive with EVERY unique page request still haunts my dreams today...

  4. Overall, take a minute and think about why you're choosing to cache this particular item in the first place, and if you're actually duplicating data that's already been cached somewhere else, or even increasing performance at all.
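To make point #3 concrete, here's a minimal sketch (all names are hypothetical) of the same kind of feed cache keyed by a small, stable string - a username - instead of a heavyweight per-request object. Repeat visits by the same user now reuse the existing entry instead of piling up new ones.

```csharp
using System;
using System.Collections.Generic;

static class UserFeedCache
{
    // Keyed by username: string keys compare by value, so a returning
    // user maps back to the same entry on every request
    static readonly Dictionary<string, List<string>> FeedsByUser =
        new Dictionary<string, List<string>>();

    // Registers a feed for a user and returns the number of cache entries
    public static int Register(string username, string feed)
    {
        List<string> feeds;
        if (!FeedsByUser.TryGetValue(username, out feeds))
        {
            feeds = new List<string>();
            FeedsByUser[username] = feeds;
        }

        // Add the feed if it isn't there already
        if (!feeds.Contains(feed))
            feeds.Add(feed);

        return FeedsByUser.Count;
    }
}
```

Note that even this still lives forever in a static, so you'd want an eviction strategy too - but at least the keys are tiny and shared, not thousand-object page graphs.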

As it turned out in my case, not only was I keeping large objects from being destroyed, I was also caching data that was already cached elsewhere (the individual RSS feed links themselves), and I wouldn't have been helping performance even if it was working like I'd planned!  Unless the entire page was being cached in the Output Cache (in which case this entire discussion is moot), the page still had to go through its lifecycle, and the RSS feeds were still being registered with the RssFeedManager - the only thing that would've changed is that the RssFeedManager would check to see if they'd already been added and decide not to add them.  Where's the performance gain in that?

ASIDE: Sorry if I lost you in that last paragraph... My point is that I was attempting to cache something that was already cached. Had I thought through the entire process, I (hopefully!) would've noticed that.

Remember how I said that the Page object was a "(supposedly) managed object"? Well, for the most part, it didn't show up in the memory profiler reports; at least not all of it. That is, if we started up the profiler and watched the server memory and the profiler report while we hit the page repeatedly, the profiler report showed approx. 400k difference for each page hit while the server memory would shoot up between 5 and 20 megs! It was very interesting and it certainly didn't help us get to the bottom of the problem any faster. I never did get to the bottom of this, but it's definitely good to know going forward.

So, there you have it - my great blunder. Hopefully my mistake can be to your benefit.

Wednesday, October 3, 2007

Halo 3: Nag, Nag, Nag...

A couple of buddies and I finally set aside some time to beat Halo 3 last night (on Legendary mode, of course). It was definitely pleasing, though there were a few things that were pretty damn annoying. Scratch that - one major thing: why - in the middle of an intense firefight - does Cortana have to pop into your head, give you motion sickness, slow down time, and damn near give you an epileptic seizure?? And what is she saying, anyway? Does it help me complete the mission? No. It's almost as if I've reverted back in time and my mom is yelling up to my room, "do your homework!", "take out the trash!", "save the universe!" NAG, NAG, NAG! (Sorry, mom, if you ever read this...) I know what I gotta do, and you flashing in my face and making me walk in slow motion isn't gonna get me any closer to it.

The looming possibility of seizures aside, the gameplay was quite good... but not incredibly unique. I've got to admit, I'd grown tired of the Halo franchise. The multi-player, however, is a completely different story. I've always been pretty ashamed to admit this, but I spend - on average - about 4 hours a night online, playing Halo multi-player. I've always loved it, and Halo 3's multi-player is the best yet. I absolutely love all of the new features! One of the coolest things is the mere fact that it was designed for the Xbox 360 platform and doesn't have to run in the confines of an emulator, desperately longing for all of the great Xbox 360 features you paid for. I do have one question, though: what's the deal with the lag?? I don't know if it's my router/network or not, but I've noticed - even in the few days leading up to the game - a lot more lag lately... and it sucks. Bungie: FIX THE LAG!

Folks, the game has arrived.

And it's awesome.

Bravo, Bungie.

*clap* *clap* *clap*