Build a LAMP stack on a 2host Debian Xen VPS

Well, it’s been a while: I have been busy working too much, taking on new projects, changing jobs and a whole lot more. Over the last few weeks I have been spending a lot of time reacquainting myself with the world of LAMP (Linux/Apache/MySQL/PHP), but I will definitely be posting some more from the .NET world soon, now that my day job will be a lot closer to 40 hours a week than 70. Specifically, I will be diving head first into the world of WPF for that gig.

So, my newest side project is stabilizing and improving a Joomla CMS deployment relying heavily on JomSocial (an extension offering native social networking functionality). The first step was to get us onto a good server that was poised for growth and that we had full control over, and that meant looking at the options available. First, I ruled out shared hosting such as Hostek or Bluehost (both are services I use and have been happy with for the price), because we want to make sure we have scalability; even though they advertise unlimited resources, that really means there are arbitrary, unpublished practical limits.

Then I looked into cloud hosting. Although Windows Azure can run Joomla, I knew the price was more than we were looking to pay at this point in time. I also looked at Rackspace Cloud, Storm On Demand (from Liquid Web) and of course Amazon EC2. All of these are good services, but once again the minimum price point was over what we wanted to spend for now and with all of the cloud services you lose the benefit of a fixed, predictable cost. Dedicated servers were out of the question for the same price reasons, but it turns out VPS fit just right.

After some research, I settled on 2Host as the provider. They had a server comparable to other VPS providers for about $30 (USD) a month, but with 100GB of storage, which we needed for the images and videos that are currently stored on the file system. They also had some good forum reviews highlighting great service, and their website told a compelling story about the people who run the organization and their value system. I signed up and almost instantly had a server.

Evaluating the Linux Distributions

The advertised distributions of Linux available during the sign-up were Debian and CentOS. Most of my experience was from years ago with Slackware, so I tried to find out which of these was the best to use, mostly on forums, and of course I found several dissenting opinions. With no clear direction other than a slight preference for Debian’s package install system, I went to see for myself, and the first thing I found through the VPS Control Panel (SolusVM) was a wider variety of distributions: CentOS, Debian, Fedora, Gentoo, Slackware and Ubuntu (all in a variety of flavors, except Debian, which was a 64-bit build of version 5).

I spent the next day installing several of the distributions and experimenting with building a working LAMP stack from scratch (not using a pre-built framework such as XAMPP). I ended up settling on Debian, both because there were ways to get recent versions of the packages I needed and because, in my experience, its package install process was the most straightforward. The one detail I should mention before getting to the steps is that I chose PHP 5.2 over 5.3 because of Joomla compatibility.

Setting up the LAMP stack

These steps were pretty well tested and include post-modifications to fix small issues found later. Some are optional. They assume you have used an SSH client (e.g. PuTTY, or ConnectBot on an Android phone, both completely free) to connect to the server console (the connection details are in the VPS Control Panel). When I say “RUN” below I mean execute those commands at the console prompt, and “EXECUTE” means execute at the MySQL client console prompt.

  1. RUN:nano -w /etc/apt/sources.list
  2. ADD TO END OF /etc/apt/sources.list:deb stable all
    deb-src stable all
    Steps #1 and #2 are telling the Debian package install system that it can retrieve packages from the Dotdeb repository, which makes some more recent versions of packages such as PHP available than are provided with the standard distribution. The standard distribution has more rigorous testing standards and so is updated less frequently.
  3. HIT Ctrl-X to Exit, ‘Y’ for Yes, ENTER to Confirm (to save), THEN RUN:gpg --keyserver --recv-key 89DF5277
    gpg -a --export 89DF5277 | apt-key add -
    apt-get update
    apt-get install apache2-mpm-worker
    apt-get install mysql-client-5.1 mysql-server-5.1 mysql-common
    The first three commands install the key for packages from the Dotdeb repository and update the Debian package system with the list of packages available from there. The last two commands install Apache 2 in the mode where it will use multiple threads for the worker process to take advantage of the 4 cores available on the VPS, and start the install of MySQL 5.1 which is the most recent version available at the time of this writing.
  4. RUN:apt-get install php5 libapache2-mod-php5
    apache2ctl restart
    echo "<?php phpinfo(); ?>" > /var/www/info.php
    apt-get install lynx
    lynx http://localhost/info.php
    We want to leave the MySQL password blank because it doesn’t appear to be used in the Debian MySQL package, which ends setup with only a single Debian maintenance account existing. The first command installs PHP 5.2 (the latest version available from the main Dotdeb repository at the time of this writing; see their website for details on installing PHP 5.3 instead if you need it) and the second restarts the Apache process so it will be available. The third command creates a test PHP file. The fourth and fifth commands install the Lynx console web browser and view the PHP test file.
  5. VERIFY PHP VERSION AND SETTINGS, THEN ‘Q’ for Quit AND ‘Y’ to Confirm
    Any problem viewing the test PHP file indicates some problem with one of the first four steps.
  6. RUN:rm /var/www/info.php
    /etc/init.d/mysql stop
    mysqld --skip-grant-tables &
    The first command removes the PHP test file (leaving it around is a potential security risk), the second stops the MySQL service, and the third restarts the process in the background in a mode that will not require a password for connecting.
  7. TAKE NOTE OF THE PROCESS ID (PID) FROM THE LAST COMMAND’S OUTPUT, OF THE FORM “[1] 2345” (‘2345’ WOULD BE THE PID)
    You will need to use the PID later on to stop the process before restarting the MySQL service.
  8. RUN:mysql
    You will now be at the MySQL client command console.
  9. EXECUTE:FLUSH PRIVILEGES;
    GRANT ALL ON *.* TO 'user'@'localhost' IDENTIFIED BY 'password';
    UPDATE mysql.user SET Grant_priv = 'Y' WHERE User = 'user';
    FLUSH PRIVILEGES;
    QUIT;
    This series of MySQL commands will create a new super user account by performing, in order: enable the permission system; grant all privileges to all databases to a new user account with the name and password you specify; update the user to give them permission to grant privileges to other users; apply the new user permissions; and finally close the MySQL console.
  10. RUN:kill <PID NOTED IN STEP #7>
    /etc/init.d/mysql start
    mysql -u <user FROM STEP #9> -p
    The first command stops the background MySQL process and the second restarts the service as normal. The third and last line logs in as the super user account created in step #9.
  11. EXECUTE:QUIT;
    Assuming steps #10 and #11 work with no problem, the MySQL deployment is complete and verified.
  12. RUN:apt-get install phpmyadmin
    lynx http://localhost/phpmyadmin
    The first command installs and configures the must-have phpMyAdmin web-based MySQL administration tool. The second command verifies that it is working and accessible; after following the Go link you should end up seeing the phpMyAdmin UI.
  13. ***YOU MAY SEE THIS OR A SIMILAR WARNING IN phpMyAdmin:
    “Your PHP MySQL library version 5.0.51a differs from your MySQL server version 5.1.51. This may cause unpredictable behavior.”
    This can actually be ignored, or if it bothers you, comment out the warning in /usr/share/phpmyadmin/main.php: comment out the check following the comment containing the text “different MySQL library”.
  14. RUN:wget http://www.webmin.com/jcameron-key.asc
    apt-key add jcameron-key.asc
    nano -w /etc/apt/sources.list
  15. ADD TO END OF /etc/apt/sources.list:deb sarge contrib
    Steps #14 and #15 get the Webmin signing key and add the Webmin repository to the Debian package install system sources.
  16. HIT Ctrl-X to Exit, ‘Y’ for Yes, ENTER to Confirm (to save), THEN RUN:apt-get update
    apt-get install webmin
    These last two commands update the Debian package install system and install the Webmin package. The web control panel should now be accessible at http://localhost:10000. Use the system “root” account and password (or any “sudo” account) to sign in to the control panel.
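The background-process bookkeeping in steps #6, #7 and #10 can also be scripted instead of read off the screen. This is only a sketch: `sleep` stands in for `mysqld --skip-grant-tables` so it can be tried anywhere without a MySQL install.

```shell
# Start a long-running process in the background ('sleep' stands in
# for "mysqld --skip-grant-tables" from step #6).
sleep 30 &

# $! holds the PID of the most recent background job -- the number
# shown in output like "[1] 2345" in step #7.
pid=$!
echo "background PID: $pid"

# ...the password-less MySQL work from steps #8 and #9 would go here...

# Step #10: stop the background process before restarting the service.
kill "$pid"
wait "$pid" 2>/dev/null || true
```

Capturing `$!` immediately after the `&` avoids any chance of noting the wrong PID when other jobs are running.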

This should be enough to get your basic PHP web applications up and running on a new VPS server and to give you control over all the services involved. As is often the case, this information was compiled from numerous websites, documents, forums and blog posts, and without listing them all I would like to thank everyone whose effort made this compilation possible. It is here as much for me to refer to in the future as it is to hopefully save you the hours of research that went into it.

I will be following this up with additional information on how to get the PHP mail() function working and how to set up name-based virtual Apache servers using Webmin. Most of this information should be applicable to any Debian installation and some of it (such as how to create a new MySQL super user without credentials) can be used in any environment. I also hope to get into some of the details of writing custom Joomla extensions and JomSocial addons in the future!

Posted in Linux | 43 Responses

Using Custom CSS Styles With Amazon Web Store

I have been slacking on my posts, but my second son was just born, I bought a house and I am working on 3 different teams plus other initiatives at my day job, so hopefully you will excuse me, as I have myself. This is something I did a while ago, but it actually took some problem solving, so I don’t want to lose the code. My wife had some interest in doing some e-commerce stuff, and so I set her up with an Amazon Web Store. Overall the integrated experience is pretty cool, although as far as really flexible templating goes they have some work to do. I simply wanted to be able to use a custom CSS stylesheet to apply the style, and this is how I accomplished that.

The problem is that you can choose only one template when you start and that cannot be changed; there is no way to specify a custom CSS file, and a great deal of the styling is applied with inline style attributes. However, you do have the ability to change Site Wide Properties and add custom HTML to the HEAD tag. The following technique could be used for any site where you have access to inject custom HTML into the HEAD tag, such as many content management, blog or e-commerce tools.

Be warned, however, that users must have JavaScript enabled for this technique to work well. This is the code that you can cut and paste in for a cross-browser (tested on the latest versions of IE and FF) clean slate for styling:

<script type="text/javascript" src=""></script>
<script type="text/javascript">
jQuery("html").css("display", "none");
jQuery("head link").remove();
jQuery("head style").remove();
jQuery(function() {
    jQuery("body style").remove();
    jQuery("*[style]").removeAttr("style");
    jQuery("*[width]").not("img").removeAttr("width");
    jQuery("html").css("display", null);
});
</script>
<link rel="stylesheet" type="text/css" href="">

The result of using this code is that when each page is initially loading, all content will be hidden. This is to avoid seeing the page rendering going on. Depending on the browser, connection speed and machine power you may otherwise see the initial stylesheet being applied, then removed, and the new reset CSS being applied. That can look very ugly to a user. Once all of the files are cached, the file load time should be quick and any blank screen should only be visible for a moment. Once everything is applied, this will leave the page with a vanilla, consistent but unformatted layout. The idea is you would insert one or more lines following this code referencing your custom CSS.
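For example, a custom stylesheet reference placed right after the reset code might look like this (the filename and URL here are made up for illustration):

```html
<!-- hypothetical example: your own styles, loaded after the reset -->
<link rel="stylesheet" type="text/css" href="http://example.com/my-store.css">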

The functionality is quite simple:

  1. Hide all of the HTML: jQuery("html").css("display", "none");
  2. Remove all references to stylesheets (this could be made more specific if you have other link types): jQuery("head link").remove();
  3. Remove all inline style blocks: jQuery("head style").remove();
  4. Schedule a set of operations to run on page load (the rest of the script operations): jQuery(function()
  5. Remove any inline style blocks in the body: jQuery("body style").remove();
  6. Remove all style attributes, from all elements that have one: jQuery("*[style]").removeAttr("style");
  7. Remove all widths from elements that are not images (this is AWS specific): jQuery("*[width]").not("img").removeAttr("width");
  8. Show all of the HTML: jQuery("html").css("display", null);

This technique uses both jQuery, the excellent cross-browser JavaScript library, and YUI: Reset CSS, a stylesheet made to make the rendering of HTML elements consistent across browsers. Both of them are highly recommended for any web development project where you want to be highly productive and provide all of your users a great browsing experience. You can find out more information about jQuery and YUI at their respective homes on the web.

Posted in Development, HTML, Web | 58 Responses

The Dragon, The Cross and The Forbidden Fruit: Using WampServer (Windows 7 / Apache / MySQL / PHP) with Komodo IDE, XDebug and PEAR

First of all, this was written in reference to WampServer 2.0i and Komodo IDE, and the versions of Apache, PHP, MySQL and other components that come with those packages. Since these distributions change fairly rapidly, some or all of this may be invalid by the time you read this. However, the basic process of setting up the environment should hold true. If you didn’t get it, the first part of the title is just a fun reference to the technologies: Komodo (dragon), XDebug (or “cross-debug”, if using the more archaic description of the symbol) and PEAR (the, at least at first, forbidden fruit… this will make more sense when you read the instructions below).

I am not going to focus too much on the details of installing WampServer or Komodo IDE. There are already good tutorials on this. In my environment, I used all of the default setup options and everything pretty much just worked. Well, pretty much. The one thing I did have to do since I already had IIS7 installed was change the Apache port to 8080 (any unused port will do):

  1. Run WampServer using the desktop icon and find the system tray icon (may need “Show hidden icons”), click and make sure that it has been Put Online and all services are running
  2. Click on Apache and then httpd.conf to edit the file; Search for the phrase “Listen 80” and change it to a different port number (this is also where to change the IP binding if needed)
  3. Don’t use the shortcuts in the WampServer menu for Localhost and phpMyAdmin anymore because they still point to port 80, just point a browser to http://localhost:8080 (for example)
  4. I reset all of the services at this point to make sure everything was flushed out and copacetic
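Step #2 boils down to a one-line change in httpd.conf (8080 is just the port I happened to pick; any unused port works):

```
# httpd.conf -- bind Apache to a port IIS7 is not already using
Listen 8080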

The project I am working on was started by a buddy of mine, and he was expecting the MySQL server to have a specific root password. I set this up by following these steps:

  1. Click the WampServer tray icon, click MySQL and then MySQL Console, hit enter to login using the default root password (blank, but next time whatever you change it to)
  2. Then execute this command by typing it and hitting enter: “SET PASSWORD FOR root@localhost=PASSWORD('awesome_password');” (use your own awesome password)
  3. Go to the phpMyAdmin directory (C:\WAMP\apps\phpmyadmin3.2.0.1 by default for my installation) and open up config.inc.php so we can restore phpMyAdmin’s access to the database
  4. Find “$cfg['Servers'][$i]['password'] = '';” and change it to “$cfg['Servers'][$i]['password'] = 'awesome_password';” (if you prefer to use a non-root account with privileges, go ahead)
  5. I reset all of the services at this point to make sure everything was flushed out and copacetic

Alright, now I can connect to MySQL securely, manage the database with phpMyAdmin and view PHP files under my C:\WAMP\www folder. Sounds pretty good, but what about source control integration, syntax highlighting, code completion, visual debugging and all of the other stuff I have come to expect from an IDE that keeps me at peak productivity? Well, even though there are a few options (Zend Studio, Eclipse, Visual Studio with extensions, etc.), I decided to go with Komodo IDE from ActiveState, who are also responsible for cool stuff like Perl and Python implementations for Windows, among other developer goodies. Let’s explore the setup and issues along the way…

Creating the initial project, opening files and being able to see syntax highlighting is straightforward. I had already checked out the files from Subversion into my web root directory using TortoiseSVN (remember, this is just a development environment), so in Komodo I went to File –> New –> Project and created a .kpf file for all the files to be added to. Now I decided I needed integrated source control, so I went to look under Edit –> Preferences –> Source Code Control and happily saw Subversion there along with CVS, Perforce, Bazaar, Git and Mercurial. When I clicked on the node, it notified me I didn’t have the binaries in my path and gave me a download button.

With a little research, I decided to install this version of the SVN binaries because the distribution site said it was compatible with Apache 2.2.9+ and WampServer 2.0i comes with Apache 2.2.11 (not that it was necessary, but in case I wanted to play with web access for SVN, at least I’d have the option). As soon as the project was opened again in Komodo, it seemed to notice I now had SVN and showed statuses of the files in the project. I could edit and commit. One thing I noticed, though, was that until I returned to the Preferences dialog and the Subversion node under Source Code Control, progress indicators on each file during the commit would stay on until the IDE was closed instead of disappearing when the commit completed. I also just noticed it listed my project file in the changed files list twice. Apparently the UI for this still has a few kinks to work out, but it hasn’t broken anything yet.

Now I had access to 4 of the features I wanted, but I still had to figure out how to set up debugging, and this was the trickiest part of the setup. PHP needs an extension to allow integrated debugging, and while Komodo tries to automate the process of this setup, the most recent version of the IDE is distributing a buggy version of the library required. As a disclaimer, I don’t think the following steps are officially supported, so use them at your own risk. All you have to do differently than the normal Komodo debugging setup process is first replace the bad DLL in Komodo’s files:

  1. From the XDebug distribution download page, get this version of the php_xdebug.dll file (I am not sure if this will be the right DLL for everyone on every system, but this is the one Komodo was looking for during the debugger setup described below… my machine and Windows7 install are x64 but the IDE is x86 and this version from XDebug seems to be working)
  2. Replace the php_xdebug.dll located at C:\Program Files (x86)\ActiveState Komodo IDE 5\lib\support\php\debugging\5.3\ts-vc6-x86 (or under wherever you installed the IDE)
  3. With a PHP file open in Komodo, hit the Debug button in the toolbar (small right arrow) to jump into the Preferences window (or go through Edit –> Preferences –> Languages –> PHP)
  4. Click the Debugger Config Wizard button in the middle of the dialog on the right, then Next
  5. Browse to C:\WAMP\bin\php\php5.3.0 and select php-cgi.exe (not sure it matters which .exe, but since I wanted to emulate a web environment I figured this was probably my best bet)
  6. Click Next and leave the “INI file to be copied:” as is, then click Browse to select a “debug environment directory” (in my case, I created a new directory under C:\WAMP\bin\php\php5.3.0 called “debug” for this, I don’t think the exact location matters at all to Komodo)
  7. Click Next  and then Browse to select a location for “Use this extensions directory:” (this is where php_xdebug.dll will be copied, and some forum posts that helped me through this part seemed to suggest using the default “ext” directory was not a good choice, I chose to use my “debug” directory so everything was in one place)
  8. Click Next until you see Finish, then click Finish and if everything worked you are done! (if not the error messages are actually reasonably helpful to solving the problem)

While not strictly related to this process, I wanted to share the workaround to getting PEAR setup in the environment. PEAR will allow you to discover, download and install additional packages into your PHP environment. Out of the box though, there is a problem with the initial configuration that prevents setup. Follow these steps to complete the process:

  1. Click the WampServer tray icon, click PHP and then php.ini, look for the string “;phar.require_hash = On” and change it to “phar.require_hash = Off” to enable setup
  2. Run go-pear.bat from C:\WAMP\bin\php\php5.3.0, you can safely hit enter to accept all of the default settings until the process is complete, then run PEAR_ENV.reg from there
  3. You can then run the pear.bat (in the same directory) to find packages and change your environment (e.g. “pear” to list all available commands or “pear search cache”)

If you skip step #1, go-pear.bat will fail with an inclusion error. That’s all for now. Happy wamping.

UPDATE: One final tip, some Komodo IDE weirdness (e.g. silently not persisting debug settings) seems to go away when you Run as Administrator. The following is for search engines and trackbacks.

This is the error that you will see if go-pear.bat fails:

phar "C:\wamp\bin\php\php5.3.0\PEAR\go-pear.phar" does not have a signature

PHP Warning: require_once(phar://go-pear.phar/index.php): failed to open stream: phar error: invalid url or non-existent phar "phar://go-pear.phar/index.php" in C:\wamp\bin\php\php5.3.0\PEAR\go-pear.phar on line 1236

Warning: require_once(phar://go-pear.phar/index.php): failed to open stream: phar error: invalid url or non-existent phar "phar://go-pear.phar/index.php" in C:\wamp\bin\php\php5.3.0\PEAR\go-pear.phar on line 1236

And Matt Refghi’s blog is where I found the solution. This is the error you will see if you have the XDebug version problem:

Sorry PHP debugging configuration failed. PHP is configured but is unable to load the debugger extension at C:\wamp\bin\php\php5.3.0\ext\php_xedebug.dll.

This is the forum post where my investigation started and this is the ActiveState bug report tracking the issue. Note that they suggest using XDebug version 2.0.5, but another user points out that there is this bug in XDebug that can be worked around by using the version I linked to.

Posted in Development, PHP, WAMP, Web | 70 Responses

Silverlight 4 RC Dynamic Keyword Behavior Regressions

I have been looking around and can’t find any Release Notes or Known Issues for the SL4 RC release that came out at MIX10 a couple days ago, but our SL4 project was heavily using the dynamic keyword for Excel interop and upon testing out the new bits with it we noticed it didn’t work anymore. What I have been able to find out so far is there seem to be several new restrictions with the usage of the dynamic references.

First of all, you can no longer pass the result of a dynamic accessor to a method with the Conditional attribute. I used to be able to happily pass a dynamic property accessor result as the argument of Debug.WriteLine, which was very useful in debugging since the Immediate window in Visual Studio still doesn’t support the syntax. Now when you attempt this you will get an exception like this:

Microsoft.CSharp.RuntimeBinder.RuntimeBinderException was unhandled by user code
  Message=Cannot dynamically invoke method 'WriteLine' because it has a Conditional attribute
       at CallSite.Target(Closure , CallSite , Type , Object )
       at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid2[T0,T1](CallSite site, T0 arg0, T1 arg1)
       at NoDynamicIteration.MainPage.Button_Click(Object sender, RoutedEventArgs e)
       at System.Windows.Controls.Primitives.ButtonBase.OnClick()
       at System.Windows.Controls.Button.OnClick()
       at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e)
       at System.Windows.Controls.Control.OnMouseLeftButtonUp(Control ctrl, EventArgs e)
       at MS.Internal.JoltHelper.FireEvent(IntPtr unmanagedObj, IntPtr unmanagedObjArgs, Int32 argsTypeIndex, String eventName)

There is a simple workaround: Assign the value to a local variable first. But it still seems unnecessary and annoying.
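As a sketch, with `sheet` standing in for whatever dynamic Excel interop object you are holding (the `GetWorksheet` helper is hypothetical), the workaround looks like this:

```csharp
dynamic sheet = GetWorksheet(); // hypothetical helper returning a dynamic COM object

// Fails on the SL4 RC: the dynamic accessor result is passed straight
// to Debug.WriteLine, which is marked with [Conditional("DEBUG")]:
// Debug.WriteLine(sheet.Name);

// Workaround: assign to a local variable first, then pass it along.
string name = sheet.Name;
Debug.WriteLine(name);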

Then there is a problem that affects us a bit more painfully. As anyone who has tried using this feature knows, since everything is dynamic you get no Intellisense. We were intending to release a solution to this that was working great on this project. Basically, using a code generator based on Common Compiler Infrastructure (CCI), we were able to turn COM Primary Interop Assemblies (PIA) into wrapper classes that gave strongly typed access to all of the methods, properties, events and collections of COM objects. Suddenly, being a business developer who has to interact with Office applications didn’t make us second-class citizens, thrown back 10 years into a land eerily reminiscent of VBScripting.

One big challenge of that effort was figuring out how to deal with property indexers, which don’t natively exist in .NET, and collection enumerators, since collections are represented in multiple and not always consistent ways in the COM world. We did come up with a solution that used a set of Generic classes to allow flexible access to indexed property values and collection members. However, with the SL4 RC you can no longer assign the result of a dynamic accessor to a Generic-typed variable or return a value from a Generic-return-typed property or method. When you try you get:

System.NullReferenceException was unhandled by user code
  Message=Object reference not set to an instance of an object.
       at Microsoft.CSharp.RuntimeBinder.ExpressionTreeCallRewriter.GenerateLambda(EXPRCALL pExpr)
       at Microsoft.CSharp.RuntimeBinder.ExpressionTreeCallRewriter.VisitCALL(EXPRCALL pExpr)
       at Microsoft.CSharp.RuntimeBinder.Semantics.ExprVisitorBase.Dispatch(EXPR pExpr)
       at Microsoft.CSharp.RuntimeBinder.Semantics.ExprVisitorBase.Visit(EXPR pExpr)
       at Microsoft.CSharp.RuntimeBinder.ExpressionTreeCallRewriter.Rewrite(TypeManager typeManager, EXPR pExpr, IEnumerable`1 listOfParameters)
       at Microsoft.CSharp.RuntimeBinder.RuntimeBinder.CreateExpressionTreeFromResult(IEnumerable`1 parameters, ArgumentObject[] arguments, Scope pScope, EXPR pResult)
       at Microsoft.CSharp.RuntimeBinder.RuntimeBinder.BindCore(DynamicMetaObjectBinder payload, IEnumerable`1 parameters, DynamicMetaObject[] args, DynamicMetaObject& deferredBinding)
       at Microsoft.CSharp.RuntimeBinder.RuntimeBinder.Bind(DynamicMetaObjectBinder payload, IEnumerable`1 parameters, DynamicMetaObject[] args, DynamicMetaObject& deferredBinding)
       at Microsoft.CSharp.RuntimeBinder.BinderHelper.Bind(DynamicMetaObjectBinder action, RuntimeBinder binder, IEnumerable`1 args, IEnumerable`1 arginfos, DynamicMetaObject onBindingError)
       at Microsoft.CSharp.RuntimeBinder.CSharpConvertBinder.FallbackConvert(DynamicMetaObject target, DynamicMetaObject errorSuggestion)
       at System.Dynamic.DynamicMetaObject.BindConvert(ConvertBinder binder)
       at System.Dynamic.ConvertBinder.Bind(DynamicMetaObject target, DynamicMetaObject[] args)
       at System.Dynamic.DynamicMetaObjectBinder.Bind(Object[] args, ReadOnlyCollection`1 parameters, LabelTarget returnLabel)
       at System.Runtime.CompilerServices.CallSiteBinder.BindCore[T](CallSite`1 site, Object[] args)
       at System.Dynamic.UpdateDelegates.UpdateAndExecute1[T0,TRet](CallSite site, T0 arg0)
       at NoDynamicIteration.MainPage.<GetWorksheets>d__6`1.MoveNext()
       at NoDynamicIteration.MainPage.Button_Click(Object sender, RoutedEventArgs e)
       at System.Windows.Controls.Primitives.ButtonBase.OnClick()
       at System.Windows.Controls.Button.OnClick()
       at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e)
       at System.Windows.Controls.Control.OnMouseLeftButtonUp(Control ctrl, EventArgs e)
       at MS.Internal.JoltHelper.FireEvent(IntPtr unmanagedObj, IntPtr unmanagedObjArgs, Int32 argsTypeIndex, String eventName)

Not only is this extremely disappointing, but if this is not a bug it seems quite amateur. I mean really, who ships production code that throws NullReferenceExceptions? And how would I know, without spending an hour narrowing down the potential problem (which I did), that that exception was being caused because of an attempted Generic cast?

Once again, there does appear to be a workaround: Assign the dynamic result to an object, then cast to a Generic type. In my case the Generic type IS object. Can the compiler and binder seriously not figure out what is going on here?
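In sketch form (with `Get` as a hypothetical generic-returning wrapper method like the ones our generated classes expose), the workaround looks like this:

```csharp
public T Get<T>(dynamic comObject, int index)
{
    // Fails on the SL4 RC with a NullReferenceException from the binder:
    // return comObject[index];

    // Workaround: assign the dynamic result to object first, then cast.
    object raw = comObject[index];
    return (T)raw;
}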

Maybe what is most concerning about all of this is why there was such a disruptive change to an important feature between the post-Beta 2 EAP bits and the RC released less than 2 months later. I am beginning to feel like Microsoft is returning to the days when you should wait for the first service pack before you install anything. Possibly the changes were to help address the ridiculous code bloat problems the Beta 2 release had: if you peeked under the covers, it appeared every line of code that used a dynamic reference caused a static object instance to be generated by the compiler. The compiled assemblies using dynamic references do appear to be quite a bit smaller with the RC, but I haven’t peeked under the hood yet to see why. I am really not convinced, though, that this approach and design were appropriate; I am sure there must have been strongly typed alternatives that would have avoided much of this. And we would have had Intellisense.

As for us, we have to ship soon, so it is looking like a possibility we will have to scrap a big component of the effort and use glorified VBScript for all of the sections that interact with COM objects, taking away a big part of the value proposition, which was to provide a better and more productive environment for that type of business development. We can’t waste a bunch of time on any more surprises from these new, unexpected and undocumented (?) limitations.

Posted in Uncategorized | 2 Responses

Use any characters you want in your URLs with ASP.NET 4 and IIS 7!

After spending entirely too much time researching this issue today here is how you can use any characters you want for URLs in ASP.NET 4 and IIS 7. A bit of background: I am writing a web application that has a custom HttpModule and HttpHandler that should handle all requests and not limit the syntax of those requests at all. I could not find the information on how to do this in one place anywhere, and there are a reasonable amount of misleading, unanswered and naive responses on various forums that will likely lead you astray if you have an advanced configuration like mine. There are also a lot of completely out of date posts centered on .NET 1.1 and .NET 2.0.

The first thing I was trying to do was make a POST-ed form value containing a forward slash into something that could be used as a component in a RESTful URL. I tried to accomplish that by implementing a handler for AuthenticateRequest in my HttpModule (you can’t do it in BeginRequest unless you want to read the form data manually, because Request.Form is not initialized yet) that would encode the value and call TransferRequest. First, that made this happen:

A potentially dangerous Request.Path value was detected from the client (%).

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Web.HttpException: A potentially dangerous Request.Path value was detected from the client (%).

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[HttpException (0x80004005): A potentially dangerous Request.Path value was detected from the client (%).]
   System.Web.HttpRequest.ValidateInputIfRequiredByConfig() +8815985
   System.Web.PipelineStepManager.ValidateHelper(HttpContext context) +59

Okay. ASP.NET, for security reasons, normally protects your web applications from potentially harmful content being sent to them, which is probably a good thing. But what if we want or need potentially harmful content? Should we be limited? I hoped the answer was “no”, but the almost complete lack of information on the subject sure wasn’t making it seem that way. After some not too helpful reading and some digging with Reflector, I came up with this:

<httpRuntime requestPathInvalidCharacters="" />
<pages validateRequest="false" />

Now no characters are invalid and requests shouldn't even BE validated, right? WRONG. While this did clear up the first exception, I faced a new one. And while I was not sure the validateRequest setting would even apply to my case, since it lives on the pages element, it turns out that setting is required for the above and following changes to work properly together. Here was my second roadblock:

Error Summary

HTTP Error 404.11 – Not Found

The request filtering module is configured to deny a request that contains a double escape sequence.

Detailed Error Information
Module RequestFilteringModule
Notification BeginRequest
Handler Clear
Error Code 0x00000000
Requested URL http://localhost:80/Clear/search/x%2Fy
Physical Path D:\Development\Clear\Clear\search\x%2Fy
Logon Method Not yet determined
Logon User Not yet determined
Most likely causes:
  • The request contained a double escape sequence and request filtering is configured on the Web server to deny double escape sequences.
Things you can try:
  • Verify the configuration/system.webServer/security/requestFiltering@allowDoubleEscaping setting in the applicationhost.config or web.config file.
Links and More Information

This is a security feature. Do not change this feature unless the scope of the change is fully understood. You should take a network trace before changing this value to confirm that the request is not malicious. If double escape sequences are allowed by the server, modify the configuration/system.webServer/security/requestFiltering@allowDoubleEscaping setting. This could be caused by a malformed URL sent to the server by a malicious user.

Great — now it didn't even look like the request was getting from IIS to ASP.NET! However, since I had decided I "fully understood" the change, this problem was much easier to solve. Once again, it was a simple matter of adding the right magic words to the Web.config:

<requestFiltering allowDoubleEscaping="true" />
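For reference, here is how the pieces fit together in one Web.config. This is a sketch assuming an otherwise default configuration: httpRuntime and pages live under system.web, while requestFiltering belongs under system.webServer/security.

```xml
<configuration>
  <system.web>
    <!-- allow every character in the request path and disable request validation -->
    <httpRuntime requestPathInvalidCharacters="" />
    <pages validateRequest="false" />
  </system.web>
  <system.webServer>
    <security>
      <!-- let IIS pass through URLs containing double escape sequences, e.g. /search/x%252Fy -->
      <requestFiltering allowDoubleEscaping="true" />
    </security>
  </system.webServer>
</configuration>
```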

Finally my Frankenstein was coming to life and I thought I was in the clear. But then I realized that when other clients (not form POSTers, who would not realize my HttpModule was secretly processing their inputs) wanted to send me a URL with a forward slash in a path component, they would have to double-encode it or use some other strange method to distinguish it from the normal path-component delimiter.
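To make the double-encoding concrete, here is a quick demonstration (using Python's urllib purely for illustration) of why a literal forward slash inside a path component has to be escaped twice before IIS will treat it as part of a single component:

```python
from urllib.parse import quote, unquote

component = "x/y"

# First encoding: the slash becomes %2F, so it no longer splits the path.
once = quote(component, safe="")
print(once)   # x%2Fy

# Second encoding: the % itself becomes %25. This is the "double escape
# sequence" that IIS request filtering rejects unless allowDoubleEscaping=true.
twice = quote(once, safe="")
print(twice)  # x%252Fy

# Decoding twice recovers the original component.
assert unquote(unquote(twice)) == component
```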

So, I decided to add a feature to my framework that would allow a syntax to specify that “the rest” of a URL is a single component and to do that I wanted to use “/*”. Convinced I could use any character I wanted now, I went ahead and ran the debugger with my test path, and to my chagrin ran into this little beauty:

System.ArgumentException occurred
Message=Illegal characters in path.
at System.Security.Permissions.FileIOPermission.HasIllegalCharacters(String[] str)

mscorlib.dll!System.Security.Permissions.FileIOPermission.HasIllegalCharacters(string[] str) + 0x117 bytes
mscorlib.dll!System.Security.Permissions.FileIOPermission.AddPathList(System.Security.Permissions.FileIOPermissionAccess access, System.Security.AccessControl.AccessControlActions control, string[] pathListOrig, bool checkForDuplicates, bool needFullPath, bool copyPathList) + 0x4a bytes
mscorlib.dll!System.Security.Permissions.FileIOPermission.FileIOPermission(System.Security.Permissions.FileIOPermissionAccess access, string[] pathList, bool checkForDuplicates, bool needFullPath) + 0x2c bytes
mscorlib.dll!System.IO.Path.GetFullPath(string path) + 0x5c bytes
System.Web.dll!System.Web.Util.FileUtil.IsSuspiciousPhysicalPath(string physicalPath, out bool pathTooLong) + 0x42 bytes
System.Web.dll!System.Web.Util.FileUtil.IsSuspiciousPhysicalPath(string physicalPath) + 0x18 bytes
System.Web.dll!System.Web.Util.FileUtil.CheckSuspiciousPhysicalPath(string physicalPath) + 0x9 bytes
System.Web.dll!System.Web.CachedPathData.GetPhysicalPath(System.Web.VirtualPath virtualPath) + 0x77 bytes
System.Web.dll!System.Web.CachedPathData.GetConfigPathData(string configPath) + 0x190 bytes
System.Web.dll!System.Web.CachedPathData.GetVirtualPathData(System.Web.VirtualPath virtualPath, bool permitPathsOutsideApp) + 0x6f bytes
System.Web.dll!System.Web.HttpContext.GetFilePathData() + 0x25 bytes
System.Web.dll!System.Web.HttpContext.GetConfigurationPathData() + 0x1b bytes
System.Web.dll!System.Web.Configuration.RuntimeConfig.GetConfig(System.Web.HttpContext context) + 0x2c bytes
System.Web.dll!System.Web.HttpContext.SetImpersonationEnabled() + 0xd bytes
System.Web.dll!System.Web.HttpApplication.AssignContext(System.Web.HttpContext context) + 0x5c bytes
System.Web.dll!System.Web.HttpRuntime.ProcessRequestNotificationPrivate(System.Web.Hosting.IIS7WorkerRequest wr, System.Web.HttpContext context) + 0x22f bytes
System.Web.dll!System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(System.IntPtr managedHttpContext, System.IntPtr nativeRequestContext, System.IntPtr moduleData, int flags) + 0x1fc bytes
System.Web.dll!System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(System.IntPtr managedHttpContext, System.IntPtr nativeRequestContext, System.IntPtr moduleData, int flags) + 0x29 bytes
[Appdomain Transition]

So, in a last desperate attempt not to give up, I circled back to a solution that had not worked for any of the earlier problems, and to my surprise it all worked out. Unfortunately, it requires adding a registry value, apparently making your entire server less secure, so it is only an option in the most "all access" hosting environments. Anyway, you need to set the following to get the last few "illegal" characters allowed:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\VerificationCompatibility = 1 (32-bit DWORD)
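If you would rather script the change than edit the registry by hand, the same value can be captured in a .reg file (a sketch; on 64-bit machines, double-check whether your worker process actually reads the Wow6432Node hive instead):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET]
"VerificationCompatibility"=dword:00000001
```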

Before I get accused of hacking my way out of this: it is actually a recommended fix from Microsoft, documented back when this validation was first added in a .NET 1.1 service pack. It is apparently still honored, and for guys like me who like to push the limits, I thank them for trusting us a tiny bit:;EN-US;826437

I did find one post that sort of discusses this problem (on Scott Hanselman's blog), but it appears that rather than finding a solution they opted to change their URLs to work around it. Yes, maybe that was a better idea, but not nearly as FUN:

Hopefully this will help someone get past the issue without all the forum digging and experiments, or at least convince them it's a bad idea and they should give up on the special characters. 😉

Posted in Development, Web | Tagged , , | 560 Responses

Visual Studio 2010 RC Custom Tool for Code Generation + jQuery 1.4 w/ Intellisense for Script#

If you are like me, you probably just want the code, which you can find here:

Or, if you just want the binary DLL and XML doc file for referencing jQuery 1.4 from Script# projects, you can find that here (released under an “MIT license”, see sources for details):

To open the solution you will need the VS2010 RC, the VS2010 SDK and Script# for VS2010. You should also run VS as an administrator. Before you can build the jQuery.1.4 project, you will need to build the MethodXmlGenerator project and expose the custom tool to Visual Studio by using the .reg file provided (which also requires an IDE restart).

I also had to manually register “C:\Program Files\Microsoft Visual Studio 2010 SDK\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.Shell.Interop.dll” with “regasm /tlb” from a VS command prompt that was Run as Administrator. I am not sure if this is an installation bug or if my environment was borked from uninstalling Beta 2.

There are two projects in the solution. One is a Custom Tool for Visual Studio that translates the raw jQuery API documentation file into a CLR friendly catalog of method information. The other is a Script# library that contains interfaces for jQuery 1.4 functionality. Most of that code is automatically generated from the output of the Custom Tool by using T4 Templates (.tt files).

This release features access to most overloads of every jQuery method: global (e.g. jQuery(selector)), static, and the methods available on jQuery element query results. It also has integrated XML doc comments drawn from the same documentation on the jQuery website. Future releases will add more objects (such as strongly typed event and AJAX options objects) and hopefully better support for some callbacks. Although many callbacks are supported, they cannot be automatically generated and are associated based on inconsistent conventions in the jQuery documentation. I intend to report some of these issues, and maybe they will get fixed eventually.

If you are interested in developing your own Custom Tools for Visual Studio 2010, here is a bit more information. First of all, I have to give credit to this link, which is out of date but got me headed in the right direction:

The basic steps I had to follow were:

  1. Create a new .NET 4.0 C# class library
  2. Install the VS 2010 SDK and add a reference to Microsoft.VisualStudio.TextTemplating.VSHost.10.0
  3. Add references to Microsoft.VisualStudio.OLE.Interop and Microsoft.VisualStudio.Shell.Interop (VS will tell you if they are not right)
  4. On my machine, I had to manually register Microsoft.VisualStudio.Shell.Interop (see instructions above if you need to do this)
  5. In project settings, set the assembly to COM Visible and to Register for COM Interop (the latter is only necessary to auto-register on build)
  6. Create a new class deriving from Microsoft.VisualStudio.TextTemplating.VSHost.BaseCodeGeneratorWithSite
  7. Override and implement GetDefaultExtension (any generated file postfix, e.g. “.tt.xml”) and GenerateCode
  8. Use Encoding.UTF8.GetBytes to return the file contents from GenerateCode (this has not been tested with non-ASCII characters yet)
  9. Manually install the custom tool in VS registry space under the C# project type (see the .reg file, the GUID in the path is for the type)
  10. Set the name of the custom tool entry in the registry as the Custom Tool property of an appropriate file in VS
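Step 9 in practice boils down to a .reg file with roughly this shape (a sketch: "MyGenerator", its description and the all-zero CLSID are placeholders for your own tool name and the GUID of your COM-visible generator class; the GUID in the key path identifies the C# project type):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\Generators\{FAE04EC1-301F-11D3-BF4B-00C04F79EFBC}\MyGenerator]
@="My custom code generator"
"CLSID"="{00000000-0000-0000-0000-000000000000}"
"GeneratesDesignTimeSource"=dword:00000001
```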

Those are the basic steps. Of course, if you need to debug this at all, you have to fire up a second instance of Visual Studio and attach to the first with a debugging type of Managed v4, then either turn on Debug –> Exceptions –> Thrown for CLR exceptions or open the source code file and set breakpoints. You can invoke (and re-invoke, even if it fails) by choosing Run Custom Tool in the first instance.

One gotcha: If you use my setup you have to close VS before you can rebuild the custom tool because the IDE locks the file once accessed. If I remember anything else I will try to throw it up here later. Until then, have fun with custom tools and the power of jQuery, C# and Intellisense combined! Future releases may be combined with drops of WebScriptFX.

Posted in Development, Web, WebScriptFX | Tagged , , | 353 Responses

PROCEX (PX)
PROCEX (PX), short for “Process X”, is an experimental software project execution process that intends to be compatible in any environment and require just enough overhead to ensure that all customers, responsible parties, team members, stakeholders and managers stay satisfied. The goal is for it to be prescriptive and simple, and short enough to fit on a few pieces of paper. No book here.

PX assumes you already have some knowledge of software development and processes, and refers to them without sources. For purposes of illustration, the two process types that the rules will be applied against are Waterfall and Scrum. Anything not stated here is not covered by the rules.

Compatibility
PX does not indicate what micro-processes are used to fulfill the responsibilities laid out here. It is designed to be compatible with any phases, milestones, iterations, platforms, development processes, management structures, tools (as long as they are capable of tracking the required data points), applications and file formats desired or available.

Roles
PX requires only two roles to exist. Both roles could conceivably be filled by the same individual, but this would be too much overhead if there were no need for collaboration between responsible parties. The roles are Project Manager (PM) and Technical Lead (TL). In Waterfall, the TL would not necessarily need to start until the “tech plan” is started. Scrum does not directly address these roles but most of the time it would make sense that the PM is Product Owner and TL is ScrumMaster.

Missing in PX, compared to some traditional software development organizations, are the Analyst, Test Lead and Development Lead. The TL serves as both Test Lead and Development Lead, but in situations where there is a distinct Test Lead, they should be considered a delegate of the TL. This makes sense because a Development Lead should have to sign off that the Test Lead's plan is sufficient to verify the deliverables being created. The same goes for the Analyst: they should be considered a delegate of the PM. Understand, though, that distinguishing them as delegates does not relieve them of any organizational responsibilities or imply any authority over them.

This arrangement provides a clear and single point of responsibility for each of the components of the process in any arrangement. It also nearly equally balances the responsibilities and effort required of the two sides in a project: business and technical.

Project Plan
There should be no standard "Project Plan" document, but likely, due to customer needs or PM preference, there will be a specific format used for laying out any deadlines, milestones or releases that exist. Since this is an internal need, it is not specifically included in PX, which is focused on inputs and outputs. It should be just as valid for this to be a list in a Word document or a complete Gantt chart in Microsoft Project, as long as it meets the requirements of the project. The only impact on the artifacts of PX is that the project plan will define the dates when different components are thoroughly assessed. The project plan defines a structure for measuring the different data points provided by PX against prior expectations.

In Scrum, the Product Backlog can represent the requirements but none of the other artifacts directly map to PX. Sprint Backlog and task status would be represented by the creation of issues and tasks and properly updating their individual statuses. Keeping these in sync with whatever team-level tracking is being used (like Scrum has) is an absolute necessity for PX.

Artifacts
All of these artifacts are required to be produced in some form, and their order is significant:

Artifact                          | Output         | Notes
meeting notes, customer docs (PM) | agreements     | input to requirements
requirements (PM)                 | user stories   | input to functionality, test plan
functionality (PM)                | step by step   | input to tech plan, test plan, documentation
tech plan (TL)                    | implementation | input to test plan, configuration (e.g. schemas); includes information on standards, processes and "definition of done"; NOTE: should only contain info pertinent to inputs (no internal details)
deploy (TL)                       | installation   | input to test plan, documentation; provides instructions for deployment
test plan (TL)                    | verification   | approach and references to lists or builds of smoke, integration and E2E tests, or sub-plan
tech docs (TL)                    | annotation     | all code commenting extended and verified, and customer-required docs complete
ship pack (PM)                    | completion     | all tracking reports and deliverables, including any release and user docs
While all of these artifacts can be “living” depending on how agile the environment is, no artifact should be created or added to before the previous artifact in this list has been added to. The reason is all items in every artifact should somehow refer back to their source, which must be an item in a prior artifact. Providing this chain back to their source allows them to be implicitly justified and used in projection. Saying a ship pack could be “living” may sound strange, but consider projects with multiple releases.
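The chain-back-to-source rule can be sketched as a tiny validation routine. This is a hypothetical data model; PX does not prescribe any tooling or schema:

```python
# The required artifact order from the table above (names shortened for illustration).
ARTIFACT_ORDER = ["meeting notes", "requirements", "functionality",
                  "tech plan", "deploy", "test plan", "tech docs", "ship pack"]

def valid_chain(items):
    """Check that every item cites a source item in a strictly earlier artifact."""
    index = {name: i for i, name in enumerate(ARTIFACT_ORDER)}
    by_id = {item["id"]: item for item in items}
    for item in items:
        if item["artifact"] == ARTIFACT_ORDER[0]:
            continue  # the root artifact has no prior source to cite
        source = by_id.get(item.get("source"))
        if source is None or index[source["artifact"]] >= index[item["artifact"]]:
            return False
    return True

# An agreement feeding a requirement forms a valid chain; an orphan requirement does not.
print(valid_chain([{"id": 1, "artifact": "meeting notes"},
                   {"id": 2, "artifact": "requirements", "source": 1}]))  # True
```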

Status
Status of a project can be assessed by inspecting the groupings of different issue states for each of the different project deliverables. Take requirements as an example. At the beginning of a project some initial meetings take place and possibly some documents are provided or a vague proposal has been tentatively approved. Even before any document is generated or code is written, issues and tasks can be opened against the project deliverables. This begins the process of fleshing stuff out. Throughout that initial exercise, the set and scope of the unknowns will be defined as issues against the requirements. From then on, we can measure the rate at which requirements are being clarified as issue open rates and resolutions change. Issue and task state workflows will be addressed in a future PX version.

Issues and Tasks
The most important points about issues and tasks are that they are created as soon as they are known about or considered rather than waiting until more information is available and they should always be inspectable by anyone, at any time. They provide a real-time snapshot into all project knowledge, unknowns, progress, risks and status. Reports can be used to track trends and significant shifts.

There is no prescription for who should enter and update these items in software systems, although for best results it may be a good idea to have rotating, dedicated or PM-provided support to help the team get this done, so it is always done in a consistent and reliable manner.

Severity
Severity is an important aspect of an issue for accurately assessing project status, and it should indicate breadth of scope rather than impact on a user. For example, sometimes a severity is attached to a product bug based on the direct impact to the user, with a crash being the highest and an inconsistency or minor annoyance being the lowest. Severity in those cases is always relative to the narrow case where it occurs. The question severity should answer is how often the user is going to run into the problem, not how bad the problem will be when it is encountered. It should express, as a percentage of features affected, how widely the problem occurs.

Here is the severity scale:

Severity | Rule
0        | >= 90% of features affected
1        | >= 50% of features affected
2        | >= 10% of features affected
3        | < 10% of features affected
4        | < 1% of features affected

This by no means should be used to judge the business value, only how widely affected the system is by the problem. The TL should be responsible for making sure this field is correct during triage (within 24 hours), either themselves or by delegating.
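Since severity answers "what fraction of features are affected," the scale reduces to a simple threshold function (a hypothetical helper; PX itself does not prescribe tooling). The last two bands overlap, so the check runs widest-first:

```python
def severity(fraction_affected):
    """Map the fraction of features affected (0.0-1.0) to a PX severity level.

    Bands are checked widest-first because "< 10%" and "< 1%" overlap:
    anything under 1% is severity 4, not 3.
    """
    if fraction_affected >= 0.90:
        return 0
    if fraction_affected >= 0.50:
        return 1
    if fraction_affected >= 0.10:
        return 2
    if fraction_affected >= 0.01:
        return 3
    return 4

print(severity(0.95), severity(0.30), severity(0.005))  # 0 2 4
```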

Estimate
This needs to be a best-guess or educated estimate of the amount of work required to resolve the issue or complete the task, and it is also the responsibility of the TL. It must be set within 24 hours, as it may affect the Priority assessment. PX does not define what is done with the estimate: maybe it is the initial work value used in Scrum, or the baseline against which time tracked on the issue or task is compared for percentage-done measurements. It is required for triage workflows. How work is measured is also not specifically defined; PX is compatible with relative or absolute values.

Impact
Impact, rather than severity, should indicate both how often and how much the user is affected by the issue. Unlike the strictly linear severity scale, it allows a bit more discretion, but still comes with a set of simple rules for choosing the right level.

Here is the impact scale:

Impact | Rule
0      | >= 10% of the time the issue is fatal: unusable, data loss or false positives (e.g. no feedback on fail)
1      | rare fatal issue, or significant confusion or inconvenience (led to do the wrong thing, start over, etc.)
2      | functional but bad content, images, or presentation: false negatives, confusing alerts, UI issues
3      | non-fatal issues, normally 1s or 2s, but only happen when uncommon things occur together
4      | minor detail (e.g. non-functional, non-branding capitalization mistake) or unsupported platform

Impact should be set by negotiation between the PM and the TL. The PM can override if absolutely necessary, but if the TL is still sure the assessed impact is wrong, he can demand an explicit customer assessment that the PM must either obtain or withdraw from. This should be the first field assessed at triage. It should be correct when submitted, and corrected immediately if not.

Priority
Priority simply represents the decided order in which work is done to achieve the business goals, except when a properly expressed dependency (through issue and task source relationships) means some other work must be done before the highest-priority work can be. Even in those cases, only the amount of work needed to unblock the higher priority should be finished. For example, if you need a hammer to build a dog house, and the task to put together the frame has a higher priority than a dependent task for buying tools, you should only buy the hammer, rather than all the tools, before focusing on building the frame.

Where severity setting is completely the responsibility of the TL and impact is a negotiation between the TL and the PM, priority is completely the responsibility of the PM. They may have more information and be representing business needs unknown to the TL. The priority of an issue must be set within 48 hours since the decision may be dependent on the severity (set within 24 hours). Priority also should have the same meaning across tasks and issues so the TL does not need to decide which is more important.

Performance
Performance of projects can be measured by creating formulas that analyze the counts of tasks and issues against different deliverables and in different states. In PX v-next I will go into deeper detail on using KPIs.

Posted in Agile, Processes | Tagged | 58 Responses

WebScriptFX v0.1

This is my first crack at putting together several different technologies to create a framework for real-time asynchronous client applications downloaded as JavaScript and HTML. The goal is both to create a consistent toolkit for these types of Rich Internet Applications and also to bring the bar down to only knowledge of HTML and CSS to implement most of the user interface.

Furthermore, all application logic will be written and maintained in C#. This is achieved by using Script# integrated with the Visual Studio 2010 RC and a compatibility layer that allows us to express and link to models directly from the script projects. The model base class is designed to allow real-time notification of property changes, enabling real-time data binding scenarios on the client as well as the server without any additional code.

DOM traversal and manipulation is simplified by accessing features from jQuery 1.4, exposed to Script# by an evolving interface library. Dynamic content is achieved by HTML annotated with attributes and CSS classes that describe data binding, which is then processed to extrapolate template definitions into active elements and set up two-way data binding with the data model as sent down from the server. This is possible because both client and server can produce and consume the same JSON format for all data.

Still to be done in this version is sending the data back to the server and resolving differences so they can be pushed to a persistence layer. The models are set up to use Fluent NHibernate, but this hasn't been integrated yet. All of this sits on top of ASP.NET MVC 2, and additional routes will be added later for standard REST-based access to all resources.

Posted in Development, Web, WebScriptFX | Tagged , , , , | Leave a comment

UriTemplate Formats

If you need to use a UriTemplate, for example when applying WebInvokeAttribute to a WCF OperationContract to create a “REST endpoint”, the formatting rules are here:

MSDN: UriTemplate and UriTemplateTable

Posted in Development, Web | Tagged , | 927 Responses

Version-Enabled Software Maintenance

As you may or may not have noticed, I have not posted for a while. Unfortunately, this is what happens when you start working 7 days a week. On one hand, I have been working on some cool stuff, including Silverlight 3 and 4, NHibernate data models, Amazon SimpleDB-powered social networking applications (MySpace, please fix your API for "MySpaceID applications" powered by OAuth) and too many other things to mention. On the other hand, this has completely stalled my efforts to fully bring jQuery into the world of Script#. After some e-mail conversations with Nikhil Kothari, though, I am not sure Script# will be updated anytime soon, so maybe I will have a chance to get back to it before that world changes. I am sure people will still be using C#, jQuery and web apps for a while.

So, what’s the point? Well, a coworker posted a link to an article about “software maintenance” to our internal technical discussion alias and I felt compelled to respond on a few points. Understand that I am sure the authors are quite capable and knowledgeable in this area but they touched so many areas I thought some begged for clarification. Another coworker subtly hinted I should blog.

First, the link:

And, my thoughts:

Their section: Modern Examples

The example about using a new record type struck home because we have just been working on decompiling Office primary interop assemblies. This is apparently the versioning strategy that Office used to extend COM interfaces. It results in a gigantic footprint and decompilation that makes very little sense. Keep in mind that this strategy does not necessarily work well unless you have control of all clients, or you end up with monolithic monsters like Microsoft.Office.Interop.Excel.

The note about XML is, I suppose, interesting, except that XML has no concept of NULL. Most XML processors treat the value of a missing attribute as an empty string. Yes, I am aware of "xsi:nil", but it is poorly supported and definitely not automatic. Also, it would be much better to use a new XML namespace containing a version number than to simply change the names of attributes in your XML schema.

Their section: Who Does it Right?

In the first section the article says, “Still, we must keep shutting down individual parts of the network to repair or change the software. We do so because we’ve forgotten how to do software maintenance.” But then in this section after the proposed process, they say, “Using this approach, there is never a need to stop the whole system, only the individual copies, and that can be scheduled around a business’s convenience.”

So their proposed process still contains the problem they are trying to solve…?

They then talk about how very old versions that are now obsolete should be handled and describe what should happen and the expected action, “It fails conspicuously, and the system administrators can then hunt down and replace the ancient version of the program.”

What is the point of automatic updating systems that create a new task that can only be done with human intervention? It seems they are trying to move forward with new ideas, but are still a little mentally stuck in the “old world” of software.

In Summary

Versioning of schemas is a widely accepted concept and can definitely be useful in some cases. In practice, however, versioning seems rarely exercised in prevalent public schemas. For example, as far as I know, the XML version number in a document has always been, and must always be, "1.0". Most OAuth implementations currently require that the version be "1.0" even though there have been some changes to the protocol. The version check usually becomes a detail that most implementers ignore, probably because most consumers ignore the expectation. Of course, this may not apply to proprietary systems.

If anything, I believe this article indirectly makes a great argument for cloud computing and web applications. In cloud computing it is much easier to propagate server component updates across a large number of nodes which should offer near seamless server upgrades. Furthermore, for HTML web applications, or even better Silverlight (yes, I realize this sounds suspiciously like a pitch for Microsoft’s “3-screens” strategy from the PDC ’09 keynote),  you have complete control over the client and updates are handled by a user’s web browser automatically if you use a good component versioning strategy for assets (CSS, JS, XAP, etc.). The only thing you might have to worry about is if a Data Transfer Object structure changes, but if you are using a good pattern such as Model-View-ViewModel with proper isolation of tiers, the worst case would be serving multiple ViewModels simultaneously. I suppose this approach is just loose coupling which the authors do mention.

Hopefully we will soon all realize just how easy things could be.

Posted in Development, Web | Tagged | 31 Responses