Author Archives: Alexandre Roba

About Alexandre Roba

Currently developing and delivering Application and Solution Architecture consultancy services as a freelancer. Previously worked as an IT Architect for IBM Business Consulting Services.

Implementing a flash middleware on express.js 3.x

I’m currently following the Pluralsight training “Full Stack Node.js”. The course was released more than a year ago and is based on express.js 2.x. I chose to build the course sample on express 3.x, and because of this some features described in the course no longer apply. The purpose of this blog post is to demonstrate how to properly migrate one of them: flash messages.

Simply put, flashing consists of displaying a status message once an operation executed on a POST has completed. Flashing is particularly useful when using the Post/Redirect/Get pattern. The whole scenario can be described as:

  1. The user submits a form with a POST action.
  2. The form is processed and a result status such as “Error” or “Success” is generated.
  3. The POST action completes with a redirect that tells the client browser to GET another page.
  4. The rendering of the GET result should display the processing status.

I found several questions on Stack Exchange explaining how to do this, but none of them was clear enough for me to understand… So I dug a bit and decided to share my findings here.

The original solution used app.dynamicHelpers, which is no longer available. The migration document from express 2.x to 3.x just says to replace it with:

middleware and res.locals…

Fine… How am I supposed to do that? Well, I believe the answer is incomplete.

You use the middleware to add processing to the handling of your request and its response; after all, that is what middleware is for. In the middleware you attach a function that can be used in the route handler. This function uses res.locals to attach something to your response that can be used when rendering the view. Or at least this is how I understood it, and it felt incomplete.

Our problem here is that we are using the Post/Redirect/Get pattern, which means that any data attached to res.locals vanishes as soon as you redirect. The redirect instructs the browser to perform a GET, which starts the processing of a new HTTP request from the beginning. There is no way to solve our issue with a middleware and res.locals alone! We need a way to pass information from the POST and its redirect to the following GET. The only way to do this is via a cookie or the session, but either one then needs to be cleaned up once the status message has been displayed.

This is how I achieved it. First, I implemented the middleware, which looks like this:

"use strict";

module.exports = function(){
	return function(req, res, next) {
		//Attach a per-request flash function. Using a closure over req/res
		//(rather than module-level variables) keeps concurrent requests from
		//overwriting each other's references.
		req.flash = function(type, message){
			//Flash messages need the session to survive the redirect
			if (req.session === undefined) {
				throw Error('req.flash() requires sessions');
			}
			//Called with arguments from a route handler: store the status
			//message in the session during the POST processing
			if (type && message) {
				req.session.flash = {flashType: type, flashMessage: message};
			} else if (req.session.flash) {
				//Called without arguments: move any saved status from the
				//session to res.locals for the rendering, then clean it up
				var flashObj = req.session.flash;
				res.locals.flashTypes = ['info', 'error'];
				res.locals.flash = {};
				res.locals.flash[flashObj.flashType] = flashObj.flashMessage;
				delete req.session.flash;
			}
		};
		//Read the status from the session if it is there and remove it
		req.flash();
		next();
	};
};

The middleware needs to be added to your express app instance as usual:

var express = require('express');
var path = require('path');
var RedisStore = require('connect-redis')(express);
var flash = require('./middleware/flash');

var app = module.exports = express();

app.configure(function(){
    app.set('port', process.env.PORT || 3000);
    app.set('views', path.join(__dirname, 'views'));
    app.set('view engine', 'jade');
    app.use(express.favicon());
    app.use(express.logger('dev'));
    app.use(express.json());
    app.use(express.urlencoded());
    app.use(express.methodOverride());
    app.use(express.cookieParser());
    app.use(express.session({
        secret:"mysupersecrethash",
        store: new RedisStore()
    }));
    app.use(flash());
    app.use(app.router);
    app.use(express.static(path.join(__dirname, 'public')));

    if('test' === app.get('env')){
        app.set('port',3001);
    }

    // development only
    if ('development' === app.get('env')) {
      app.use(express.errorHandler());
    }
});

Then the route handler for the POST has the following implementation. This is basically where we set the status of the processing:

app.post('/sessions',function(req,res){
	if(('admin' === req.body.user) && '12345'===req.body.password){
		req.session.currentUser = req.body.user;
		req.flash('info',"You are logged in as "+req.session.currentUser);
		res.redirect('/login');
		return;
	}
	req.flash('error',"Those credentials were incorrect. Try Again");
	res.redirect('/login');
});

And finally, from the Jade template, we can use the status information that was attached to res.locals:

if (typeof(flash) !=='undefined')
	each flashType in flashTypes
		if flash[flashType]
			p.flash(class=flashType) #{flash[flashType]}

The “yield” keyword demystified

I recently had a discussion with a younger C# developer who was asking questions about the usage of the yield keyword. He said he had never used it and thought it was useless. He then confessed that he didn’t really understand what the keyword was about. I tried to explain what it does, and this is the material I would have used if I had had it at the time. In this post I will try to explain what “yield” is all about with simple but concrete examples.

First things first: where can we use it?

It should be used in a function that returns the IEnumerable or IEnumerable<> interface. The function must explicitly declare one of those interfaces as its return type, like the two following functions:

public IEnumerable GetIntegers1()
{
    yield return 1;
    yield return 2;
    yield return 3;
}

public IEnumerable<int> GetIntegers2()
{
    yield return 1;
    yield return 2;
    yield return 3;
}

By returning the IEnumerable interfaces, those functions become iterable and can now be used directly in a foreach loop:

foreach (var i in GetIntegers1())
{
    Console.WriteLine(i.ToString());
}

foreach (int i in GetIntegers2())
{
    Console.WriteLine(i.ToString());
}

OK, but why use it?

What is the difference between those two functions and this one?

public IEnumerable GetIntegers1()
{
    return new List<int> { 1, 2, 3 };
}

It might not be obvious at first sight, as the result is identical, but the execution flow is different.
Basically, if you debug the program you will see the following for the returned list:

  1. Enter the foreach loop.
  2. Call GetIntegers ONCE.
  3. Write the first number.
  4. Write the second number.
  5. Write the third number.

And you will see the following when using yield return:

  1. Enter the foreach loop.
  2. Call GetIntegers; it runs until the first yield return and suspends.
  3. Write the first number.
  4. Resume GetIntegers; it continues after the first yield return until the second, then suspends.
  5. Write the second number.
  6. Resume GetIntegers; it continues until the third yield return, then suspends.
  7. Write the third number.

That is all. It simply changes the execution flow and allows you to handle each element of the list one by one, before the next element is produced.
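This interleaving is easy to observe by placing a side effect next to each yield return. The following is a minimal sketch of my own (not from the original example); its console output alternates between the producer and the consumer.

```csharp
using System;
using System.Collections.Generic;

public static class YieldDemo
{
    //Each "yield return" hands one value to the caller and suspends the
    //method until the caller asks for the next element.
    public static IEnumerable<int> GetIntegers()
    {
        Console.WriteLine("producing 1");
        yield return 1;
        Console.WriteLine("producing 2");
        yield return 2;
        Console.WriteLine("producing 3");
        yield return 3;
    }

    public static void Run()
    {
        //"producing n" and "consuming n" lines alternate, showing that the
        //iterator body runs piece by piece between loop iterations
        foreach (var i in GetIntegers())
            Console.WriteLine("consuming " + i);
    }
}
```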

Fantastic! But is this magic?

No, it is not. You could have achieved the same result by implementing the iterator pattern yourself, using the IEnumerable and IEnumerator interfaces and building a dedicated class like the following (for simplicity I only implement IEnumerable, but IEnumerable<> could be implemented as well):

using System.Collections;
using System.Collections.Generic;

public class IterableList : IEnumerable, IEnumerator
{
    public List<object> numbers;
    public int index;

    public IterableList()
    {
        numbers = new List<object>();
        index = 0; //Assign the field; do not declare a new local here
    }

    public IterableList(IEnumerable inputlist) : this()
    {
        foreach (var i in inputlist)
            numbers.Add(i);
    }

    public IEnumerator GetEnumerator()
    {
        return this;
    }

    public bool MoveNext()
    {
        index++;
        return index <= numbers.Count;
    }

    public void Reset()
    {
        index = 0;
    }

    public object Current
    {
        get
        {
            if (index == 0)
                return 0;
            return numbers[index - 1];
        }
    }
}

And then define a function:

public static IterableList GetIntegers3()
{
    return new IterableList(new List<int> { 1, 2, 3 });
}

The code generated by the compiler for both will look very similar. This can be confirmed by looking at the IL generated by each of our implementations. We can see that when using yield, an extra class is generated for us that implements IEnumerable and IEnumerator (and their generic versions).


The IterableList class we have written will look mostly the same (except for the generic versions, which we have not implemented).


To summarize!

Basically, using yield gives us control over the way the items of our IEnumerable result are produced and processed. And there is no magic behind it: it is simply a helper that generates the iterator code for you.
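A practical consequence of this changed execution flow, which goes beyond the examples above, is that an iterator can describe a sequence that is never fully materialized; only the elements the consumer actually asks for are produced. A small sketch of my own:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class LazyDemo
{
    //An endless sequence: this would never terminate if it were built
    //eagerly, but with yield each element is produced on demand.
    public static IEnumerable<int> AllIntegers()
    {
        var i = 0;
        while (true)
            yield return i++;
    }
}
```

With LINQ, `LazyDemo.AllIntegers().Take(5).ToList()` computes exactly five elements and returns; the loop inside the iterator is suspended, not running forever.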

GAC Deployment versus BIN Deployment and security patches

In one of my assignments I had to investigate different ways to publish utility libraries to different projects and development teams. The first idea that came to mind was to build a NuGet package and configure an internal NuGet feed where I could publish it. This sounded like a good idea, and I was about to close the analysis phase and settle down for the implementation when someone asked me how I was going to manage security patch deployment. Let me clarify what a security patch is.

A security patch is a patch that needs to be deployed to production regardless of the risk that the production application gets into trouble. It is a patch that contains no API or interface change, only internal corrections. Such patches are not deployed in the scope of a particular application; they are deployed on every machine where a specific component is used. In my situation, as I am only delivering libraries, I need to be able to tell the ops team: “Please deploy this on all machines where the library is in use.” And this is my major problem: I don’t know where that is. My library gets used through NuGet, and only the client applications know which packages they are using. I also cannot guarantee that a fix published as a new NuGet package version will be picked up right away by the client application’s development team and included in their next deployment.

What options do I have here? NuGet packages do not cover this scenario by design. My first reaction was to challenge the requirement: what kind of library might require a security patch? Not that many. You know what they say: “Show me a dragon and then I will show you Excalibur”. This did not convince anyone. I had to find a specific way to deploy those security-sensitive libraries.

This is when I started investigating GAC deployment. How do I achieve it? I build my library and make it available through an MSI that registers the library in the GAC. The MSI, being deployed on machines as a unit of deployment, can be tracked and inventoried by the ops team, so I can find the list of machines where I will have to deploy my security patch.

GAC deployment gives me the possibility to deploy a new version of a library on a machine and make sure any client application using the old version of the component picks up the new version right away. I tested this, and this is how I did it:

I wrote two versions of a library with the following code:

Version 1.0.0.0

using System.IO;

namespace CommonLibrary
{
    public class Library
    {
        public void Function()
        {
            var file = new StreamWriter("D:\\VersionLog.txt",true);
            file.WriteLine("This is version 1.0.0.0 of the library's function");
            file.Close();
        }
    }
}

Version 2.0.0.0

using System.IO;

namespace CommonLibrary
{
    public class Library
    {
        public void Function()
        {
            var file = new StreamWriter("D:\\VersionLog.txt",true);
            file.WriteLine("This is version 2.0.0.0 of the library's function");
            file.Close();
        }
    }
}

Both of them were compiled and strongly signed with the same key.

I wrote a small client application that used the library. It looked like this:

using CommonLibrary;

namespace ClientApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            var lib = new Library();
            lib.Function();
        }
    }
}

I built those components on .NET Framework 4.5, meaning I am using the GAC of the .NET Framework 4.0.
I deployed version 1.0.0.0 to the GAC using the following .NET Framework 4.0 gacutil command from a Visual Studio 2012 command prompt:

gacutil /i CommonLibrary.dll

I could then use the following command to check that my component was properly installed in the GAC:

gacutil /l CommonLibrary

And the result was:

Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.17929
Copyright (c) Microsoft Corporation.  All rights reserved.

The Global Assembly Cache contains the following assemblies:
  CommonLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b523b0464e4856a6, processorArchitecture=MSIL

Number of items = 1

Then I ran the client application exe file, and I could check the following line in my output file:

This is version 1.0.0.0 of the library's function

Then I deployed version 2.0.0.0 of the library using the same method as previously mentioned.
I then ran the following command to check the content of my GAC:

D:\dev\GacVersionning\libraries\V2>gacutil /l CommonLibrary
Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.17929
Copyright (c) Microsoft Corporation.  All rights reserved.

The Global Assembly Cache contains the following assemblies:
  CommonLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b523b0464e4856a6, processorArchitecture=MSIL
  CommonLibrary, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b523b0464e4856a6, processorArchitecture=MSIL

Number of items = 2

This clearly shows the multiple versions of my deployed library.
After running the client application I could see it was still using version 1.0.0.0.
In order for my client application to use version 2.0.0.0 of the library, I have to deploy a publisher policy file.
A policy file is an XML config file that gets compiled into a DLL so that it can be deployed to the GAC.
It tells the GAC to redirect all calls for a given version to another version.

This is the content of my policy config file, which I named RedirectPolicyFile.config:

<configuration>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <assemblyIdentity name="CommonLibrary" publicKeyToken="b523b0464e4856a6" culture="neutral" />
                <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0"/>
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>

I compiled it using the following command:

al /link:RedirectPolicyFile.config /out:policy.1.0.CommonLibrary.dll /keyf:StrongName.snk

Then I registered the policy assembly “policy.1.0.CommonLibrary.dll” in the GAC using the same command as usual:

gacutil /i policy.1.0.CommonLibrary.dll

We can then run the client application and check the output file. It should contain the following line:

This is version 2.0.0.0 of the library's function

You have been security patched.
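One way to double-check the redirect from the client side is to ask the CLR which assembly it actually bound to. The helper below is my own illustration, not part of the original experiment; calling it with `typeof(CommonLibrary.Library)` should report Version=2.0.0.0 once the policy assembly is installed.

```csharp
using System;

public static class GacCheck
{
    //Returns the identity (name and version) of the assembly the runtime
    //actually loaded for the given type, regardless of the version the
    //client was compiled against.
    public static string LoadedIdentity(Type type)
    {
        var name = type.Assembly.GetName();
        return name.Name + ", Version=" + name.Version;
    }
}
```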

Isolated calls of dynamically loaded assembly

Definition of the issue:

In one of the projects I am currently working on, I need to be able to call a function from an assembly that is provided at run time. One of my major requirements is a clear isolation of the call with a minimum of configuration. The second requirement is to be able to provide a regular configuration file with my callee assembly, so that an average developer can implement a WCF call in that assembly using regular config files. In other words, they should be able to write a simple .NET assembly referencing other assemblies and making use of a config file, and all of that should just work.

There is no particular performance requirement; it is left to the developer of the callee assembly to manage that. It will be up to him to dispatch and manage threads if needed.

My first solution:

The easiest solution I found was to create a new application domain, load the callee assembly in that new domain, and execute the call there. This gave me the isolation level I needed.

My calling class looks like this:

//Setting up the new app domain configuration
AppDomainSetup appDomainSetup = new AppDomainSetup();
appDomainSetup.ConfigurationFile = "Custom.Config"; //The config file of my callee assembly
appDomainSetup.ApplicationName = "ProxyName"; //Just to be cleaner
appDomainSetup.ApplicationBase = @"D:\dev\Dummy\ConsoleApplication2\ProxyComponent\bin\Debug\"; //Where my callee assembly lives
//Creating a new app domain
AppDomain domain = AppDomain.CreateDomain("IsolatedDomain", null, appDomainSetup);
//My parameters
string dllFilePath = @"D:\dev\Dummy\ConsoleApplication2\ProxyComponent\bin\Debug\ProxyComponent.dll";
string proxyFullName = "ProxyComponent.Proxy";
//Loading my assembly into the new app domain
IScheduler myProxy = (IScheduler)domain.CreateInstanceFromAndUnwrap(dllFilePath, proxyFullName);
//Executing the call in the new app domain
Console.WriteLine(myProxy.GetSettingValue("Key01"));

My ProxyComponent.Proxy class looks like this:

public class Proxy : MarshalByRefObject, IScheduler
{
    public string GetSettingValue(string key)
    {
        //Create a class from a referenced assembly to test that the callee can use its own references
        var formatter = new Library();
        return formatter.Format(ConfigurationManager.AppSettings[key]);
    }
}

My custom config file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="Key01" value="Value01"/>
  </appSettings>
</configuration>

The only shared assembly between the callee and the caller is the assembly that contains the IScheduler interface:

public interface IScheduler
{
    string GetSettingValue(string key);
}

This could have been avoided using the DLR and dynamic. I’ll try to work on this in the near future.
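To complete the isolation, the throwaway domain can also be unloaded once the call has returned, which removes the callee assembly (and everything it loaded) from the process. Below is a sketch of how the calling code above could be wrapped; the helper name and shape are my own assumptions.

```csharp
using System;

public static class IsolatedCall
{
    //Creates a disposable AppDomain, hands it to the caller's delegate and
    //guarantees the domain is unloaded afterwards, even if the call throws.
    public static void RunIsolated(AppDomainSetup setup, Action<AppDomain> work)
    {
        AppDomain domain = AppDomain.CreateDomain("IsolatedDomain", null, setup);
        try
        {
            work(domain);
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}
```

Note that creating and unloading application domains is a .NET Framework feature; it is not available on later runtimes.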

What is wrong with Enum.ToString()?

This morning, while we were working on the creation of a key/value pair table containing different kinds of entities, one of my senior team members ran into an “exotic” behavior. The key of our table is a string built from the composition of the entity id and the entity type. The entity type is simply an enum value converted to a string. It looked like this:

_ToString = string.Format("[{0}/{1}]", id, contactType.ToString());

contactType here is the enum variable. Nothing really fancy or complicated, until we ran the performance monitor on it.

We saw that this line, given its simplicity, was taking too much time to process.

We have to perform this operation for millions of records, and every millisecond counts.

At first sight I thought it was the string.Format() that was taking all the processing time, so we tried a quick optimization:

_ToString = "[" + id + "/" + contactType.ToString() + "]";

Same result. I had to face it: the ToString() of the enum variable was taking most of the processing time.

The number of entries in the enum being small, we tried the following code:

switch (contactType)
{
    case ContactTypeEnum.Undefined:
        _ToString = "[" + id + "/Undefined]";
        break;
    case ContactTypeEnum.Organisation:
        _ToString = "[" + id + "/Organisation]";
        break;
    case ContactTypeEnum.NaturalPerson:
        _ToString = "[" + id + "/NaturalPerson]";
        break;
    case ContactTypeEnum.OrganisationContact:
        _ToString = "[" + id + "/OrganisationContact]";
        break;
    default:
        throw new ArgumentOutOfRangeException("contactType");
}

The difference was impressive: this long piece of code is almost 8 times faster.
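The comparison can be reproduced with a small Stopwatch harness. The sketch below is my own reconstruction: the enum values come from the post, but the harness itself (names, iteration count) is an assumption.

```csharp
using System;
using System.Diagnostics;

public enum ContactTypeEnum { Undefined, Organisation, NaturalPerson, OrganisationContact }

public static class EnumKeyFormat
{
    //The original implementation: relies on Enum.ToString()
    public static string ViaToString(int id, ContactTypeEnum contactType)
    {
        return "[" + id + "/" + contactType.ToString() + "]";
    }

    //The optimized implementation: one string literal per enum value
    public static string ViaSwitch(int id, ContactTypeEnum contactType)
    {
        switch (contactType)
        {
            case ContactTypeEnum.Undefined: return "[" + id + "/Undefined]";
            case ContactTypeEnum.Organisation: return "[" + id + "/Organisation]";
            case ContactTypeEnum.NaturalPerson: return "[" + id + "/NaturalPerson]";
            case ContactTypeEnum.OrganisationContact: return "[" + id + "/OrganisationContact]";
            default: throw new ArgumentOutOfRangeException("contactType");
        }
    }

    //Times 'iterations' calls of a formatter and returns the elapsed milliseconds
    public static long Measure(Func<int, ContactTypeEnum, string> format, int iterations)
    {
        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
            format(i, ContactTypeEnum.Organisation);
        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}
```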

The following image gives the performance numbers for each code block.

Enterprise, System, Software, Application, Technical Architect, what is my role?

Enterprise, System, Software, Application Architect, just pick one.

I’ve been on assignment, building a strong commercial relationship with my current customer, for several years now. Interesting projects, a nice environment, smart and fun colleagues; everything is available for me to stay happy and work there ever after… but… well, I am starting to feel the need to meet new people and face new challenges. Of course I do not want to jeopardize my relationship with my current customer, but I need to evolve. I need to get out of my comfort zone and start something fresh.

This is why I have decided to update my resume and to investigate the different opportunities available on the IT market. First things first: what am I? What is my role? On my current assignment I’m called a “Technical Architect”. Yeah!… but what does it mean? This title is not used anywhere else. I checked IEEE and Wikipedia, googled about it… there is no Technical Architect role definition, or at least I could not find one.

As a “Technical Architect”, what are my duties?

Team coordination: “Coordinate” is not really the term; “lead” and “coach” seem more appropriate. I guide the team on the activities we should conduct in order to deliver, and I act as scrum master. I advise the project manager on what is achievable and what is not, and flag any potential risk I see on the project.

Functional analysis: I do not conduct the functional analysis, but I review it. I did in the past, but in Belgium you need to speak both national languages to work in this area and, unfortunately, I don’t. I review the functional analysis for consistency and feasibility, and I challenge its content as well. I will probably not let anyone build a spaceship to cross the street.

Technical design and architecture: I have an analytical mind. I like to conceptualize and model things. I enjoy bringing structure and explaining difficult concepts. If I manage to make myself understood, then I can say I have really understood what I was talking about. This is probably the activity I am best at.

Development: I love to code, and unfortunately this is something I do not do enough. I’m told I am not too bad at it, and in order to stay up to date I try to do my TDD kata at least three or four times a week. I have a Pluralsight subscription and I complete at least one training every two weeks. This is a must-have for any developer. I try to stay sharp, or C# to be precise, and Pluralsight is a good way to do so.

Coaching: This is also something I like to do. I like to share knowledge. On my current assignment I try to do this on a daily basis. I launch topics I have read about, I advise on trainings people should follow, I share my tips and tricks, and I try to understand what the others are doing. If I manage to bring the team I am working with to a higher level of expertise, then that is already a success for the project.

Day-to-day management and client facing: I also organize the deliveries and the operational handover. Coordinating the testing and the delivery is also part of my duties.

I perform other activities, but those are probably the most frequent ones.

Now let’s see what title matches those duties. I found the following definitions on Wikipedia:

Enterprise architect: handles the interaction between the business and IT sides of an organization and is principally involved with determining the AS-IS and TO-BE states from a business and IT process perspective.

Solution Architect: The Solutions Architect is responsible for the development of the overall vision that underlies the projected solution and transforms that vision through execution into the solution. Solution Architects in large organizations often act as the bridge between Enterprise Architects and Application Architects.

Application Architect: An application architect works with a single software application. This may be a full- or a part-time role. The application architect is almost always an active software developer.

The following table indicates many of their differences:

Architect Type        | Strategic Thinking                | System Interactions            | Communication       | Design
Enterprise Architect  | Across Projects                   | Highly Abstracted              | Across Organization | Minimal, High Level
Solution Architect    | Focused on solution               | Very Detailed                  | Multiple Teams      | Detailed
Application Architect | Component re-use, maintainability | Centered on single Application | Single Project      | Very Detailed

According to those definitions and the table, I must sit somewhere between the Solution Architect and the Application Architect, but closer to the latter.

How not asking the right question can lead to stupid answers.

I’m currently working on a project where the long-ago choice to use SOAP as the standard for all our services is making people stop thinking. I’m convinced that reconsidering a technical choice when a valid and valuable context arises is an attitude that should be natural for all architects. But in reality, it is not… Or maybe they are not architects.

I’m the kind of guy who prefers to understand the “why” instead of learning the “what to do”. I’m an architect, and by definition I think ahead and design a minimum up front. If I find something illogical… well, I will challenge it.

Let me be more explicit. I need to build some services for a mobile middleware. Those services are data services: they only provide CRUD operations on some entities that will be made available to a mobile client. This mobile middleware supports four kinds of communication technologies (SOAP, SAP RFC, REST, SQL). It supports them all to the same extent, meaning it will not support the full range of features (transactions, security, reliability…) that each of those technologies offers. It supports the bare minimum.

Because we want to be SOA… (Yeah!) we have to use SOAP… (Yeah… what a shortcut…). Fine: we are building data services, and we need to use the most complex protocol for them… This is nonsense. Those services are meant only for this middleware. Their contracts are tailored for the middleware. There is no possibility for them to be used by anything other than the middleware, and yet we are choosing the most complex and universal protocol…

Why not consider something simpler, like REST and a simple, optimized XML message? “Because this is not a company standard!”… What can I reply to that? I believe that such a choice can make sense (yes, I say can make sense) when the service you are building is meant to be a company-wide service used by different platforms. But then it needs to be advertised as such. This is not our case… We are mainly providing services that list all instances of some entities; those services will return hundreds of thousands of records. Let’s reconsider that company-wide standard for this case and use it properly. Let’s build something our customer will take advantage of for once. Let’s be thinkers and not followers this time.