BizTalk Convention Over Configuration processing by REST

During 2012 I was looking for something truly dynamic to drive the processes inside BizTalk Server. There are different options: one is using BRE from pipeline components and orchestrations and driving the processing through SQL Server tables and the message context, as the Microsoft ESB Toolkit does; another is creating pipeline components and orchestrations and using a storage database such as SQL Server, with a number of tables instructing our code how to route the messages; and there are more options still.

What I was missing in BizTalk was the key feature of a real ESB: something able to adapt its processes using convention over configuration. When Microsoft provided the ability to expose REST endpoints from BizTalk, then BOOOM, here we go, and I started the project.

I wrote a post about it but never published the code; now the code is available here.

The idea is simple: RESB provides the possibility to drive BizTalk processes through HTTP REST using convention over configuration. Below are some of the slides I used during an event in Milan; all of them are inside the package.


It tries to cover the most important and most commonly used communication patterns.


In the REST URI we specify which entities we want to invoke and in which order.


This way we drive our process through HTTP REST.
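As an illustrative sketch of this convention (the class and segment names below are hypothetical, not taken from the RESB code), the idea boils down to reading the ordered path segments of the incoming URI as the list of entities to invoke:

```csharp
using System;
using System.Linq;

// Hypothetical sketch: the URI path segments name the processing entities
// (pipelines, maps, endpoints) and the order in which to invoke them.
public static class RestConventionParser
{
    // e.g. "/RESB/DecodePipeline/MapOrderToInvoice/SqlSend"
    // yields ["DecodePipeline", "MapOrderToInvoice", "SqlSend"]
    public static string[] GetProcessingSteps(string uriPath)
    {
        if (string.IsNullOrEmpty(uriPath))
            return new string[0];

        return uriPath
            .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries)
            .Skip(1)            // skip the service root segment ("RESB")
            .ToArray();
    }
}
```

The processing engine can then walk that array in order, which is what makes the behaviour convention-driven rather than configuration-driven.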


The architecture uses BRE together with different pipeline components and orchestrations.


The code inside uses different techniques to provide maximum flexibility and extensibility; below are some samples used inside the project.


Many have asked me to publish this code; the solution is a DRAFT of this idea and it could be a good starter kit to create something similar.

You can find more information in the previous blog post and you can download the code here.

C# Tip: Clone a .NET object

Cloning an object is a quite common operation in integration scenarios: integrating often means moving value objects, transforming data between references and so on, and many times it is very useful to be able to duplicate our objects and “data containers”.

Well, one common approach is using the MemberwiseClone method: it creates a shallow copy of the object and we can use the clone as a “copy”, but it doesn’t work in every scenario. Below are the exact limits:

The MemberwiseClone method creates a shallow copy by creating a new object, and then copying the nonstatic fields of the current object to the new object. If a field is a value type, a bit-by-bit copy of the field is performed. If a field is a reference type, the reference is copied but the referred object is not; therefore, the original object and its clone refer to the same object.

Essentially, all the object references still point to the same instances, and it becomes complicated to keep the value objects separated between the different clones.
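A minimal example makes this limit visible: after MemberwiseClone, the clone and the original still share every reference-type field (the Customer and Address classes below are just for illustration):

```csharp
using System;

public class Address
{
    public string City;
}

public class Customer
{
    public string Name;      // string is immutable, so copying the reference is harmless
    public Address Address;  // the reference is copied: clone and original share one Address

    // MemberwiseClone is protected, so we expose it through a public method.
    public Customer ShallowCopy()
    {
        return (Customer)MemberwiseClone();
    }
}
```

Changing `clone.Address.City` also changes `original.Address.City`, because both objects hold the same Address instance.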

Another option is to implement the ICloneable interface and use MemberwiseClone. This is a very good option to control the cloning in depth and to be sure we create a truly new clone; obviously we need to write all the code to replicate our object in our Clone() method, using MemberwiseClone where we need it.
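A minimal sketch of that approach (the Invoice and ShippingInfo classes are purely illustrative): start from the shallow copy, then manually replace each reference-type field with a new instance.

```csharp
using System;

public class ShippingInfo
{
    public string City;
}

public class Invoice : ICloneable
{
    public string Number;
    public ShippingInfo ShipTo;

    // Start from the shallow copy, then replace every reference-type
    // field with a fresh instance to get a real deep copy.
    public object Clone()
    {
        var copy = (Invoice)MemberwiseClone();
        copy.ShipTo = new ShippingInfo { City = ShipTo.City };
        return copy;
    }
}
```

The cost is that every new reference field added to the class must also be handled inside Clone(), which is exactly the boilerplate the serialization trick below avoids.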

An interesting tip I like to use is serializing the object. We can use any serialization pattern; the important thing is just to serialize the object. For example, below are a method to serialize an object to a byte array and one to deserialize a byte array back to an object.

public static byte[] ObjectToByteArray(object objectData)
{
    if (objectData == null)
        return null;

    var binaryFormatter = new BinaryFormatter();
    using (var memoryStream = new MemoryStream())
    {
        binaryFormatter.Serialize(memoryStream, objectData);
        return memoryStream.ToArray();
    }
}

public static object ByteArrayToObject(byte[] arrayBytes)
{
    if (arrayBytes == null)
        return null;

    using (var memoryStream = new MemoryStream())
    {
        var binaryFormatter = new BinaryFormatter();
        memoryStream.Write(arrayBytes, 0, arrayBytes.Length);
        memoryStream.Seek(0, SeekOrigin.Begin);
        return binaryFormatter.Deserialize(memoryStream);
    }
}

We then simply serialize and deserialize the object to clone it:

byte[] byteobj = SerializationEngine.ObjectToByteArray(objecttoclone);
var newobjectclone = SerializationEngine.ByteArrayToObject(byteobj);

It works perfectly and is very simple to implement; logically, the object needs to be serializable.
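Since any serialization pattern works, the same round trip can be wrapped in a generic helper. As a sketch, the version below uses XmlSerializer instead of BinaryFormatter (a deliberate swap: it requires a public type with a public parameterless constructor and public members, rather than the [Serializable] attribute); the Basket class is just a sample type for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Basket
{
    public List<string> Items = new List<string>();
}

public static class CloneHelper
{
    // Deep clone by a serialize/deserialize round trip through XML.
    public static T DeepClone<T>(T source)
    {
        if (source == null)
            return default(T);

        var serializer = new XmlSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, source);
            stream.Seek(0, SeekOrigin.Begin);
            return (T)serializer.Deserialize(stream);
        }
    }
}
```

The clone is completely independent: mutating the clone’s collections does not touch the original.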

BizTalk NOS Ultimate – BizTalk Assessment Power Up!

In this last period I have been doing many different BizTalk assessments and health checks. These kinds of missions are really complicated, especially if you need to check the source code, and BizTalk NOS supports me a lot. The best thing I can do is show you how I use BizTalk NOS in different situations; in this post I will show you one real case from an orchestration assessment.

I found a solution with many projects inside; obviously the project and solution names are obfuscated.


The code is well done, but the number of projects inside the single solution is huge, which creates different problems.

The first big problem is navigation: I can’t manually navigate the solution just by browsing each single project, so the wild search feature helps me a lot to find what I need.
I want to understand where the solution implements Oracle integration and which error messaging patterns it uses, so first of all I execute a wild search using the word “oracle.”, ignoring the case.


I got one orchestration. First of all I like to execute the Refactor feature, which provides a lot of information, starting with the size and complexity. I can execute the same feature globally from the solution file and receive the same report but with a global result.


BizTalk NOS produces a very detailed report with much useful information; at the top I can check the most important items:


Average complexity: an algorithm which calculates the complexity using different parameters such as branches, code used, loops and so on. More than 1000 means that the orchestration is complex to understand and contains too many artefacts; I will discuss that with the customer.

File Size: the current assembly size in KB for this orchestration. Quite high… but the AGC is what I am looking for.

AGC, Average Global Cost: this algorithm calculates the global cost of the orchestration, also considering the external dependencies used, such as maps, schemas, pipelines, other orchestrations and so on. This is what I’m really keen on: the result here is 11643 KB, which means that this orchestration will potentially consume 11+ megabytes of memory. That could be very expensive if this orchestration is called in multiple instances; I will check the issue with the customer.

Number of warnings: this algorithm sums all the warnings found in the orchestration. A warning is something that doesn’t follow a simple base best practice; by that I mean a persistence point in a loop using a send port, extensive use of code inside a shape, a default name such as “schema1” or “map2” used inside it, and more. Over 50 means it is better to have a look inside the orchestration; for sure you will find different points of discussion.

For example, in this orchestration I found many dangerous persistence points created by send ports inside loops; I can see them in the report by looking for the database bomb icon. I will check this other issue with the customer.


To understand more I also use the dynamic navigation panel, when you open the orchestration you can see a navigation panel on the right.


Using this panel I’m able to navigate very quickly inside the orchestration and see all the messages and maps used and all the dependencies. But I would love to show you what I’m able to find and understand during a BizTalk solution assessment using BizTalk NOS.
I’m keen on the message ExceptionNotificationMessage: I think this message is used inside the orchestration to manage the messaging error handling, and I want to understand whether the error messaging pattern used is global across the solution or just inside this orchestration.

To find out, I open the schema with a double click from the dynamic navigation panel. Now I need to locate the schema inside the solution; to do that, I right click on the top of the schema toolbox panel and simply select the Locate It menu item.


Now I’m able to use one of my preferred features, Troubleshooting – External Dependencies: right click on the schema file and select Troubleshooting and External Dependencies.


The result is that most of the orchestrations are using this schema, which means the error messaging handling should be centralized. To be sure, I execute a wild search using the schema name, and the result confirms it. I can also see where the schema is used and constructed; now I can check the library code regarding the error handling.


As you can see, BizTalk NOS Ultimate is a very powerful tool. I use it in many different situations: assessment, troubleshooting, development, refactoring and more. There are a lot of features inside and I will write more about it. I love it, it ROCKS!

You can download a free trial version here.
More info @

Logic Apps What’s New

I’m following the Logic Apps live broadcast video; below is a quick recap of the most important points.

1) The possibility to write JavaScript and C# script code inside a Logic App, essentially the same concept as the BizTalk scripting functoid.


2) The possibility to download these API Apps from a new GitHub repository.

This is nice if you want to understand how to create or extend these API Apps; the purpose of this GitHub repository is to provide an open source home where the community can contribute.

3) What’s new

new features

4) What’s in progress


For more details…

the full video here

the Microsoft team blog here

Fast and easy deployment with WiX 3.10 for Visual Studio 2015

WiX is a great toolset which provides all we need to create a great deployment. I’m preparing the setup for jitGate, now in private beta test, and I’m using WiX 3.10: this version supports Visual Studio 2015, the installation is very simple, and the 3.10 build is available here.

Essentially, WiX is a toolset built completely on top of Windows Installer and based entirely on XML scripting, hence the name: Windows Installer XML Toolset, the WiX Toolset.

WiX is free and open source, and the framework covers the large number of features and options offered by Microsoft Windows Installer. It also provides a large number of tools to make it easy to create our deployment database, and WiX already offers many ready-made setup dialog forms which we can also customize.

WiX is able to offer great WPF setup interfaces; the WiX setup itself uses a WPF interface.


WiX is very easy to extend because it is completely based on XML scripting, and it is essentially formed by four big areas.


1) Product, containing all the information about the product and the most important settings about the setup behaviour, such as upgrading, compression options, deployment restrictions and more.

2) Features, to manage the different deployment feature options, for example minimal / typical / complete installation.

3) Directory, to manage the source and destination deployment directories; it is very intuitive to use.

4) Component group, to manage the deployed files; essentially 1 component = 1 file.
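As a minimal sketch of how these four areas fit together in a .wxs file (product name, GUID placeholders and file names below are illustrative, not from the original setup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <!-- 1) Product: identity and setup behaviour -->
  <Product Id="*" Name="MyProduct" Language="1033" Version="1.0.0.0"
           Manufacturer="MyCompany" UpgradeCode="PUT-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" />
    <MediaTemplate EmbedCab="yes" />

    <!-- 2) Features: what the user can choose to install -->
    <Feature Id="Complete" Title="MyProduct" Level="1">
      <ComponentGroupRef Id="ProductComponents" />
    </Feature>

    <!-- 3) Directory: source and destination layout -->
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLFOLDER" Name="MyProduct" />
      </Directory>
    </Directory>

    <!-- 4) Component group: the deployed files, one component per file -->
    <ComponentGroup Id="ProductComponents" Directory="INSTALLFOLDER">
      <Component Id="MainExecutable" Guid="PUT-GUID-HERE">
        <File Id="MyProductExe" Source="MyProduct.exe" KeyPath="yes" />
      </Component>
    </ComponentGroup>
  </Product>
</Wix>
```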

Another great thing is the WiX Bootstrapper: we can use it to install our prerequisites before our installation; for example, a few simple lines will install the .NET Framework 4.5.
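As a hedged reconstruction (the original slide is not reproduced here), a Burn bundle can chain the .NET Framework 4.5 prerequisite roughly like this, assuming a reference to WixNetFxExtension has been added to the bootstrapper project; check the package group ID against the WiX documentation:

```xml
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Bundle Name="MyProduct Bootstrapper" Version="1.0.0.0"
          Manufacturer="MyCompany" UpgradeCode="PUT-GUID-HERE">
    <Chain>
      <!-- Provided by WixNetFxExtension: installs .NET 4.5 if missing -->
      <PackageGroupRef Id="NetFx45Web" />
      <!-- Then install our own MSI -->
      <MsiPackage SourceFile="MyProduct.msi" />
    </Chain>
  </Bundle>
</Wix>
```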


WiX covers all the deployment option types, and we can also extend the Windows Installer behaviour using a Custom Action project, which is really powerful.


There are many resources and courses available on the internet.

I definitely recommend WiX Toolset to create our deployment projects.



Event Hubs API App – fast and easy to do

Last evening I was working on a Logic App and I needed to send some messages to Event Hubs.
We have two different options: we can use the HTTP Connector API App, or we can decide to create an API App able to do that. This is a good opportunity to understand the development productivity of API Apps.
I would like to use a very simple approach, and we can extend this sample as we want, using dynamic configurations, extended features and so on; I just want to demonstrate how simple it is to do in a few steps.

1) Install the Windows Azure SDK for .NET – 2.5.
2) Create a new Visual Studio project: select Cloud and ASP.NET Web Application.


3) Select Azure API App.


All the libraries will be automatically added.


4) Select Manage NuGet Packages.
5) Search for EventHub and select the EventProcessorHost package.
6) [OPTIONAL] Rename the ValuesController.cs class to EventHubController.cs.

Below is the simple code we have to use to send an event message to Event Hubs; copy and paste this code into the class file.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using System;
using System.Text;
using System.Web.Http;

namespace EventHubConnector.Controllers
{
    public class EventHubController : ApiController
    {
        // GET api/EventHub?message=...
        public string Get(string message)
        {
            // Create the connection string
            string ehConnectionString = "Endpoint=sb://[EVENTHUB CONNECTION STRING]";
            ServiceBusConnectionStringBuilder builder =
                new ServiceBusConnectionStringBuilder(ehConnectionString)
                {
                    TransportType = Microsoft.ServiceBus.Messaging.TransportType.Amqp
                };

            // Create the Event Hubs sender
            string eventHubName = "[EVENTHUBNAME]";
            EventHubClient eventHubClient =
                EventHubClient.CreateFromConnectionString(builder.ToString(), eventHubName);

            // Send the event message
            EventData data = new EventData(Encoding.UTF8.GetBytes(message));
            eventHubClient.Send(data);

            return "Message sent to " + eventHubName;
        }
    }
}

Now we enable the Swagger features. I would like to spend some time here, because some people have asked me for more information about the Swagger side.
We have two different options to manage the Swagger contract. The first is dynamic: open SwaggerConfig.cs and uncomment the usual .EnableSwaggerUi lines.
The second is static; this is useful if we want to drive our Swagger generation, and it is also quite simple to do.
Uncomment the usual EnableSwaggerUi lines, press F5 and execute the project in debug mode.

Navigate to http://localhost:[YOURPORT]/swagger/docs/v1 to get the raw JSON version of the API.

Open the file in Visual Studio, create a file named apiDefinition.swagger.json in the Metadata folder and save the content inside the file.

To enable the static feature we only have to open the apiapp.json file and delete the value of the endpoints node, as below.


Very easy and fast.
We are ready to publish: right click on the project file and select Publish.
Select Microsoft Azure API Apps.


Insert the name of your API App and set all the usual subscription properties, such as service plan and resource group; we can also create them here.


Our API app is deployed and ready to use


This API App is able to send a message to Azure Event Hubs, very simple and fast to do; we can now extend this API App to also receive messages from Event Hubs and create some other interesting features.

Logic Apps and API Apps under the hood

Solidsoft Reply is experiencing a growing number of customers who are interested in understanding Logic Apps and API Apps, and as a result of this increasing demand I decided to spend some dedicated time on this topic.

There are many articles and blog posts about this topic; in this article I would like to present my first impressions and feedback.
I would like to give a quick introduction, but before reading this article I recommend watching this video by Josh Twist.

The difference between API Apps and Logic Apps is this: API Apps are single atomic applications that we are able to develop and deploy in the Microsoft cloud; they provide us with the possibility to write and deploy .NET code.


Logic Apps give us the possibility to orchestrate our API Apps in a logical flow, creating and centralizing our logical processes.


The first time I saw Logic Apps I tried to compare them with BizTalk, and I think that is quite normal for a person like me, but it is a mistake: they are two different things, with different architectures, different patterns and different approaches, yet both of them are able to cover the same scope.

Comparing Logic and API Apps with BizTalk is quite complicated because they use two different approaches and patterns: BizTalk provides different component layers and levels, while API Apps are unique containers of micro application blocks, and we use Logic Apps to organize them.
We can’t create a new BizTalk orchestration shape, but we can achieve the equivalent by creating a new API App; in Logic Apps we don’t have the same concept as a BizTalk pipeline file, but we are free to organize our pipeline, formed by API App components, inside the Logic App as we prefer.
API Apps are application containers which execute actions; we can have two different types: simple application containers formed by our .NET code, or Triggers.
To start a Logic App we need to use a trigger; a trigger can fire in different ways: because it is called, because a polling rule is verified, or manually.

Essentially, an API App is a different representation of a Web API. To create an API App we need to use Visual Studio 2013; Microsoft provides the Microsoft Azure SDK, which includes a new project template, the API App project template.
To create a new API App we select New Project -> Cloud -> ASP.NET Web Application -> OK.

Visual Studio proposes a new template, the Azure API App (Preview).
Logically the Web API checkbox is selected.

After the NuGet package installation we have all we need to develop the API App, but I don’t want to go into detail in this first article; I want to discuss the concepts.
The directory organization is quite similar to a Web API.


This is a good idea because developers don’t need to upgrade their knowledge. We can see a Web API on the left and an API App on the right: they are using the same ApiController abstract class from the same assembly, System.Web.Http.dll, so what is the real difference?

An API App uses a metadata description completely based on JSON; in addition, the Swashbuckle NuGet package provides automatic Swagger metadata generation.
For more information about Swagger you can go to the official site.

The provisioning is also different: the API App template provides a deployment completely focused on it.
Right click on the project file, select Publish and click on Microsoft Azure API Apps.

After selecting the subscription we are able to define how to deploy the API App. What I like here is the idea of keeping the project name separated from the API App name: we can define a different name, and even multiple API App names.

Now select the canonical information and deploy the API App.
There are aspects that I really like about the internal architecture.
One is the Swagger integration inside the API App: we can activate this feature simply by uncommenting these lines of code inside the SwaggerConfig.cs file.

We can test our Swagger documentation by running our project: press F5.
Adding /swagger to the URL and selecting List Operations, we are able to see the documentation.

The difference between a simple, general API App operation and a Trigger operation is just a naming convention.
This week I’m developing an API App trigger able to integrate my RFID reader; to define a trigger I just need to append the word “Trigger” to the end of our method names.

My first question was: why not use a System.Attribute approach and enrich the class? But there is a good reason behind this.
It is a simple and common convention for all the programming languages API Apps support, and API Apps now support .NET, Java, Node.js, Python and PHP.
The Azure platform recognizes triggers from the Swagger API definition rather than from the API App code itself; this is a really cross-platform approach.

To define the trigger mechanism we only need to define the push or poll behaviour in the Route System.Attribute.

After the deployment we are able to see and use it inside the Logic App: open the browser, navigate to the portal and select Browse and API Apps.

Regarding Logic Apps and API Apps distribution, we have to consider two different sides, admin and developer: one is on premise, the other is in the cloud.

Logic Apps use triggers to integrate on-premise and, of course, cloud technologies such as SQL Server, the file system and more.
Logically, in the on-premise case the trigger needs to interact with the on-premise environment; this mechanism is provided by a service host application running in our on-premise environment and using a relay binding in the cloud.

In the next articles I would like to explain the settings for the on-premise and cloud-side environments in more detail.

We have two different sides of settings: one is in the cloud and the other is on the developer side in Visual Studio. Not many resources mention this, but I think it is important.
Open the Azure node in the Visual Studio Server Explorer and log in to the subscription; now we are able to see a lot of information about our API App, such as logging and tracing.

One settings side is on premise, inside Visual Studio,


the other one is in the cloud side.

We can also debug our code remotely.

Another interesting area to consider is the on-premise integration layer: when we use a trigger such as the SQL Connector to communicate between the cloud and our on-premise environment, the Azure platform creates a relay binding endpoint in the Service Bus namespace.

This is the reason why we need to specify a namespace string during the API App trigger creation inside the Logic App.

On the on-premise side, a host process creates the service layer interface to communicate with the trigger in the cloud.

There are a lot of things to discuss about the Logic Apps and API Apps architecture, and a lot of things running “under the hood”; in this article I tried to collect the most important ones.
My feedback is this: I like the approach and the idea, and I think we were missing such a powerful platform inside Azure.

I hope that with this article other passionate developers like me will be able to understand more about Logic and API Apps; I will write in more detail about particular aspects in the next articles and videos.