First experiences with using WinRM/WinRS for remote deployment

What is WinRM/WinRS?

Windows Remote Management (WinRM) is Microsoft's implementation of the WS-Management protocol, a remote management service first released with Windows Server 2003 R2.

WinRM is the server component, while Windows Remote Shell (WinRS) is a client that can be used to execute programs remotely on computers running WinRM.

The following example shows how to remotely list the contents of the root of the C: drive on a computer with host name Server01:

WinRS -r:Server01 dir c:\

Using WinRM for remote deployment

My first encounter with WinRM/WinRS was using it to execute PowerShell scripts for automated remote deployment of a test environment. The commands were executed from an MSBuild script in a CruiseControl.Net build.

The scripts would first uninstall any old versions of the components, and then renew databases and install new component versions. Finally a set of NUnit tests would be executed on the environment.
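Calling WinRS from MSBuild is straightforward; a minimal sketch of such a step (the server name and script path below are placeholders):

<Target Name="DeployTestEnvironment">
  <!-- Execute the deployment script remotely on the test server via WinRS -->
  <Exec Command="WinRS -r:TestServer01 powershell.exe -NoProfile -File C:\Deploy\InstallComponents.ps1" />
</Target>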

WinRS failing to execute remote commands due to limited quotas

It was very easy to get started with WinRS, and in the beginning everything seemed to work fine. But now and then the execution failed with a System.OutOfMemoryException or with the message “Process is terminated due to StackOverflowException”.

The reason for these problems was not obvious, since the error messages made no mention of quotas, but after some investigation it turned out that they were caused by a memory quota on the server that was too low. The default memory quota is 150 MB; it can be changed by executing the following command on the remote server (this sets the quota to 1 GB):

WinRM set winrm/config/Winrs @{MaxMemoryPerShellMB="1000"}
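The current quota settings can be verified afterwards by querying the configuration:

WinRM get winrm/config/Winrs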

Multi-Hop configuration

In one of my scripts I tried to use a UNC path to access a remote share from the target computer, but got “Access is denied”. It turned out that the Credential Security Support Provider (CredSSP) had to be configured on both the client and the server in order to achieve this: http://msdn.microsoft.com/en-us/library/windows/desktop/ee309365(v=VS.85).aspx
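In short, CredSSP authentication must be enabled on both sides, roughly as follows (the linked article also describes the group policy settings required to allow delegation of fresh credentials):

On the client:

WinRM set winrm/config/client/auth @{CredSSP="true"}

On the server:

WinRM set winrm/config/service/auth @{CredSSP="true"}

WinRS must then be given explicit credentials and be allowed to delegate them (the -ad switch), for example:

WinRS -r:Server01 -ad -u:MyDomain\SomeUser -p:SomePassword dir \\FileServer01\SomeShare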

Resources

Configuring WinRM

Quota Management for Remote Shells


Using Gendarme with CruiseControl.Net for code analysis

Gendarme is a static code analysis tool developed as part of the Mono project. It comes with a wide range of predefined rules and can easily be extended with your own custom rules, which you can write in C# or other .Net languages.
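To give an idea of what a custom rule looks like, here is a rough sketch of a method rule (the rule itself is made up, and the exact framework API may vary between Gendarme versions):

using Gendarme.Framework;
using Mono.Cecil;

[Problem ("The method name is suspiciously short.")]
[Solution ("Rename the method to something descriptive.")]
public class AvoidShortMethodNamesRule : Rule, IMethodRule {

    public RuleResult CheckMethod (MethodDefinition method)
    {
        // Property getters/setters, operators etc. are not interesting here
        if (method.IsSpecialName)
            return RuleResult.DoesNotApply;

        // Report a defect for very short method names
        if (method.Name.Length < 3)
            Runner.Report (method, Severity.Low, Confidence.High);

        return Runner.CurrentRuleResult;
    }
}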

Configuring the CruiseControl.Net build task

CruiseControl.Net has shipped with the Gendarme task since version 1.4.3. However, the Gendarme executable must be downloaded and installed separately. The binary can be downloaded from this link: https://github.com/spouliot/gendarme/downloads

Gendarme is designed to process build output assemblies from ONE directory, i.e. it does not support recursive searching for assemblies. This fits well if you have one CruiseControl.Net build project per service/application, but in my case I wanted to generate a report for an entire product branch with multiple services and applications.

This can be achieved by using the assemblyListFile configuration element, which lets you specify a file containing the full path to each assembly that should be analysed.
In order to generate the file, I execute the following PowerShell command:

Get-ChildItem -Path 'D:\SomeDir\Work' -Recurse `
	-Include MyCompany*.dll `
	-Exclude *.Test*.dll,*Generated.dll |
	sort -Property Name -Unique |
	sort -Property FullName |
	foreach {$_.FullName} |
	Out-File -FilePath 'D:\SomeDir\Artifact\AssembliesForCodeAnalysis.txt' -Width 255

The PowerShell command above recursively scans the directory “D:\SomeDir\Work” and includes all DLL files whose names start with “MyCompany”, excluding test and generated assemblies. Next it selects distinct file names regardless of path (in order to filter out shared assemblies which are duplicated), before sorting by full path name and writing the output to a file.

Using the PowerShell command as an executable step, the project configuration in ccnet.config turns into something like the following (paths and the project name are placeholders):
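<project name="Product.CodeAnalysis">
  <tasks>
    <!-- Generate the list of assemblies with the PowerShell command above -->
    <exec>
      <executable>powershell.exe</executable>
      <buildArgs>-NoProfile -File D:\SomeDir\Scripts\ListAssembliesForCodeAnalysis.ps1</buildArgs>
    </exec>
    <!-- Run Gendarme against the assemblies listed in the generated file -->
    <gendarme>
      <executable>D:\Tools\Gendarme\gendarme.exe</executable>
      <assemblyListFile>D:\SomeDir\Artifact\AssembliesForCodeAnalysis.txt</assemblyListFile>
    </gendarme>
  </tasks>
</project>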

Configuring the Dashboard

The stylesheets needed for showing the formatted reports in the CruiseControl.Net dashboard are included with the CruiseControl.Net installation, and just need to be referenced in dashboard.config, along these lines:
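<buildReportBuildPlugin>
  <xslFileNames>
    <!-- ... existing stylesheets ... -->
    <!-- the exact file name can be found in the xsl folder of your installation -->
    <xslFile>xsl\gendarme.xsl</xslFile>
  </xslFileNames>
</buildReportBuildPlugin>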

Resources

Gendarme home page: http://www.mono-project.com/Gendarme

Gendarme CCNet task configuration: http://confluence.public.thoughtworks.org/display/CCNET/Gendarme+Task

Intellisense for CruiseControl.Net configuration files

Editing the CruiseControl.Net configuration file ccnet.config may be a cumbersome process. The XML configuration elements are documented at http://ccnetlive.thoughtworks.com/ccnet/doc/CCNET/Configuring%20the%20Server.html, but it would be more convenient to have intellisense available when editing the configuration file.

Intellisense for CCNet configuration files can be added to Visual Studio by using the schema definition file ccnet.xsd. Unfortunately this file is not distributed with the CCNet installation package, but it is included in the source distribution. For the current version the file is located at “project\ccnet.xsd” in the downloadable source distribution zip file.

You can also get it from the source code repository at SourceForge (link is to version 1.5).

Adding the XSD schema to Visual Studio

Once you have gotten your hands on the ccnet.xsd file, it must be copied to the schema folder of your Visual Studio installation, e.g. to “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Xml\Schemas”.

Note: Copying the file to the folder “Microsoft Visual Studio 10.0\Common7\Packages\schemas\xml” will not have any effect!

Configuring the namespace

Which namespace should be used for the CCNet configuration files? A namespace must be specified in order for Visual Studio to know which schema to use for intellisense.

ccnet.xsd defines the target namespace “http://thoughtworks.org/ccnet/1/5”:
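<xs:schema targetNamespace="http://thoughtworks.org/ccnet/1/5"
           xmlns:xs="http://www.w3.org/2001/XMLSchema" … >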

… which means that the following namespace must be defined in the CCNet configuration files:
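<cruisecontrol xmlns="http://thoughtworks.org/ccnet/1/5">
  …
</cruisecontrol>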

The schema file seems to favor using XML elements instead of attributes for many configuration options, which contradicts many of the example configurations distributed with CCNet, but I don’t consider this a big issue.

EDA, messaging and NServiceBus at NNUG Oslo 25th of May

At the next Norwegian .Net User Group meeting I will give a talk on NServiceBus. Topics covered will include an overview of the architecture and capabilities of NServiceBus and the configuration options provided by the framework.

Ole-Marius Moe-Helgesen will give an introduction to Event Driven Architecture and messaging and he will also share experiences from a project at an insurance company which made use of NServiceBus.

Jan Ove Skogheim Olsen will share project experiences with NServiceBus from Call Norwegian.

More information available here: http://www.nnug.no/Avdelinger/Oslo/Moter/Brukergruppemote-25-mai-2010/

Persistence with Command and Query Responsibility Segregation

Command and Query Responsibility Segregation (CQRS) is a pattern where reading of data and commands for updating the domain model are separated into different services. Architectures for distributed systems built on the CQRS pattern offer high scalability and reliability and have gained in popularity during the last couple of years.

Greg Young visited this month’s javaBin meeting in Oslo for a talk on CQRS based architectures, and in this blog post I will share some of the new insights I gained into CQRS.

Event sourcing and eventual consistency are two essential concepts which fit well together with a CQRS based architecture, and previously I considered both to be mandatory in order to make the architecture scalable and reliable. However, the complexity these two concepts introduce to a system may scare many brave developers away from building real production systems that make the most out of an architecture built on the CQRS pattern.

Most developers feel more comfortable with well-known architectures built on a relational model stored in an RDBMS supporting ACID transactions. The mind shift required when changing to event sourcing and eventual consistency may seem too big and risky.

Figure 1: Typical architecture utilizing the CQRS pattern

In the javaBin talk Greg Young actually advised against using eventual consistency when starting to implement a new system; instead, the concept should be introduced gradually in parts of the system as it evolves and scalability issues appear. This simplifies the initial implementation and makes it easier to get started with CQRS.

The simplest alternative: No event sourcing and no eventual consistency

This is the simplest option for handling consistency and concurrency because the domain model and denormalized read model can be updated in a single transaction.

The domain model and the denormalized read model can be stored either in the same database server or in different servers. A distributed transaction is required if updates are made on different threads or to different database servers, which will have an impact on performance. An ORM is typically used for persistence of the domain model.

The read model can even be implemented as views on top of an existing schema modeled for OLTP.
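A minimal sketch of a command handler doing both updates in one transaction (the command, repository and read model types below are invented for illustration):

using System;
using System.Transactions;

public class Order
{
    public Guid Id { get; private set; }
    public Order(Guid id) { Id = id; }
}

public class PlaceOrderCommand
{
    public Guid OrderId { get; set; }
}

// These two abstractions are made up for the sketch
public interface IOrderRepository { void Save(Order order); }
public interface IOrderReadModelUpdater { void Update(Order order); }

public class PlaceOrderHandler
{
    private readonly IOrderRepository repository;
    private readonly IOrderReadModelUpdater readModel;

    public PlaceOrderHandler(IOrderRepository repository, IOrderReadModelUpdater readModel)
    {
        this.repository = repository;
        this.readModel = readModel;
    }

    public void Handle(PlaceOrderCommand command)
    {
        // One transaction covers both models; TransactionScope escalates to a
        // distributed transaction if the models live in different database servers.
        using (var scope = new TransactionScope())
        {
            var order = new Order(command.OrderId);
            repository.Save(order);    // domain model, e.g. via an ORM
            readModel.Update(order);   // denormalized read model
            scope.Complete();
        }
    }
}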

Figure 2: Domain model and read model updated in a single transaction

Solving scaling issues as they arise: Event sourcing and no/partial eventual consistency

Greg Young prefers using event sourcing rather than a relational schema when persisting the domain model. To quote his thoughts about ORMs: “Using an ORM is like kissing your sister!”

The events can for example be stored in an RDBMS, an object database or a document database, or be serialized to flat files. The event store must support transactions.
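As an illustration, appending events to a simple relational event store could look roughly like this (the table schema and class are invented for the sketch):

using System;
using System.Data.SqlClient;

public class SqlEventStore
{
    private readonly string connectionString;

    public SqlEventStore(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Append(Guid aggregateId, int version, string eventType, string serializedEvent)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                var command = connection.CreateCommand();
                command.Transaction = transaction;
                // A unique key on (AggregateId, Version) doubles as an
                // optimistic concurrency check on the aggregate
                command.CommandText =
                    "INSERT INTO Events (AggregateId, Version, Type, Data) " +
                    "VALUES (@aggregateId, @version, @type, @data)";
                command.Parameters.AddWithValue("@aggregateId", aggregateId);
                command.Parameters.AddWithValue("@version", version);
                command.Parameters.AddWithValue("@type", eventType);
                command.Parameters.AddWithValue("@data", serializedEvent);
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}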

As the system evolves and scalability issues surface, an event queue (and hence eventual consistency) for updates to the read model can be partly introduced.

Figure 3: Event store and read model updated in a single transaction; an event queue and eventual consistency are introduced in areas where scaling issues arise

The most complex and powerful alternative: Event sourcing and eventual consistency

In this alternative the domain events are stored in an event store, and a queue is used to update the read model.

Two different queuing approaches can be used for updating the read model. The most traditional architecture is to publish the event to a separate queue in the same transaction as the one that updates the event store. Tools like NServiceBus are typically used for publishing to the queue.

The second alternative was described in a recent blog post by Greg Young and uses the event store itself as a queue. This removes the need for distributed transactions, as the only write that happens when processing a command is to the event store. The read model is updated from the events in the event store and not from a separate queue. This has the advantage that there is only one version of the truth; it is not possible to publish events with content that differs from what is stored in the event store.
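Conceptually, the read model is then updated by polling the event store for events past a stored checkpoint; a minimal sketch with invented abstractions:

using System.Collections.Generic;

public interface IStoredEvent
{
    long Sequence { get; }
}

public interface IEventStoreReader
{
    IEnumerable<IStoredEvent> ReadFrom(long fromSequence);
}

public interface IReadModelHandler
{
    void Apply(IStoredEvent storedEvent);
}

public class ReadModelUpdateJob
{
    private long lastProcessedSequence; // checkpoint, persisted with the read model in real life

    public void Poll(IEventStoreReader eventStore, IReadModelHandler handler)
    {
        // Apply events appended after the checkpoint, in order
        foreach (var storedEvent in eventStore.ReadFrom(lastProcessedSequence + 1))
        {
            handler.Apply(storedEvent);
            lastProcessedSequence = storedEvent.Sequence;
        }
    }
}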

Figure 4: Event store and event queue

Figure 5: Using the event store as a queue

Conclusions

There is a wide range of options available for how to design persistence in a CQRS based architecture. The most important thing to consider is that the persistence requirements for the domain model on the command service usually will not conform well to the data retrieval requirements for the read service (think OLTP vs. OLAP).

Other factors which must be taken into consideration when designing the persistence models are cost requirements, whether it is a greenfield or a brownfield project, the skills and competency of the developers, SLAs, and the organization’s enterprise architecture guidelines.

Domain-Driven Design: Strategic design

Eric Evans visited the January meeting of Oslo XP Meetup for a talk about Domain-Driven Design, and this post is a summary of his talk.

Context mapping

Generic subdomains

A standardized domain which can be bought off the shelf, e.g. an accounting module.

Supporting subdomains

The parts of the system which are required, but which are not important enough to make or break your business.

The core domain

The core domain typically makes up 5-10% of a software system and consists of the areas and features of your software which are so important that they differentiate your business from your competitors’ businesses. The business should put all efforts into making this part of the system as good as possible. The core domain will depend on the supporting domains.

Example:

The star rating of books at Amazon helps the customer get the right book. But rating is not strictly required for customers to buy books. The rating functionality is thus a part of Amazon’s supporting domain.

eBay also has a star rating system. This system doesn’t rate the product, but how trustworthy the seller is. Since trust is essential for a customer to buy something at eBay, their star rating system is a part of the company’s core domain.

“The hackers” and the core domain

Why is it so often the irresponsible, less skilled programmers who care nothing about good software design that become the heroes of the organization?

From the customers’ and the leadership’s perspective, the heroes are the people who build the most valuable and useful features, and these features are often the core domain.

On the other hand, the skilled developers often focus on “platform” related architecture and features instead of the core domain. This may be fatal for the company if the “hackers” are allowed to build a badly designed core domain which ends up as a Big Ball of Mud.

Good design has business value! Eric suggests putting your most responsible and skilled developers on the core domain and firing the bad ones.

Bounded Context – A system will always have multiple models

The enterprise model is a bad idea. Rather, each team should build its own model with a clearly defined bounded context. This will result in clean, well defined models within each context instead of Big Ball of Mud models which try to do everything.

Strategic design and legacy systems

The classic dilemma regarding old legacy software is: keep the existing system or build a new one from scratch?

Eric describes three different strategies:

1. Rebuild from scratch

This strategy will almost always fail and will certainly take much longer than estimated. Eric advises against it unless there are really good reasons (the classical reasons most often heard are simply not good enough!).

2. Refactor the existing system

It might work, but most likely the system will never reach the desired level of quality. Less skilled developers will continue hacking at the system the same way as before, degrading the shiny newly refactored parts.

3. Continue hacking on the old system

This is the fate of most systems.

So what to do then? Eric suggests using anticorruption layers to isolate new, well-designed domain models from the old legacy system.

Questions from the audience

Two different teams work on almost identical domain models. Should they use a shared domain model or create one domain model per team?

Eric suggests that each team build their own domain model, even if the models overlap and some code will be duplicated. The alternative will lead to a Big Ball of Mud.

My own comments to Eric’s talk

I’m currently working on a project where the goal is to replace several old mainframe systems with a new common system on a new platform. According to Eric, this project should have been doomed to fail. In its fourth year of development, and after 2.5 years in production, the project is still going strong, albeit having gone through several challenging and difficult periods.

So why hasn’t this project failed then? I think the reason is that the organization I work for had previously made a couple of failed attempts to replace the old legacy systems. This taught them which costs and efforts are needed to create a new system that replaces the old ones.

The new system will also give a competitive advantage with its flexibility in defining new product and price structures, and this added business value is also a motivation for continuing the project.