EDA, messaging and NServiceBus at NNUG Oslo 25th of May

At the next Norwegian .Net User Group meeting I will give a talk on NServiceBus. Topics covered will include an overview of the architecture and capabilities of NServiceBus and the configuration options provided by the framework.

Ole-Marius Moe-Helgesen will give an introduction to Event-Driven Architecture and messaging, and he will also share experiences from a project at an insurance company which made use of NServiceBus.

Jan Ove Skogheim Olsen will share project experiences with NServiceBus from Call Norwegian.

More information available here: http://www.nnug.no/Avdelinger/Oslo/Moter/Brukergruppemote-25-mai-2010/

Persistence with Command and Query Responsibility Segregation

Command and Query Responsibility Segregation (CQRS) is a pattern where reads of data and the commands that update the domain model are handled by separate services. Architectures for distributed systems built on the CQRS pattern offer high scalability and reliability and have gained popularity during the last couple of years.
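
A minimal illustration of the separation (the interfaces and type names are my own, not from Greg Young's talk): the command side exposes operations that change state and return nothing, while the query side only returns data for display.

    using System;
    using System.Collections.Generic;

    // Commands change state and return nothing.
    public interface IOrderCommandService
    {
        void PlaceOrder(Guid orderId, Guid customerId);
        void AddOrderLine(Guid orderId, string productCode, int quantity);
    }

    // A denormalized, read-only representation built for the screens that need it.
    public class OrderSummary
    {
        public Guid OrderId { get; set; }
        public string Status { get; set; }
        public decimal Total { get; set; }
    }

    // Queries return data and never change state.
    public interface IOrderQueryService
    {
        OrderSummary GetOrderSummary(Guid orderId);
        IList<OrderSummary> GetOpenOrders(Guid customerId);
    }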

Greg Young visited this month's javaBin meeting in Oslo for a talk on CQRS-based architectures, and in this blog post I will share some of the new insights I gained into CQRS.

Event sourcing and eventual consistency are two essential concepts which fit well with a CQRS-based architecture, and I previously considered them mandatory for making the architecture scalable and reliable. However, the complexity these two concepts introduce may scare many brave developers away from building real production systems that make the most of an architecture built on the CQRS pattern.

Most developers feel more comfortable with well-known architectures built on a relational model stored in an RDBMS supporting ACID transactions. The mind shift required when changing to event sourcing and eventual consistency may seem too big and risky.

Figure 1, typical architecture utilizing the CQRS pattern

In the javaBin talk Greg Young actually advised against using eventual consistency when starting to implement a new system, recommending instead to introduce the concept gradually in parts of the system as it evolves and scalability issues appear. This simplifies the initial implementation and makes it easier to get started with CQRS.

The simplest alternative: No event sourcing and no eventual consistency

This is the simplest option for handling consistency and concurrency because the domain model and denormalized read model can be updated in a single transaction.

The domain model and the denormalized read model can be stored either on the same database server or on different servers. A distributed transaction is required if updates are made on different threads or to different database servers, which will have an impact on performance. An ORM is typically used for persistence of the domain model.

The read model can even be implemented as views on top of an existing schema modeled for OLTP.
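
A minimal sketch of this alternative (the repository and writer interfaces are illustrative, not from a specific framework): both the domain model write and the denormalized read model write happen inside one TransactionScope, so they commit or roll back together.

    using System;
    using System.Transactions;

    public interface IOrderRepository { void Save(Order order); }                          // ORM-backed domain model
    public interface IOrderReadModelWriter { void InsertSummary(Guid id, decimal total); } // denormalized read model

    public class Order
    {
        public Order(Guid id, decimal total) { Id = id; Total = total; }
        public Guid Id { get; private set; }
        public decimal Total { get; private set; }
    }

    public class PlaceOrderHandler
    {
        private readonly IOrderRepository _orders;
        private readonly IOrderReadModelWriter _readModel;

        public PlaceOrderHandler(IOrderRepository orders, IOrderReadModelWriter readModel)
        {
            _orders = orders;
            _readModel = readModel;
        }

        public void Handle(Guid orderId, decimal total)
        {
            // Both writes commit or roll back together. If they go to different
            // database servers (or run on different threads), the transaction
            // escalates to a distributed transaction.
            using (var scope = new TransactionScope())
            {
                var order = new Order(orderId, total);
                _orders.Save(order);
                _readModel.InsertSummary(order.Id, order.Total);
                scope.Complete();
            }
        }
    }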

Figure 2, Domain model and read model updated in a single transaction

Solving scaling issues as they arise: Event sourcing and no/partial eventual consistency

Greg Young prefers using event sourcing rather than a relational schema for persisting the domain model. To quote his thoughts about ORMs: “Using an ORM is like kissing your sister!”

The events can for example be stored in an RDBMS, an object database or a document database, or be serialized to flat files. The event store must support transactions.
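
As a rough sketch of what this requires (the interface is my own illustration, not a specific event store product), the store only needs to append events transactionally per aggregate and read them back in order:

    using System;
    using System.Collections.Generic;

    // A domain event: something that has happened, named in the past tense.
    public class ShippingAddressChanged
    {
        public Guid OrderId { get; set; }
        public string NewAddress { get; set; }
        public DateTime OccurredAtUtc { get; set; }
    }

    // Minimal event store contract: append new events transactionally (with an
    // expected version for optimistic concurrency) and load the ordered stream
    // back to rebuild an aggregate's state.
    public interface IEventStore
    {
        void Append(Guid aggregateId, int expectedVersion, IEnumerable<object> events);
        IEnumerable<object> Load(Guid aggregateId);
    }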

As the system evolves and scalability issues surface, an event queue (and hence eventual consistency) for updates to the read model can be partly introduced.

Figure 3, event store and read model are updated in a transaction.
An event queue and eventual consistency are introduced in areas where scaling issues arise.

The most complex and powerful alternative: Event sourcing and eventual consistency

In this alternative the domain events are stored in an event store, and a queue is used to update the read model.

Two different queuing approaches can be used when updating the read model. The most traditional architecture is to publish the event to a separate queue in the same transaction that updates the event store. Tools like NServiceBus are typically used for publishing to the queue.

The second alternative was described in a recent blog post by Greg Young and uses the event store itself as a queue. This means that there is no need for distributed transactions, as the only write happening when processing a command is to the event store. The read model is updated from the events in the event store and not from a separate queue. This has the advantage that there is only one version of the truth; it's not possible to publish events whose content differs from the ones stored in the event store.
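
A minimal sketch of the idea (the sequence-number-based API is my own assumption, not taken from Greg Young's post): the read model updater polls the event store for events it has not yet processed and remembers its position, so the store itself acts as the queue.

    using System.Collections.Generic;
    using System.Threading;

    public interface ISequencedEventStore
    {
        // Returns events with a global sequence number greater than the given one, in order.
        IEnumerable<KeyValuePair<long, object>> GetEventsAfter(long sequenceNumber);
    }

    public class ReadModelUpdater
    {
        private readonly ISequencedEventStore _store;
        private long _lastProcessed;      // persisted in a real system, in memory here
        private volatile bool _stopped;

        public ReadModelUpdater(ISequencedEventStore store) { _store = store; }

        public void Stop() { _stopped = true; }

        public void Run()
        {
            while (!_stopped)
            {
                foreach (var entry in _store.GetEventsAfter(_lastProcessed))
                {
                    Apply(entry.Value);          // update the denormalized tables
                    _lastProcessed = entry.Key;  // the event store itself is the queue
                }
                Thread.Sleep(500);               // simple polling interval
            }
        }

        private void Apply(object @event)
        {
            // Dispatch on the event type and update the read model accordingly.
        }
    }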

Figure 4, event store and event queue

Figure 5, using the event store as a queue

Conclusions

There is a wide range of options available for designing persistence in a CQRS-based architecture. The most important thing to consider is that the persistence requirements for the domain model in the command service usually will not match the data retrieval requirements for the read service (think OLTP vs. OLAP).

Other factors which must be taken into consideration when designing the persistence models are cost requirements, whether it is a greenfield or a brownfield project, the skills and competency of the developers, SLAs, and the organization's enterprise architecture guidelines.

Domain-Driven Design: Strategic design

Eric Evans visited the January meeting of Oslo XP Meetup for a talk about Domain-Driven Design, and this post is a summary of his talk.

Context mapping

Generic subdomains

A standardized domain which can be bought off the shelf, e.g. an accounting module.

Supporting subdomains

The parts of the system which are required, but which are not important enough to make or break your business.

The core domain

The core domain typically makes up 5-10% of a software system and consists of the areas and features of your software which are so important that they differentiate your business from your competitors' businesses. The business should put all its effort into making this part of the system as good as possible. The core domain will depend on the supporting domains.

Example:

The star rating of books at Amazon helps the customer find the right book, but rating is not strictly required for customers to buy books. The rating functionality is thus part of Amazon's supporting domain.

eBay also has a star rating system. This system doesn't rate the product, but how trustworthy the seller is. Since trust is essential for a customer to buy something at eBay, their star rating system is part of the company's core domain.

“The hackers” and the core domain

Why do the irresponsible, less skilled programmers who care nothing about good software design so often become the heroes of the organization?

From the customers’ and the leadership’s perspective, the heroes are the people who build the most valuable and useful features, and these features are often the core domain.

On the other hand, the skilled developers often focus on “platform” related architecture and features instead of the core domain. This may be fatal for the company if the “hackers” are allowed to build a badly designed core domain which ends up as a Big Ball of Mud.

Good design has business value! Eric suggests putting your most responsible and skilled developers on the core domain and firing the bad ones.

Bounded Context – A system will always have multiple models

The enterprise model is a bad idea. Rather, each team should build its own model within a clearly defined bounded context. This will result in clean, well-defined models within each context instead of Big Ball of Mud models which try to do everything.

Strategic design and legacy systems

The classic dilemma regarding old legacy software is: keep the existing system or build a new one from scratch?

Eric describes three different strategies:

1. Rebuild from scratch

This strategy will almost always fail and will certainly take much longer than estimated. Eric advises against this strategy unless there are really good reasons (the classic reasons most often heard are simply not good enough!).

2. Refactor the existing system

Might work, but most likely the system will never reach the desired level of quality. Less skilled developers will continue hacking on the system the same way as before, degrading the shiny newly refactored parts.

3. Continue hacking on the old system

This is the fate of most systems.

So what to do then? Eric suggests using anticorruption layers to isolate new, well-designed domain models from the old legacy system.
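
As a small illustration of the pattern (all names are hypothetical), an anticorruption layer is essentially a translating adapter which keeps the legacy system's concepts from leaking into the new domain model:

    // Concept in the new, well-designed domain model.
    public class Customer
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerProvider
    {
        Customer GetCustomer(string id);
    }

    // Hypothetical wrapper around the old mainframe system, with legacy naming.
    public class LegacyKundeRecord
    {
        public string KundeNr { get; set; }
        public string Navn { get; set; }
    }

    public class LegacyMainframeGateway
    {
        public LegacyKundeRecord FetchKundeRecord(string kundeNr)
        {
            return new LegacyKundeRecord { KundeNr = kundeNr, Navn = "  NORDMANN, KARI " };
        }
    }

    // The anticorruption layer: translates legacy records into the new model,
    // so the rest of the new system never sees legacy types or conventions.
    public class LegacyCustomerAdapter : ICustomerProvider
    {
        private readonly LegacyMainframeGateway _legacy;

        public LegacyCustomerAdapter(LegacyMainframeGateway legacy) { _legacy = legacy; }

        public Customer GetCustomer(string id)
        {
            var record = _legacy.FetchKundeRecord(id);
            return new Customer { Id = record.KundeNr, Name = record.Navn.Trim() };
        }
    }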

Questions from the audience

Two different teams work on an almost identical domain model. Should they use a shared domain model or create one domain model per team?

Eric suggests that each team builds its own domain model, even if the models overlap and some code will be duplicated. The alternative will lead to a Big Ball of Mud.

My own comments on Eric's talk

I'm currently working on a project where the goal is to replace several old mainframe systems with a new common system on a new platform. According to Eric, this project should have been doomed to fail. In its fourth year of development and after 2.5 years in production the project is still going strong, although it has gone through several challenging and difficult periods.

So why hasn't this project failed then? I think the reason is that the organization I work for has previously made a couple of failed attempts to replace the old legacy systems. This taught them what costs and effort are needed to create a new system which replaces the old ones.

The new system will also give a competitive advantage with its flexibility in defining new product and price structures, and this added business value is also a motivation for continuing the project.

LEAP Conference – day 3

A summary of day 3 of the LEAP conference in Redmond, Washington

Sync: Why, what and how

With Lev Novik

Why Sync?
  • Facilitates rich clients
    • Faster response, richer UX
  • Legacy applications can be migrated to use the Cloud as data storage by using Sync
General Sync Challenges
  • Granularity of changes
  • Change (non-) Reflection
    • Using a timestamp. Use locking until synchronization is finished?
  • Conflicts
    • Not detecting conflicts will result in data loss
    • Complex algorithms for conflict detection exist which don't require storing the history of all changes (see the sketch after this list)
  • Loops
    • Multiple devices synchronizing data to multiple servers at the same time
    • Can result in duplicated data
  • Hierarchical data
    • The order of synchronization is important
    • E.g. one endpoint adds an item to a folder, while another endpoint deletes the entire folder
  • Item filtering
    • Optimization by syncing parts of the data more frequently
  • “Column” filtering
    • Select parts of the data
    • Challenge: Can't do conflict detection, since one of the endpoints doesn't have the complete version of the data
  • Errors and interruptions
    • Not all conflicts can be solved automatically
      • Doing so will result in loss of data
      • Must wait for a human to resolve them
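
As a conceptual sketch of the conflict detection point above (my own simplified illustration, not the actual Microsoft Sync Framework API): each item carries the version of its last change, each replica keeps "knowledge" of the highest change it has seen from every other replica, and a conflict is flagged when an incoming change was made without knowledge of the local change. No per-item change history is needed.

    using System;
    using System.Collections.Generic;

    // The version of an item's last change: which replica changed it, at which local tick.
    public struct ItemVersion
    {
        public Guid ReplicaId;
        public long Tick;
    }

    public static class ConflictDetection
    {
        // senderKnowledge: the highest tick the sending replica has seen from each replica.
        // An incoming change conflicts with the local version if the sender did not already
        // know about the local change, i.e. both sides changed the item independently
        // since they last synchronized.
        public static bool IsConflict(ItemVersion localVersion, IDictionary<Guid, long> senderKnowledge)
        {
            long knownTick;
            bool senderKnewLocalChange =
                senderKnowledge.TryGetValue(localVersion.ReplicaId, out knownTick)
                && knownTick >= localVersion.Tick;
            return !senderKnewLocalChange;
        }
    }
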
Microsoft Sync Framework
  • What does MS Sync Framework do?
    • Makes it easy to sync participating endpoints
      • Built-in endpoints for
        • V1: File system, relational databases
        • V2: SQL Data Services, Live Mesh, ++
  • The Sync Session
    • Data stores implement a Sync Provider
    • The Sync application has a Sync Orchestrator which communicates with the endpoints’ sync providers
    • Sync Framework Runtime
      • Metadata
        • Versioning
      • Runtime
        • Algorithms to solve sync problems
      • Metadata Store
        • For those who can’t store the metadata themselves
      • Simple Provider Framework
        • Makes writing providers easy
How do customers use the sync framework?
  • Write sync applications
    • Implement sync orchestration
  • Write sync providers in order to support sync
    • Declare an object identifier
    • Declare versioning
    • Enumerate changes
Sync Participants
  • Sync endpoints
    • Stores metadata
    • Can be many kinds of devices, and the sync logic should not be implemented for each of them
  • Sync providers
    • Does most of the sync work
    • Operates on the endpoints' metadata
  • Sync application
    • Has the Sync Orchestrator

The sync logic can be placed in different locations (e.g. on the client or in a web service) for different scenarios.

Sync Framework on MSDN: http://msdn.microsoft.com/sync/

 

Visual Studio Team System: ALM as we do it at Microsoft

With Stephanie Cuthbertson


Some facts about Microsoft Development
  • TFS usage at MS
    • VS 2008
      • 13 000 users
      • 2 570 000 work items
      • 40 100 000 source files
Planning and tracking
  • Feature planning and prioritizing in the development of VSTS 2010
    • Value proposition prioritizing
      • Voting and weighting/prioritizing of features in an Excel sheet
      • Work items are then imported to TFS
  • Generate MS Project GANTT from TFS
VS 2010 demo
  • Simple task editing integration with Excel and MS Project
  • Improved forecasting statistics and status reports
  • User requirement tracking
    • Can edit requirements through a web interface
      • Requires a separate (new) licence
    • Can link requirements to test cases
Branching
  • In the development of VSTS 2010, branching per feature is used
  • Features must pass “Quality Gates” before merging into the active branch
    • Feature complete
    • Test complete
    • All bugs fixed
    • Static code analysis
    • Localization testing
    • etc
Tracking and reporting in VSTS 2010
  • Better SharePoint integration
  • Web dashboard
    • Extensive statistics and analytics possibilities

Always Responsive Apps in a World of Public Safety

With Mario Szpuszta


A case study of a system for ship tracking and tracing delivered by Frequentis.

Who is Frequentis AG?
  • Provides systems for
    • Air traffic
    • Ship tracking & tracing
    • Coordination systems for police offices
Terms
  • MCS – Maritime Communication System
    • Ship – Ship, Ship – Land, Land – Land
    • Usually hardware interface
  • CAD – Computer Aided Dispatching
    • Collaborative Incident Management
    • This is the kind of software made in this case study
  • TnT – Tracking and Tracing
    • CAD and MCS Solution from Frequentis
Tracking & Tracing Architecture
  • GUI in WPF
    • Several modules
    • Complex requirements
      • Lots of information and operations available for the users
    • Could not use CAB, Prism or similar frameworks since the GUI would then run in one process and one app domain. The entire system should not go down if one module crashes.
    • Each GUI module runs in a separate process. A separate shell was created in order to achieve this.
  • Communication with Maritime Communication System with .NET remoting
  • The GUI communicates with the services through a message bus
  • Server
    • WCF service modules
    • Windows 2008 and SQL Server 2005
The Service Bus
  • Complex communication
    • Everyone communicates with everyone
  • Failure of one system must not affect the others
  • Challenges
    • Not every harbour can pay for the required infrastructure, like huge server farms
    • Failure of a single entity must not affect the others
  • Classic architecture
    • Keep it simple
      • Lightweight
      • Reliable
    • Loosely coupled
    • Many-to-many communication
  • Solution
    • Created custom Message Subscription Database
    • Use WCF Peer-to-Peer channel for communication
      • Issue: Max. 700 msg/sec limitation due to slow serialization
      • No Duplex-bindings, no MSMQ
        • Just leverage NetTcp-bindings
  • Tech-hints for WCF
    • NetDataContractSerializer will include assembly info – serialization will fail if endpoints have different assembly versions, even though the contracts are compatible
    • DataContractSerializer enables loose coupling (small example below)
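
A small example of the hint above (the data contract is illustrative): DataContractSerializer writes only the contract (member names and values), not assembly-qualified type information, so endpoints built against different assembly versions can still exchange compatible messages.

    using System;
    using System.IO;
    using System.Runtime.Serialization;

    [DataContract]
    public class ShipPosition
    {
        [DataMember] public string ShipId { get; set; }
        [DataMember] public double Latitude { get; set; }
        [DataMember] public double Longitude { get; set; }
    }

    public static class SerializationExample
    {
        public static void Main()
        {
            var position = new ShipPosition { ShipId = "MMSI-257123456", Latitude = 59.9, Longitude = 10.7 };

            // Serialize and deserialize using only the data contract.
            var serializer = new DataContractSerializer(typeof(ShipPosition));
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, position);
                stream.Position = 0;
                var roundTripped = (ShipPosition)serializer.ReadObject(stream);
                Console.WriteLine(roundTripped.ShipId);
            }
        }
    }
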
Creating a responsive user interface
  • The application must never hang at any time
  • Encapsulate logic in “autonomous” tasks
  • Set of jobs executed based on commands
  • Core rule: Everything executed asynchronously
    • Thread pool with queue and queue manager
  • Commands, Jobs and Queues
    • Business logic encapsulated into Jobs (and ONLY there) – see the sketch after this list
    • Commands executed autonomously without side effects
  • Results from Async Jobs
    • Modules implement an INotify interface
      • Passed into the constructor of a job
      • Job calls back to module through INotify
  • Communication with other systems
    • Create yet another job
    • Job talks to IConnectionPoint
  • Tasks, Jobs – Tech Hint
    • CCR (Concurrency Coordination Runtime, originally from The Robotics Studio)
      • Simplified execution of concurrent tasks
      • Has now been released as a separate toolkit, separate from Robotics Studio
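
A minimal sketch of the command/job/queue idea described above (my own simplification, not Frequentis' actual code and not the CCR API): jobs are queued and executed on a background worker thread, and results are reported back to the owning module through a notification interface.

    using System.Collections.Generic;
    using System.Threading;

    public interface IJob { void Execute(); }

    // Modules implement this to receive results from asynchronously executed jobs.
    public interface INotify { void Completed(IJob job); }

    public class JobQueue
    {
        private readonly Queue<IJob> _queue = new Queue<IJob>();
        private readonly INotify _notify;

        public JobQueue(INotify notify)
        {
            _notify = notify;
            var worker = new Thread(Run) { IsBackground = true };
            worker.Start();
        }

        public void Enqueue(IJob job)
        {
            lock (_queue)
            {
                _queue.Enqueue(job);
                Monitor.Pulse(_queue);
            }
        }

        private void Run()
        {
            while (true)
            {
                IJob job;
                lock (_queue)
                {
                    while (_queue.Count == 0) Monitor.Wait(_queue);
                    job = _queue.Dequeue();
                }
                job.Execute();          // business logic lives only in jobs
                _notify.Completed(job); // marshalling back to the UI thread omitted for brevity
            }
        }
    }
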
WPF-based client
  • Why WPF?
    • Huge amount of information needed to be presented
    • Frequentis hired a separate UX-research team
      • Different alternative UX-stories were investigated
    • Advanced requirements for alternative visualizations of data
  • Presentation Model Pattern
    • Separate UI from code

 

Green Computing through Sharing

With Pat Helland


Introduction
  • In 2006, 1.5% of the electricity in the US was consumed by data centers
    • This is more than what is consumed by TVs
    • Projected to double every five years
  • Sharing resources vs. dedicated resources
    • Shared resources may not be available when you need them
    • Dedicated resources are expensive and have lower utilization
  • Sharing through
    • Virtual machines
    • Cloud computing

 

The evolving landscape of data centers
  • Power Usage Effectiveness (PUE)
    • PUE = Total Facility Power / IT Equipment Power
    • A typical factor is 1.7 (a quick worked example follows this list)
  • Power and cooling is expensive
    • Infrastructure and energy costs are both higher than the cost of the servers themselves
  • Redundancy
    • Represents more than 20% of the data center cost
    • All servers require
      • Dual power paths
      • Dual network
  • “Chicago Data Center”
    • Highly efficient data center with PUE = 1.2
    • Servers are located in isolated steel containers, each containing 2 000+ servers
      • Individual servers are never maintained
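
A quick worked example of the PUE formula above: a facility drawing 1,700 kW in total to run 1,000 kW of IT equipment has a PUE of 1,700 / 1,000 = 1.7, meaning 700 kW goes to cooling, power distribution losses and other overhead.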

 

Over-Provisioning versus Over-Booking of Power
  • Power Provisioning
    • Total power consumption for a server is typically 200W
    • Power consumption typically peaks at about 90% for a data center
      • Theoretical max power consumption is seldom reached, e.g. because disk usage prevents 100% CPU utilization
      • This means that it is possible to add more servers than the theoretical max limit in order to utilize the available power
Services and Incentives
  • Amazon’s Server Oriented Architecture
    • One page request typically uses over 150 services
  • Service Level Agreements (SLAs)
    • Example: 300ms response for 99.9% of requests with 500 requests per sec
What does this mean for developers?
  • Factories are more efficient than hand-crafted manufacturing

LEAP Conference – day 2

A summary of day 2 of the LEAP conference in Redmond, Washington

Pharma in the Cloud

With Eugenio Pace


Windows Azure Primitives:
  • Code host
    • WCF
    • ASP.Net
    • Worker
      • Similar to a Windows service in an on-premises application
  • Persistence
    • Table
    • Blob
    • Queue
      • Events can be published to the queue, and Workers can handle these events
    • SQL Data Services
      • Supports most of the functionality of a regular SQL Server
      • Functionality has been significantly extended since the PDC08 demo
  • Application Services
    • ACS (access control)
    • ServiceBus
    • Workflow
Cloud vs. on premises and build vs. buy


Building a multi-enterprise collaboration application in the cloud for “BigPharma”
  • Requirements:
    • De-centralized management
    • Fine grained access control
      • Org –> Row –> Field
    • Leveraging existing Identity and AuthZ infrastructure
      • Using Active Directory (local users/groups used in demo)
      • Support Single Sign On

Demo: http://pharmacloudcatalog.com/catalog/Provisioning

  • Identity & Access Control
    • Using claims-based identity
      • Both for the web service and for the web site
    • Using MS Geneva Framework
      • Identity providers for ASP.Net exist which support this framework
    • Custom Security Token Service (STS)
    • Mapping tokens to permissions can be done in the web interface of .Net Services Access Control Service
  • ServiceBus
    • Enables communication from the server to the client without requiring an inbound connection to the client (all connections from the client and server are outbound – to the service bus)

Download & study sample for Azure (note: a different sample than the one demonstrated in this session): http://www.codeplex.com/azureissuetracker

 

Microsoft Dynamics CRM

With Girish Raja


Dynamics CRM 4.0 demo
  • The Outlook CRM add-in client
    • Appears as a separate folder in Outlook
    • Data available offline
  • Flexibility
    • Access by browser, Outlook or mobile
    • Hosting as software or as service
  • Extensibility Toolset – customization tools for
    • System Administrators
    • Developers
    • Business Analysts
  • Configurable entity model
    • Create entities (similar to database tables) from the Customization screen in Dynamics CRM Online
    • The asmx web service endpoints are automatically updated with the custom entities
    • Configurable role based access with high granularity
  • Workflow editor
    • Uses Windows Workflow internally
    • Activities can be created in the web interface

.NET Service Bus

With Clemens Vasters


  • Demo application where a website in the cloud communicates with an application running on premises (on Clemens’ laptop) through the .NET Service Bus
    • No need to configure firewalls
    • Security kept intact
  • Why .NET Service Bus?
    • Enable bi-directional connectivity
      • Not dependent on the kind of device or the location of the device
      • Without having to open inbound firewall/NAT ports
    • Provide federated naming and discovery
  • The first version of the service bus (to be released with Azure in November 2009) will use Windows Workflow from .NET 3.5, with a DSL on top to support migration to .NET 4.0
  • NetTcpBinding is the preferred binding for optimal performance
  • Service Bus Naming
    • Hierarchical structure, similar to DNS
    • Updates take effect immediately
    • Naming scheme: scheme://solution.servicebus.windows.net/name/…
  • What’s wrong with DNS?
    • High latency for updates
    • Names hosts, not services
  • Service Registry
    • A registry for service endpoints
    • Services can be categorized (e.g. printers can be organized into a separate category)
  • Service Bus Messaging
    • Based on WCF
    • Not supported:
      • Atomic transaction flow
      • Protocol level transport authentication

More information about Service Bus on MSDN: http://msdn.microsoft.com/en-us/library/dd582728.aspx

SQL Data Services – Under The Hood

With Gopal Kakivaya


Motivation
  • Database as a service
    • Pay-as-you-go model
    • Guaranteed SLA
    • Familiar relational programming model
    • Leverage existing skills and tools
      • This is new compared to the PDC08 version
    • Full control of the logical database administration
    • The physical aspects of the database administration are handled by the service provider

Concepts
  • Database Provisioning Model
    • Account
      • Each account has one or more servers
    • Server
      • Has one or more logins
    • Database
      • Users
  • Connection Model
    • Clients connect directly to a database
  • Security Model
    • Uses regular SQL security model
      • Username + password
    • Future: AD Federation, etc

 

 

Architecture
  • Components
    • Master node
    • Data Nodes
      • SQL Server
        • Replication Agent
        • Local Partition Map
      • Fabric
        • Reconfiguration Agent
        • PM Location Resolution
        • Failure detector
        • Ring Topology

One partition is designated as the primary, and one or more secondary partitions are located on other data nodes.

  • Partitioning
    • Provides better fault tolerance
    • Failed partitions can be rebuilt faster
      • E.g. if the database is divided into 10 partitions, it's much faster to rebuild a failed partition than the entire database
  • Fault tolerance
    • Security built into the software
      • Signed data, e.g. will detect if the network card has corrupted the data
    • Can be used on cheap hardware
      • If anything fails (e.g. a disk), the faulty hardware will automatically be shut down
        • This is made possible by the use of replica sets
  • Replication
    • Reads are completed at the primary
    • Writes are replicated to all nodes
      • The primary partition will wait for acknowledges from the secondaries
      • All writes, both to the primary and to the secondaries are part of the transaction
    • The replication factor may be configured, based on the customer’s demand
      • “Replication factor of 4” means that there are 1 primary and 3 secondaries
  • Reconfiguration
    • As machines die, new machines must take their place
    • Types of reconfiguration
      • Primary failover
      • Removing a failed secondary
        • Might be temporary, e.g. because of an update made to the machine
        • The secondary will not be replaced immediately, since it might only be temporarily down
      • Adding recovered replica
      • Building a new secondary

LEAP Conference – day 1

A summary of day 1 of the LEAP conference in Redmond, Washington

Keynote

With Scott Guthrie


Rich web
  • AJAX & HTML4/5 and Silverlight
  • Silverlight 3
    • Ships in July 2009
    • Runs inside and outside the browser
  • Demos (http://www.iis.net/media/)
    • Smooth streaming – adaptive bitrate
    • The streaming server is free
    • Supports pre-recorded and live content
    • Content is cached on servers local to the user – one webserver can serve a large number of clients
    • Demo client/server application created with Silverlight template
      • Using navigation template
  • “Out-of-browser” settings
    • Can use GPU acceleration
    • Support for context menus (right-clicking) will be added to Silverlight 4
  • Expression Blend 3
    • Ships in July 2009
    • New feature SketchFlow for sketching/prototyping UI
      • Use multiple sources like scanned images, pictures etc
      • Create workflows
      • Wiggly Styles
      • Separate skin which looks like a hand-drawn image
      • Without colors – focus on functionality and usability
      • Looks similar to Balsamiq Mockups
    • Photoshop import
      • Supports selecting layers
    • Sample data
      • Creating, editing and styling
      • The designer can get the application working with test data without being dependent on the developer
      • Designer import for Silverlight will be significantly improved in VS 2010
  • Web platform installer

Multi-core

  • How can developers take advantage of multi-core CPUs?
    • Shift from how to do things to what to do:
      • Using LINQ and lambda expressions
      • The framework can then utilize multiple CPUs by partitioning the execution into chunks which can be run in parallel on different CPU cores (see the PLINQ sketch after this list)
  • The ASP.Net core has been configured to support multi-core parallelism as default in .Net 4
  • .Net Parallel Extensions
    • New parallel task debugger window in VS 2010
      • Easier navigation between threads and tasks
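
A small PLINQ illustration of that shift (my own example, not from the keynote): the LINQ query describes what to compute, and AsParallel() lets the runtime decide how to partition the work across the available cores.

    using System;
    using System.Linq;

    public static class ParallelLinqExample
    {
        public static void Main()
        {
            int[] numbers = Enumerable.Range(1, 10000000).ToArray();

            // Declarative query: WHAT to compute. AsParallel() handles HOW,
            // partitioning the work over the available CPU cores.
            long sumOfSquaresOfEvens = numbers
                .AsParallel()
                .Where(n => n % 2 == 0)
                .Select(n => (long)n * n)
                .Sum();

            Console.WriteLine(sumOfSquaresOfEvens);
        }
    }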

Automated Testing

  • VS 2010 has better support for TDD
    • New mode for working with classes which haven't yet been created (switch into this mode by pressing Ctrl + Alt + Space)
    • Automatic generation of classes and methods based on the test
    • Demo – Scott referred to the AAA (Arrange – Act – Assert) pattern (which is good!) – a small example follows this list
  • Manual test tools
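
A minimal NUnit-style example of the Arrange – Act – Assert pattern (the calculator class is invented for illustration):

    using NUnit.Framework;

    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_TwoPositiveNumbers_ReturnsTheirSum()
        {
            // Arrange
            var calculator = new Calculator();

            // Act
            int result = calculator.Add(2, 3);

            // Assert
            Assert.AreEqual(5, result);
        }
    }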


Cloud

  • The same .Net runtime binaries are used on-premises and in the cloud
    • ASP.Net
    • SQL Server
    • WCF / Workflow

SharePoint

  • Built-in support in VS 2010
  • Features include:
    • Projects
    • List
    • Web-Parts
    • Workflows

SharePoint Patterns and SharePoint futures

With Paul Andrew


Development patterns and practices

The SharePoint Development Lifecycle

Memory management

  • Use the SPDisposeCheck utility
  • SharePoint APIs return IDisposable objects
Deployment
  • VSeWSS 1.3
    • Extension to VS
    • Simplify deployment with the “Package” feature
      • Creates the .wsp file in one command
    • Automatic file renaming
    • Deployment conflict resolver
    • Deploy additional assemblies

Futures

This information was confidential until the SharePoint conference in Las Vegas in October 2009, and will not be covered in this post.

Patterns & Practices roadmap

With Eugenio Pace


P&P FY09 Programs

  • Client development
    • Prism (WPF and Silverlight)
    • Web Client
    • Mobile Client
  • Server development
    • SharePoint Guidance
    • Services Development
    • Web Service Security Guidance
    • Enterprise Service Bus
    • Web Service
  • Solution Development Fundamentals
    • Enterprise Library
    • Application Architecture Guide
    • Testing Patterns & Guidance
    • Data Access Guidance

What’s coming in FY 10?

  • Client
    • Prism 3, expected March 2010 (WPF 4.0 / SL 4.0)
    • Web Application Guidance (ASP.Net, MVC, jQuery, Dynamic Data)
  • Server
    • SharePoint Guidance, April 2010 (Internet Scale, Silverlight, LOB, Office 14)
  • Services
    • Cloud Identity Management Guidance, November 2009 (Geneva, Azure Services, LiveID)
  • Fundamentals
    • Enterprise Library 5.0, March 2010
    • Data Access Guidance, March 2010 (Domain Driven Design, EF 2.0, Astoria, .Net RIA Service)
    • Application Architecture Guide 2nd Edition, November 2009
    • Acceptance Testing Guide, November 2009

CloudLib

  • Reuses existing blocks for
    • Exception handling
    • Validation
  • Extensions to existing blocks for
    • Security
    • Log
  • New blocks added for the cloud
    • DataAccess (SDS)
    • Tables
    • Config
    • Blob
    • Queue
    • Worker

How to work with the P&P team?

Rich Internet Applications

With Ian Ellison-Taylor


Silverlight 3
  • 3D support
  • “Out of browser” demo
    • Select “Desktop shortcut” when installing
    • Still hosted in the browser internally
      • This is not visible to the user
      • Not possible to access the browser’s XML DOM
  • New capabilities
    • Media
      • Smooth streaming
      • More format choices (like H.264)
        • More efficient decoding – uses less resources
      • Fullscreen HD playback
      • Extensible media formats
      • Content protection
  • Graphics
    • Perspective 3D Graphics
    • New Bitmap API
    • Enhanced Control Skinning
    • Bitmap Caching
      • Performance improvements
    • Themed App Support
      • Supported in Blend 3
    • Improved Text Rendering
      • Crisper text
      • Faster rendering
      • Support for more languages (about 30 in total)
      • Better layout algorithms
  • Dev Productivity
    • Controls (60+)
      • Datagrid
    • Search Discoverability
      • Control which information to make available for search robots
    • .Net RIA Services Framework
    • Improved Performance
      • Targeted for big applications
    • Advanced Accessibility
  • Out of Browser
    • Run Apps Out of Browser
    • Desktop & Start Menu
    • Safer & More Secure
      • Still running inside a sandbox, same security access as when running in the browser
    • Smooth Installation & Auto Update
    • Windows Integration
      • Better support for Windows 7, including touch and new start menu
    • Connectivity Detection
      • Detects network connection status
  • Design Tooling
    • Prototyping w/ SketchFlow
      • Sketch out random ideas
      • Link them together using workflows
      • Real controls are used under the covers
        • The sketch skin makes the user focus on the functionality – not on visual details like colors and fonts
    • Visual Design Workflow
    • Accessibility Interactivity
    • Design w/data
    • VSTF Integration
    • Design Surface Extensibility

 

 

Windows Forms
  • Will still be supported for many years and will continue to be developed

A simple and compact style for BDD specifications

When appropriate I prefer to use the testcase-class-per-fixture style for writing BDD-style contexts/specifications.

However, when testing small systems where there is only one specification per context, the testcase-class-per-fixture syntax becomes overwhelming and cumbersome to use.

For this reason I sometimes use a more compact format for the specifications:

$MethodName$ – the system under test (SUT), which will often be a method name
$Context$ – the situation/scenario
$ExpectedBehaviour$ – the expected outcome in the given context

Example specifications:
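
A hypothetical example written in this compact style (the shipping calculator is invented for illustration):

    using NUnit.Framework;

    public class ShippingCalculator
    {
        private readonly decimal _freeShippingLimit;
        private const decimal StandardRate = 49m;

        public ShippingCalculator(decimal freeShippingLimit) { _freeShippingLimit = freeShippingLimit; }

        public decimal CalculateShippingCost(decimal orderTotal)
        {
            return orderTotal >= _freeShippingLimit ? 0m : StandardRate;
        }
    }

    [TestFixture]
    public class ShippingCalculatorSpecs
    {
        [Test]
        public void CalculateShippingCost_WhenOrderTotalIsAboveTheFreeShippingLimit_ReturnsZero()
        {
            var calculator = new ShippingCalculator(500m);

            var cost = calculator.CalculateShippingCost(600m);

            Assert.AreEqual(0m, cost);
        }

        [Test]
        public void CalculateShippingCost_WhenOrderTotalIsBelowTheFreeShippingLimit_ReturnsTheStandardRate()
        {
            var calculator = new ShippingCalculator(500m);

            var cost = calculator.CalculateShippingCost(400m);

            Assert.AreEqual(49m, cost);
        }
    }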

The disadvantage of using this style is that the method names in the test class may get very long, and the test results output isn't formatted as well as it would have been when using one test case class per fixture:


The following ReSharper live template can be used to quickly add new specifications/tests:
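
A live template along these lines (my own sketch, built from the three placeholders above plus ReSharper's predefined $END$ caret marker) would expand into an empty test:

    [Test]
    public void $MethodName$_$Context$_$ExpectedBehaviour$()
    {
        $END$
    }
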

LEAP part 3 – SOA

The third master class: Loosely Coupled Business Systems: SOA on the Microsoft platform

Udi Dahan – The Software Simplist – had been hired to present the third LEAP master class in Oslo. He is a well-known international expert on enterprise software architecture and design, and is the author of the open source messaging framework NServiceBus.

The entire class was based on discussion and interaction with the audience, and the only PowerPoint slide used was the one showing the agenda.

He started out by sketching a naive traditional n-tier application (a big ball of mud), and based on suggestions from the audience we explored different approaches which might improve the solution. Whatever suggestions we threw at him, he always had a thoroughly considered answer describing the pros and cons of the suggested approach. He obviously has a lot of experience with real-world enterprise SOA applications.

The goal was to create autonomous services – standalone services with loose coupling to other services. The system should be scalable and reliable and use as few resources as possible.

Topics discussed

Coupling

  • Avoid coupling by slicing independent logic into separate vertical autonomous boundaries / services
    • Example of services: Order, inventory management, billing, shipping
  • How should services communicate with each other?

Where to put the orchestration/workflow logic?

  • Layer on top of the business layer components?
  • In an Enterprise Service Bus on the side of the services?
  • In the GUI?

When to use…

  • Fire and forget
  • Messaging (async / synchronous)
  • Events

Duplicate data in order to remove dependencies and keep services autonomous?

  • How to synchronize the data?
    • The publish/subscribe pattern can be used. E.g. when an address is updated in the customer service, the customer service can publish the updated address (a small sketch follows this list).
    • The context of the update is important. Why did the change happen? This is information which is available close to the user and the business process (i.e. NOT at the database tier), and the update is usually triggered from the UI.
    • Versioning – published events have to be backwards compatible with subscribers
  • Duplication of data is usually considered a bad practice, but for SOA it may have advantages:
    • Avoid service calls (e.g. retrieve customer address RPC style when needed)
    • Services become more reliable and autonomous. What if the customer service is down? Then the shipment service won't be able to do its work, since the address can't be retrieved.
    • The duplicated data can be considered a local cache for the service
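
A small sketch of the address example above (the message and handler types are my own; with NServiceBus the handler would typically implement its IHandleMessages<T> interface, though the API details vary between versions): the customer service publishes an event, and the shipment service keeps its own local copy of the address so it never has to call the customer service synchronously.

    using System;

    // Published by the customer service whenever an address changes.
    public class CustomerAddressChanged
    {
        public Guid CustomerId { get; set; }
        public string NewAddress { get; set; }
    }

    // The shipment service's own copy of the duplicated data - its local cache.
    public interface IShipmentAddressCache
    {
        void Store(Guid customerId, string address);
    }

    // Subscriber running inside the shipment service.
    public class CustomerAddressChangedHandler
    {
        private readonly IShipmentAddressCache _cache;

        public CustomerAddressChangedHandler(IShipmentAddressCache cache) { _cache = cache; }

        public void Handle(CustomerAddressChanged message)
        {
            // Keep the local copy up to date so shipments can be processed
            // even if the customer service is down.
            _cache.Store(message.CustomerId, message.NewAddress);
        }
    }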

Publish/subscribe vs. request/response

  • Don't both approaches create dependencies between services?
    • Request/response creates design-time dependencies
    • Pub/sub makes it possible to create services which are both design-time and run-time autonomous
      • Run-time autonomous: Will continue to work even when other services go down
      • Design-time autonomous: Has no direct references to other services
  • Pub/sub advantages:
    • Better performance
      • Local cache (duplicated data) in each service
      • No blocking transactions across service boundaries
      • Easier to run processes in parallel – scales better
      • Fire and forget gives faster response in the UI
    • Better resource utilization
      • Messages can be queued up and processed when resources become available
  • Pub/sub disadvantages:
    • Systems might be harder to design correctly (requires untraditional thinking)
    • Systems get harder to understand and debug
  • Request/response advantages:
    • Explicit service calls / dependencies makes the system easier to implement, understand and debug
  • Request/response disadvantages:
    • Synchronous blocking architecture requires more resources
      • Ties up more resources over longer time periods, i.e. the system will not scale well
  • Choose the right architecture for the right place, there is no silver bullet.

Achieving autonomous services by letting each service have its own UI

Consider a shipment service which requires a shipping address. The traditional design would be to let the shipment service call the customer service. Another option would be to let the shipment service collect the shipping address through its own GUI, thus keeping the service autonomous.

LEAP part 2

The second master class: The Microsoft Data Platform and Business Intelligence

The second master class of LEAP Norway was presented by Jon Jahren from Microsoft Consulting Services Norway.

This master class gave an overview of the Microsoft products and architectures for Business Intelligence (BI).

 Master class summary

  • BI introduction
    • Competitors
    • How to introduce BI to a company
    • Microsoft’s vision and strategy for BI
    • Demo of current Sharepoint BI solution
    • Demo of data mining in Excel 2008 SP2
      • Retrieve data from Analysis Services
      • Generate forecast
    • Microsoft’s BI stack
      • Office, Sharepoint Server and SQL Server
  • BI technical architecture
    • ETL (Extract, Transform and Load)  -> Presentation Server -> Client
    • Kimball method fundamentals for Data Warehousing
    • The Star Model
    • Biztalk vs. SSIS
      • Biztalk is message-based
      • Both products have very similar features, but SSIS has much better performance
      • In the latest versions, both can share the same adapters (WCF based)
    • SSIS demo
    • OLAP demo
      • Create Data Source View
      • Create Cube
      • Deploy
      • MOLAP with realtime proactive caching
      • Microsoft’s strategy is to use ONE cube for all perspectives
        • A “perspective” is a view of the cube for a specific report
        • All fact and dimension tables in ONE cube
  • Metadata
    • Use Sharepoint for metadata
    • MDM – Master Data Management
      • Avoid duplicate entities
      • An estimated 5–15% of all data is duplicated in large enterprises
      • A new MDM application platform is to be released by Microsoft
      • Business Data Hub
        • In the cloud
      • PAXOS
        • Algorithm for fault-tolerant distributed computing
  • The future of BI
    • Load -> View -> Model
      • Create model automatically based on usage, instead of creating the model first
    • IMDB
      • In-memory storage for performance
      • Is the underlying engine for self-service BI
      • SQL Server Gemini
        • Add-in to Excel 14 for self-service BI
        • Supporting millions of rows

Next master class: SOA on the Microsoft platform with Udi Dahan
