Data and Transaction Management
Steve Swartz and Clemens Vasters
Connected Systems Division, Microsoft
This presentation focused on the characteristics of data and the architectural patterns for data storage and data access. The core takeaways: envision your data architecture early, because the correct data architecture is the ultimate performance optimization. For experienced architects and developers in the audience, the session likely served as a refresher on data architecture.
Steve talked about data characteristics from two perspectives: the scope of the data and the class/type of the data. Data scopes included My Database, My Shared Data, Our Database, My Huge Database, and My Distributed Database; classes/types included Reference Data, Fresh Data, and Stale Data.
As Steve puts it: “Increased collaboration changes the architecture perspective”. Thinking through data scope (and its growth over time), data concurrency, and correctness should be one of the central tasks of early data architecture. That may be obvious, but Steve suggested that too few people consider the larger “enterprise-wide” data picture when architecting solutions. I agree.
Next were the architectural patterns which included: Direct Access, Remote Access, Intermediated Access, Error Handling via ACID, Error Handling via Accounting, Error Handling via Compensation, Distribution via Caching, Distribution via Federation, Distribution via Read-Only Replication, Distribution via Read/Write Replication, and Distribution via Reporting.
The intermediated access pattern is a rich and important pattern that places processing in front of the database to control access. It is the mechanism used to achieve federated queries and is therefore part of the federation distribution pattern.

When talking about the “Error Handling via Compensation” pattern, Steve mentioned something about Microsoft working on automatic compensation capabilities. Anyone know about this? BizTalk lets you implement the compensation pattern for long-running transactions (which he mentioned), but I’m fairly sure Steve wasn’t referring to BizTalk. “Error Handling via Accounting” is the error pattern that persists errors and then reports them for “manual” resolution. This is in contrast to the ACID error-handling pattern, which uses transaction management to provide automatic error handling. ACID performs much better (obviously) but does not scale as well as the “accounting” approach.
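To make the compensation pattern concrete, here is a minimal sketch (mine, not from the session, and not BizTalk code): each step in a long-running transaction registers a compensating action, and if a later step fails, the already-committed steps are undone in reverse order. The debit/credit names are purely illustrative.

```python
def run_with_compensation(steps):
    """Execute (action, compensation) pairs; on failure, run the
    compensations for completed steps in reverse order."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        # Each step's work is already committed, so we cannot roll it
        # back ACID-style; instead apply a compensating action.
        for compensate in reversed(completed):
            compensate()
        raise

ledger = []

def debit():
    ledger.append(("debit", -100))

def credit():
    raise RuntimeError("downstream system unavailable")

def undo_debit():
    ledger.append(("reversal", 100))

try:
    run_with_compensation([(debit, undo_debit), (credit, lambda: None)])
except RuntimeError:
    pass  # the failure surfaced, but the debit was compensated
```

The contrast with ACID is that nothing here is held open in a lock-protected transaction; each step commits locally, which is why this style scales better but leaves a visible audit trail of reversals rather than an invisible rollback.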
Driving Business Process Automation through Vertical Accelerators
Brennan O’Reilly & Mark Smith, EMC Microsoft Practice
This was a weak presentation with plenty of platitudes, basics and generalities on how to design integrated business systems with some focus on manufacturing, financial services, and healthcare verticals. I was hoping to get some good insight into the design of accelerators for the BizTalk platform, but left disappointed.
The healthcare part of the presentation referred to the American healthcare industry and dealt almost exclusively with billing and insurance solutions. Actually, they never explicitly said “American”, but it was obvious from their focus on billing and HIPAA.
Effective Techniques for Handling Large Messages in SOA
Thomas Abraham, Senior Consultant/Architect
Microsoft Solutions Practice
The biggest takeaway from this session was his idea of carving up a large message in a custom pipeline component. The idea is to slice up the message, writing the “large” piece to temporary storage and then using a proxy ID in a cut-down version of the message that flows into the message box. The proxy ID “points” to the “large” message piece in temporary storage and is used to retrieve that piece for pipeline assembly on the way out of BizTalk. Not rocket science, but a very useful tip for reducing the load on the message box and improving performance.
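The disassemble/assemble flow can be sketched like this (my illustration, not Thomas’s code and not a BizTalk API; the `blob_store` dict stands in for real temporary storage such as a file share or database table):

```python
import uuid

blob_store = {}  # proxy ID -> large payload (stand-in for temp storage)

def disassemble(message):
    """Inbound pipeline step: replace the large body with a proxy ID
    so only a slim message reaches the message box."""
    proxy_id = str(uuid.uuid4())
    blob_store[proxy_id] = message["large_body"]
    slim = {k: v for k, v in message.items() if k != "large_body"}
    slim["body_proxy_id"] = proxy_id
    return slim

def assemble(slim):
    """Outbound pipeline step: use the proxy ID to re-attach the
    large body on the way out."""
    full = {k: v for k, v in slim.items() if k != "body_proxy_id"}
    full["large_body"] = blob_store.pop(slim["body_proxy_id"])
    return full

msg = {"header": "order-42", "large_body": "X" * 1_000_000}
slim = disassemble(msg)        # only ~tens of bytes plus the header
restored = assemble(slim)      # identical to the original message
```

The design point is that the message box only ever persists and routes the slim message; the megabyte payload makes exactly two trips to temporary storage instead of flowing through every persistence point.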
When Thomas was discussing ASP.NET 2.0 coding practices, he stated that it’s important to think about memory management. Since the introduction of garbage-collected languages, I have been saying the same thing. Ever since I started working with Java, and now C#, I have regularly come across developers who ignore the memory usage characteristics of their software and then wonder why they are having performance issues.
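A classic example of the pitfall I mean (sketched here in Python, but the same pattern bites in Java and C#): the garbage collector only frees *unreachable* objects, so a long-lived subscriber list or cache quietly pins everything it references. All names below are illustrative.

```python
class Publisher:
    """A long-lived object holding event subscriptions."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

class Subscriber:
    """Short-lived in the author's mind, but not in the GC's."""
    def __init__(self, publisher):
        publisher.subscribe(self.on_event)

    def on_event(self, payload):
        pass

pub = Publisher()
for _ in range(1000):
    Subscriber(pub)  # caller drops its reference immediately...

# ...but each bound method on the publisher's list keeps its
# Subscriber alive, so none of the 1000 objects can be collected.
alive = len(pub.subscribers)
```

The fix in most environments is to unsubscribe deterministically (or use weak references for the handler list); garbage collection removes manual `free`, not the need to think about object lifetime.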
BizTalk Server 2006 R2 Adapter Framework
Chandramouli Venkatesh, Group Manager
Connected Systems Division, Microsoft
Chandramouli did a good job of presenting on the upcoming BizTalk adapter framework. Microsoft’s motivation behind the creation of an adapter framework is to unify their approach to adapter development enabling “…easy development of metadata-driven, host-agnostic, custom adapters to LOB systems”. This is great to see and should hopefully make the prospect of developing a custom BizTalk adapter a less intimidating experience; I haven’t developed a custom BizTalk adapter myself but I keep hearing that it’s not a trivial task.
In keeping with Microsoft’s consolidation around WCF, the adapter framework will extend WCF, and adapters will be surfaced as WCF bindings, making adapter consumption exactly the same as WCF service consumption. This means that custom adapters will be usable across different consuming hosts (e.g. BizTalk, a custom WCF host), which is a big win.
From a development perspective, the adapter framework will ship with a rich set of development tools that automate and simplify much of the coding required for adapter development. Microsoft’s current thinking is that the adapter framework will be made available as a separate download.