With a good dose of business-focused keynotes and sessions under my belt, I was ready for a bit of a technical dive. So, my first session of the morning was WCF: Extensibility in the WCF Service Oriented Platform, presented by Craig McMurtry, Technical Evangelist at Microsoft. It started with an introduction to Windows Communication Foundation (WCF) that probably could have been shortened. A couple of highlights from this session:
- The presentation’s central theme was the WCF ability to “inject” custom functionality (i.e. extensions) at various stages along the path that a WCF message travels on both the client side and the server side. Such custom functionality can deal with specific requirements around serialization, threading, etc.
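WCF’s actual extension points are .NET interfaces, but the “inject custom functionality along the message path” idea is language-neutral. As a minimal, purely illustrative sketch (all names invented, nothing here is a WCF API), a pipeline that lets callers plug in their own steps looks like this:

```python
# Illustrative sketch (not WCF): a message pipeline that lets callers
# "inject" custom steps, analogous to extension points along the path
# a message travels on the client or server side.

class Pipeline:
    def __init__(self):
        self.inspectors = []  # custom steps injected by the caller

    def add_inspector(self, fn):
        self.inspectors.append(fn)

    def send(self, message):
        # Each injected step sees (and may transform) the message in order.
        for inspect in self.inspectors:
            message = inspect(message)
        return message

pipeline = Pipeline()
pipeline.add_inspector(lambda m: {**m, "serialized": True})    # e.g. a custom serialization concern
pipeline.add_inspector(lambda m: {**m, "thread_tag": "pool"})  # e.g. a custom threading concern
result = pipeline.send({"body": "hello"})
print(result)  # {'body': 'hello', 'serialized': True, 'thread_tag': 'pool'}
```

The point of the sketch is only the shape: the host owns the path, and callers hook custom behavior into well-defined stages along it.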
- Craig said that WCF was a “software factory” for communication. There’s that expression again, “software factory”. Since its inception, “software factory” has been liberally bandied about across a number of software development spaces. If I recall correctly, the term had a precise meaning that doesn’t encompass each and every domain-specific API that comes along. Ok, I’ve just returned from an evening out with my colleagues where we discussed my objections to the liberal use of the expression “software factory”, which now seems to include the realm of WCF. My colleague, Adam Bowron, put forth an excellent argument in defense of associating “software factory” with the code generated by WCF. He explained that the generation of the WCF code infrastructure to support a specified contract and protocol requirement (i.e. a specific service communication domain) can legitimately be labeled an example of a software factory. When the term “software factory” was first “born”, I believe this was the basic description: “A Software Factory is a software product line that configures extensible development tools like Visual Studio Team System with packaged content like DSLs, patterns, frameworks and guidance, based on recipes for building specific kinds of applications.” (http://www.theserverside.net/news/thread.tss?thread_id=29651)
Does this description apply to WCF? Perhaps. WCF has patterns and guidance but are they packaged and primed to kick-start the development of WCF solutions?
The next morning session in line was a presentation called Avoiding 3 Common Pitfalls in Service Contract Design. Tim Ewald (Principal Architect at Foliage Software Systems) did a reasonably good job of explaining his position against building consensus around the definition of enterprise-wide canonical representations of business entities. One of his central tenets is that flexibility in data models is one of the keys to SOA success. The three common pitfalls he talked about were:
- Too much required data. The trouble is that you may want to adopt a corporate data model for a certain business entity (e.g. a customer) but the system you are dealing with simply cannot supply some of the schema’s required information; an issue of cardinality. This forces you to take one of two approaches: 1) Gather data just to fulfill the schema requirements, or 2) Use garbage data. The solution is to make the schema elements optional (i.e. minOccurs="0"). With a schema you should be saying “if you can supply the data in this shape (i.e. the correct sequence of elements) I’m ok, but if you change the shape (i.e. out of order elements/types) then we have a problem”. Bottom line: enforce the occurrence constraints down at the system level.
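In XSD terms, the “relax occurrence, keep the shape” idea looks like the fragment below. This is purely illustrative (the Customer type and its element names are invented), but it shows the mechanics: `xs:sequence` still fixes element order while `minOccurs="0"` lets a system that lacks a field omit it rather than fabricate garbage data.

```
<!-- Illustrative only: a relaxed "customer" type. A field a given
     system may not have (middleName) is optional via minOccurs="0",
     but xs:sequence still enforces the overall shape/order. -->
<xs:complexType name="Customer" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:sequence>
    <xs:element name="firstName"  type="xs:string"/>
    <xs:element name="middleName" type="xs:string" minOccurs="0"/>
    <xs:element name="lastName"   type="xs:string"/>
  </xs:sequence>
</xs:complexType>
```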
- No Solution for Versioning. Before designing data contracts, it is crucial to first establish a clear schema versioning policy that governs how the schema can change. Tim suggested that changing namespaces makes systems incompatible and offers “lower value”, and went on to say that namespace changes can be minimized with a few simple rules: 1) Create an instance document based on the schema version you have, 2) Always consume an instance based on the schema version you have, 3) If you want to do validation, make sure you do careful validation that doesn’t fail if, for example, new elements are detected at the end of types (i.e. the core schema version has changed to include new elements). If you want to develop flexible, loosely coupled systems then you need to be prepared to ignore some schema validation errors (e.g. unknown elements that appear at the end of a sequence). Flexibility is enhanced by a versioning policy that allows clients to extend their data contract to include new types or new elements at the end of a sequence.
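Rule 3 above (careful validation that tolerates trailing additions) can be sketched with Python’s standard-library XML parser. The element names are invented for illustration; the point is that a v1 consumer reads the elements it knows and simply ignores elements a later schema version appended:

```python
# A "v1" consumer reading a "v2" instance: it pulls only the elements
# it knows about and tolerates unknown elements appended at the end of
# the sequence, rather than failing validation outright.
import xml.etree.ElementTree as ET

KNOWN_V1_ELEMENTS = {"firstName", "lastName"}  # illustrative v1 contract

def read_customer(xml_text):
    root = ET.fromstring(xml_text)
    data = {}
    for child in root:
        if child.tag in KNOWN_V1_ELEMENTS:
            data[child.tag] = child.text
        # else: an element added in a newer version -- ignored, not an error
    return data

v2_instance = """
<customer>
  <firstName>Ada</firstName>
  <lastName>Lovelace</lastName>
  <loyaltyTier>gold</loyaltyTier>
</customer>
"""

print(read_customer(v2_instance))
# {'firstName': 'Ada', 'lastName': 'Lovelace'}
```

The strictness Tim argues against would reject this instance outright because of `loyaltyTier`; the lenient consumer keeps the two systems interoperating without a namespace change.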
- No System-Level Extensions. What if you want to add new elements to your version of the schema? According to Tim you should be able to extend your version of the schema and, therefore, the core schema authors should support this by “leaving a slot in the schema for clients to extend their version”. Tim went on to say that by enabling and promoting extension capabilities, the core schema team could “learn” from the extensions applied to “local/departmental” versions and then incorporate those extensions, as needed, into the core schema over time (i.e. a “harmonious” and flexible evolution of the core canonical representation driven by “feedback from the trenches”).
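The “slot in the schema” idea has a direct XSD expression: a wildcard at the end of the core sequence. Again this fragment is illustrative rather than anything Tim showed, but it captures the mechanism:

```
<!-- Illustrative only: an explicit extension "slot" at the end of the
     core type. Local/departmental versions can add elements here (in
     their own namespace) without breaking consumers of the core schema. -->
<xs:complexType name="Customer" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:sequence>
    <xs:element name="firstName" type="xs:string"/>
    <xs:element name="lastName"  type="xs:string"/>
    <!-- the slot: elements from other namespaces, validated laxly -->
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```

Extensions that prove broadly useful can later be promoted from the slot into named elements in the core schema, which is exactly the “feedback from the trenches” evolution Tim described.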
Tim summed up by asking us to rethink the “big bang” canonical design approach and, instead, start with a base model designed to facilitate the evolution of incremental value. This approach, according to Tim, would allow the user to adopt core schema versions at their own pace without sacrificing business goals.
After a quick bite of lunch, my colleague (Adam Bowron) and I hooked up with Jim Boyer (a great Technical Specialist from Microsoft). We walked across the street to the BizTalk product building for a meeting to discuss BizTalk 2006 end-to-end message ordering with one of the Microsoft BizTalk product/program managers. I won’t go into the details at this point except to say that the discussions centered around enabling BizTalk to maintain FIFO message order regardless of the impact of multiple orchestrations and other BizTalk activities which can alter the original order.
Our meeting at the BizTalk building cut into the first afternoon sessions and I arrived half-way through the presentation on Building an ESB on the Microsoft Platform. I really didn’t take away much from the remainder of the session except that it looks like there is some interesting work going on around the BizTalk platform to provide a compelling ESB-esque story. My colleague mentioned that there is good work being done around enterprise-level exception management (leveraging the BizTalk 2006 failed message routing feature).
My final session of the day was entitled Get Your Processes Talking! Speech, Workflow & BizTalk United. Presented by Jon Fancey (Principal Consultant, NetStore) and Albert Kooiman (Senior Business Development Manager in the Unified Communications Group), the aim was to show how Communication Server 2007 and BizTalk can work together to deliver speech-oriented workflow solutions. This session was really all about showcasing Communication Server 2007 with a little dash of BizTalk. The speech capabilities in Communication Server appeared impressive, and the design experience in Visual Studio looked great (i.e. creating a new speech workflow solution). The intention is to optimize human workflow processes via automation and thus improve service levels. My thinking at this point was that organizations need to be extremely careful about fronting their service function with a machine. Not all customer service scenarios are suitable for speech-based human workflow processes; I would go so far as to say that few are. When customers make a service call to an organization, they typically do not want to talk to a machine.