
Archive for January, 2008

Suspending a BizTalk Orchestration with Delay Shapes

January 31, 2008

Okay, here's a question for you: what happens to an orchestration that is suspended while it's sitting in a Delay shape? Well, let's find out.

First, I built a simple schema to kick off my orchestration (nothing surprising here). Then, I built a simple orchestration:

Timer Orchestration Example

Write_Time simply prints out the time to the debugger, e.g.

System.Diagnostics.Debug.WriteLine("Time: " + System.DateTime.Now.ToLongTimeString());

The Delay shape causes a delay of 1 minute. I built my project and deployed it, and then opened up the Debugger. Next, I dropped in a message to kick off my orchestration. I got my first debug statement:

[3404] Time: 10:30:34 AM

I then suspended the orchestration at 10:30:46, waited until 10:31:35, and then resumed it. So here's the question: will it A) print the current time as soon as it's resumed, B) wait about 48 seconds (the time that was left on the Delay when I suspended) to print, or C) wait another full minute to print? Let's see our next debug statement:

[3404] Time: 10:31:35 AM

Interesting. While suspended, it appears that the Delay shape keeps track of time. Let's do some more tests. This time I left the orchestration running until 10:32:30, suspended it again, and resumed it at 10:33:25. My next message in the debugger was:

[3404] Time: 10:33:25 AM

Hmmm… will it print every time I resume the orchestration, or did this happen because more than a minute had passed? Let's do some more tests. I suspended the orchestration at 10:34:00 and resumed it at 10:34:05. My next printout was:

[3404] Time: 10:34:26 AM

Interesting! No, it isn't just printing when I resume (it's not just going back to the first shape after the Delay). Let's do another test (after I do some other stuff). A new message kicks off an orchestration and we get our first debug statement:

[3404] Time: 11:31:39 AM

Then I suspended the orchestration at 11:31:50, waited a couple of minutes (until 11:34:00), and resumed it. Here's what we got for the next few printouts:

[3404] Time: 11:34:00 AM
[3404] Time: 11:35:00 AM
[3404] Time: 11:36:00 AM

So, in conclusion: if the orchestration is resumed before the specified Delay time has elapsed, it continues to wait out the original delay. If more time than that has already passed, the orchestration picks up right after the Delay shape and continues. In my case that means I'm now on a new one-minute interval, one that ends right on the minute (at 00 seconds).

You might be asking, so what? Well, here's why this is important. Let's say you are using a Delay shape in your orchestration to handle retries… this means that if you suspend your orchestration for any reason, the retry clock keeps running! This is definitely something to account for when you're using a Delay shape in an orchestration. The good news is that if your Delay shape runs out of time while suspended, the orchestration simply picks up right after the Delay shape when resumed. Good luck developing!
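If the retry window really matters to you, one defensive option is to compute an absolute deadline up front and have the Delay shape wait only for the time that actually remains. Here's a minimal C# sketch of that idea; the class and method names are mine (something you'd call from an Expression shape via a referenced helper assembly), not part of any BizTalk API.

using System;

// Illustrative helper (not part of BizTalk): compute how long a Delay shape
// should actually wait, given an absolute deadline captured before the loop.
public static class RetryTiming
{
    // Capture this once, before entering the retry loop.
    public static DateTime DeadlineFromNow(TimeSpan retryWindow)
    {
        return DateTime.UtcNow.Add(retryWindow);
    }

    // Call this just before the Delay shape; never returns a negative span.
    public static TimeSpan RemainingWait(DateTime deadline)
    {
        TimeSpan remaining = deadline - DateTime.UtcNow;
        return remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
    }
}

An Expression shape could assign RetryTiming.RemainingWait(deadline) to the System.TimeSpan variable your Delay shape uses, so a suspend/resume can't silently eat the whole retry window.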

Categories: BizTalk Server

BizTalk Server 2006 SQL Adapter Lock Resolution

January 24, 2008

We just fixed a rather hairy problem here at work that I’m sure will benefit someone out there. Here’s a brief description:

Using BizTalk Server 2006 (R1), we had a problem with a particular application that used the SQL Server Adapter for polling. It ran a very simple stored procedure every 30 seconds; the procedure basically queried a table and then truncated it. The problem we were facing was that the SQL Server DBA kept finding the table exclusively locked after some amount of time (usually within a minute or so). We tried a bunch of things to no avail and then decided to call Microsoft.

The Microsoft gentleman tried a few things here and there, but the problem persisted until he called back with a new idea. We had two Receive Handlers associated with the SQL Adapter, but only one was in use, so I was asked to delete the second one. I reluctantly agreed after confirming that no one else was using that Receive Handler. Well, guess what – that fixed the issue. I wish I could have shared some obscure registry key that we flipped or something, but no. I guess the moral of the story is to use only a single Receive Handler for your SQL Adapter.

Categories: BizTalk Server

Send Port Groups

January 12, 2008

This probably won’t be anything new for most experienced BizTalk developers, but I learned something so I figured I’d write it down to make sure I don’t forget.

At work we have several systems that subscribe on message type. Before moving to our Test and Prod environments, I figured it'd be nice to simplify maintenance by using a Send Port Group. So I read the section on Send Port Groups in the book by Darren Jefford, Kevin B. Smith, and Ewan Fairweather. It didn't quite address my question the way I expected, so I figured I'd just try things out.

What I had in mind worked as I suspected for my simple cases. For example, when I had

  • SendPort1 – subscribing on MessageTypeA
  • SendPort2 – subscribing on MessageTypeA
  • etc…

I created a Send Port Group and added a filter on MessageTypeA. Then I removed the filters from SendPort1 & SendPort2, and sure enough things worked fine. So then I moved on to my more complex case (simplified here).

  • SendPort5 – subscribing on MessageTypeA
  • SendPort6 – subscribing on MessageTypeA and CustomFilter1
  • etc…

I figured I could have a new Send Port Group filtering on MessageTypeA and add both SendPort5 and SendPort6, and that I could then add an additional filter to SendPort6 (that of CustomFilter1). Want to guess what happened?

I got two messages at my SendPort6 destination for an incoming message of MessageTypeA meeting CustomFilter1's criteria. This is when I realized that I hadn't understood the book well. As you probably know, the Send Port Group's filter creates one subscription, and the filter on SendPort6 creates a second, independent subscription; when a message satisfies both, SendPort6 sends it twice. I hadn't expected this.
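Conceptually, the message box ends up with two independent subscriptions that both route to SendPort6, something like the sketch below (the message-type URI is just a placeholder):

// Subscription 1 – the Send Port Group (SendPort6 is a member):
//     BTS.MessageType == "http://tempuri.org/schemas#MessageTypeA"
// Subscription 2 – SendPort6's own filter:
//     BTS.MessageType == "http://tempuri.org/schemas#MessageTypeA"  AND  CustomFilter1
// A message that satisfies both predicates is routed once per subscription,
// so SendPort6 transmits two copies of it.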

Categories: BizTalk Server

What is DataFlux?

January 12, 2008

Update 27-MAY-2015: The DataFlux post below has been visited many times a day by readers from all over the world, but readers were often disappointed to learn that DataFlux is now fully integrated with SAS; although that provides a great feature set, it also makes the software much more complicated, more expensive, and, in a few words, not as easy to use as it once was.

Well, today I have some great news to share. A new data quality add-in for Microsoft Excel called Aim-Smart is now on the market. It's powerful, easy to use, and it runs inside of Excel, making it the perfect tool for data stewards.

Here’s the original post:


It’s been a while since my last post, so I thought I’d share something on DataFlux.

So what is DataFlux? A leader in data quality, it's both a company and a product. Better stated, DataFlux (the company) provides a suite of tools (often simply called DataFlux) that deliver data management capabilities with a focus on data quality.

DataFlux's tools can do a lot of really neat things; I'd say they're a must-have for Sales & Marketing, and they'd benefit most enterprises out there in other ways too. To see what all the fuss is about, let's use an example. Think of these entries in your company's xyz system:

Name                  | Address                     | City, State, Zip            | Phone
Mr. Victor Johnson    | 1600 Pennsylvania Avenue NW | Washington, DC 20500        | 202-456-1414
Victor Jonson, JD     | 1600 Pennsylvania Avenue    | Washington, DC              | 456-1414
VICTOR JOHNSON        | 255 DONNA WAY               | SAN LUIS OBISPO, CA 93405   | (805) 555-1212
Bill Shares           | 1050 Monterey St            | SLO, CA 93408               | 8055444800
Doctor William Shares | 1052 Monterrey Ave          | San Luis Obispo, California | n/a
william shares, sr    | 1001 cass street            | omaha, nebraska, 68102      |

In this example, a human could pretty easily figure out that the first two Victors are probably one and the same, and that Bill in SLO and William in San Luis Obispo are also the same person. The other records might be matches, but most of us would agree that we can't be sure based on name alone. Furthermore, it's obvious that some data inconsistencies exist, such as name prefixes and suffixes, inconsistent casing, incomplete address data, etc. DataFlux can't fix (and shouldn't try to fix) all of these quirks, but it should at least be able to reconcile the differences, and, if we choose, we should be able to do some data cleanup automatically. So let's get started. I'll open up dfPower Studio.

dfPower Studio Main Window

This interface is new in version 8 and provides quick access to the functions one would use most often. This change is actually helpful (as opposed to some GUI changes companies make), combining a lot of the settings in a central place.

In my case I'll start Architect, which is where most design work takes place, by clicking on the icon in the top left. On this note I should say that Architect is the single most useful product in the suite (in my opinion, anyway), and it's where I'll spend most of my time in this post.

DataFlux Architect Initial Screen

On the left panel you'll see a few categories. Let me explain what you'll find in each one (skip over this next section if you want):

Data Inputs – Here you’ll find nodes allowing you to read from ODBC sources, text files, SAS data sets (DataFlux is a SAS company), and more. I’ll cover one other data input later…

Data Outputs – Similar to inputs, you’ll find various ways of storing the output of the job.

Utilities – Utilities contain what many would refer to as "transformations", which might be helpful to know if you've worked with Informatica or another ETL (Extract, Transform, Load) tool.

Profiling – Most nodes here help provide a synopsis of the data being processed. Another DataFlux tool is dedicated to profiling – in some ways these nodes are a subset of that tool's functionality, but there's one primary difference: here the output of profiling can be linked to other actions.

Quality – Here's where some of DataFlux's real magic takes place, so I'll briefly describe each node:

  • Gender Analysis – determine gender based on a name field
  • Identification Analysis – e.g., is this a person's name or an organization's name?
  • Parsing – we'll see this
  • Standardization – we'll see one application of this
  • Change Case – although generally not too complicated, this gets tricky with certain alphabets
  • Right Fielding – move data from the "wrong" field to the "right" one
  • Create Scheme – new in Version 8; more of an advanced topic
  • Dynamic Scheme Application – new in Version 8; another advanced topic

Integration – Another area where magic takes place. We’ll see this in this post.

Enrichment – As the name suggests, these nodes help enrich data, i.e. they provide data that’s missing in the input. This section includes: address verification (we’ll see this), geocoding (obtaining demographic and other information based on an address) and some phone number functions (we’ll see one example).

Enrichment (Distributed) – Provides the same functionality as I just described, but distributed across servers for performance/reliability gains.

Monitoring – Allows for action to take place on a data trigger, e.g. email John if sales fall under $10K.

Now that we've gone through a quick overview of Architect's features, let's use them. I'll first drag my data source onto the page and double-click it to configure its properties. For my purposes today I'll read from a delimited text file I created with the data I described at the beginning of the article. I can use the "Suggest" button to populate the field names based on the header of the text file.

Text Input Properties

What’s nice here is I can have auto-preview on (which by the way drives me crazy), or I can turn it off and press F5 for a refresh, which shows the data only when asked. Either way, the data will appear in my preview window (instant gratification is one of the great things about Architect).

Preview of Input Data

Next, I'll start the data quality work by verifying these addresses. I do this by dragging on the Address Verification (US/Canada) node. After attaching the node to Text File Input 1 and double-clicking it, I map my fields in the input section to the ones expected by DataFlux, and in another window I specify which outputs I'm interested in. I've selected a few fields here, but there are many other options available.

Address Verification Properties

Preview
You'll notice here that I've passed through only the enriched address fields in the output. I could have kept the originals side by side, and I could have added many more fields to the output, but these will suffice for now (it'd be tough to fit them all on the screen here). Already you can see what a difference we've made. I want to point out just two things here:

1. There is one “NOMATCH”. This is likely to have happened because too many fields are wrong and the USPS data verification system is designed not to guess too much…

2. 1052 Monterey St is an address I made up, and consequently the Zip-4 could not be determined. The real address for the courthouse in San Luis Obispo is 1050 Monterey St. If I had used that, the correct Zip-4 would have been calculated. So why did we get a US_Result_Code of "OK"? Because the USPS system recognizes 1052 as an address within a valid range.

Nonetheless, pretty neat, eh? I'd also like to point out that the county name was determined because I added that output when I configured the properties. At our company we've configured DataFlux to comply with USPS Publication 28, which, among other things, indicates that addresses should always be uppercased; that's why the output appears that way here. Having said this, you have the option to propercase the result set if you'd like.

Moving on, let's clean up the names. It'd be nice if we could split each name into a first & last name. First, I reconfigured the USPS properties to allow additional outputs (the original name & phone number). Next, I dragged the Parsing node onto the screen and configured its properties to indicate which language & country the text is based on (DataFlux supports several locales, and version 8 supports Unicode). After that, I can preview as before. Note how well DataFlux picked out the first, middle and last names, not to mention the prefixes and suffixes.

DataFlux Parsing

For simplicity, I'll remove the Parse step I just added and use a Standardize node instead. Here in the properties I'll select a "Definition" for the name and phone inputs. There are many options to choose from, including Address, Business Title, City, Country, Name, Date, Organization, Phone, Postal Code, Zip, and several others. Let's see what this does…

DataFlux Standardization

You might be wondering how DataFlux does this. After all, if the input name were "Johnson, Victor", would it have correctly standardized the name to "Victor Johnson"? The answer here is yes. DataFlux utilizes several algorithms plus lists of known last names, first names, etc. to analyze the structure and provide a best "guess." Of course, this means that with very unusual names the parsing algorithm could make a mistake; nonetheless, I think most users would be surprised at how good this "guessing" can be, especially with the help of a comma. By that I mean that the placement of a comma in a name greatly enhances the parser's ability to determine the location of the last name. If you're interested in learning more about this, let me know and perhaps I'll write another post going into the details. All in all, it's pretty neat stuff, and of course the good part is that it's customizable. This helps if someday you want to write a standardization rule for your company's specific purpose.
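Just to make the comma idea concrete, here's a toy C# sketch of comma-aware name splitting. This is emphatically not DataFlux's algorithm (which also draws on known-name lookups and locale-specific rules); it only illustrates why a comma removes most of the ambiguity.

using System;
using System.Globalization;

// Toy illustration only: split a raw name into (first, last) the naive way,
// treating a comma as the strongest clue. Real tools also use name dictionaries.
public static class NaiveNameParser
{
    public static (string First, string Last) Parse(string raw)
    {
        TextInfo ti = CultureInfo.InvariantCulture.TextInfo;
        raw = raw.Trim();

        int comma = raw.IndexOf(',');
        if (comma >= 0)
        {
            // "Johnson, Victor" -> everything before the comma is the last name.
            string last = raw.Substring(0, comma).Trim();
            string first = raw.Substring(comma + 1).Trim();
            return (ti.ToTitleCase(first.ToLowerInvariant()), ti.ToTitleCase(last.ToLowerInvariant()));
        }

        // No comma: guess that the first token is the given name and the final token the surname.
        string[] parts = raw.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        string firstGuess = parts.Length > 0 ? parts[0] : "";
        string lastGuess = parts.Length > 1 ? parts[parts.Length - 1] : "";
        return (ti.ToTitleCase(firstGuess.ToLowerInvariant()), ti.ToTitleCase(lastGuess.ToLowerInvariant()));
    }
}

NaiveNameParser.Parse("Johnson, Victor") comes back as ("Victor", "Johnson"), but "Mr. Victor Johnson" would yield ("Mr.", "Johnson") – exactly the kind of case where the dictionary-driven prefix/suffix handling in a real tool earns its keep.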

Let's move on. Next I'm going to make "Match Codes." Match codes allow duplicate identification (and resolution). Oftentimes (perhaps most of the time), nothing can be done about data in a system once it has been entered. For example, if a name is entered as Rob, we can't assume the real name is Robert, yet we'd still like a way to figure out that one record is a potential duplicate of another… this is where match codes come in. Here's the section of the Match Codes Properties window where we assign the incoming fields to a Definition. This step is important because the intelligent parsing, name lookups, etc. happen based on the data type.

DataFlux Match Codes Properties

Let’s preview a match code to see what this does.

Match Codes

I couldn't get the whole output to fit on the screen here, but I think the match codes for the name and the address will get my point across. Here you can see that match codes ignore minor spelling differences and take into account abbreviations, nicknames, etc. Why is this so significant? We now have an easy way to find duplicates! Match codes could be stored in a database and allow quick checks for duplicates! Let's move on to see more… Next I'm going to use Clustering to see how duplicate identification can be done. First, I'll set the clustering rules in the Properties window (note that I use the match code instead of the actual value in the rule):

Cluster Conditions

And let’s preview…

Cluster Preview

Note that the cluster numbers are the same for records that match, based on the clustering conditions I set a moment ago. Pay special attention to the fact that our Bill & William Shares didn't match. Why? Well, because of the clustering conditions I set. We could modify our Quality Knowledge Base (QKB) to indicate that SLO = San Luis Obispo, or I could remove City as a clustering condition and lower the sensitivity on the address match code (sensitivities range from 50 to 95), and the two would match. Let's do this to be sure:

Cluster Preview Final
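If you're curious what "store the match codes and cluster on them" might look like outside of DataFlux, here's a rough C# sketch. The MakeMatchCode function is a deliberately crude stand-in (uppercase, strip non-letters, truncate); DataFlux's real match codes are far more sophisticated (phonetics, nicknames, configurable sensitivities), but the grouping step is the same idea.

using System;
using System.Linq;
using System.Text.RegularExpressions;

// Rough illustration of clustering on match codes; NOT DataFlux's algorithm.
public static class MatchCodeDemo
{
    // Crude stand-in for a match code: uppercase, drop non-letters, truncate.
    // The "sensitivity" here is just a length cap, loosely mimicking the idea
    // that a lower sensitivity means a fuzzier (shorter) code.
    public static string MakeMatchCode(string value, int sensitivity = 8)
    {
        string normalized = Regex.Replace(value ?? "", "[^A-Za-z]", "").ToUpperInvariant();
        return normalized.Length > sensitivity ? normalized.Substring(0, sensitivity) : normalized;
    }

    public static void Main()
    {
        string[] names = { "Victor Johnson", "Victor Jonson", "VICTOR JOHNSON" };

        // Group records whose match codes collide; each group is a candidate duplicate cluster.
        var clusters = names
            .Select(n => new { Name = n, Code = MakeMatchCode(n) })
            .GroupBy(x => x.Code);

        foreach (var cluster in clusters)
        {
            Console.WriteLine(cluster.Key + ": " + string.Join(" | ", cluster.Select(x => x.Name)));
        }
    }
}

A real match code would also strip prefixes like "Mr." and map nicknames (Bill to William), which is why DataFlux can cluster the messier variants in the sample data as well.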

There are a lot of really neat things that DataFlux can do. I’ll try to post a thing or two out here now and again if I see anyone interested…

Categories: DataFlux