
What is DataFlux?

It’s been a while since my last post, so I thought I’d share something on DataFlux.

DataFlux Logo

So what is DataFlux? A leader in data quality, it's both a company and a product; more precisely, DataFlux (the company) provides a suite of tools (often simply called DataFlux) that deliver data management capabilities, with a focus on data quality.

DataFlux's tools can do a lot of really neat things; I'd say the suite is a must-have for Sales & Marketing, and it would benefit most enterprises in other ways too. To see what all the fuss is about, let's use an example. Consider these entries in your company's xyz system:

Name | Address | City, State, Zip | Phone
Mr. Victor Johnson | 1600 Pennsylvania Avenue NW | Washington, DC 20500 | 202-456-1414
Victor Jonson, JD | 1600 Pennsylvania Avenue | Washington, DC | 456-1414
VICTOR JOHNSON | 255 DONNA WAY | SAN LUIS OBISPO, CA 93405 | (805) 555-1212
Bill Shares | 1050 Monterey St | SLO, CA 93408 | 8055444800
Doctor William Shares | 1052 Monterrey Ave | San Luis Obispo, California | n/a
william shares, sr | 1001 cass street | omaha, nebraska, 68102 |

In this example, a human could pretty easily figure out that the first two Victors are probably one and the same, and that Bill in SLO and William in San Luis Obispo are also the same person. The other records might be a match, but most of us would agree that we can't be sure based on name alone. Furthermore, some data inconsistencies are obvious: name prefixes and suffixes, inconsistent casing, incomplete address data, etc. DataFlux can't (and shouldn't try to) fix all of these quirks, but it should at least be able to reconcile the differences, and, if we choose, we should be able to do some data cleanup automatically. So let's get started. I'll open up dfPower Studio.

dfPowerStudio Main Window

This interface is new in version 8 and provides quick access to the functions one would use most often. This change is actually helpful (as opposed to some GUI changes companies make), combining a lot of the settings in one central place.

In my case, I'll start Architect, where most design takes place, by clicking on the icon in the top left. On that note, I should say that Architect is the single most useful product in the suite (in my opinion, anyway), and it's where I'll spend most of my time in this post.

DataFlux Architect Initial Screen

On the left panel you'll see a few categories. Let me explain what you'll find in each one (skip over this next section if you want):

Data Inputs – Here you’ll find nodes allowing you to read from ODBC sources, text files, SAS data sets (DataFlux is a SAS company), and more. I’ll cover one other data input later…

Data Outputs – Similar to inputs, you’ll find various ways of storing the output of the job.

Utilities – Utilities contain what many would refer to as "transformations", which might be helpful to know if you've worked with Informatica or another ETL (Extract, Transform, Load) tool.

Profiling – Most nodes here provide a synopsis of the data being processed. Another DataFlux tool is dedicated to profiling; in some ways these nodes are a subset of that tool's functionality, but there's one primary difference: here, the output of profiling can be linked to other actions.

Quality – Here's where some of DataFlux's real magic takes place, so I'll briefly describe each node: Gender Analysis (determine gender based on a name field), Identification Analysis (e.g., is this a person's name or an organization's name?), Parsing (we'll see this), Standardization (we'll see one application of this), Change Case (generally not too complicated, though it gets tricky with certain alphabets), Right Fielding (move data from the "wrong" field to the "right" one), Create Scheme (new in version 8 – more of an advanced topic), and Dynamic Scheme Application (new in version 8 – another advanced topic).

Integration – Another area where magic takes place. We’ll see this in this post.

Enrichment – As the name suggests, these nodes help enrich data, i.e. they provide data that’s missing in the input. This section includes: address verification (we’ll see this), geocoding (obtaining demographic and other information based on an address) and some phone number functions (we’ll see one example).

Enrichment (Distributed) – Provides the same functionality as I just described, but distributed across servers for performance/reliability gains.

Monitoring – Allows action to be taken on a data trigger, e.g. email John if sales fall under $10K (a rough sketch of that idea follows this list).
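To make that monitoring idea concrete, here's a minimal sketch of such a trigger rule in Python. This is purely illustrative and is not DataFlux's rule syntax; the mail server, addresses, and field names are all made up:

```python
import smtplib
from email.message import EmailMessage

SALES_THRESHOLD = 10_000  # the trigger: sales falling under $10K

def check_sales(rows):
    """Scan incoming rows and email an alert for any that trip the rule."""
    for row in rows:
        if row["sales"] < SALES_THRESHOLD:
            msg = EmailMessage()
            msg["Subject"] = f"Sales alert: {row['region']} at ${row['sales']:,}"
            msg["From"] = "monitor@example.com"   # hypothetical sender
            msg["To"] = "john@example.com"        # "email John"
            msg.set_content(f"Sales for {row['region']} fell under ${SALES_THRESHOLD:,}.")
            with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical server
                smtp.send_message(msg)

# check_sales([{"region": "West", "sales": 8200}]) would send one alert.
```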

Now that we've gone through a quick overview of Architect's features, let's use them. I'll first drag my data source onto the page and double-click on it to configure its properties. For my purposes today I'll read from a delimited text file I created with the data described at the beginning of the article. I can use the "Suggest" button to populate the field names based on the header of the text file.

Text Input Properties

What's nice here is that I can have auto-preview on (which, by the way, drives me crazy), or I can turn it off and press F5 to refresh, which shows the data only when asked. Either way, the data will appear in my preview window (instant gratification is one of the great things about Architect).

Preview of Input Data
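By the way, outside the GUI, the equivalent of this text-file input step is just a delimited read where the field names come from the header row, which is essentially what "Suggest" automates. A quick Python sketch, with a made-up filename:

```python
import csv

# DictReader picks up the field names from the file's header row,
# much like Architect's "Suggest" button does.
with open("contacts.txt", newline="") as f:
    reader = csv.DictReader(f)   # assumes a comma-delimited file with a header
    rows = list(reader)

print(reader.fieldnames)  # the column names found in the header
print(rows[0])            # the first data record as a dict
```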

Next, I'll start the data quality work by verifying these addresses. I do this by dragging on the Address Verification (US/Canada) node. After attaching the node to Text File Input 1 and double-clicking on it, I map my fields in the input section to the ones DataFlux expects, and in another window I specify which outputs I'm interested in. I've selected a few fields here, but there are many other options available.

Address Verification Properties

Preview
You'll notice I've passed through only the enriched address fields in the output. I could have kept the originals side by side, and I could have added many more fields to the output, but these will suffice for now (it'd be tough to fit more on the screen here). Already you can see what a difference we've made. I want to point out just two things here:

1. There is one "NOMATCH". This likely happened because too many fields are wrong, and the USPS data verification system is designed not to guess too much…

2. 1052 Monterey St is an address I made up, and consequently the ZIP+4 could not be determined. The real address of the courthouse in San Luis Obispo is 1050 Monterey St. Had I used that, the correct ZIP+4 would have been calculated. So why did we get a US_Result_Code of "OK"? Because the USPS system recognizes 1052 as an address within a valid range.
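Downstream, you would typically branch on these result codes. Here's a minimal Python sketch of that kind of routing; the NOMATCH/OK values come from the output above, but the function and field names are my own invention:

```python
def route_verified_address(record):
    """Send verified rows onward; park failures for manual review."""
    code = record.get("US_Result_Code")
    if code == "OK":
        return "load"    # verified (though possibly without a ZIP+4, as above)
    return "review"      # NOMATCH or anything else: be conservative

# route_verified_address({"US_Result_Code": "NOMATCH"})  -> "review"
```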

Nonetheless, pretty neat, eh? I'd also like to point out that the county name was determined because I added that output when I configured the properties. At our company we've configured DataFlux to comply with USPS Publication 28, which, among other things, indicates that addresses should always be uppercased; that's why you see uppercase here. Having said this, you have the option to propercase the result set if you'd like.

Moving on, let's clean up the names. It'd be nice if we could split the names into a first and last name. First, I reconfigured the USPS properties to allow additional outputs (the original name and phone number). Next, I dragged the Parsing node onto the screen and configured its properties to identify the language and country the text is based on (DataFlux supports several locales and, in version 8, supports Unicode). After that, I can preview as before. Note how well DataFlux picked out the first, middle and last names, not to mention the prefixes and suffixes.

DataFlux Parsing

For simplicity, I'll remove the Parse step I just added and use a Standardize node instead. In the properties I'll select a "Definition" for the name and phone inputs. There are many options to choose from, including: Address, Business Title, City, Country, Name, Date, Organization, Phone, Postal Code, Zip, and several others. Let's see what this does…

DataFlux Standardization

You might be wondering how DataFlux does this. After all, if the input name were "Johnson, Victor", would it have correctly standardized the name to "Victor Johnson"? The answer is yes. DataFlux uses several algorithms along with tables of known last names, first names, etc. to analyze the structure and provide a best "guess." Of course, this means that with very unusual names the parsing algorithm could make a mistake; nonetheless, I think most users would be surprised how good this "guessing" can be, especially with the help of a comma. By that I mean that the placement of a comma in a name greatly enhances the parser's ability to determine the location of the last name. If you're interested in learning more about this, let me know and perhaps I'll write another post going into the details. All in all, it's pretty neat stuff, and the good part is that it's customizable, which helps if someday you want to write a standardization rule for your company's specific purpose.
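To give a rough flavor of that comma heuristic (and only that; DataFlux's real parser draws on large name tables and several algorithms), here's a toy Python sketch with deliberately tiny prefix/suffix lists:

```python
KNOWN_PREFIXES = {"mr", "mrs", "ms", "dr", "doctor"}
KNOWN_SUFFIXES = {"jr", "sr", "jd", "md", "ii", "iii"}   # tiny illustrative lists

def parse_name(raw):
    """Toy name parser: a comma strongly hints at 'Last, First' ordering."""
    tokens = [t.strip(".") for t in raw.replace(",", " , ").split()]
    prefix = tokens.pop(0) if tokens and tokens[0].lower() in KNOWN_PREFIXES else ""
    suffix = tokens.pop() if tokens and tokens[-1].lower() in KNOWN_SUFFIXES else ""
    if tokens and tokens[-1] == ",":       # the comma only set off the suffix
        tokens.pop()
    if "," in tokens:                      # "Johnson, Victor": last name first
        i = tokens.index(",")
        last, first = tokens[:i], tokens[i + 1:]
    else:                                  # "Victor Johnson": last name last
        first, last = tokens[:-1], tokens[-1:]
    return {"prefix": prefix, "given": " ".join(first),
            "family": " ".join(last), "suffix": suffix}

# parse_name("Johnson, Victor")    -> given 'Victor', family 'Johnson'
# parse_name("Victor Jonson, JD")  -> given 'Victor', family 'Jonson', suffix 'JD'
```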

Let's move on. Next I'm going to make "Match Codes." Match codes allow duplicate identification (and resolution). Often (perhaps most of the time), nothing can be done about data in a system once it's entered. For example, if a name is Rob, we can't assume the real name is Robert, yet we may have a burning desire to do something like that in order to figure out that one record is a potential duplicate of another… this is where match codes come in.
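If you're curious what a match code conceptually is, here's a greatly simplified Python sketch of the idea: normalize aggressively, map nicknames to a canonical form, and keep only a crude skeleton of the value. DataFlux's real match codes are far more sophisticated (and sensitivity-aware); the nickname table here is a made-up sample:

```python
import re

NICKNAMES = {"bill": "william", "bob": "robert", "rob": "robert"}  # made-up sample

def match_code(name):
    """Toy match code: lowercase, canonicalize nicknames, drop vowels and 'h'."""
    words = re.sub(r"[^a-z ]", "", name.lower()).split()
    words = [NICKNAMES.get(w, w) for w in words]               # Bill -> william
    skeleton = [w[0] + re.sub(r"[aeiouh]", "", w[1:]) for w in words]
    return "".join(sorted(skeleton))                           # order-insensitive

# match_code("Bill Shares")   == match_code("william shares")  -> True
# match_code("Victor Jonson") == match_code("Victor Johnson")  -> True
```

Back in Architect, here's the section of the Match Codes Properties window where we assign the incoming fields to a Definition. This step is important because intelligent parsing, name lookups, etc. occur based on the data type.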

DataFlux Match Codes Properties

Let’s preview a match code to see what this does.

Match Codes

I couldn't get the whole output to fit on the screen here, but I think the match codes for the name and the address will get my point across. You can see that match codes ignore minor spelling differences and take into account abbreviations, nicknames, etc. Why is this so significant? We now have an easy way to find duplicates! Match codes could be stored in a database and allow quick checks for duplicates. Let's move on to see more… Next, I'm going to use Clustering to see how duplicate identification can be done. First, I'll set the clustering rules in the Properties window (note that I use the match code instead of the actual value in the rule):

Cluster Conditions

And let’s preview…

Cluster Preview

Note that the cluster numbers are the same for records that match, based on the clustering conditions I set a moment ago. Pay special attention to the fact that our Bill and William Shares didn't match. Why? Because of the clustering conditions I set. We could modify our Quality Knowledge Base (QKB) to indicate that SLO = San Luis Obispo, or I could remove City as a clustering condition and lower the sensitivity on the address match code (sensitivities range from 50 to 95), and the two would match. Let's do this to be sure:

Cluster Preview Final
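Conceptually, clustering on match codes is just grouping rows whose codes agree on the chosen conditions. Here's a bare-bones Python sketch of that idea (single exact-match rules only; real DataFlux clustering supports multiple rule combinations and per-field sensitivities):

```python
from collections import defaultdict

def cluster(records, key_fields):
    """Give the same cluster number to records whose key fields all agree."""
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[f] for f in key_fields)].append(rec)
    for n, members in enumerate(groups.values(), start=1):
        for rec in members:
            rec["cluster"] = n
    return records

# rows = cluster(rows, ["name_match_code", "address_match_code"])
# Bill Shares and William Shares now land in the same cluster, since their
# name match codes agree once the city condition is dropped.
```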

There are a lot of really neat things that DataFlux can do. I’ll try to post a thing or two out here now and again if I see anyone interested…

Categories: DataFlux
  1. Mansur B
    March 9, 2008 at 1:55 am | #1

Good job, Victor. Send some more techniques/postings for parsing missing values.

  2. Jeyashrii
    July 3, 2008 at 3:51 am | #2

This is really fantastic. Great job, Victor. This is very useful for beginners like me. Please clarify the following points for me:

1) Could you explain how the placement of a comma in a name greatly enhances the parser's ability to determine the location of the last name?

2) While building a clustering rule, why do we use the match code instead of the actual value?

3) Could you explain intelligent parsing and name lookups?

4) Could you explain about Create Scheme and Dynamic Scheme Application?

Also, please send more postings on DataFlux.

Thanks…

  3. sheetal
    January 30, 2009 at 2:10 am | #3

    Hi,

I queried a file.

1. If the table has records, then it should load another job.

Can this happen? I could not find a way to do this.

Can you help me by providing a dummy job?

    Regards

    • October 8, 2010 at 1:24 pm | #4

      The new Data Management Platform is perfect for what you are looking for. It provides a new type of job called a “process job” where you can decide whether or not to execute individual data jobs (aka Architect Jobs) within it.

  4. February 3, 2009 at 9:38 am | #5

Usually, you don't care whether the other job gets loaded. If there aren't any records in the table, then the job that gets called won't do anything (because no data is passed in). In DataFlux, at run time, embedded jobs are essentially folded into the parent. There's no way I know of to prevent an embedded job from loading into memory, but as I mentioned, if the table has no records then the embedded job's nodes won't have any data pass through. In other words, you can logically prevent data from going down one side of a branch if there's no data, but the embedded job will still be loaded into memory in case it's needed. Does this help? Let me know if I addressed your issue… if not, please provide more detail.

  5. Pushpendra
    February 17, 2009 at 2:34 am | #6

Could you please tell me where I can learn DataFlux in India?

    • June 19, 2009 at 7:24 am | #7

      I sent this question off to the DataFlux team, and they sounded appreciative, but never responded with any information – sorry.

  6. vikash
    June 19, 2009 at 12:10 am | #8

Hi, I need suggestions on correcting spelling mistakes in data using DataFlux.

  7. SR
    November 17, 2009 at 1:29 pm | #10

    Could you please post some sample Dataflux scripts?

    Thanks!

  8. SR
    November 17, 2009 at 1:30 pm | #11

    Hi would appreciate if you could please post some sample Dataflux scripts?

    Thanks!

    • November 17, 2009 at 2:38 pm | #12

      To be honest, I don’t really have a lot of sample files I could share. Most of what I have is specific to what my company is trying to accomplish. What are you looking for?

      • SR
        November 18, 2009 at 9:03 am | #13

        I am just trying to understand how to write Dataflux scripts. I want to see some samples or a procedure to write Dataflux scripts. can you help please?

      • November 18, 2009 at 3:57 pm | #14

        Let me see if I can create a DataFlux post for you where I share some Architect files. I’ll notify you when I’m able to get this done.

  9. HS
    November 18, 2009 at 4:49 pm | #15

    Very helpful post, esp for a beginner.

    Thanks!

  10. H Pahad
    February 11, 2010 at 2:11 am | #16

    Hi

Which language is used for coding expressions within Utilities (Data Validation), or generally within DataFlux? It's certainly not Base SAS.
I would appreciate assistance.

    Regards

    • February 11, 2010 at 8:57 am | #17

      The language used in expressions is DataFlux’s own invention. In some ways it looks a bit like VB and in some ways a little like C. I found that looking at the Expression Reference in the Help files was sufficient for me to figure out anything I needed to code. Good luck.

  11. Heena
    September 28, 2010 at 3:49 pm | #18

Hi Victor,
I want to customize my QKB to change all the address data in my database, where I want the address field to be only 35. Can you please help me with this?

    • September 29, 2010 at 9:06 am | #19

      Heena, do you mean you want all addresses to have a max length of 35 characters?

  12. Partha
    December 1, 2010 at 9:17 am | #20

Hi guys,
Does anyone have DataFlux beginners' material? I have just joined a project where we are using DataFlux. I have no idea about DataFlux; could anyone help me? If anyone has material, could you please send it to this mail id: sp.partha@gmail.com. I'd also appreciate details on the techniques used in DataFlux, the QKB, and how validation can be done.

    Thanks Victor you have done a very good job.

    Thanks for your help.

    • January 11, 2011 at 10:05 am | #21

      Partha,

      I’m sorry for not getting back to you sooner. I somehow missed your comment. I’ve sent an email to some friends at DataFlux to see what can be done. I’ll try and get back to you by tomorrow.

      –Victor

    • January 11, 2011 at 3:32 pm | #22

      Partha, DataFlux suggested these 2 items:

      1. Read through the “dfPower Studio Getting Started Guide” in the dfPower Studio install directory – it should be helpful.
      2. There are a couple of recorded demos on the DataFlux portal that may be helpful. Please see these two under the “Webcast Demos”:
      • Intro to dfPower Studio
      • Intro to dfPower Architect

      Hope this helps!

      • partha
        January 11, 2011 at 7:33 pm | #23

        Thanks Victor, I will go through those materials and catch you next week :)

      • partha
        January 11, 2011 at 7:42 pm | #24

        Hi Victor, I am not able to get those two topics under Webcast. I am able to find the webcast link and not the webcast demo. Could you help me to get the link?
        Thanks!

  13. Wai Yee Helfrich
    January 9, 2011 at 1:07 pm | #25

    Hi,

    Does Dataflux incorporate data from NCOA? If so, how do I know if there’s a change in address for a customer, or there’s a change in the zip code, from dataflux? Would I have to run through all the addresses through verification, clustering to ensure that the latest change gets picked up?

    Does this also apply to the change in area code of the phone numbers?

    Thanks in advance for any information regarding this.

    • January 10, 2011 at 10:38 am | #26

DataFlux does not incorporate data from NCOA. This has to be obtained from a 3rd-party vendor such as InfoUSA. If there is a change in the zip code, you would run the addresses through address verification again. You could cluster if you suspect it might break up clusters or create new ones, but this step would be optional. With the DataFlux server environment, called the DataFlux Integration Server, running on an average-powered server, several million addresses can be verified in about an hour, so it's not usually a big deal to redo address verification periodically. If you had calculated area codes for phone numbers, you could redo the calculation in a similar fashion.

      • partha
        January 11, 2011 at 8:29 am | #27

        Hi Victor,
        Could you see my last post? Could you help me?
        Thanks!

    • January 13, 2011 at 4:28 pm | #28

      Wai Yee,

      I’m happy to share that DataFlux contacted me today to let me know that the next version of the software, DMP v2.2, will have NCOA support. They will send me more details in the coming days. v2.2 should be out soon – I’ll try and get you an exact date.

  14. Wai Yee Helfrich
    January 15, 2011 at 5:24 pm | #29

    Thank you for taking the time. Please do let me know when the NCOA support happens and how to use it.

    Many Thanks again.

  15. Prakash
    March 15, 2011 at 2:54 pm | #30

    Hi Victor,
I cannot find the state code to verify against the USPS; do you know if it is available?
Also, I am doing database inserts to DB2, but the max load rate is only 50 rows/sec. Do you know if there is a way to improve the performance of the database load? I have millions of rows to load, but it is taking forever…

    Thank you and would appreciate your prompt response.

    • March 15, 2011 at 3:38 pm | #31

      Hi Prakash,

      I’m not sure what you mean by not being able to “find the state code.” Do you mean in the source? Regarding the DB2 insert rate, have you already tried changing the commit interval? Is the database co-located or is it remote? There is also a bulk insert feature you can try – I don’t know if that is a feature available for DB2 or not (I’ve used it successfully with Oracle). Let me know more details and I’ll certainly try to help.

  16. Prakah
    March 15, 2011 at 5:47 pm | #32

    Victor,
I have a 2-character state code from the source along with the address. In the Address Verification node I am looking for a US State in the outputs, just like US ZIP. Is 'State' the USPS state?

DB2 insert: I tried increasing the commit interval but it didn't help. The database is on a remote AIX server and DataFlux is on Windows.

    Thank you!!

    • March 15, 2011 at 7:31 pm | #33

      Yes, the ‘State’ output field contains the 2-letter state code of the verified address. If you’d like the full state name you can use a ‘Standardization’ node.

Regarding the DB2 insert, how much did you increase the interval? I would recommend a value of 10,000. If the database server is not nearby (and latency exists), you want to try to reduce the "chattiness" of the ODBC driver. I would recommend looking at the various parameters found under the "Bulk" tab of the DB2 Wire Protocol driver (which ships with DataFlux). In particular, the two things I'd try are enabling the bulk load and changing the size of the batch. Look at the DataDirect driver documentation to find out more about other options. http://media.datadirect.com/download/docs/odbc/allodbc/wwhelp/wwhimpl/js/html/wwhelp.htm#href=userguide/db2.10.06.html

  17. Prakash
    March 16, 2011 at 4:00 pm | #34

    Victor,
I tried 5000 earlier and have now changed it to 10,000, but didn't notice any significant improvement. I am looking at bulk load, but for some reason cannot find the BULK option in the DataDirect ODBC driver. I will investigate the bulk load more.

    Thanks a lot!

  18. Prakash
    March 18, 2011 at 8:52 am | #35

    Hello Victor,
I removed the database load part and tried to write to a file instead; now the throughput is around 100 rows/sec, and it took around 10 hours to load 3.5 million rows. With the database load portion taken out, is there any way we could make the nodes run faster?

    Thank you!!

    • March 22, 2011 at 2:06 pm | #36

      Prakash,

      How are you running the DF job? Is it running on a client machine (Windows)? Or is it using a DataFlux Integration Server (DIS)? Anytime my company processes that large of a load it uses a server environment. Here we have a 64-bit Linux machine with 4 cores and 32GB of RAM. Perhaps you should consider using a more powerful platform as well. Perhaps you could get a 30-day trial license from DF to see if that would indeed solve your problems.

      –Victor

  19. anonymus
    March 28, 2011 at 10:20 am | #37

    Hello,

I am wondering if by chance you have used DataFlux Architect jobs with command-line execution. If so, could you please post a detailed blog on it?

    • March 28, 2011 at 10:22 am | #38

      I think I can come up with something in the next few days… I’ll let you know once I add it.

      • anonymus
        March 29, 2011 at 8:38 am | #39

        Sure Thanks please let me know…

  20. Jas
    May 18, 2011 at 5:14 am | #40

    Fehlberg Victor :
    I think I can come up with something in the next few days… I’ll let you know once I add it.

    Hi Victor
Really nice and useful post. Have you come up with the blog post you mentioned where you share some dfArchitect files? Really keen to go through it.
    Thanks!!

  21. vikramg24
    October 11, 2011 at 12:15 am | #41

    Hi Victor,

I'm new to DataFlux and need to use it for some data QA. Would it be possible for you to direct me to some basic tutorials? You could also e-mail them to me in case you have some PDFs.
    vikram_212@yahoo.com

    Thanks.

    • October 12, 2011 at 9:16 am | #42

      1. Read through the “dfPower Studio Getting Started Guide” in the dfPower Studio install directory – it should be helpful (a similar document exists in DMP).
      2. There are a couple of recorded demos on the DataFlux portal that may be helpful. Please see these two under the “Webcast Demos”:
      • Intro to dfPower Studio
      • Intro to dfPower Architect

  22. vikramg24
    October 12, 2011 at 3:55 am | #43

    Hi victor,

I'm new to DataFlux and am trying to find my way around. I would like to know how to attach a QKB. As of now I am not able to run any job, since it says there is no QKB specified. Also, I am not able to find any locales in the drop-down list when using the Gender Analysis node.

    your help will be appreciated.

    Thanks.

    • October 12, 2011 at 9:11 am | #44

      You first need to download and install a QKB. Have you done so? If not you can get the QKB from DataFlux’s customer portal. What version of DF are you using? In DMP you do this by clicking Administration in the bottom left, and then choosing Quality Knowledge Bases->New, and then give it a name and point to the QKB root, e.g. c:\apps\qkb\. I’d check the box to set it as default and I wouldn’t make it private unless you need that.

  23. vikramg24
    October 17, 2011 at 12:42 am | #45

Thanks Victor, I will follow the steps.

Also, a quick question with regard to attaching a database: I need to analyze tables in an Oracle DB. I have established an ODBC connection on the server. Does this mean I can automatically access these tables while configuring the input source nodes in dfPower Architect? If not, what steps do I need to take in order to access these tables through dfPower Architect?

    Thanks.

  24. vikramg24
    November 6, 2011 at 2:10 am | #46

    Hi Victor,

I have a question with regard to changing data types. I have an Excel workbook which contains, say, date of birth as a date field.

I am simply trying to read this data using an input data source node and write an Excel output workbook. The Date of Birth attribute appears as a number in the output Excel sheet.

I have tried using the override option while configuring both source and destination nodes, but it does not work.

How do I overcome this? I want the format to be retained as a date in the output workbook as well.

    Thanks.

  25. vikramg24
    November 6, 2011 at 3:39 am | #47

    Victor,

I am now getting a new error message when I try to read data from an Excel workbook (.xlsx) using the input node (Data Source). The message reads as below:

"External Table is not in expected format."

This error pops up when I try to select the Excel workbook from a folder on the hard drive. I have also tried converting the file to .xls, but the same message pops up.

How do I go about solving this issue? Please advise.

    Thanks.

  26. vikramg24
    November 7, 2011 at 1:00 am | #48

    Victor,

I have managed to solve the data type issue, but am not able to change the length of an attribute in the output node.
For example, if I have an attribute called ID and DataFlux reads it as a varchar(255), I want to change this to an Integer(10) type.
I try doing this using the override option, changing the data type by first specifying the length and then the data type, in this case 10 (Integer).
I am getting an error saying the table cannot be created. The step passes if I do not provide the field length.
I have also tried Integer(10), but it gives the same error message.

    Please advise.

    Thanks.

  27. Cindy Dout
    March 1, 2012 at 3:34 pm | #49

    Hello, I hope this post is still alive, I can’t find much in the way of help or user groups for Dataflux.

    I am trying to run exec dbms_stats.gather_table_stats at the start of a job. I tried several different versions in SQL Execute node but no go. A developer friend told me to try this in pre-processing but the syntax is killing me, I don’t even know where to start on making the dbconnect and execute strings workable. Here is the original pl/sql command:

exec dbms_stats.gather_table_stats(ownname=>'EDM', tabname=>'EDM_ACCOUNT_XREF', partname=>NULL, estimate_percent=>1)

    Any help would be very welcome. Thanks! Cindy

  28. Cindy Dout
    March 1, 2012 at 5:01 pm | #51

    Hi Victor, thanks for the quick response. I’m on dfpowerstudio 8.0. It’s on a server. I already have a working job that reads and writes to one of my databases. I’m trying to add a node that will run the dba command to optimize the db tables before this job writes to them.

I haven't gone to support@dataflux.com, as we dropped the support maintenance.

  29. shravya
    October 3, 2012 at 4:40 am | #52

    Hi Victor,
We have two Teradata database servers (server1 and server2). We load the data into server1 using an ETL tool and then move the data to server2 using Data Mover.
Could you please let me know if there is a way to reconcile the counts between Teradata servers 1 and 2 using DataFlux?

    • October 3, 2012 at 9:12 am | #53

Hi Shravya,

      Yes, you can use DataFlux for this. Are you licensed for the Business Rules Monitoring component? I would set up a rule that compares Count1 to Count2, recording the data to a repository in the event that they are not equal. Then, you can map Count1 to a select statement, e.g. select count(*) from table1, and do the same for Count2. If you’re not licensed for the Business Rules Monitoring component, I’d consider using Informatica’s DVO tool, assuming of course Informatica is your ETL tool. Let me know if this helps. I can answer specific questions once you get back to me.
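If it helps to see the shape of that check outside DataFlux, here's a rough Python sketch of the count comparison over ODBC. The DSNs and table name are placeholders, and this only stands in for what the Business Rules Monitoring component would do declaratively:

```python
import pyodbc

def table_count(dsn, table):
    """Row count for one table, reached via an ODBC DSN."""
    conn = pyodbc.connect(f"DSN={dsn}")
    try:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()

c1 = table_count("TD_SERVER1", "SALES_FACT")   # placeholder DSNs and table
c2 = table_count("TD_SERVER2", "SALES_FACT")
if c1 != c2:
    print(f"Count mismatch: {c1} vs {c2}")     # DataFlux would record this to a repository
```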

      –Victor

  30. ramesh
    February 19, 2013 at 3:56 pm | #54

    Hello Victor,
Congratulations on the great job here. I am pretty new to DataFlux and learned a thing or two from your blog. I would like to practice more by doing some exercises, and would really appreciate it if you have any material on that. Please send any such material to my email id ramesh.kandaswamy@gmail.com. Thanks a ton!

  31. February 27, 2013 at 10:34 am | #55

    I was under the wrong impression that DataFlux was used only for Address Validation/Verification. This post really got me thinking of its other ‘strengths’. Great post…

