Friday, April 29, 2011

A Template Meta-model

-- And miles to go before I sleep.  (Robert Frost, "Stopping by Woods on a Snowy Evening")
When I have a really difficult task to finish that my brain is not ready to work on (some would call it writer's block), I try to work around it by looking at related stuff.  I'm now into the analysis phase on the CDA Consolidation ballot (which I have to have a draft of by Wednesday).  But I'm still struggling, so I finished a quick analysis of HL7, Health Story, IHE and HITSP specifications to come up with a document model this evening.  I also looked at previous work from the HL7 Templates workgroup, including the Template Registry Requirements, and the data model I had proposed for a pilot registry project that I was never able to finish for lack of time and assistance.  It went quicker than I thought it would, but since I've been paying attention to templates from these organizations for the past five years, perhaps I shouldn't be surprised.

Let's start with the document model first.

An implementation guide has a title page, table of contents, front matter, a body and back matter.  I focused all my attention on the body.  Within the body there are four different kinds of templates, commonly organized around the CDA Header, the CDA Document, Sections and Entries.  Document and section templates are very easy to analyze, and fit into the same general structure.  It turns out that header and entry templates are very similar as well.

Included in the analysis are my opinions, which appear in italics.


I hereby contribute the remainder of this post to the public domain.

Template
Template Name
This is a short, human-readable name that quickly describes the template.  Changing the name of a template DOES NOT affect its use.
Identifier
This is a single identifier that must be present and is always valued.  HL7, IHE and HITSP all recommend use of an OID as the identifier, with no extension, and so do I.  My first CDA template used extensions, and I've seen several that do also.  It's not a show-stopper either way.
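
For example, in an instance a template identifier boils down to one small element.  The first OID below is, if I recall correctly, the CCD Problems section template; the second line shows the extension form, with made-up values, which I'd avoid:

<templateId root='2.16.840.1.113883.10.20.1.11'/>
<templateId root='2.999.1' extension='problems-v1'/>
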
Open or Closed Status
If a template is open, everything not explicitly prohibited is allowed.  If closed, everything not explicitly allowed is prohibited.  This is USUALLY the same for all templates within an implementation guide.  As such, it could be documented in the front matter rather than with each individual template.  The current model for the CDA Consolidation guide is to explicitly document it for each template.  I find that extra "gunge" to be distracting, but not so much that I would object to making it available.

A closed template is like a "final" class in Java (a "leaf" class in UML).  It can no longer be extended to support new use cases (although see implicit inheritance below).

In more than 1000 templates I've looked at or helped develop for CDA (yes, there are THAT many and I've looked at them), I've never seen ONE closed template that I can recall.  I know they exist, just not in the CDA universe ... kind of like anti-matter.  I don't like them because they interfere with reuse, just as I avoid "final" classes in Java.

Parent Templates:
A template can inherit constraints from one or more parent templates.  HITSP used multiple parents to harmonize across IHE and HL7 overlaps, and avoided it elsewhere.  IHE PCC uses inheritance from only one parent template.  A PCC template may have multiple ancestors, so there can still be multiple template identifiers.  Inheritance is optional.  A template need not have any parent templates.
I like the single-parent inheritance rule, but when you get to real-world implementation, it may need to be relaxed.


  Parent Template ID:
    This is the identifier of the parent template.
  Explicit or Implicit Inheritance:
    This is a boolean flag indicating whether the inheritance must be explicitly expressed in the instance, or whether it is implied by a template that inherits from the parent.  If implied, the template need not report its parents.  If explicit, then it must.  This allows reuse of sets of constraints in a template -- and introduces the idea that a template could also be abstract -- never directly instantiated without further constraint.  A closed template can only be inherited from implicitly, never explicitly.

When explicit inheritance is present, you have a couple of options in documentation:

  1. Copy the constraints from the parent template into the documentation of its child, indicating the parent template as the source
  2. Don't copy them, but do include a link to them from the child.  
IHE and HITSP used the latter model.  The CDA Consolidation guide uses the former.  

I like explicit inheritance because it enables incremental interoperability without knowledge of the "inheritance" rules. The CDA Consolidation guide templates won't provide any incremental interoperability with epSOS work even if they share the same constraints, because the shared constraints aren't enumerated.  You could still build a document that enabled incremental interoperability by using them, but you don't "get it for free".
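
To make that concrete, here is a minimal sketch (using OIDs from the 2.999 example arc) of what explicit inheritance looks like in an instance -- the entry declares both its own template identifier and its parent's:

<observation classCode='OBS' moodCode='EVN'>
  <templateId root='2.999.1.1'/>  <!-- the child template -->
  <templateId root='2.999.1'/>    <!-- its parent, declared explicitly -->
  ...
</observation>

With implicit inheritance, only the first templateId would appear, and a receiver would need to know the inheritance rules to recognize the parent's constraints.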


Scope:
Scope indicates where the template is applicable.  It can constrain where the template may appear, or describe the scope of the use case where it is applicable.
  Scope Narrative:
    A narrative description of the scope.  It need not be present.
  Class: [1..*]
    The set of classes (model artifacts) to which the template applies (e.g., this template only applies to sections in CDA).

Description:
A narrative description that talks about the template.  This is commonly present in IHE and HITSP templates, and is missing from many of the CDA Consolidation templates.  I think this must be present for all templates, but could be argued into "should".
The narrative can include references to other important stuff in the documentation (much like the HL7 PubDB format allows today), including material not generated by a template development tool (e.g., MDHT or TDB).

Model Diagram:
This is a diagram generated from the template model expressing the template in UML form.
Model Table:
This is a table describing the template using data in the template's model.  It too is machine generated.
Examples:
This is a list containing at least one example of the template content.  Two forms are often used: a skeletal example showing only what the template constrains, and a full example with additional stuff that shows the XML with sensible clinical data.

IHE and HITSP use a skeletal model.  Skeletons are very easy to generate using model data, and are also easy for non-clinical users to create manually (e.g., me).  Full examples that make clinical sense often need human assistance for sample values, et cetera.  Template development tools can support creation of full examples.
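
For instance, a skeletal section example is little more than the constrained scaffolding with placeholders (2.16.840.1.113883.6.1 below is the LOINC code system; everything else is elided):

<section>
  <templateId root='...'/>
  <code code='...' codeSystem='2.16.840.1.113883.6.1'/>
  <title>...</title>
  <text>...</text>
</section>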

Examples really should be present for all but the most obvious stuff, and if you are someone like me, you cannot trust your own intuition about what is obvious.  If it can be machine generated, then why worry, just DO it.

Negative Examples:
Negative examples are helpful when there are obvious ways to mess things up.  They need to be clearly labeled as WRONG, BAD, et cetera.  One of the things I've learned is that the easiest way to show people how to not make the obvious mistake is to show them what it looks like.  It's how I went from this:

<code><originalText>SARS</originalText></code>

to this:

<code nullFlavor='UNK'><originalText>SARS</originalText></code>

Good negative examples are hard to generate automatically.  Tooling could help, but this might be a case where you'd just include external content.  Negative examples are only needed where you want to point out a problem case that isn't obvious.

Constraints:
A template has at least one, and usually more than one constraint (if none, there is no reason for it to exist).  Constraints are the next "reusable" object I found in the analysis.  These are the numbered things in the CDA Consolidation Guide.

Constraint
Identifier:
A constraint has an identifier that allows it to be referenced.  Constraints are reusable (IHE, HL7 and HITSP did it quite often), e.g., X shall be an interval of time containing a low/@value and a high/@value.
Target Component Name:

You need to identify what you are constraining, and most of the time it is only one element or attribute.  The component is either a class attribute of the class being constrained by the template, a component of the data type of a class attribute, or an association with another class.  It could also go deeper.

This is where MDHT did not perform as well as TDB did.  The "long list" of general header constraints produced by MDHT was a result of several issues, one of which this solves.  The other half of the solution has to do with guidelines (governance) about where to put template boundaries.  The template should begin and end within a single class, using other templates to enforce business rules on associated classes.  This rule can be broken in some cases, because sometimes you may need to "go deep" just once, and you don't want to create extra templates just to enforce a rule.  The entryRelationship class in CDA uses three lines of XML.  Why would I want to create templates to say act X must contain entryRelationship Y and entryRelationship Y must contain act Z, when I could more simply say: act X must contain entryRelationship/act Z, as sketched below.
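
A sketch of that last constraint in instance form (the classCode and typeCode values are illustrative):

<act classCode='ACT' moodCode='EVN'>        <!-- act X -->
  <entryRelationship typeCode='COMP'>       <!-- the three lines of glue -->
    <act classCode='ACT' moodCode='EVN'>    <!-- act Z -->
      ...
    </act>
  </entryRelationship>
</act>

One template on act X can assert the whole path, without a separate template just for the entryRelationship.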


Target Component Definition:
You often need to explain what the target component is used for.

This is what I've been doing for the recent Reconciliation profile in IHE, building from similar earlier work, and something that the CDA Consolidation guide did not do well.

Trigger Condition:
Sometimes a constraint is triggered by a pre-condition.  "If trigger, then constraint" is a common pattern in several templates found in the CDA Consolidation guide.
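
In Schematron terms, the trigger typically lands in the rule context, with the constraint as the assertion.  A minimal sketch (assuming the ISO namespace for sch: and cda: bound to urn:hl7-org:v3; the specific constraint is illustrative):

<sch:rule context="cda:observation/cda:code[@nullFlavor='OTH']">
  <!-- trigger: the code was sent with nullFlavor OTH -->
  <sch:assert test="cda:originalText">
    When code has nullFlavor OTH, originalText SHALL be present.
  </sch:assert>
</sch:rule>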

Constraint: [0..*]
Constraints can have "sub-constraints" ... and those can have sub-constraints as well.
This is something that TDB does well.


Content Description [0..1]:
This is human readable text explaining what the constraint is doing.  It is needed when the effect of the constraint is not obvious.  For example, the XPath representation of a constraint on effectiveTime/@value fixing the precision to be at least the second would look like this:


string-length(substring-before(translate(concat(@value,'+'),'-','+'),'+')) >= 14

But the description can simply say: must be precise to at least the second.

For implementation purposes, I'd recommend use of ISO Schematron and XSLT 2.0, because then you can define functions for the precision of dates, which are simpler to read (at least I think you can).
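
A minimal sketch of what I mean, assuming XSLT 2.0 support in the Schematron implementation (the function name and its t: namespace are mine; the t: and xs: prefixes would be declared on the enclosing stylesheet):

<xsl:function name="t:ts-precision" as="xs:integer">
  <!-- counts the date/time digits that precede any timezone offset -->
  <xsl:param name="value" as="xs:string"/>
  <xsl:sequence select="string-length(substring-before(translate(concat($value,'+'),'-','+'),'+'))"/>
</xsl:function>

The assertion then becomes t:ts-precision(@value) ge 14, which reads a lot closer to "precise to at least the second".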


Data Type Restriction [0..*]:
You might want to say that effectiveTime must always be an IVL_TS, or that code must be constrained to CE.  There need not be a data type restriction.  It can also be tricky: consider substanceAdministration/effectiveTime, which IHE limited to [0..2].


Nullable: [0..1]
Can this item be null?  Yes or No.


Cardinality: [0..1]
What are the upper and lower bounds?  If unspecified, then the base model's rules apply.


Conformance [1..1]:  Shall|Should|May|Shall Not|Should Not|Need Not
What is the conformance verb used?


Reason:
A text explanation of WHY the constraint is present.  What does it accomplish?  Why is it here?
Let's stop relying on our collective memories.  Frankly, over time, they aren't all that great (which isn't what I said before the second rewrite).

Precision:
If a QTY data type, what is its precision?  (TS is a quantity data type.)


Value Set [0..1]:
If coded, what is the value set?  The value set is another reusable artifact.


Value Set
Name:
What do we call this thing?
Identifier:
How does the computer identify it?
Scope:
What does it apply to?  This value set applies to procedures, encounters, lab results, et cetera.
Definition:  
X and all subordinate children from SNOMED CT ... or X, Y and Z, or a combination of these.
Purpose:  
Why does it exist?  What is it used for?
Intensional/Extensional:
How is it defined?  By an operational definition, or an explicit list?
Static or Dynamic:
Is it fixed to a specific vocabulary version, or can it change when a new version is released?
By the way: I really don't like it that we put static/dynamic in next to every value set in the conformance rules.  Static/dynamic are bindings that apply to the value set, not its use.  At least in my world.  I've never seen an implementation guide that would use static in one place for a value set, and dynamic in another.  It's just a REALLY BAD idea.  More gunge to ignore.
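
For what it's worth, checking an extensionally defined value set is also easy to express in Schematron.  A sketch (2.16.840.1.113883.6.96 is SNOMED CT; the member codes X, Y and Z are placeholders, matching the definition above):

<sch:assert test="cda:code[@codeSystem='2.16.840.1.113883.6.96'
    and (@code='X' or @code='Y' or @code='Z')]">
  The code SHALL come from the (hypothetical) XYZ value set.
</sch:assert>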

General Notes
From the work above, I can create a beautiful CDA Implementation Guide with rich multi-media, an XML Schema for GreenXMI templates, a UML meta-model for templates, or a MIF-based expression of templates.  I'm going to leave all of that in the hands of others.  I have to get back to my review of the CDA Consolidation guide.


What's not in here are the components of the meta-model important to governance and provenance.  That's great stuff, but not a requirement for MDHT to be able to do what we need yet. 


Having run through this exercise, I can now exchange my axe for a scalpel -- and when we go through reconciliation, I absolutely won't take an arbitrary deadline as an excuse for not doing it right.  It will take as long as it takes -- if you want quality, you have to give it time.


I don't know that I'm back to done yet, but I sure hope I'm getting things smart.  I have three more deliverables due before Monday, and then I can call myself done again.

Need Help Analyzing the HL7 CDA Consolidation Ballot?

Just a quick update today:

The HL7 CDA Consolidation Ballot signup closes this coming Monday and voting ends a week later.  If you have been struggling trying to figure out how it differs from the HITSP C32, IHE XPHR or the existing Health Story Guides, some help is in sight.  As I mentioned briefly yesterday, I've been working on a tool to compare the templates using Schematron source files for the various items.

Last night I finished the tool and updated the spreadsheet, which is posted on the SIframework wiki.  Just putting this together was a challenge.  The next challenge is analyzing the nearly 1400 lines on the two spreadsheet tabs.

Oh well, no weekend for me.

Thursday, April 28, 2011

XSLT is a fine tool for Comparing CDA Schematrons used by the sIframework

One of the challenges with the CDA consolidation project is how difficult it is to compare the old against the new. I've just finished the first half of a tool that should make this easier.  I developed it using XSLT, existing schematrons for the CCD and C32, and a draft schematron that is generated from the Template Database tool and which was generously provided by the Lantana Group.

Results can be downloaded from the SI Framework website.  The spreadsheet has 9 columns.  The middle column is what makes it cool.  It contains, as best as I was able to determine using some pretty rough heuristics, what CDA element or attribute was being constrained.  The left hand and right hand sides are different sides of the mirror.  On the left is the new, on the right the old. 

Templates are sorted alphabetically by the new template name, and also show the LOINC code for the section.


The first column for each section shows the context used for any of the tests (this is rule/@context in the schematron). 

The second column indicates the level of error reporting/testing detail.  An E in this column indicates that the failure is an error, a W indicates that it is a warning.  M indicates that manual testing is required.

The third column is the XPath assertion that must be true.  The assertions on the left hand side are part of an incomplete schematron generated by the Template Database.  These schematrons need manual tweaking before they really work.  On the right hand side, they come from the NIST C32 Validator and HL7 CCD (These two together SHOULD include all necessary rules.  The downloadable NIST validator rule set does NOT include the CCD rules in the schematrons).

The fourth column provides the narrative detail on what that Schematron is trying to do.
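
The extraction behind those columns is straightforward XSLT.  A minimal sketch (assuming the ISO Schematron namespace; the real tool also applies the matching heuristics, handles the older Schematron namespace, and fills in the E/W/M flag from however the source schematron marks severity):

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- one tab-separated row per assertion: context, role, test, narrative -->
    <xsl:for-each select="//sch:rule/sch:assert">
      <xsl:value-of select="concat(../@context, '&#9;', @role, '&#9;', @test, '&#9;', normalize-space(.), '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>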

So, now you can start comparing sections to see what is different, while I'm working on the entries tomorrow.

Signup for this ballot closes on Monday, and voting closes on May 9th.

After having completed this part of the effort, I'm getting a new feel for the template meta-model that I was griping about yesterday.

Wednesday, April 27, 2011

Green XMI for HL7 Templates?

There's quite a bit more work that needs to be done to complete the S&I CDA Consolidation project if we want to use MDHT to do the publishing.  I'm thoroughly annoyed with myself because I ignored the past experience I have in structured technical publishing.

One of the challenges the project had at the very end was being able to get the output to look the way that others had wanted.  The requirements were simply: "It should look like another guide".  That's not a requirements statement, nor does it get at the heart of the problem with templates.  The problem with templates is that we need a meta-model for them.  Then I hear "MDHT uses UML and XMI as its meta-model".  OK, that doesn't really cut it either.  That's like saying I use XML to send my data.  UML and XMI are representations of meta-models.

So, I'm going to borrow an idea from hData and GreenCDA, which is to figure out what the model of use is for templates.  We do have over 500 of them to look at.  We should be able to figure out what the common patterns are.  One easy way to do that is by performing a documentation analysis. This is one of the early steps in any project that hopes to succeed in automating the development of technical documentation.  The end result of this process is one in which you identify the types of documents and their constituent parts that are used to create a particular kind of technical document.

Having completed that process, you figure out how to take your current artifacts and move them in the direction of being able to express those parts.  If this reminds you of other design analysis problems, good, because it should.  The set of "parts" are the "things that we use" in templates.  That can be used to build a template meta-model.  That model can be expressed in UML and XMI, or it can be expressed in a number of other formats.  One simple expression of that would be a "Green" version of XMI, suited just for templates.
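
Purely to illustrate the idea, a "green" expression of a template might look something like this -- every element and attribute name here is hypothetical, the point being that the markup mirrors the template meta-model rather than UML's general-purpose structure:

<template id="2.999.1.2" name="Vital Signs Section" open="true">
  <parent ref="2.999.1" inheritance="explicit"/>
  <scope class="Section"/>
  <constraint id="C1" conformance="SHALL" cardinality="1..1">
    <target>code</target>
    <description>The section SHALL contain a code element.</description>
  </constraint>
</template>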

Duh.  Palm meets forehead (loudly).  OK, so I shouldn't beat up on myself too hard, because that's nearly 10-year-old knowledge that I haven't used recently.  But I'm not the only one who missed it either.  I think part of the challenge here is projects with super-tight deadlines that do NOT include volunteers in the planning and scheduling process.

Tuesday, April 26, 2011

IHE Cardiology Supplement Published for Public Comment




IHE Community,

Cardiology supplement published for Public Comment

The IHE Cardiology Technical Committee has published the following supplement to the IHE Cardiology Technical Framework for Public Comment on April 22, 2011:

• Cardiac Imaging Report Content (CIRC)

The document is available for download at http://www.ihe.net/Technical_Framework/public_comment.cfm. Comments submitted by May 23, 2011 will be considered by the Cardiology Technical Committee in developing the trial implementation version of the supplement.  Comments should be submitted using the online forums at  http://forums.rsna.org/forumdisplay.php?f=249.


HITsm T1: To what extent should patient involvement influence the advancement of HIE?

To what extent should patient involvement influence the advancement of HIE?

That was the first topic from last night's #HITsm chat.  Sparked by, of all things, this article.  My brain exploded, my blood pressure quickly rose and I almost asked "What planet are you from?" (See what I did write).  The discussion devolved from there into one about security and patient consent and never really rose above it with a few rare exceptions.  The last calm thing I'll say in this post is that this is my personal hot button issue, not the opinions of anybody I work for or am otherwise involved with in any way.

Patients NEED access to their data.  This is both a story my tweople tell, and one I know quite well myself.

A few years ago, my mother-in-law wound up spending her vacation weekend, instead of with her daughter, in the hospital, raving, with a 104-degree fever.  The hospital staff couldn't figure out why the antibiotics weren't working because they didn't get the information that she was immuno-compromised until Monday, when her primary care physician told the hospital a few hundred miles away that she'd just gotten off of 6 weeks of chemo.  [Her family learned from that experience quite well.  Between that and many other events, we got into the habit of routinely asking her, and her healthcare providers, about her CBC results.  Oh, what a challenge that was for her providers at first.  But Mama was stubborn like bull.  She got the hospital security officer to sign a note posted above her bed at one point:  "Please share Mrs. ____'s blood test results with ANYONE in the family who asks".]

Then there was the time that my step-father spent two days in the hospital up my way because they weren't sure his pacemaker hadn't moved, and couldn't get the images until... you guessed it.  Monday when Medical Records opened up.

That's just a few stories from my family, and ones with happy endings at that.  My friends in Health IT all have stories of their own, and some of them ended not nearly so well.

This is pure BS.

In both cases I reported, the hospitals had the necessary consent to access and share the data [AND both my sister-in-law and my mother had power-of-attorney and knew how to use it].  In both cases, the patients received expensive, unnecessary, and while we are at it, POOR care.  In one case, the lack of care related to missing information introduced a mildly life-threatening risk.  In the other case, it was mostly a miserable way to spend a weekend that resulted in a delay in obtaining the right treatment.

In both cases, the data wasn't available because it needed to be accessed by a person, instead of using Health IT to exchange it.  Hell, even a FAX would have been fine.  The problem was that there was nobody at the other end who could get TO the data to even FAX it.  In both cases, the people who were needed worked "normal shifts."  That means that patients who don't get sick on their schedule get stuck better than half the time waiting for an office to open.

If you truly want patient-centered care, don't hold my (or my family member's) data hostage to LOUD and mostly useless discussions about security and privacy.  There are privacy and security laws on the books.  Enforce them.  There is privacy and security technology and frameworks out there.  Use them.  Stop making me wait to get my damn data.

In my own experience, it takes at least three times longer to work out the legal framework in an HIE than it does to actually implement the security requirements.  And the same basic technology shows up over and over again.

Please, "don't invent the wheel over and over and over again"
Instead: "identify what's working, where it's working and adopt it and adapt it."

A million dollars on a consent pilot?  What a waste.  Go inspect and explore what's been done -- over and over and over again.  Then see what works, and adopt and adapt.  There are at least 10 HIEs out there that have already done consent -- using standards.  Check it out.

If you recognize the quote, then the irony of where the story that started off T1 came from, and my focus on trying to bring it back to being patient-centered, should make it readily apparent why I'm so angry right now.

I'll put it quite plainly.  The number one issue for patients (and consumers) with regard to their health information is ACCESS.  The second issue is SECURITY. If you can remember that order of priority, all else is easy.  But we forget.  And if you don't believe me, take this simple test:  Ask the next HIT expert you see what the most pressing issue for patients is about their healthcare information.  I'll bet most of them get it wrong.


Update: July 30th, 2011
Here's Regina's rendition of this story:
Sorry Medical records are closed

Monday, April 25, 2011

Are Patients Consumers? Yes and No

"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean -- neither more nor less." - Lewis Carrol, Alice in Wonderland
This is not a standards related post, but it is a healthcare related one.

Paul Krugman recently wrote in the New York Times about patients not being consumers. His article comments on the Republican backlash against the Independent Payment Advisory Board.  I agree with a good number of his points (I'm pretty much a liberal independent), but the title, "Patients are not Consumers", misses the mark.  Others have written on the same topic.  Jackie Fox wrote on patients not being consumers on KevinMD nearly a year ago, and Dr. JC wrote something similar in 2007 on Brain Blogger.

Let's look at a few definitions.
  1. A person who purchases goods and services for personal use. -- Google
  2. Consumer is a broad label for any individuals or households that use goods and services generated within the economy. -- Wikipedia
  3. An individual who buys products or services for personal use and not for manufacture or resale.  -- Investor Words
  4. One that utilizes economic goods -- Merriam Webster
  5. A person or thing that consumes -- Websters New World College Dictionary
And then there is the origin of the term "Consumer", whose earlier sense comes from "squanderer".

All of the definitions pretty much agree on "use" or "consumption", and most agree that it applies to individuals.  Economic definitions seem to distinguish between use and "purchase".

So, am I a consumer of healthcare?  Under my current (high-deductible) plan, I do pay for the first few thousand dollars of it to providers of healthcare goods and services.  I fit into any of the definitions above.  When I was covered under a different plan, I only paid for a small part of my care (at each event).  I'm not really sure I qualified as being a consumer then.  So, I am now, but not everyone is.

Is the Federal Government a consumer of healthcare?  It pays for it.  It receives some benefit from it, although not a direct one.  By some definitions it could be argued to be one, by others not.  Given that we fund the Federal government, we also are beneficiaries of these services but not really consumers.

Is my employer a consumer of healthcare?  It picks up the tab for a significant chunk after my deductible, although I'm still on the hook for some of it.  They receive some benefit from the expense (a happier, healthier, less distracted employee).  If the government is a consumer, employers are as well.

Is my payer a consumer?  Not in my case, but likely in many others where the payer is providing the "insurance".  My payer isn't spending their own money in any way, but those that take on risk (e.g., my employer) do.  They accept risk on behalf of their "customers".  Payers also negotiate the best deals they can to benefit their stakeholders (which is not necessarily the same as their consumers).

From a different viewpoint: as a consumer of a TV set, an airline ticket, a car, or a good dinner, I have quite a bit of choice.  There is also a great deal of information that I can use to help me make my decision.  I can pick an airline based on where they go, their schedule, price and services.  I can easily compare one airline to another.  I can check out the quality of a TV or automobile, check prices, determine features, et cetera.  I can also check out the quality of a dealer or distributor, et cetera. I can check the ratings on service, food quality and price on just about any restaurant from numerous sources. As a consumer, I can choose to check out manufacturers and their distributors, or I can just buy what looks good to me.  My dollar, and my choice.

As a patient, I don't really have those choices.  I cannot easily check the quality of a surgeon or other specialist against any readily understandable benchmark (there are ways to do it in my state, but they aren't easy to find, and the ratings aren't as easy to understand as something I would find in Consumer Reports).  I cannot figure out how much care is going to cost me from provider A or provider B for a specific condition.  Recently I spoke with a healthcare provider about what a visit would cost.  Nobody in the office could even give me a ballpark figure; I needed to speak to a specialist in billing to get that information.  My doctors don't have a good idea of what the costs of care are.  They cannot tell me the difference in costs between two tests that would give them additional information.  Some can probably tell me the difference in result quality, but even that is arguable given some of the reports of innumeracy.  My payer cannot even help.  They have a website that can tell me costs from certain providers for certain conditions and treatments, but to use it, I need to know both my diagnosis and what services they are going to bill me for.  And I won't know that until after I see them.  So even though I was given a list of five specialists by my doctor at a recent visit, I cannot readily compare costs.

When I deal with a plumber or roofing contractor, I can talk to them about my options: quality, features and cost being the operative components.  When I deal with my doctor, I cannot easily have the same kinds of conversations, even though I try to do so more often now.  My healthcare dollar is important.  I want to get the best value I can for it overall, not just for a single problem, but for my entire health.
 
From this perspective, patients are not consumers.  We don't have the information that a consumer would have, and I'd sure like to have that now.  The argument that vouchers will change that doesn't hold water for me.  High deductible plans were supposed to have a similar effect, but they've made it no easier for patients to figure out how to best spend their healthcare dollar.

Friday, April 22, 2011

Trouble Finding HL7 V3 Codes in NQF Quality Measures? I know why...

Are you having trouble finding the codes listed in the spreadsheets supplied by CMS for the NQF Quality Measures?  I was recently contacted by someone who was looking for immunization reason codes from the spreadsheets in the second download.

The spreadsheet rows affected look something like this:
standard_taxonomy | standard_taxonomy_version | standard_code_list
HL7 | 3.0 | 21703, 21704, 21738, 21745, 21747, 21815, 21990, 22259, 22261, 22855
HL7 | 3.0 | 14880, 15985, 19729, 21708, 21710, 21741, 21743, 21746, 22260, 22851
HL7 | 3.0 | 19730, 19731, 19733, 19734, 19735, 19736, 19987, 19988, 19989, 19990, 21408, 21493, 21568, 21706, 21707, 21709, 21728, 21729, 21730, 21731, 21732, 21733, 21734, 21735, 21744, 22023, 22024, 22165, 22166, 22167, 22168, 22169, 22857, 22858, 22859, 22865, 22866, 22867, 22907, 22909, 22911, 22912, 22913

Look all you want in the HL7 Version 3 Vocabulary and you'll have trouble finding these code values.  Why?  Because those aren't the HL7 code values, nor is the code system correctly identified.  Yes, they do come from HL7 Version 3.0, but from which of the umpteen code systems defined in V3?  (Note: I'm using the HL7 ballot web-site here for educational purposes; you should really be using an HL7 Normative Edition publication to view the published codes.)

The answer is that it can be found in the HL7 V3 ActReason Code System.

So, where did the numbers come from?  These are the internal identifiers HL7 uses to maintain the vocabulary, found in the column titled "Definition, Properties and Relationships", rather than the appropriate codes found in the "Concept Code" column.  Look at 21703 for example.  That should actually be INEFFECT in the ActReason code set.
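
In an instance, that reason would then be sent something like this (2.16.840.1.113883.5.8 is the OID for the HL7 ActReason code system; the element name and location depend on the message context):

<code code='INEFFECT' codeSystem='2.16.840.1.113883.5.8' codeSystemName='ActReason'/>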

The Standard Taxonomy column should indicate that the vocabulary comes from HL7 ActReason so you know which code set it comes from.  While this is part of the HL7 Version 3.0 standard, the specific version of the code set is NOT 3.0.  I'm assuming that NQF is using the HL7 2010 Normative Edition.  If so, the version should probably be 2010 or something like that.

Now that you know how to do the mapping, there's something else that should also have been done.  The spreadsheets note the copyright for CPT®, LOINC® and SNOMED-CT®, but should also acknowledge the copyright of HL7 on the Version 3 codes that they report, and obtain appropriate permissions to use them.

Thursday, April 21, 2011

Not Fully Baked

My daughters like Pineapple Upside-Down Cake, especially for breakfast.  The first time my wife made this, she made it up because she didn't have all the ingredients called for in her cookbook to make it from scratch.  She made it using yellow cake mix, fresh pineapple, brown sugar, and maraschino cherries, in a square cake pan.  It took about two hours to finish, instead of the 45 minutes that just mixing and baking the yellow cake would have.  We found out that it wasn't done when we went to cut and serve the cake and discovered that the inside was quite runny.  Time was short, so we decided to do something else for breakfast.  It still had promise, so we put the cake back in the oven instead of throwing it out.  We wound up having it with lunch.  It was yummy.

We aren't sure why it needed 1.5 hours in the oven, but there was a pretty good test we could use for done-ness.  If the toothpick came out clean, it was done.  The next time my wife made it, we planned for 1.5 hours of baking time, and that's about what it took.  Our experience in doing it once before was used to estimate how much time it would take to do it again.

We need new standards.  That is a key message behind the ONC Standards and Interoperability Framework initiatives.  We need mature standards.  That is a key message from Doug Fridsma to the HIT Standards FACA found in John Halamka's blog post of yesterday.  I'm reminded of a cartoon that John Moehrke tweeted yesterday where two parts of an organization are not in sync.

The Direct Project was planned to take six months and instead took nearly a year.  It's maturing quite rapidly compared to other efforts, but still, maturity takes time.  That's an obvious syllogism that seems to have been lost somewhat in the aggressive development activities going on in the Standards and Interoperability Framework.  Project planning 101:  If it's something new that's never been done before, it needs even more time than you probably think.

The CDA Consolidation project got 3 months to complete development of model driven tools and to use them to produce an implementation guide.  The project completed the required document on time, but as I dig into the results, it's still quite runny inside.  If this were a green field, the document would be great.  But it isn't, and the document doesn't outline what is different from the C32 Version 2.5 and C83 Version 2.0.  So until I analyze those changes, I won't know what must change in an implementation, or what can be reused.  That analysis of the new requirements for an implementation is a struggle for me, and if I'm having problems with it, I'm certain that others are either equally struggling or even just plain stymied.  If they treat this guide as a new set of requirements, it's simply back to the drawing board and time to start over.  It's only when we can reuse what we did before that it becomes better.  The current specification also did not meet many of the requirements that the Documentation Workgroup outlined at the start.  Nor, in my opinion, will it meet the success metrics outlined on the project page as it stands today.  The model data isn't delivered, nor are the schematrons, UML models, or tooling to support import, creation and validation.

As critical as I am, the CDA Consolidation project still has great promise.  I think we are going to need to put it back into the oven before it will be ready to serve.  It may or may not be ready in time for stage 2, but if we wait, it should still be yummy.

Wednesday, April 20, 2011

Physician Answer Syndrome

I was recently reading a post about physician use of clinical decision support titled "Do Decision Support Tools Make Docs Look Dumb?"  What I find interesting about this is that when I don't know something, I routinely Google it, look it up on the web, or check a particular document.

The volume of information that I am expected to be conversant with is incredibly large and includes: programming syntax details in at least 20 different programming languages, software library capabilities in over two dozen freely available or licensed libraries, the operation and automation of about a dozen different tools, how to administer two different database servers made by different vendors, how to administer a web server, three different UML modeling tools, how to administer an operating system, and how to implement about 100 different standards and profiles.  That's probably not as complicated as being a doctor, but it's still pretty complex.

I try to keep enough of that in my operating memory to do the day to day stuff, and to know where to go when it's no longer day-to-day.  When I don't know, I also know a bunch of others like me who I can ask.

I'm not ever embarrassed at having to look something up, but I'm also rarely ever put into a position of need by someone who has no understanding of my particular art.  Even on those occasions, I'm still not embarrassed.   It's fairly simple to explain that I needed to check something out, and to explain what I discovered.  The translation of the user's need into an appropriate query, and interpretation of the results of that query into something that the end-user can do is quite valuable.

There's an old joke I've heard a bunch of different ways applying to a repairman who is called to fix a problem.  He listens very carefully to the customer who gives quite a detailed description of the problem.  After about 10 minutes of listening, he says, "I know what your problem is," and then goes over and makes a very simple adjustment.  His customer is very happy, until the repairman issues the bill.  It's $55 for no more than 30 seconds of work.  The customer is outraged.  "This bill is way too much.  You cannot charge me $55 for less than a minute's work."  The repairman agrees.  "You're right.  Give me back the bill."  He scratches out the $55 and writes the customer a new bill and hands it back.
   Adjusting the Thingamabob:  $1.00
   Knowing that Adjusting the Thingamabob would fix it:  $54.00
The customer paid.

Another great story is physician related.  A doctor has an extremely difficult case.  He calls on a colleague who listens to him go on about the case.  After quite some time, his colleague motions to the doctor to wait, and then steps out of the room and returns with a book.  He reads the doctor the answer from the book, and then closes it and returns it to where he got it, returning to the room where the doctor is still sitting, now dumbfounded and outraged.  "You are a fraud!  You didn't know the answer, you simply read it to me from a book.  You are supposed to be the best in the field.  How can you do that?"  His colleague says "Follow me" and leads the doctor to a large room filled with books.  He waves at the room expansively and then turns to the doctor and says, "Now, which of these books holds the answer that you need?"

The point is, knowing how to solve the problem is what is important.  That includes being able to access the right information that helps you find the solution.  It shouldn't matter whether it's a healthcare problem, a computer problem, or a car problem.

If providers find demonstrating that skill to be embarrassing, perhaps they suffer from a variant of another disorder; perhaps we should call it "Physician Answer Syndrome".

Tuesday, April 19, 2011

On being a HealthIT Mento(ree)

Last night's #HITsm chat brought up the topic of resources for people entering the Health IT workforce.  One of the most amazing resources that any person can find is a mentor.  It's a particular and often peculiar relationship that benefits both parties.

One of the things that I learned late in my IT career is that Standards Development Organizations like HL7, profiling organizations like IHE, and HIT professional societies like HIMSS offer their members access to some of the most skilled professionals in Health IT.  And it was with a couple of those that I developed a mentor/mentoree relationship around the same time.

I wouldn't be in the role that I'm in today if it hadn't been for my mentors.  I've been fortunate enough to have several, including some that I went to school with, others that I worked with directly, and others who worked for a competitor (for almost as long as I've known them).  I've also been a mentor on several occasions that I can recall.  In at least one case the role flip-flopped back and forth several times.
  
From a Health IT perspective, I can tell you that I'm both proud and to some degree, even a wee bit jealous of those I've mentored.  The reason I'm jealous is because they are so much younger than I was when I got engaged in Healthcare IT and found my first mentor since my college years.  In at least one case, I managed to find the "Perfect Student", one who absorbed everything I had to teach and surpassed me quite rapidly.  I think he's now a VP of software development (I'd have to check LinkedIn).

Being a remote employee these days, it's very hard to find a mentor "in-house" as it were.  For those that aren't remote, it can still be challenging because you have limited access to skilled senior-level people. I've seen many organizations try to formalize the mentor/mentoree relationship in a program.  Most of them simply don't work because you cannot force it; there needs to be the right chemistry.  I suspect the programs that do work focus more energy on making sure the right opportunities arise to develop the relationship without trying to force it.  As a new member of the Health IT workforce, don't wait for a program to offer you one.  I encourage you to join an organization like HL7, IHE, or HIMSS and get engaged.  It is through those engagements that you will learn more (possibly more than you ever wanted to know) about the field that you've entered.  And if you are lucky (like I was), you may even find a mentor (or two) who can help you navigate the field.


Monday, April 18, 2011

Working around the ONC Certified Health IT Product List bug for those Attesting to Meaningful Use today

One of the EHRA members discovered that organizations using the ONC Certified Health IT Product List website may not be able to find products by name due to a bug in the web site.  The problem occurs when using the search by Product Name or search by Vendor Name.

The problem can be demonstrated by entering a Vendor Name or Product Name and then displaying the results for some Vendors and Products with more than one record.  Thanks to Laura Nasipak of eClinicalWorks who discovered this problem.  She has been in contact with ONC and they are working on a fix.  I have verified that this impacts multiple Vendors (including GE Healthcare) and Products (including Centricity EMR 9.5).

If you are unable to find your vendor or product, the workaround is to change the number of records to display from 25 (the default) to 200.  If you still cannot find your vendor, try browsing the entire list.

The two screenshots below illustrate the problem and workaround (click on the images to see them full-size).

The problem:

The workaround:

 



Friday, April 15, 2011

Some thoughts for ONC Head Dr. Mostashari on IHE XDS for HIE

For most of you, this post qualifies as "preaching to the choir".  In this particular case, I hope I'm preaching to the Bishop.

The other day the new ONC Coordinator, Dr. Farzad Mostashari, spoke briefly with members of the HIMSS EHRA at the all-member call.  One of the things he said struck me:

One thing will not change [at ONC], and that is Listening.


The reason it struck me was that we had been discussing that very same topic the night before on the #HITsm tweetchat.  I do know that Dr. Mostashari is listening to some of us.  My evidence is his recognition of me at HIMSS by my twitter handle.  I'm pretty certain that Dr. Mostashari is lurking on twitter (I have that from a very good source that works with him).  I even DM'ed my source to tell Farzad to check the twitter/chat stream.  He appears to have made that statement in a number of other places as well.

Another interesting comment he made was on ONC's emphasis.  They will be emphasizing patient engagement.  If you read further into the stream, you'll see that I made a similar suggestion.  Was I prescient, and great minds simply think alike, or was he listening and responding?  Either way, I'm happy to hear it.

We do have one point of disagreement, which is on how standards should be developed.  It's pretty clear from the way he answered my question about existing standards and innovation that he feels that if a standard is not gaining traction, it is because it isn't ready.  Readiness of a standard requires three things: a good standard, customer demand for what it provides, and the technology to implement it.  "If we build it they will come" is not a good reason to implement.  There must be demand for it.  For most organizations looking at adopting new features and standards, the demand needs to be apparent, or the vision something that can be promoted.  Sometimes promotion of that vision, especially if the technology is novel, needs time.  If you've been tracking healthcare technologies, you are probably familiar with the Gartner Hype Cycle.  Many items on the hype cycle spend at least a year in each phase -- at least when I look at Healthcare IT.

When we worked on what would become Cross Enterprise Document Sharing (XDS) for the joint IHE/HL7 demonstration, we saw a great deal of customer excitement about the possibilities, but the market barely even existed!  We knew it would take time.  Over the following years we've seen the explosion of eHealth and HIEs as a market, one that I heard being sized at over $12 billion annually (see previous link).  There have been a number of different technologies put into play, and a large number of vendors of all different sizes with a wide variety of products.  Much of the technology is proprietary and doesn't work with other solutions.

The XDS family of standards does stand above the rest.


XDS is just exiting the first half of the Hype Cycle (now climbing the slope). Not every solution takes off virally like Direct.  In fact, in healthcare IT, time frames have been quite different.  Technologies like the iPad (proprietary) and Direct (standards based) are exceptions rather than the rule.  I appreciate Dr. Mostashari's desire to change the adoption curve, and I too would like to see that change.  Where I remain concerned is with the assessment of readiness.  If we are nearly ready, but go back to reinvent the wheel, we could find ourselves back where we started.  We will also have lost a great deal of momentum and investment that the industry has built up around this particular solution.

So, if you are still listening Dr. Mostashari, I have a couple of follow-up questions to ask: What would readiness look like?  How would you assess existing HIE standards against it?  What are the gaps?  Let's have a dialogue.

As a quick reminder, the opinions represented in this blog are my own, and not that of my employer or the respective standards organizations that I work with.

Thursday, April 14, 2011

Industry Survey on Potential CORE Rule Opportunity Areas for EFT and ERA Transactions

This is the second of two announcements for today. While I don't usually focus on payment issues, this one is relevant for HITECH and PPACA programs, and CORE is looking for participation in this survey. Give them a few moments of your time if you can.

Keith


Dear Stakeholder,
Survey Background
On March 23, 2011, the National Committee on Vital and Health Statistics (NCVHS) submitted a recommendation to the Secretary of the Department of Health and Human Services (HHS) recommending CAQH CORE, in collaboration with NACHA, as the operating rule authoring entity for EFT and ERA transactions. The NCVHS letter also recommends that the Secretary require CAQH CORE, in collaboration with NACHA, to submit to NCVHS fully vetted EFT and ERA operating rules for consideration by the Committee by August 1, 2011.
CAQH CORE convened EFT and ERA Subgroups to develop operating rules for these transactions using the established CORE rules approval process. To assist the CORE participants and the industry in its decision making on EFT and ERA operating rules, CAQH CORE staff has outlined a list of potential rule opportunity areas based on:
  • Scope of operating rules as defined by ACA Section 1104
  • Current industry initiatives including state and regional efforts (e.g., Minnesota State Administrative Uniformity Committee, Washington State Healthcare Forum), work done by ASC X12 and/or WEDI, CAQH CORE and NACHA research, existing draft CORE Operating Rules
Both CORE participating and non-participating entities are asked to provide feedback on the CAQH CORE list of potential rule opportunity areas to focus the Subgroups’ rule development efforts, which will be shared with the CORE Rules Work Group (see evaluation criteria at bottom of this email).
Survey Instructions
All industry stakeholders are invited to follow the link to complete the Industry Survey on Potential CORE Rule Opportunity Areas for EFT and ERA Transactions and affirm your organization’s priorities on potential EFT and ERA operating rule areas. CAQH CORE has included a section for soliciting additional rule opportunity areas; should you add items, be sure to consider scope of operating rules and other criteria such as timing. The survey should take about 30 minutes to complete. Please coordinate survey response with EFT and ERA experts within your organization as appropriate.
NOTE: This survey is informative only and does not constitute an official CAQH CORE vote.
Survey responses are due by Monday, April 18th, 2011. Results of the survey will be shared on the EFT & ERA Subgroup calls and on the next CORE Town Hall call, which will be open to both CORE participating and non-participating entities. A PDF of the survey document is available via request; however survey responses must be submitted via the online survey tool. One submission per organization is required.
If you need clarification or have any questions please contact Erin Richter, CORE Senior Manager, at erichter@caqh.org. This information is also available online at www.caqh.org/EFT_ERASurvey.php
Thank you for your time.
The CORE Team
Evaluation Criteria
Given the scope for potential EFT and ERA operating rules as previously outlined, the following evaluation criteria can be applied to potential rule opportunity areas to identify key areas of focus for the Subgroups’ efforts.





IHE News: New Window opens on European eHealth - Read more at www.ihe-europe.net.

This is a day for announcements. This is the first of two, this one from IHE...



IHE Community,

New Window opens on European eHealth
IHE-Europe launches a newly re-designed website, www.ihe-europe.net, in perfect timing with the 11th Annual IHE-Europe Connectathon taking place this week in Pisa, Italy. The European Connectathon is also host to daily round-tables and educational seminars on specific health-care topics. The recent activities from IHE-Europe open a new window on the accelerating movement to electronic records across Europe.

Visit the new IHE Europe Website
The IHE-Europe website features links to five pan-European initiatives and nine national programs, and highlights success stories from regional implementations.  Plus, a new feature of the IHE-Europe website is the EU Projects section, devoted to the growing number of programs for coordinating and harmonizing electronic health practices.  According to Harm-Jan Wessels, Chair of IHE-Europe's Marketing Communications Committee, "the website is the result of a collaborative effort of eHealth experts from the various IHE national initiatives across Europe. This cooperation towards the common goal of eHealth interoperability is the foundation for IHE's international impact and success."  Read the full Press Release online.

IHE-Europe's 11th Annual Connectathon in Pisa, Italy- April 11-15, 2011
The IHE Connectathon is a 5-day event whose main purpose is testing the interoperability and connectivity of health-care IT systems.  For five days in a vast hall that is hard-wired for high-speed internet, more than 300 IT engineers come together in a casual but intensely concentrated setting to interconnect more than 100 systems and collaboratively solve problems.  Participants state that the cost of identifying and fixing a system bug during an IHE Connectathon is ten times less expensive and difficult than de-bugging a system once it is installed at a hospital or clinic.

Official Connectathon 2011 results and a press release will be sent out at the close of testing this week. For more information please visit the IHE-Europe website.


Wednesday, April 13, 2011

Comments on Goal V of the ONC HealthIT Strategic Plan

This is the last post of the series on the ONC Health IT Strategic plan.

I posted Comments on Goal IV of the ONC HealthIT Strategic Plan a few days ago. John Moehrke covered Goal III quite well in his post at the end of March.  My thoughts on Goal II appeared the same day, just before the ACO rule was announced, just in case you missed them.  And my comments on Goal I appeared the day before that.

Goal V: Achieve Rapid Learning and Technological Advancement
Overall, this goal is probably the furthest off and the weakest written.  There's too much focus on the Federal sphere and not enough outside of it.
A. Lead the creation of a learning health system to support quality, research, and public and population health
Strategy V.A.1: Establish an initial group of learning health system participants.
Two items jump out: "The learning health system’s success will depend in part on the participation of a select number of institutions that collect and use large amounts of health care data." and "Several federal organizations are already fostering learning systems scaled to their own agencies, and some of these agencies will be key initial members of this group."
While the Federal sector is clearly important, many other organizations should be able to contribute to this effort.  The qualification of "a select number of institutions that collect and use large amounts of health data" ignores smaller providers.  There are a number of initiatives that include these providers in efforts to collect and use large amounts of health data.  State HIE initiatives should be included, to see how HIE technologies can support the learning health system.  The FDA, CDC and ASPE efforts, while important, are rather narrowly focused, and ignore the potential of other, non-federal contributors to this effort.

Strategy V.A.2: Develop standards, policies, and technologies to connect individual participants within the learning health system.
Another quote in this section stands out: "In order to make the learning health system a reality on a national scale, standards, policies, and mutually reinforcing technologies must be put in place to ensure that data collected at the point of care can be accurately de-identified, aggregated, analyzed, and queried for population health studies and quality improvement."
I don't see the rationale.  The Learning Health System needs to be national in scope, but specific learning efforts could be regional in scale and more quickly executed upon.  There should be mechanisms to support both, and to foster communications between regional efforts.


Strategy V.A.3: Engage patients, providers, researchers, and institutions to exchange information through the learning health system.
I can see obvious benefits for researchers and institutions in participation.  But, what's in it for me as a patient?  When will I get lower cost or better care as a result in the near term?  What would a physician get from participation beyond a nebulous future beneficial result for their patients?  The learning health system needs to look at innovation in the arenas of both patient and provider engagement.  One quick thought that occurs to me on provider engagement: healthcare providers all have continuing education requirements.  Would there be a way to report results produced by the learning health system back to providers, or to encourage providers to engage the patients that the learning health system is seeking, with an educational component fulfilling some of those education requirements?  This is the kind of innovation that a learning health system needs to think about first.

B. Broaden the capacity of health IT through innovation and research

Strategy V.B.1: Liberate health data to enable health IT innovation.
This is a pretty good section.  Data liberation is just at the beginning stages as part of Meaningful Use stage 1, and will improve through subsequent stages.  Open government initiatives to consolidate data silos and make aggregated data accessible will certainly be valuable here.

Strategy V.B.2: Make targeted investments in health IT research.
Many of the investments discussed in this section have already been made.  There are certainly a few generously funded activities.  I'd be interested in seeing a broader approach to some of the research.  Which will produce better results?  Four $15M grants, or 120 $500,000 grants?  Innovation occurs in many ways, and a bigger net may gather more fish.

Strategy V.B.3: Employ government programs and services as test beds for innovative health IT.
Eating your own dog-food is a well-established principle used by many innovative IT organizations.

Strategy V.B.4: Monitor and promote industry innovation.
Another quote "The government facilitates and monitors the health IT industry and stays abreast of innovation’s impact on federal policies and programs in order to further promote innovation within the industry. Such activity is conducted primarily through panels, conferences, white papers, and similar outreach efforts."
Monitoring includes attending health IT industry activities, not just hosting them.  There is quite a bit of activity going on in standards organizations like HL7 and IHE that could use more input and feedback from ONC. Just as ONC needs to engage patients where they are, it should also be engaging providers and the health IT industry where they are.  Even though I'm pretty close to DC, attending ONC-sponsored activities is not something I can always fit into my travel budget.  But I do attend quite a few other industry events.  I think one of the challenges here is that ONC doesn't want to show favoritism to any one organization -- fine, spread the wealth like CDC and VHA do.  Everyone will benefit.

Strategy V.B.5: Provide clear direction to the health IT industry regarding government roles and policies for protecting individuals while not stifling innovation.
I'm curious about IOM rather than FDA leadership in this area, especially with respect to patient safety.  I'm not sure why IOM was chosen to lead this activity rather than the FDA, which has been addressing these issues for quite some time.

Tuesday, April 12, 2011

Effects of Efficient XML on HL7 Version 3

Diego Kaminker (co-chair of the HL7 Education workgroup) reminded me this morning that I'm overdue on a post about Efficient XML (EXI), a new standard recently recognized by the W3C.  The creation of this standard is pretty significant for several reasons.  XML (and its predecessor SGML) was designed as a text markup language suitable for expert human users to annotate electronic text.  Because of this design, XML has become very easy for software engineers to use for a variety of different tasks.  XML users (and SGML users before them) have become quite familiar with editing raw markup directly in text files.  But the most common use for XML and its predecessor has been software processing of the text and associated markup.

Uses for this markup abound and include display and formatting of text, book production (where I first encountered it), communication of software commands and responses to them (e.g., Web Services), structuring of tabular and hierarchical data, electronic commerce, et cetera.  One of the major complaints from the EDI world was that XML was notoriously costly for messaging in several ways:

  1. Data Size
    Converting data elements from a binary format to a text-based format increases the size of the data, often by an order of magnitude.  Using XML tags to delimit data elements, instead of position in a data field or simpler delimiters, adds quite a few more bytes to transmit (see the fragment comparison after this list).  End tags in XML are quite redundant -- useful for humans, but not all that useful for computers at a certain point in the production cycle.  These additions can add yet another order of magnitude to storage requirements.
    Impacts on Data Size affect:
    1. Storage Capacity
    2. Transmission Bandwidth
    3. Memory Utilization
  2. Processing/Marshalling
    Converting numbers, dates, times and similar data types between binary and text representations requires computing time.  Dealing with all those start and end tags, and making parsing decisions on the text, also requires computing time.  The compute time spent on these tasks could be better spent on OTHER things, especially given that parsed XML has similar representations on many different platforms.
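To make the size difference concrete, here's an invented comparison.  A patient's demographics in an HL7 Version 2-style delimited segment (all values made up) take about 40 bytes:

    PID|||12345||Everyman^Adam||19620115|M

The same content in a Version 3-style XML fragment runs well over 200 bytes, before counting the namespaces and wrapper elements a real CDA entry would require:

    <patient>
      <id root="2.16.840.1.113883.19.5" extension="12345"/>
      <name><given>Adam</given><family>Everyman</family></name>
      <administrativeGenderCode code="M"/>
      <birthTime value="19620115"/>
    </patient>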
These problems make it difficult for devices with limited resources to efficiently use XML for computation or communication, even though the format has numerous other advantages for software development.  What are some of these benefits?
  1. Ready access to tools which make content visible and editable.  Because XML is fundamentally text, any text editor can be used to open it up and edit it.  You may not recall a time when this was a problem for other data, but I do.  
  2. Standards AND tools for describing the content allowed to be in an XML document.
  3. Standards AND tools  to translate the content from one format to another.
  4. Engineer (which is not necessarily the same as human) readability.
  5. Implementability ... one of the guiding principles of the XML work was a specific complexity goal: an XML parser should be implementable as a semester-long college senior project.
These benefits, along with Moore's-law trends in storage, network speed, memory size and processor speed, have meant that the processing and data size issues have not significantly interfered with XML's dominance as a data syntax.  But small devices and high-volume applications have still had problems.  In the UK, one of the reported problems in adopting HL7 Version 3 was the verbosity of the V3 XML syntax.  In that particular case, the volume is on the order of hundreds of millions of messages per day.  The computation resources needed to parse the XML were significant, as was the bandwidth.

The HL7 Implementation Technology Specification workgroup began development of an ITS that would simplify (flatten) the XML in 2008.  Those efforts began before I had even started this blog, so I don't have a post on how I felt about that particular effort.  I can tell you I was quite negative on that ballot, and it didn't go forward.  I ran a little test, and found that several existing models were only marginally improved (<10%) by the new algorithm.  I believe I argued successfully at the time that the right way to approach the problem was lower in the stack, rather than at the XML ITS layer.

This argument applies not just to HL7 XML messaging, but to any form of XML processing.  What the application deals with is the XML Infoset, typically stored using the XML Document Object Model (DOM).  What is communicated to the application is an XML document.  Between communication and processing is a layer which translates the XML document from the XML syntax into the XML Infoset.  That's where EXI has a huge impact.  By changing the format from text-based content to one better able to address the EXI requirements, an EXI implementation is better able to perform the translation back and forth between these layers, and it does so with a much reduced footprint from both a storage and a processing perspective.  Diego reports that in his brief experiment, a 60KB CDA document compressed at a ratio of 20:1, which nearly makes up for the 1-2 orders of magnitude size increase.  Diego plans on performing other tests to evaluate performance.
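For anyone who wants to try reproducing that kind of measurement, here's a minimal sketch using EXIficient, one of the open source Java implementations.  Treat the package and class names as assumptions -- they're as I recall them from that project's examples -- and the input file name is hypothetical:

    // Sketch: compare the size of an XML document to its EXI encoding.
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import java.io.FileInputStream;

    import javax.xml.parsers.SAXParserFactory;

    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;

    import com.siemens.ct.exi.EXIFactory;
    import com.siemens.ct.exi.api.sax.EXIResult;
    import com.siemens.ct.exi.helpers.DefaultEXIFactory;

    public class ExiSizeTest {
        public static void main(String[] args) throws Exception {
            File xml = new File("cda-sample.xml");  // hypothetical CDA document

            // Default (schema-less) options; schema-informed encoding
            // should do even better on a CDA document.
            EXIFactory exiFactory = DefaultEXIFactory.newInstance();

            // Route SAX events from an ordinary parse of the XML file
            // into the EXI encoder.
            ByteArrayOutputStream exiBytes = new ByteArrayOutputStream();
            EXIResult exiResult = new EXIResult(exiFactory);
            exiResult.setOutputStream(exiBytes);

            SAXParserFactory spf = SAXParserFactory.newInstance();
            spf.setNamespaceAware(true);
            XMLReader reader = spf.newSAXParser().getXMLReader();
            reader.setContentHandler(exiResult.getHandler());
            reader.parse(new InputSource(new FileInputStream(xml)));

            System.out.printf("XML: %d bytes, EXI: %d bytes, ratio %.1f:1%n",
                    xml.length(), exiBytes.size(),
                    (double) xml.length() / exiBytes.size());
        }
    }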

What I have been able to determine from the W3C bake-off comparing the various technologies that were considered, and from the vendor's website for the technology that "won", is that you can also expect somewhere between one and two orders of magnitude improvement in processing (parsing) speed.

What happens next?  Now that EXI has become a standard, people are going to want to start implementing it in their products.  There are already two open source implementations and one commercial implementation of the standard available. I think you can count on EXI being incorporated into your favorite XML parser pretty quickly.  Java implementations will probably be available sooner than C++/.Net ones.  Once web servers and browsers start supporting this technology, it will be interesting to see what it does to the browsing experience on sites that support it and exchange XML or XHTML.

One of the nice features of the EXI standard is that you can quite readily enable others who communicate with you to take advantage of it.  It plugs into the communications stack at the content encoder/decoder. At least one of the commercial products out there EXI-enables your protocol stack; you won't get all of the benefits of using EXI, but at least your communication partners could.  For XML-based Web Services, this is a no-brainer feature to support.  Just make sure your client and server technologies support EXI on the stack, and support use of x-exi as an encoding in the Content-Encoding and Accept-Encoding headers.
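As a sketch of what that negotiation looks like from the client side, using nothing but java.net -- the URL is hypothetical, and check what token your EXI-enabled stack actually registers before relying on x-exi:

    // Sketch: advertise EXI support via HTTP content negotiation.
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ExiNegotiation {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.org/cda/documents/123");  // hypothetical
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Tell the server we can decode EXI; a server that can't
            // encode it will simply ignore this and send plain XML.
            conn.setRequestProperty("Accept-Encoding", "x-exi");

            try (InputStream in = conn.getInputStream()) {
                if ("x-exi".equals(conn.getContentEncoding())) {
                    // hand the stream to an EXI decoder
                } else {
                    // fall back to an ordinary XML parser
                }
            }
        }
    }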


When I was Your Age

My youngest daughter (8 years old, soon to be 9) was asked by her teacher (at E.G. Lyons School in Randolph, MA) to write two paragraphs about what was different in her parents' lives.  Since I was working, she interviewed my wife, but I thought it would be interesting for this community to hear and think about our own responses.  Sort of a reverse Beloit College Mindset List.
  1. We had just got our first color TV, and we got 6 channels on it.  We didn't have to worry about hooking it up to a VCR or DVD player, Satellite Dish, or Cable because those things hadn't been invented yet.
  2. Telephone handsets had wires that connected them to the phone, and that was wired to the wall.  There were only two styles and a limited selection of colors.  We didn't have cell-phones.  Some families still shared phones with their neighbors.
  3. Maps were always on paper.  There was no such thing as a GPS.  In fact, the satellites used for TV stations (not home TVs) had barely been in use for more than a couple of years. The global positioning satellites wouldn't exist for another 20 years.
  4. Kids didn't have e-mail addresses.  In fact, even adults didn't have e-mail addresses.  The whole Internet thing, with e-mail, web-pages, Twitter, Facebook, Google and all that other stuff was at least 20 years away. People did not have computers, but some companies did.  I had to wait until I was twice your age to get one.  Mine had 16K of RAM and cost 3 times as much as your netbook.
  5. Candy was pretty cheap.  A candy bar cost a quarter and there was penny-candy that really cost a penny.
  6. Almost all recyclables went into the trash with all the other garbage (garbage disposals weren't popular, even though they had been around for decades).  Most recycling focused on aluminum cans and newspapers (which I collected from people on the last Saturday of the month as a way to raise money for Boy Scouts).
  7. Oh yeah, we got the newspaper daily, and the Sunday edition had color comics.  Most news came from the paper, the TV or the radio: not Twitter, the web, Google and Facebook, or Fox, CNN and cable.
  8. The Walkman (you'd think it was an old-fashioned iPod) hadn't even been invented yet.  We had portable radios for our music (if we were lucky), and we had to go to record stores (you know, those old-fashioned CDs) to buy recordings of music.  Cassette tapes were also available.
  9. My mother paid $0.55 for a gallon of gas.  That was during the first OIL crisis.  Before that she paid less than $0.40 for it. 
  10. My job didn't exist.  In fact, while the whole idea of the Computerized Health Record had been "invented" by a guy named Larry Weed before I was born, they still aren't used in many doctors' offices (although they are in yours and mine).
Some of the things you still have:
  1. Instant Hot Cocoa, Frosted Lucky Charms, Cap'n Crunch, Pop-Tarts, peanut-butter and jelly and Wonder Bread, Marshmallow Peeps and Snickers bars all existed when I was little.
  2. Milk still gets delivered by the milkman and comes from local cows.
  3. Skateboards.
  4. Motocross bicycles.
  5. Libraries and Museums.
Looking back on this, what will you be able to tell your children a few decades from now?
  1. Displays used to be heavy and expensive, instead of thin, lightweight, and roll-away.  Everything used to have its own display instead of working with any display in the house.  People had lots of computers because everything needed a dedicated computer instead of using the cloud.  My dad at one point had a computer for work, one for home, one for fun, a tablet, a cell-phone and a GPS, and a garage full of old stuff.  It was really hard to make computers understand people.  And we had to have PRINTERS!?!  Can you imagine all the paper they wasted?
  2. Light used to come from bulbs that had to be replaced every few months instead of being permanently installed and having to be replaced only when something went wrong.  And it took 100 times as much power to generate the same amount of light.
  3. I couldn't get my own inbox (e-mail address) because I was too little.
  4. A phone call overseas was expensive, and a video call not something most people did. 
  5. They used to have to pick up the trash once a week because we threw away so much stuff!
  6. Houses had to be connected to electricity by wires instead of mostly making their own.
  7. You used to have to go to a special office to get shots and stuff to keep you healthy.  The people that worked there saw you a few times a year and a lot of them still kept track of everything they knew about you on paper instead of just beaming it to you or getting you to beam it to them.  Can you imagine having to go to an office for the doctor to look at you?  Didn't they even know about telepresence?
  8. We had these things called cars that could hold 4 and sometimes even 8 people that we drove around in to get places -- and it used GASOLINE!
  9. They used to have these places called libraries that stored tons and tons of books on PAPER!
  10. Most people never left their own country to travel.  It was too expensive and difficult and there were all these rules about places you could go and wars and stuff.
And for your kids:
  1. Instant Cocoa, Lucky Charms, Pop-Tarts, PBJ and all that stuff will still exist.
  2. Their milk will still come from the milkman and local cows.
  3. Skateboards will still exist.
  4. Bicycles will still be around, but about half the weight and cost.
  5. The library will be a museum.