Tuesday, May 31, 2011

When I use a Word

'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.'  -- Lewis Carroll, Through the Looking-Glass

On the way back from a friend's house this weekend, my daughter reported a dream she had about Humpty Dumpty fussing at her about grammar.  It's my fault, because I made her read the Humpty Dumpty chapter after she complained to me about the way she was using a particular word to mean a certain thing.

Words have their meaning established by the communities that use them.  You can readily identify the communities someone belongs to by the language they use.  If you say RIM, R-MIM, DAM, or Storyboard in the standards space, I can tell you've been involved in HL7 efforts.  On the other hand, actor, transaction and profile are keywords in IHE.  Other keywords identify other standards developing organizations, and within each of those organizations there are also sub-languages.

Sometimes we use terms in different SDOs with overlapping and often conflicting meanings.  Standards developers (and I include myself in that group) are also the ultimate pedants.  We pay scrupulous attention to the details of our definitions.  And when our definitions conflict with those of others, we argue vociferously.  This is in part because we use language as a norming process.  Agreeing upon a common language is one of the ways we develop the community. Using terms outside of the ways agreed upon in the community violates the rules of the group.

There are two things that we, as standards developers, must remember.
  1. Terms are more often defined by their use, not by an act of authority.
  2. The people we are communicating with (our customers) are not necessarily members of our community.


So, when defining terms, act with care.  Don't stretch a term so far that it breaks.  Use commonly accepted definitions where they exist, instead of defining new terms.  The precision added by creating a new term fails to support us when the new term simply confuses our readers.

If this post sounds like a rehash of something I've already said, it's because it is, right down to the opening quote.  Well, it is worth saying again.

   Keith

P.S.  I note also that I've used that same quote on two other occasions.  I must like it.

Friday, May 27, 2011

Updates to the ePrescribing Rule

Today, CMS published the updates to the ePrescribing rule that make it more consistent with the Meaningful Use incentives and standards. These are "on display" in the Federal Register and will be officially published shortly.  In brief, what this rule does is indicate that a certified EHR under Meaningful Use meets the requirements for the ePrescribing rule.  The rule also adds some hardship exceptions.

The critical text is this [emphasis added by me], which can be found at the bottom of page 12 and the top of page 13:
... for purposes of the 2011 eRx measure certified EHR technology must comply with the Part D standards for the electronic transmission of prescriptions at 42 CFR 423.160(b)(2)(ii). This proposed requirement is consistent with the ONC certification requirements at 45 CFR 170.304(b) and 170.205(b)(1) and (2). With this proposed change to the 2011 eRx measure, eligible professionals (including those in group practices) that are participating in the eRx Incentive Program would have the option of adopting either a qualified eRx system that performs the four functionalities previously discussed or certified EHR technology as defined at 42 CFR 495.4 and 45 CFR 170.102. Thus, under this proposal, certified EHR technology would be recognized as a qualified system under the revised eRx quality measure regardless of whether the certified EHR technology has all four of the functionalities previously described. Because the proposed change to the 2011 eRx measure, if finalized, would not be effective until the effective date of a subsequent final rule, this change would only be effective for the remainder of the reporting periods in CY 2011 for the 2011 eRx incentive and the 2013 eRx payment adjustment. ...

A copy of the document with bookmarks included (although not really necessary in this case) can be downloaded from Google Docs, or viewed in the frame below.

Thursday, May 26, 2011

Restricting the Use of a Standard

Restricting the use of a standard is a topic that's come up multiple times in a variety of settings:

Use of GreenCDA "on the wire" was a topic at the HL7 Working Group Meeting, and has also been discussed during various HL7 calls.  On one of the HL7 mailing lists, the topic also came up regarding use of the HL7 Version 2 XML standard (Yes, there is a standard for that) "on the wire".  I happen to like that one for use with local transformations from V2 to CDA and V3 messaging.  It makes a nice bridge between the two that allows me to use off-the-shelf tools to support a variety of features.

Arguments for and against use on the wire are sometimes interesting.  However, the fact of the matter is that if a standard is useful, people will always find ways to use it that weren't thought of when the standard was created.  This is one way that standards support innovation.  One standard that was meant for exchange of information but not for storage is now widely used for storage:  the HL7 Clinical Document Architecture.

If the standard makes it easy to do something (which is what standards should do), should we try to stop people from doing it?  I don't think so. What we should do, however, is consider how this new use would impact other systems, and how it might or might not be appropriate given conditions that weren't considered when the standard was created.  The complaint about V2 XML on the wire is similar to the issues about GreenCDA "on the wire":  it creates a compatibility problem.  A system that only uses ER7 encoding for HL7 V2 cannot easily communicate with one that uses only the XML Schemas for V2.  You need a bridge between the two syntaxes.

There are other things to look out for.  A standard designed for an inpatient setting may not work when it needs to be used across enterprises, or in an outpatient setting.  Implementers need to understand these nuances.  A common example I use is the CCD.  It's not a hammer suitable for all transfer use cases, something we knew before the fact but which became increasingly important afterwards.

As one observer mentioned, converting from ER7 to XML "is trivial".  I've actually not found it completely trivial, except for simple cases.  I could write a one-off each time I needed it, but why should I have to?  It wastes my time and annoys my customers.  This is a trivial enough problem that HL7 should probably offer an open source solution for it.  That's an interesting consideration as I think about the variety of ways that HL7 can provide value in the future.
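To see why it's only trivial for the simple cases, consider a naive transform.  Below is a minimal sketch (assuming XSLT 2.0 for tokenize(), and a hypothetical $line parameter holding a single segment); everything it ignores -- components, repetitions, escape sequences, and MSH's special field numbering -- is exactly where the one-off solutions start to multiply:

<!-- A sketch, not HL7's normative v2.xml mapping: split one ER7 segment
     into v2.xml-style elements. Assumes XSLT 2.0. -->
<xsl:template name="segment-to-xml">
  <xsl:param name="line"/> <!-- e.g. "PID|1||12345^^^MRN||Smith^John" -->
  <xsl:variable name="seg" select="substring($line, 1, 3)"/>
  <xsl:element name="{$seg}">
    <!-- split on the field separator; position() yields the field number -->
    <xsl:for-each select="tokenize(substring($line, 5), '\|')">
      <xsl:if test=". != ''">
        <xsl:element name="{concat($seg, '.', position())}">
          <xsl:value-of select="."/>
        </xsl:element>
      </xsl:if>
    </xsl:for-each>
  </xsl:element>
</xsl:template>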

IHE Laboratory Technical Framework Volumes Published

IHE Community,

The IHE Laboratory Technical Committee has published revision 3 of the following Technical Framework volumes as of May 19, 2011:
  • Vol. 1 (Lab TF-1): Profiles
  • Vol. 2 (Lab TF-2): Transactions
  • Vol. 3 (Lab TF-3): Content
The following Technical Framework volume has been deprecated as of May 19, 2011:
  • Vol. 4 (Lab TF-4): LOINC Terminology
The documents are available for download at http://www.ihe.net/Technical_Framework/index.cfm.


Tuesday, May 24, 2011

Seamless Interoperability

The IOM just released a 273-page report advising that what providers need is seamless interoperability.  To put "Seamless Interoperability" into recent words used by Bob Dolin, Chair of HL7:  "It just works!"

What would it mean for two Health IT systems to be seamlessly interoperable?

  1. Would the two systems automatically recognize each other's presence and start negotiating how they would exchange data?  Your printer and operating system can do that today for most systems.  If I install CPOE and an LIS already exists, how much work should be required to make the two aware of each other?
  2. How much should be configurable?  When I install a printer driver, I sometimes get a choice of printer control languages (more standards).  I can use HP PCL or PostScript.  Do I get to choose which, or does it automatically support both?  In this example, do I get to choose between HL7 V2.3.1 and 2.5.1, or are both supported?
  3. In HL7, there is a set of messages for dealing with labs that supports a variety of workflows.  What workflows should we support?  To change a lab order, do we cancel the old one and submit a new one, or change the original order?  Who is allowed to change the order?  Can the lab do it, or only the ordering provider?
  4. What vocabulary is used to identify the labs being ordered?  I've argued for quite some time that LOINC can do the job, but others insist on using proprietary vocabulary.  How does that get mapped back to the EHR so that we can track results over time?
  5. What can be ordered? And what results will that order generate?

Some of these decisions are made locally by the provider, based on their organization's policies and workflows.  Others are negotiated with someone else.  Some are made by the supplier.  You cannot order something from a supplier that they don't provide.

Not all of this is about interoperability standards and technology.  Some of it has to do with making business practices work seamlessly as well.  While people complain about all of the optionality in HL7, some of it (not all of it mind you, but a good portion of it) is there because these business processes exist and need to be supported in some part of the world.

It would be an interesting exercise to see how willing organizations are to standardize (and potentially change) their existing business practices in order to support seamless interoperability.

Monday, May 23, 2011

Spinning the RHIO Story

A recent publication in the Annals of Internal Medicine has gotten quite a bit of attention in Health IT related media.
A quick summary of the reporting tells me that few if any of the reporters actually read the report, since most only reported data available in the abstract.  I won't bother repeating the abstract; simply read it for yourself.

Age of the data?  The survey data was gathered between December 2009 and March of 2010.  Where are we now?  May of 2011.  This study is based on data more than a year old.

On the RHIO criteria for Meaningful Use alluded to in many of the article titles:  There aren't any, at least not in the Meaningful Use Standards and Certification Rule or in the Incentives Rule.  That's right.  There are NO criteria specified for what an HIE / RHIO must do under any of these rules.  Nor are there any incentives being given to RHIOs/HIEs by CMS.  Yes, there were public funds made available by ONC, about half a billion dollars.  The study clearly acknowledges this, even if the reporting on it does not.

The next is the spin on financial viability.  Of 75 organizations, 1/3 (25 / 75 = 1/3) are financially viable, and another 20 expect to be so in the future.  Of startups with a product for sale in a specific sector, 1/3 are financially viable?  That sounds like positive news to me, not negative -- where do I invest?

Over 40% (75 / 179) of new startups in the Health IT sector have a product in the market?  That also sounds positive to me.

In that group of new startups, around 17% (13 / 75) meet the current market need (Meaningful Use)?  Also positive, even if none meet the Comprehensive criteria that their expert panel put together.  New markets rarely produce the "perfect" product in early years.

Some other factoids:  The number of RHIOs continues to increase over time when compared to similar studies by the authors in mid 2008 and early 2007. The rate of increase seems to be slowing somewhat, but the number of RHIOs now defunct has been shrinking over that same time period; it is now less than half of its 2007 number.   There are a couple of confounding factors that could influence the observed rate of growth, including the effect of ONC State HIE grants and the Meaningful Use regulation on the industry, but the study doesn't address these.

What were the biggest challenges for RHIOs according to the study?  In the core set, reporting quality measures, which was supported by slightly less than a quarter of the operational RHIOs.  In the menu set, reporting to public health (immunizations, syndromic surveillance, and electronic laboratory reporting) as an aggregate was the weakest; a quarter of operational RHIOs were able to support it.  A critical observation that I would make here is that public health needs to get more engaged with RHIO initiatives.

The strengths?  Exchange of summary patient data (core) and laboratory reporting (menu set) were each supported by more than half of the operational RHIOs.

The funniest statement in the article?  "Most EHRs do not automatically enable the types of HIE required to achieve meaningful use..."  Have the authors looked here?  Or perhaps here and here?


The authors' conclusion? "These findings call into question whether RHIOs in their current form can be self-sustaining and effective in helping U.S. physicians and hospitals engage in robust HIE to improve the quality and efficiency of care."

Yes, it does "call the question", but the study doesn't answer it, and it wasn't designed to do so.   Compare these findings to those of any other nascent industry that started in the last century; I think you'll find that we are in better shape than some, and worse than others.  The authors would be well advised to compare this new industry with others.  This is my biggest issue with the study.  I didn't find it badly done, as some other studies on Health IT were, just under-analyzed and very poorly reported.


If we want to understand how well RHIOs are doing, we should compare them to other new businesses that have been "invented" or even "re-invented" in the past few decades. One example would be alternative fuels or electric/hybrid cars -- both recipients of government incentives.  We could compare this initiative to the failure of similar initiatives in the past (see this article on CHINs from 1994), or to others being done internationally (e.g., Canada Health Infoway, or the NHS program).  I'm sure others can find similar examples.  Doing so would provide some analysis of how we are faring compared to other industries and initiatives.  It also might set some realistic expectations about how soon we can expect results and what those results might look like.

Friday, May 20, 2011

Dynamic Behaviors Associated with Static Documents

One of the topics of discussion today in SDWG was issues around processing the CDA header. The discussion revolved around the complexity of the header and the requirements to understand it. But this is not about static document content. Instead it is about dynamic application behavior, or more specifically, application functional requirements.

Having identified the reality, and being able to point to four examples (Senders, Receivers, Displayers and Processors), we will now start looking at the responsibilities of each. This will be relatively easy because we've described sender and receiver responsibilities in the same way in more than a dozen guides, as well as display requirements. We also came up with two different processing examples that show that there is NOT a common set of requirements on the processing side.

Here are some examples of requirements that have been adopted in the past.

Originator Responsibilities
An originator can apply a template identifier (templateId) to assert conformance with a particular template. In the most general forms of CDA exchange, an originator need not apply a templateId for every template that an object in an instance document conforms to. This implementation guide asserts when templateIds are required for conformance.

Recipient Responsibilities
A recipient may reject an instance that does not contain a particular templateId (e.g., a recipient looking to receive only CCD documents can reject an instance without the appropriate templateId). A recipient may process objects in an instance document that do not contain a templateId (e.g., a recipient can process entries that contain Observation acts within a Problems section, even if the entries do not have templateIds).

Display Requirements
Good practice would recommend that the following be present whenever the document is viewed:
• Document title and document dates
• Service and encounter types, and date ranges as appropriate
• Names of all persons along with their roles, participations, participation date ranges, identifiers, address, and telecommunications information
• Names of selected organizations along with their roles, participations, participation date ranges, identifiers, address, and telecommunications information
• Date of birth for recordTarget(s)
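
Going back to the recipient responsibilities above, here is a sketch (not text from any guide, and the processing modes are hypothetical) of how a receiver might key its behavior off the CCD document-level templateId, falling back to generic CDA handling otherwise:

<!-- A sketch only: accept as CCD, or fall back to generic CDA processing.
     Assumes xmlns:cda="urn:hl7-org:v3". -->
<xsl:template match="/cda:ClinicalDocument">
  <xsl:choose>
    <xsl:when test="cda:templateId/@root = '2.16.840.1.113883.10.20.1'">
      <!-- the instance asserts conformance to CCD: process it as one -->
      <xsl:apply-templates select="cda:component" mode="ccd"/>
    </xsl:when>
    <xsl:otherwise>
      <!-- no CCD templateId: a receiver may reject, or process generically -->
      <xsl:apply-templates select="cda:component" mode="generic-cda"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>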

Robert's Rules

HL7, IHE, ASTM and many other deliberative bodies are supposed to be run according to Robert's Rules of Order. This is a collection of rules originally created by Henry M. Robert in the late 19th Century to enable deliberative bodies to be efficiently run.

In fact, these organizations follow Robert's Rules in spirit, rather than to the letter, unless debate becomes unwieldy.  It is at that point that we fall back upon Robert's Rules.  This week I had to fall back upon them twice to get what I felt was needed.  Fortunately, there are several online sources from which I was immediately able to retrieve what I needed; I read them more thoroughly after the first meeting where I needed them.  I just downloaded a copy of the Newly Revised edition In Brief as an eBook.  Because knowledge of Robert's Rules helped me, I thought I'd share a bit of what I've learned from it, and compare what is in Robert's Rules to the way we do business today.  Note that this is not at all an objection to the way we do business today.  The advice in Robert's Rules is that the chair should be no more formal than necessary to ensure fair debate.  Our practices in HL7 and IHE are fair, and should a "Point of Order" arise, we fall back upon the rules to the depth necessary to address the issue.

Robert's Rules are rather formal.  For example, in order to be recognized by the chair (to be able to speak), one must stand and address the chair, wait to be acknowledged, and then speak, sitting down when done.  Unless otherwise specified by the body, each person gets no more than 10 minutes, and can only speak twice.

We certainly don't do that.  Often the discussion just happens as an orderly dialogue between several members, just as if you were in a moderately large group of people talking about a particular topic.  In larger meetings, it is pretty common to raise our hands in order to get an opportunity to speak, and members are recognized in order (as best as we can track it), rather than on alternating sides as suggested by Robert's Rules.  Periodically the presiding chair will let others in the room know the order in which speakers will go next.  In more formal meetings, such as with the Board or the International Council, hand raising is replaced by turning your name-card up.  In board meetings, the board members also have microphones with a button that lights up and activates the microphone, so that you can identify who is speaking (which is rarely necessary given how well we know each other).  The limit of two speaking opportunities is widely ignored, and members rarely speak for 10 minutes.  The usual length of a member's point is a minute or two ... but some members have more wind than others (I happen to be one of the latter).  It is customary in many groups to honor the rule that everyone not already recognized speaks before a prior speaker is recognized again.

While a member is speaking, it is not uncommon for another to ask a question, or more formally to request a point of information or clarification, and then ask the question, to which the speaker, or the other member to whom the question is referred, responds.  The member speaking then resumes.  Under Robert's Rules, the question is directed through the chair to the person needing to respond.  In most cases it is simply directed to the right person, usually the speaker, and sometimes a previous speaker or other authority.

When a motion is made, it becomes the topic of discussion.  Depending upon the size of the group, and the size of the motion, it may first be written down, but is usually expressed verbally.  The appointed secretary writes down the motion.

Motions in HL7 or IHE are often modified by "friendly amendments", in which the amender suggests new wording that makes the intent more clear.  If the mover agrees, the amendment is adopted; if not, it fails, in which case the speaker may turn the friendly amendment into a formal amendment.  If a formal amendment is seconded, the discussion moves on to the amendment rather than the main motion.  The amendment can also be amended (usually in the friendly way, and very rarely by vote).

Under Robert's Rules, there is no such thing as a friendly amendment.  All amendments are treated the same way.  The members must deal with a "friendly amendment" as with all other amendments. It requires a second, and a vote on the amendment, and then discussion and vote on the new motion begins.

An acknowledged speaker may often "call the question".  This is a privileged move.  Some committees will, if there is agreement, simply move on to the vote.  Others will vote on calling the question.  Under Robert's Rules this is a motion for the "Previous Question".  It requires a second, and since it closes off debate, it requires a 2/3 majority to pass.  Calling the question does not allow for debate as to whether debate itself is ready to end.  If it passes, then voting on the previous question begins without further debate. If it fails, further discussion is still possible.

A point of personal privilege is a request by a member to the chair to address an issue that prevents the body from deliberating appropriately.  These are rare.  The most common way this is done is by requesting the chair or other party to "use the mike", or to repeat something that could not be heard.  The term personal privilege is rarely used.

Occasionally a member will call for a point of order.  This is often a statement, sometimes worded as a request, as to whether appropriate procedure is being followed, and sometimes is used to point out that the discussion has moved away from the main motion.

While a motion to adjourn is almost always in order, it is rarely used as a parliamentary tactic to suspend discussion or a vote. Occasionally, the chair will remind us of other pending business that will fit into the two-minute time frame that remains before the cookies start disappearing.  In today's meeting, a motion to adjourn was made and seconded in jest.  That calls for an immediate vote under Robert's Rules.  We didn't bother -- first, the jest was clear, and second, the two minutes the chair took to clarify when we would next meet and to organize other discussions were clearly part of the wrap-up.  Usually, on a motion to adjourn (especially when called for at the scheduled time), we vote with our feet.

Thursday, May 19, 2011

What is a valid C32 for Meaningful Use

This was a hot topic as a result of a ballot comment on the CDA Consolidation guide. The question was: Why is a procedures section or results section required if you have none? The suggestion was that this was required under Meaningful Use, based on the requirements implied by the conformance tests from NIST. But the conformance tests exist to show that the document creator complies with the certification requirements, not that the instance meets the requirements of the stated standards and implementation guides. When in doubt, read the rules.

So, will every C32 sent under meaningful use have results and procedures? I would assert not. Will every certified product be able to produce those where they may be required? Yes.

That's the difference between a functionally valid EHR that produces C32 documents and a valid C32 instance.

Retiring the Vest for TheWalkingGallery

Those of you who know me in person have seen "The Vest" with the rising Phoenix on the back. It was selected by my 4-year-old daughter 5 years ago at a fair. It's got one working pocket now, and is worn to pieces. Something better is coming, and I'm quite excited.

Soon I will be participating in The Walking Gallery by Regina Holliday. It is the first event in Health Innovation Week in DC: http://www.tedeytan.com/2011/05/06/8193

To see some of what will be on display, check out:
http://reginaholliday.blogspot.com/2011/05/lygeias-jacket-rosetta-stone.html
http://reginaholliday.blogspot.com/2011/05/this-is-how-movement-starts-beccas.html
http://reginaholliday.blogspot.com/2011/05/shoot-moon-ted-eytans-jacket.html
http://reginaholliday.blogspot.com/2011/05/media-matters-carolyns-jacket.html
http://reginaholliday.blogspot.com/2011/05/collins-jacket-curse-of-black-spot.html
http://reginaholliday.blogspot.com/2011/05/little-miss-type-personality-reginas.html

How this came about is another bit of #HITsm magic. Today was the first #ONCchat tweetchat. @TheGr8Chalupa, @fasal_q and I traded some banter on the chat that had to do with my writing and Regina's art. Regina asked later if I'd walk and it took me about 10 minutes to decide to do it.

So, I've been asked to promise that I'll wear this art to other conferences and events. I MOST certainly will. Every HL7, IHE and S&I Framework meeting I'm at. The Interoperability Showcase. The Public Health Informatics Conference. AMIA if I get there this year.

I've also been asked for a story. I have the first part, which is mine, but there is more that I want to do. I've asked a few friends for their stories also. I'll boil them down to the essentials because this is about a community of volunteers who are patients and patient advocates. This is a story I've long wanted to present, and a blog post that I've written several times but was never quite ready to publish because writing it brings me to tears sometimes.

It starts here:

I've spent many a night up till 3:00 am writing Healthcare IT standards and implementation guides.  Some of these are now used in the hospital 17 miles from my home.  Soon my doctor, who has been using an EHR, will be able to give me an electronic copy of my health record in a standard (the HITSP C32) that I and many others edited for HITSP, based on a guide (XPHR) that I worked on with many others in IHE, building from the CCD guide that I and thirteen others edited in HL7, building upon the HL7 CDA standard, which I also helped develop.  That work is now being updated in the S&I Framework projects from ONC.

My dream is that nobody will need to spend three days in the hospital receiving inadequate care because medical records closes at 5:00 on Friday.  My step-father did, because my mother didn't carry his X-rays in her 3-inch-thick folder of his medical data.  My mother-in-law did too, fevered and raving, antibiotics not working, and doctors confounded until they discovered, when records arrived on Monday morning, that she'd just completed chemo the previous week.

I've been at this for 7 years and we still aren't where we need to be for myself, my family and my community.  My community includes a group of people like me.  All save one write blogs, most about Healthcare IT, and most are also on twitter.  This community leads or has led work on standards in ASTM, DICOM, HITSP, HL7, IHE, ISO, and the S&I Framework. They work for themselves, or for healthcare IT vendors or healthcare provider organizations, developing solutions to problems they or their families have faced personally.  But nobody pays us to stay up until 3:00 am to finish the work.  We are patients, volunteers and advocates for better health IT.  What we do in standardizing Healthcare IT must be something that can be used with our parents, our children, and our communities.

G's adult son was to receive an upper-GI imaging procedure using barium, but it was given by mistake to a pre-teen with the same first name, because using only the first name protected privacy.  G is a privacy expert who knows how to address this and other privacy requirements safely.

L is also an expert in privacy and security, as well as in public health.  She was told a family member couldn't translate for their Spanish-speaking mother, recovering from emergency brain surgery.  They needed to use a hospital-supplied translator -- over the telephone.

The story doesn't end here yet, but I haven't gotten the other stories that I asked for. I already know what they are, but I need permission to print them. The anonymization I've done won't stand up to any deep scrutiny. So there will be a follow-up post to this one.

But this story will also never end, because there will always be one more thing to do, and other volunteers like these who will work til 3:00 am to do it.

Fortunately for me, it's not quite 2:30 am, so I'll get an extra half-hour of sleep tonight.

Wednesday, May 18, 2011

New NeHC University Program: Exploring the Broader Health IT Landscape

A message from our ONC partner
The Office of the National Coordinator for Health Information Technology (ONC) is a cooperative agreement partner of the National eHealth Collaborative (NeHC). As part of NeHC's continuing effort to promote education and awareness of health information technology (health IT) and ONC initiatives to all stakeholders, NeHC has recently made major changes to its Nationwide Health Information Network University education program. NHIN University is now NeHC University, an updated and expanded education program designed to explore the broader health IT landscape, in addition to the programs of the Nationwide Health Information Network. We hope that NeHC's new programs will enable stakeholders to gain a more comprehensive understanding of the impact that health IT will have in transforming the American health care system.
We encourage you to take advantage of NeHC University's new class offerings. The full schedule, course catalogue, and registration information are now available:
  • Health IT Orientation
  • Health IT Trends
  • Health IT Policy and Standards Committees Quarterly Updates – featuring ONC's Director of Policy and Planning, Jodi Daniel
  • Spotlight Learning Series
    • Learning Health System – featuring Dr. Charles Friedman, ONC's Chief Scientific Officer, and Joy Pritts, ONC's Chief Privacy Officer
    • Accountable Care Organizations (ACOs)
  • Industry Leader Briefings
  • Guest Lectures
  • NHIN University 300 Series 
Questions? Email NeHC University.

IHE Radiology Technical Framework Supplements Published for Trial Implementation

IHE Community,

The IHE Radiology Technical Committee has published the following supplements to the IHE Radiology Technical Framework for Trial Implementation as of May 17, 2011:
  • Cross-Community Access for Imaging (XCA-I)
  • Imaging Object Change Management (IOCM)


These profiles will be available for testing at subsequent IHE Connectathons.  The documents are available for download at http://www.ihe.net/Technical_Framework.  Comments should be submitted to the online forums at http://forums.rsna.org.

Why I write

“Amidst all this bustle it is not reason, which carries the prize, but eloquence; and no man needs ever despair of gaining proselytes to the most extravagant hypothesis, who has art enough to represent it in any favourable colours. The victory is not gained by the men at arms, who manage the pike and sword; but by the trumpeters, drummers and musicians of the army.” - David Hume (via @samheardoi)

"Writing crystallizes thought (and forces you to think out your ideas more clearly) and crystallized thinking motivates positive action."
- Paul J. Meyer


Give it Away

At the HL7 working group meeting session this morning it was announced that HL7 will be exploring the possibility of giving away its standards to implementers.

I thoroughly support exploration of this possibility. Such action would enable wider implementation of HL7 standards in the industry. That would serve to make products that support the standards much more valuable to those who create them. This is the kind of value that HL7 has been providing to its members for years already -- but this action could offer the entire HealthIT industry even more.

I can see several risks involved in considering this action. It certainly is scary from a business standpoint. I applaud HL7 for being willing to think about those risks. My hope is that industry support would serve to make HL7 an even stronger organization.

Tell me, please, what your thoughts are on this topic. I'd be interested in how this makes you feel about HL7 as an organization. Would it make you more likely to participate? Would you be willing to support HL7 more or less if this were to be implemented?

Tuesday, May 17, 2011

Fork It

No, I'm not swearing. At least not now.

More than a decade ago I worked for a company by the name of INSO (it was one of many names it had; it was also known as the Software Division at Houghton Mifflin, InfoSoft, and subsequently eBusiness Technologies).  We had a project to complete - a way to enhance search engines with linguistic analysis - and a MAJOR customer wanted it in time for their next BIG product release.  The deadlines were impossible.  We had cut every possible thing we could and we still couldn't make the deadlines work.  There was no way adding more staff would help (we had 9 months to make this baby walk, and hiring two or three more mothers to give birth to it earlier wouldn't help).  We had a "Come to Jesus" meeting.  You know.  Where everyone involved sat there with all of senior management hovering while we tried to work out a solution.  We sat in that room for the better part of a day before someone figured out how to recast the project so that it would fit in the time allotted, and the rest of us joined in.  That project succeeded phenomenally.  In the first year we sold it to 12 of the top 15 search engine vendors.  A few years later that market consolidated into three vendors and linguistics was no longer a decent market, and that led to a long series of events that brought me into Healthcare IT a decade later.  Oh how I wish the solution we used then would work now, but it won't.  What I remember most, though, was how the pressure put on the team produced diamonds, rather than cracked coal dust.

So, let's move forward 15 years.  HL7 and IHE are now in a similar situation with the CDA Consolidation project.  The MAJOR customer is ONC, and their next big release is Meaningful Use Stage 2.  The deliverable is an implementation guide that can be shown to the HIT Standards Committee for consideration for Stage 2. They need it by June or July.

While the solution that the team I worked with used more than a decade ago won't work, there are still a number of tools in my toolbox that I can pull out.  The first one is an eating implement, and thus the title of this post.

We have a WIP (Work in Progress), called the CDA Consolidation Guide (or CCG for short -- just what we needed, another confusing CC acronym.  I hereby claim CC[A-CEFH-QS-Z] as acronyms that nobody else can use).  It's too big to finish in time.  We have 60 days of elapsed time to work with.  We need something that can get done in less than that.  With more than 700 comments, the current work will still be in reconciliation for the next 60 days, especially at the rate we are proceeding now.  OK, so we take that WIP, and we create a fork.  There are now two projects:  MU2 and CCG.  MU2 is what we are going to deliver to ONC in 60 days.  CCG is something we give the time it needs.  MU2 is a subset of CCG, so any changes in CCG that are related to MU2 need to be made on the MU2 branch.  Changes to CCG get made on the main line.  We merge back the MU2 stuff when it is ready.

The next tool is a rather brutal one.  I think an acetylene torch will do nicely.  We cut everything from MU2 that isn't absolutely essential.  To do that, we need to talk directly to our major customer and find out what their showstoppers are.  Anything that is a showstopper we keep, but everything else we cut out.  That's going to be some brutal cutting.

Because these items were in MU 1, they need to stay in this guide:
  • Problems
  • Medications
  • Allergies
  • Results
  • Procedures

In addition, there is some work on the CDA header (the consolidated header) that we need to incorporate. There are a few other possible areas of consideration. For that, I think we need to talk to our customer and get their list. Not the wish list, but the short list of must-haves moving us beyond MU1. MU1 + the short list = MU2 (what is needed for Meaningful Use Stage 2). There are a number of things that might be nice to have, but let's put them on a longer track, and do them right, rather than muddle things up.

Now we need an adapter to go from 1/2" to 1/4" sockets. There are incompatibilities between what I call MU1 (the C32 requirements in Meaningful Use stage 1), and MU1a, which are the things in MU1 that MUST be corrected. MU1a is a subset of what is needed for MU2.

To bridge the gap between MU1 and MU1a we need to identify what from CCG supports what was in MU1, and figure out how to bridge the incompatibilities. That may mean creating a new template layer that adds back some constraints to ensure compatibility. The function of that layer is to create a way that we can equate C83 templates to MU2 templates where possible. The point here is to ensure that there is an adequate transition path between the existing work and the new MU2 work.

The final tool is a coffee pot. Getting this done is going to require quite a bit of effort, including a lot of late evenings (last night till 3:30 am writing the first three revisions of this post being the first of many). I'm willing to engage, because the alternatives to doing it (and there is more than one) are rather unpleasant, to say the least.

Monday, May 16, 2011

The next Ad Hoc Harley Award ...

Those of you who have been reading for a while know the deal. For those who don't: This is an award I give out about 5 times a year to recognize a significant contribution to the development of Healthcare Standards. The contribution is often a singular one, and when it isn't, I don't do repeats in the same year. I'm the sole arbiter and judge. There is no contest or nomination period (although nominees are always welcome).

This week is the HL7 Working Group Meeting in Orlando. One of the significant activities happening this week is the set of educational offerings provided at the meeting. I teach at these meetings, and sometimes also at the educational summits. This particular award recognizes the efforts of someone who's been very much involved in the educational efforts of HL7. This person has been deeply involved in recent updates to a couple of HL7 educational offerings, and was also responsible for creating one of HL7's most popular ones.

One of the unstated rules of the Ad Hoc Harley award is that I usually try to pick people who have otherwise gone unrecognized. This next recipient wouldn't qualify under that unwritten rule, since he's also received a Duke Blue vase (The Ed Hammond Award) for similar contributions from HL7. But since it is an unwritten rule, I feel quite OK in breaking it this time. This is especially because outside HL7 circles, this individual probably hasn't been recognized enough, and because there is always a special place for teachers.

This certifies that 
Diego Kaminker of HL7 Argentina


Has hereby been recognized for outstanding contributions to the forwarding of Healthcare Standardization by educating others

Diego, thanks for all that you do, and for all you have taught. I can think of no one else who's had a more direct effect on educating others about HL7 Standards.

Sunday, May 15, 2011

JIC Activities with HL7

One of the reasons I'm here on Sunday is to give my report on what IHE is doing with HL7 (which you can find here). The HL7 TSC has a meeting to review joint activities across SDOs. The major report was on JIC activities and is summarized below:

Joint Interoperability Council

ICSR: The HL7 Individual Case Safety Report was submitted to ISO in February. There were some problems with this, based on the fact that it is built on HL7 Data Types Release 1. It is expected to be balloted in the next few months. It has been suggested that we need to review our processes for working with ISO moving forward. One of the challenges is that the long time frames needed to coordinate mean that the standard "doesn't exist" until it has been through both organizations -- and the delays make it challenging for implementers.

IDMP: The Identification of Medicinal Products standards passed in ISO, and comments are still being resolved in HL7. They were not quite balloted simultaneously in ISO and HL7. This project has also taken a very long time to complete. Since both ICSR and IDMP started before the JIC initiative, they also suffered through all of the "teething pains" of the new process.

Data Types: ISO Data Types is Done!

EHR-FM: The EHR Functional Model is being jointly balloted by ISO and HL7. The challenge for this ballot is synchronizing the two, because doing so leaves about 5 months during which no development work can occur.

RIM: RIM Version 1 is up for renewal. The TSC is planning on submitting RIM 3.0 to ISO to replace RIM 1.0.

GS1: GS1 is working with HL7 on standards for automatic identification and data capture in healthcare. One of these covers Unique Device Identification (UDI), currently of interest to the FDA in the US; there is also work with HL7 on Structured Product Labeling.

There is a New Work Item Proposal (NWIP) on Patient ID and Caregiver ID.

The reporter also reported on an ISO NWIP that they are participating in, titled "Requirement for International Machine Readable Coding of Medicinal Product Package Identifiers". Translated: bar coding for medicinal product packaging. That should make Barbara Duck (@MedicalQuack) happy.

Reports from the TSC via John Quinn:

On a side note, John Quinn notes that the Secretariat for ISO TC 215 is changing hands. HIMSS will be relinquishing this role to ANSI, until another organization can be identified to take it on.

OASIS has approached HL7 to "reinvigorate" the MOU between the two organizations.

John Quinn also reported renewed interest by CMS on Claims Attachments, and by NCVHS on the same topic. They are trying to make sure that there aren't two different ways to deal with this issue. The interest is not surprising since CMS is directed to produce a regulation for this (15 years after HIPAA required it) under PPACA.

Explaining Why

One of the things I'm called on to do pretty often is to explain why HL7, IHE or HITSP did something a certain way. Recently, when designing what I called A Perfect Implementation Guide, I added the explanations of why into the design of the guide.

OK, so why should that be required?

You should be able to trace each requirement of the standard back to one or more of:
  • Requirements of the use case
  • Requirements for Interoperability (e.g., there may be multiple ways to do something that should be limited, such as exchanging intervals of time)
  • Requirements or limits of one or the other system involved (e.g., System X can only deal with two or fewer address lines).
  • Policy Requirements (e.g., Facility is required by law or regulation to be able to access this data).


Providing the why supports traceability to requirements. I've seen few standards projects actually trace back to requirements in any formal way.

DAMs and Domain Information Models are a good start. The DAM can (if done well) explain the requirements for a specific message element. Not all DAMs do that (DAMs that don't are damned to leave us scratching our heads).

Explaining why a data element is required helps the reader to develop an opinion that is informed by the requirements. Leaving that out can leave the reader uncertain as to the need for a particular datum. Explaining why an element is recommended or optional, and how it might be used by the receiving system, may actually get more developers to use the recommended or optional data element, especially if the why is convincing.

It goes back to identifying our audience and dealing with them appropriately. The audience for a standard or implementation guide includes implementers and end users. Implementers are not always experts in the clinical context. End users are not always experts in the engineering need. Putting in the why can help either to see the need.

Expecting either to go back to a prior document to find the why is not realistic (unless you hyperlink it). We all have tight deadlines, and are over-scheduled and under-resourced. So, make it easy for me to understand WHY you put that there. Don't assume that it's obvious. Believe it or not, as an implementer, I have very little formal clinical training, and there are many more like me in the same boat. Similarly, don't expect those reading your specifications who do have clinical training to have the engineering background to understand those details. Those who have both are rare, and they probably were in the meetings where the work was discussed.

Let's think about our audience, and start making it easier for them to implement, and giving them the why's that will begin to make them as expert as those who write these specifications.

Providing the why will do one more thing: It will free us up from having to remember why we did that, giving us more time to work on new things.

Saturday, May 14, 2011

Stupid XSLT Tricks

Found this post in my drafts folder. I had intended to post it after I had tested the technique. I have used it now (with a few tweaks), and it works. You might need to fiddle with it yourself a bit. It's another of those idioms I'll keep in my bag of tricks.


This is another one of what I call stupid XSLT tricks.  Stupid because programming should be clear and simple, and this is NOT.  Tricky and worth writing about because it does something useful.

I'm trying to line up two sets of assertions (in Schematron).  I have a list of template identifiers (in an XSLT node-set), and I want to find all Schematron rules (and assertions) that apply to that template.

Fortunately, I can pretty much expect that an expression similar to one of the following will be present in the context of the rule.

*[cda:templateId/@root="templateId"]
*[cda:templateId/@root='templateId']
* [ cda:templateId / @ root = "templateId" ]
* [ cda:templateId / @ root = 'templateId' ]

What I cannot count on is consistency in the use of white space or quote delimiters.  So, if I remove all the white space (using translate), and the quote delimiters (using translate as well), what I'm left with is this:

*[cda:templateId/@root=templateId]

Just to simplify a little further (because there are quite a few variations on * as well), I decided to take whatever string appeared after cda:templateId/@root= (using substring-after, which gives me templateId]), and then whatever part of that appeared before the ] (using substring-before, which gives me templateId). Now, if one or the other of those strings doesn't appear, I'm left with an empty string. I can now take my node-list containing template identifiers and turn it into a string in this form:

{templateId1}{templateId2}

I can now check to see whether the templateId I found is in the list of templateIds I produced, by wrapping it in a pair of delimiters (I used { and }) using concat.
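
For instance, $tlist might be constructed like this (a sketch; it assumes the node-set of template identifiers is in a variable named $templates, with each identifier carried in a root attribute):

<!-- Flatten a node-set of template ids into "{id1}{id2}..." so that a
     simple contains() test can be used against it. $templates is assumed. -->
<xsl:variable name="tlist">
  <xsl:for-each select="$templates">
    <xsl:value-of select="concat('{', @root, '}')"/>
  </xsl:for-each>
</xsl:variable>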

So this is what the XPath expression to find a list of templates looks like:

$ruleDoc//sch:rule[
  contains(string($tlist),
    concat('{',
      substring-before(
        substring-after(
          translate(translate(translate(@context, ' ', ''), '"', ''), "'", ''),
          'cda:templateId/@root='
        ),
        ']'
      ),
      '}'
    )
  )
]/sch:assert

Why was this important?  Because I want to process the assertions in an XSLT for-each loop, sorting by the first cda:elementName or @attribute appearing in the @test attribute of the Schematron assertion.

That way, when I look at a template in the CDA Consolidation project, and in the other templates, I'll be able to quickly find tests on the same elements within the two different sets of templates.

This is challenging work, something that I usually enjoy.

Wednesday, May 11, 2011

Idioms as insight into Models - A synthesis across Templates and CDS models

Software developers use programming languages in idiomatic ways all the time.  The infamous "endless" loop in C/C++ looks like this:

for(;;)
{
}

Most programmers will learn to recognize these common idioms and move on.  Sometimes the idiomatic use is so simple that it never needs to be codified, and is easily understood (like the endless loop above).  At other times, the idiom is rather complicated and, in some ways, inexplicable.

Here is an example from XSLT.  See if you can figure out what it does before I explain it:

substring-before(concat(translate(X,'+','-'),'-'),'-')

This particular idiom is one I use regularly.  For a string X, it first translates any occurrence of '+' into a '-'.  Then it appends a '-' to the string.  Then it extracts whatever appears before the first '-' character. If X is an ISO 8601 formatted date-timestamp with an optional time zone, this expression will extract just the date-timestamp portion of it without the time zone.
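
A quick worked example, assuming $ts holds the raw timestamp string:

<!-- the idiom applied to $ts -->
<xsl:value-of select="substring-before(concat(translate($ts,'+','-'),'-'),'-')"/>
<!-- '20110511103000-0500' yields '20110511103000'
     '20110511103000+0500' yields '20110511103000' (the + is mapped to - first)
     '20110511103000'      passes through unchanged (no zone to strip) -->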

Language developers will often take common idioms and use them to develop new language features that re-implement the idiomatic usage.  If I were writing an XSLT library, I'd certainly put in some timestamp manipulation functions to avoid having to write this in its idiomatic (and idiotic) form.

Idioms are important because they identify common patterns of use.  In looking over a number of templates in IHE, HL7 and HITSP, I identified a baker's dozen of common idioms.  From that, and a review of the documentation, we have enough information to put together a template meta-model.

Earlier today I was on a call talking about clinical decision support and GELLO.  GELLO has to be my least favorite language for expressing things in a clear form. It shares a common feature with XSLT, which is that it is declarative.  Declarative programming is counter-intuitive for most people when they first encounter it.  SQL is also a declarative programming language.  The benefit of declarative programming languages is that they specify what the result looks like, not how to generate it, or the order of the steps needed to accomplish the task.  The counter-intuitive part is that you have to think "backwards" and "upside-down" or "inside-out" and forget procedures.  It takes a while to learn that skill (or at least it did for me).
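
XSLT itself makes a handy illustration of the difference.  The template below (a contrived sketch, assuming the usual cda namespace binding) never says how to find observations, in what order to visit them, or when to stop; it only declares what each one becomes:

<!-- Declarative style: no loop, no traversal order, no termination test.
     The processor decides how; the template only declares what. -->
<xsl:template match="cda:observation">
  <li><xsl:value-of select="cda:code/@displayName"/></li>
</xsl:template>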

One of the pieces of feedback that the speaker had gotten about GELLO was the love/hate (and mostly hate) relationship that non-technical people had with it when examining clinical decision support rules.  I feel almost the same way about OCL as I do about GELLO, which isn't surprising, since GELLO is derived from OCL (and should become a proper subset of it some day).

Having recently had a similar experience with OCL (because MDHT expresses constraints in OCL), and having figured out that the idioms used are a key to figuring out the meta-model, I realized that the same is also true in Clinical Decision Support.  The idioms used in clinical decision support are what will be important to providers when we finally figure out what its meta-model is.  For Templates, we had to get to over a hundred of them before I could see (and use) common idioms.  Having over 1000, I'm pretty certain we have all the key ones nailed down.  I suspect we'll need to do the same for Clinical Decision Support.

There's still some odd-ball stuff that we'd have to represent directly for templates in a "low-level" construct like Schematron/XPath or OCL.  We can live with that.  After all, sometimes you need to call into an assembly language routine from a high level language too.  I'm certain the same will be true for clinical decision support as well.

As I think about this even more, I'm struck by yet another similarity.  High level languages (and from them Object-oriented languages) originated from assembly.  If you think of GELLO or Schematron as the assembly language, you can see where models and high level languages for templates (or CDS) will finally emerge.

I love it when separate activities come together like this.

IHE Canada Announces New Partnership and 2012 N.A. Connectathon Dates!

IHE Community,
IHE Canada Partners with Canada Health Infoway
IHE Canada enters an exciting new phase in its history. The International Board approved a formal agreement with Canada Health Infoway that will see that organization, in its Standards Collaborative function, take over the role of Canada's IHE National Deployment Committee. This change is a natural and welcome evolution for IHE Canada. It will provide formal representation at all government levels that establish the direction and priorities for healthcare standards in Canada.

IHE Europe’s Connectathon a Landmark Event!
Over 400 participants gathered in Pisa, Italy in April for the European Connectathon, creating a landmark event with a diverse program of activities held in conjunction with IHE’s Connectathon testing. Over 75 organizations conducted 2,200 tests of IHE integration profiles. IHE Europe also hosted a new testing event called Projectathon where 13 Member States of the European Union exchanged patient information to prepare for a new cross-border pilot program. Learn more about IHE Europe and visit their website.

IHE North America Connectathon Announces 2012 Dates
IHE USA is excited to announce that the IHE North American Connectathon 2012 will take place January 9-14, 2012. In preparation for this event, IHE USA has posted the 2012 Policies and Guidelines for participants to review in advance; registration opens August 22 and continues through September 30, 2011. View Participant Resources and information on IHE USA's website.



Public Health Syndromic Surveillance Guide Published for Public Comment

I've spent a good bit of time talking about Syndromic Surveillance for Meaningful Use on this blog, especially given the various mistakes made in the original selection of standards and the subsequent approaches used to correct those mistakes. Now we have another minor whoops to contend with.

PHIN recently published a new implementation guide for syndromic surveillance.  But this was not developed through a consensus-based standards development process as suggested by OMB Circular A-119.  Instead it was yet another Federally sponsored project developed outside of an SDO.  I should acknowledge that they did have SDO input, or I wouldn't have even known where to find this.  There is an opportunity to comment; it's just not one that people who follow and develop healthcare standards would normally be tracking.

I did a quick skim through the guide.  Once again, the work isn't bad, but I am still concerned about the lack of input from the producers of the messages described in the guide.  There are a couple of places in the guide where the value set is marked TBD.  The most annoying one is the lack of specificity on laboratory results.  Haven't we been down this path before?  At least they could specify LOINC as the vocabulary and build from other work on reportable and notifiable conditions.

I'll do a more detailed analysis as time permits. The deadline for comments is June 20th, so there is still time.

Tuesday, May 10, 2011

Call, call, call, call ... sounds like a raven.

Where does the time go?  

Tuesdays are the worst day of my week.  I get on the phone starting at 9:00 and don't get off until 5:00, with no break for lunch.  More than half of these calls deal with SDO efforts involving HL7, IHE or the ONC S&I Framework.  Others are related, e.g., EHRA calls.

In a typical week, for HL7 I have as many as 5.5 hours of meetings. For IHE PCC, I have up to 3 hours of calls I could attend.  For the ONC Transitions of Care initiative, I could attend 7.5 hours of calls. For the ONC Laboratory Reporting Initiative, I could attend 4 hours of calls.  For EHRA there are as many as 4 hours of calls I could attend.

That's 3 solid days (24 hours) worth of phone calls, leaving the remaining 16 to do the actual work, and doesn't count any time for internal calls or "real work".

Needless to say, I'm not on every call I could possibly be on.  After all, everything that is urgent is not necessarily important.  But for some reason, I cannot seem to find a free hour on Tuesdays (internal calls are not shown above).  On Tuesdays, everything seems to be important.  So today's blog post is me whining about calls ... instead of something useful.

And of course, there's personal time needed to finish up proofs on the CDA book.  Fortunately, I finished last night at oh-dark-thirty.  It should be out in a few more weeks.

Sorry.  Maybe if everything weren't so urgent it would be different.  Maybe next week will be better.  Oh.  I forgot.  It's the HL7 Working Group meeting.  We start at 8:00 with meetings over breakfast and continue through dinner and the meetings after dinner.  Well, at least I'll be in Orlando where the weather will be nice -- nope, sorry, not this time.

Monday, May 9, 2011

Some thoughts on Canonical Pedigrees

One of my colleagues is working on the HL7 Canonical Pedigree Project.  The point of this project is to develop reference content that could be used to test various representations of the pedigree.

One of the interesting challenges in pedigree representation is being able to look at the genetic information from the perspective of different probands.  Being able to view a genetic history from different perspectives allows a variety of different techniques to be used for analysis.

In order to represent the family tree, the HL7 Pedigree model allows for two persons to be represented with a coded relationship between them.  The coded relationship comes from the HL7 Family Relationship Role Type vocabulary.  The essential model is that the patient is related to (at least) one other person, who could in turn be related to other persons, et cetera.

There is no requirement for the "pedigree" graph produced to use any specific relationship (e.g., parent, sibling, spouse), unlike what would be found in a "historical pedigree" such as this one.  The vocabulary allows both exact (natural mother) and inexact (mother) relationships to be represented.  When this sort of vocabulary is used to represent relationships, transforming the viewpoint from one subject to another can be challenging.

I argued with my colleague that any canonical representation of a pedigree needs to also include a canonical representation of the relationships, and that the current vocabulary doesn't help at all, since it has a focal point that isn't really reversible.  Changing the proband requires changing the direction of relationships and associated vocabulary.

23andMe has a great video describing the cousin relationship, which helped me work through some of this.  If you want to know whether someone is your Nth cousin, and at how many removes, there's a simple answer.  Cousins share a common grandparent (or great-grandparent, et cetera), but not common parents (that would be a sibling, nephew, aunt, et cetera).  To find the degree, count the number of "greats" between each party and the common grandparent.  Take the smaller number and add one.  If you and I share a common grandparent, then the number of greats for both of us is 0, to which I add one, and discover we are first cousins.  Now suppose it's my grandparent, but your great-grandparent.  To find the number of removes, you are looking at the difference in generations: simply take the difference in the "great" counts.  In this case that difference is 1, so we are first cousins once removed.
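Here's a minimal sketch of that arithmetic in Python (the function and its signature are mine, purely for illustration):

    def cousin_relationship(greats_a, greats_b):
        """Return (degree, removes) given each party's count of "greats"
        to the shared *-grandparent (0 for a plain grandparent)."""
        degree = min(greats_a, greats_b) + 1   # the closer party sets the degree
        removes = abs(greats_a - greats_b)     # the generation gap sets the removes
        return degree, removes

    # Shared grandparent on both sides: first cousins, no removes.
    assert cousin_relationship(0, 0) == (1, 0)
    # My grandparent is your great-grandparent: first cousins, once removed.
    assert cousin_relationship(0, 1) == (1, 1)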


The unifying principle that I worked out is this:
In order to canonically represent relationships, you must only represent those between ancestors and their offspring.  You can take it in either direction, treating "begat" or "was begotten by" as the preferred direction.

In the canonical form, then, you wouldn't need to represent the "cousin" relationship directly.  Instead, you'd relate the two subjects to their common *-grandparent.

We can construct a vocabulary to do that.  Just use F to represent natural father, and M to represent natural mother.  FF becomes my father's father, and FM my father's mother.  The vocabulary can be made more concise: whenever F or M repeats, just put the repeat count after it, so that FF becomes F2.

To deal with ambiguity in the ancestry (if X is my first cousin, is it through my father's side or my mother's?), we could simply use P to represent parent.  So if X is my first cousin, we have a common grandparent, Y: X is related to Y by P2, and I'm related to Y by P2.  If you don't know how far back the relationship goes, you could use the + operator: my cousin (first, second, third or more) and I would each be related to a common ancestor Y using P+.
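This little vocabulary is easy to manipulate mechanically.  Below is a sketch in Python of the compression rule (the function names are mine; the F/M/P codes are the proposal above, not any existing HL7 vocabulary):

    import re

    def compress(path):
        """Collapse repeated codes: 'FF' -> 'F2', 'PPP' -> 'P3'."""
        return re.sub(r'([FMP])\1+',
                      lambda m: m.group(1) + str(len(m.group(0))), path)

    def expand(path):
        """Undo compress: 'F2M' -> 'FFM'."""
        return re.sub(r'([FMP])(\d+)',
                      lambda m: m.group(1) * int(m.group(2)), path)

    assert compress("FF") == "F2"    # my father's father
    assert compress("FFM") == "F2M"  # my father's father's mother
    assert expand("P2") == "PP"      # some grandparent, side unknown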

The nice thing about using this form for a canonical relationship is that it doesn't matter who the proband is; the set of relationships in the pedigree doesn't have to be modified when the subject changes.


While this would seem to capture almost everything needed in a pedigree, there's one genetic relationship that isn't expressed.  See if you can discover which one.  The answer is in the comments below.

Friday, May 6, 2011

Cross Enterprise Document Workflow

Automating workflows within an enterprise is very different from automating them across enterprises. Within an enterprise one can expect a common platform for workflow automation. Processes can be well-defined, controlled and responsibilities assigned for management and escalation. Think about what you go through to purchase a piece of hardware or software in your own environment. Across enterprises it gets quite a bit more complicated. A common platform is hardly likely. Obtaining agreement on processes is not easy, and can require quite a bit of manual intervention. Think about what it takes to approve a new supplier for goods and services. In most businesses though, dealing with a new supplier still works because there are well defined inputs and outputs at the interfaces between the organizational workflows. You issue a PO, the supplier responds with a well defined package of goods or services and an invoice. When all goods and services are delivered, you pay the bill.

Imagine how complicated cross-enterprise workflows would get in your organization if your IT department required detailed information about the steps used to diagnose and repair a broken laptop that it sent out to a third party, and furthermore wanted to automate the workflow across the two organizations to keep track of status. Some will tell me that they even manage this, and I'm certain they do, usually with one or two other organizations with which they have developed a pretty strong business relationship. Scale that up to hundreds of other organizations or more, and I can assure you that it isn't done.

But in healthcare, these sorts of workflows happen all of the time. And healthcare providers need to track and route what is happening. Enter Cross Enterprise Document Workflow (XDW). The purpose of XDW is not to "automate" workflows within or even across enterprises. Instead, it's about tracking what happens (or should happen) in the next few well-defined steps, and being able to look inside any "sub-workflows" initiated.

Think about a simple referral. A provider detects a finding and refers the patient to a specialist. The specialist examines the patient, treats the patient and reports back to the original provider. Getting a bit more complex, the specialist has an additional finding that he or she sends off to someone else (e.g., pathology) to look at. That result is reported back to the specialist. Because of that result, additional treatment (e.g., surgery) is needed. Complications due to surgery require additional treatment (e.g., PT) for recovery. And so on.

Consider that each provider involved in care may have a referral network that includes hundreds of other healthcare providers and organizations. Consider also that the patient may decide to use other healthcare providers that aren't already known to the originating provider.

So we need to think about this differently. The function of workflow automation in healthcare needs to support tracking what is happening to a patient without requiring a "definition" of the workflow to exist up front. XDW allows a healthcare provider to initiate a task, provide inputs in the form of documents, track the status of a task, and look at the "outputs" (again, documents). A provider that is working on a task can "claim ownership" of it, generate outputs associated with it (e.g., diagnostic results or consult notes), and initiate new tasks to ensure the completion of the workflow that was initiated by the initial request for consultation. As the workflow proceeds, it can take new and unanticipated directions without any pre-coordination being required by the initiating provider.
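To make the task-tracking idea concrete, here is a toy model in Python. The field and method names are mine for illustration; the real XDW Workflow Document is XML, and its task structure is defined by the IHE profile, not by this sketch:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Task:
        name: str                      # e.g., "Referral", "Pathology Consult"
        owner: Optional[str] = None    # provider currently working the task
        status: str = "CREATED"        # CREATED -> IN_PROGRESS -> COMPLETED
        inputs: List[str] = field(default_factory=list)   # document references
        outputs: List[str] = field(default_factory=list)
        subtasks: List["Task"] = field(default_factory=list)

        def claim(self, provider):
            """A provider claims ownership of the task and starts work."""
            self.owner, self.status = provider, "IN_PROGRESS"

        def complete(self, output_doc):
            """Attach an output document and mark the task done."""
            self.outputs.append(output_doc)
            self.status = "COMPLETED"

    # The referral scenario above, with no pre-coordinated workflow definition:
    referral = Task("Referral", inputs=["referral-note"])
    referral.claim("specialist")
    # The specialist takes the workflow in a new, unanticipated direction:
    referral.subtasks.append(Task("Pathology Consult", inputs=["specimen-request"]))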

Sounds like cool stuff.

Thursday, May 5, 2011

IHE PCC Reconciliation

IHE PCC just finished a week full of meetings in Oak Brook to complete development of its profiles and white papers for public comment. One of the profiles we are working on this year is Reconciliation of Diagnoses, Allergies and Medications, or RECON for short.

The RECON profile has one actor, the Reconciliation Agent. Usually this would be implemented by an EHR system. The actor has both functional and technical requirements.

Functionally, the actor is required to access clinical information about problems, medications and allergies. That information must be able to come from clinical documents, and may also come from queries using the IHE QED profile, or from other unprofiled sources.

The actor must then determine which of the various diagnoses, allergies and medications reported by the various information sources are in agreement, and which differ. It must then suggest changes to make to a final list. IHE doesn't say how those changes are suggested, but the profile does offer some guidance on what information might be used to help make those decisions.
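As a rough illustration of what a Reconciliation Agent has to do (this is my sketch, not the profile's normative algorithm), consider grouping entries from several sources by code and flagging where the sources disagree:

    from collections import defaultdict

    def reconcile(sources):
        """sources maps a document id to its entries, e.g.
        {"docA": [{"code": "44054006", "display": "Type 2 diabetes"}]}."""
        by_code = defaultdict(dict)            # code -> {source: entry}
        for source, entries in sources.items():
            for entry in entries:
                by_code[entry["code"]][source] = entry

        suggestions = []
        for code, seen_in in by_code.items():
            displays = {e["display"] for e in seen_in.values()}
            suggestions.append({
                "code": code,
                "sources": sorted(seen_in),    # provenance for the clinician
                "agrees": len(displays) == 1,  # do all sources say the same thing?
            })
        return suggestions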

The healthcare provider using the system that supports this actor will then be able to make any corrections, additions, or updates to the lists.

Finally, the system must be able to report the reconciled results. The results reported include what was accepted by the healthcare provider during the reconciliation process, of course. But they also identify the healthcare provider(s) who reconciled the data, and the information sources that were used during reconciliation.

This profile will enable systems to take advantage of information communicated in, for example, multiple CCD documents, and to coordinate it with information stored within an EHR. One benefit of this profile is that, because it also records the sources of information which have already been reconciled, a receiving system can avoid rework by taking advantage of that fact. Thus, if a system receives information in document C which contained information reconciled from documents A and B, it could avoid reconciling against documents A and B again.
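That bookkeeping amounts to a small transitive computation. Here's a sketch (the reconciled_from mapping is hypothetical; in the profile this provenance travels inside the reconciled document itself):

    def documents_to_skip(received, reconciled_from):
        """Return the received documents already covered by another
        received document's reconciliation."""
        skip, pending = set(), list(received)
        while pending:                 # follow the provenance transitively
            doc = pending.pop()
            for covered in reconciled_from.get(doc, []):
                if covered not in skip:
                    skip.add(covered)
                    pending.append(covered)
        return skip & received

    # If C reconciled A and B, a system receiving all three need only process C.
    assert documents_to_skip({"A", "B", "C"}, {"C": ["A", "B"]}) == {"A", "B"}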

It will soon be published for public comment and I look forward to your feedback.

Comments on the CDA Consolidation Guide

This is the last thing you are going to hear from me on this topic for a little while ... at least until the HL7 Working Group Meeting later this month.

I just uploaded my ballot comments (an Excel spreadsheet) on the HL7 CDA Consolidation Guide.  I copied my comments verbatim from what IHE prepared and agreed to this afternoon.  There was barely enough time to prepare those, and all we looked at were problems, medications and allergies.  Some of the comments also came out of the CDA Documentation Workgroup's review of the project against requirements the workgroup agreed upon.

Overall, I'm very disappointed with what has been produced to date.  I'm also uncertain as to the future of this project.  We sorely missed our goals in this round of balloting.  There's a tendency in the Structured Documents workgroup to push to meet arbitrary and/or unreasonable deadlines (in my personal opinion).  This is especially true when there are external forces like our national agenda pushing so dreadfully hard on this project.  It's hard to get anyone to slow down.  The idea that this guide could become the basis for Meaningful Use Stage 2 still has a lot of people excited.

The project has got a LOT of potential, and I still support it.  We need to take a step back from the current deadlines.  If they remain, I don't believe the project will succeed.

What we need is a tool that will solve the problem.  To get there, we need to understand the requirements that tool must meet.  That includes the template meta-model the tool must support, the requirements on the documentation that the tool must produce, and requirements on production of the end result.

We also need the data from the pre-existing work, entered into the tool and vetted against existing conformance criteria.  Getting at the HL7, IHE and HITSP conformance data is not an insurmountable task.  Much of it is already in machine-readable form (Schematron) and structured text (even if it is Microsoft Word tables or PDF files).  Extracting and vetting the data is tedious but doable.  I've extracted data from IHE and HITSP specifications before -- it truly can be mind-numbing.  It takes about a week or so (effort, not elapsed time -- that's much longer in my world).

This is what it takes to get the data:

  • For documents:  Extract each table to a spreadsheet.  Normalize to a common format.  Generate XML from the spreadsheet rows (see the sketch after this list).
  • For sections:  Same general idea.
  • Conformance constraints:  Extract these from the source documents based on a conformance style.  (I built a list of constraints in HITSP specs by ensuring all constraints used the same style, then generating a table of contents in Word from that style, without page numbers.)  If no common style is used, create and apply one manually (mind-numbing, but actually pretty quick work).
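The XML generation step is simple enough to script.  A sketch, assuming hypothetical column names (templateId, name, conformance) for the normalized spreadsheet:

    import csv
    import xml.etree.ElementTree as ET

    def rows_to_xml(csv_path):
        """Turn normalized spreadsheet rows into template definitions."""
        root = ET.Element("templates")
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                t = ET.SubElement(root, "template", id=row["templateId"])
                ET.SubElement(t, "name").text = row["name"]
                ET.SubElement(t, "conformance").text = row["conformance"]
        return root

    # ET.tostring(rows_to_xml("templates.csv")) yields machine-readable
    # template data, ready to import into a tool such as MDHT.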

After we have the existing template data, we can import it into MDHT.  Then we need to modify the constraints to produce the new templates, and compare the differences.  That comparison can be automated if the template meta-model is appropriately designed.
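The difference computation itself is trivial once the constraints are data rather than prose.  A sketch, assuming each template reduces to a mapping from constraint identifier to constraint text (my assumption, not MDHT's actual model):

    def diff_templates(old, new):
        """Compare two versions of a template's constraints."""
        return {
            "added":   sorted(new.keys() - old.keys()),
            "removed": sorted(old.keys() - new.keys()),
            "changed": sorted(k for k in old.keys() & new.keys()
                              if old[k] != new[k]),
        }

    old = {"CONF-1": "SHALL contain templateId", "CONF-2": "MAY contain code"}
    new = {"CONF-1": "SHALL contain templateId", "CONF-2": "SHALL contain code"}
    assert diff_templates(old, new) == {"added": [], "removed": [],
                                        "changed": ["CONF-2"]}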

Trying to re-architect something that took six years to develop across three organizations, and that includes hundreds of templates and thousands of rules, should NOT be done manually.  I was involved in most of these efforts, and I did quite a bit of that work manually.  It wouldn't hold together today if it hadn't been for the efforts of one person who QA'd that work across the same set of organizations.  It won't scale.  People cannot keep up with what we need.

But what did we do in the CDA consolidation process?  We relied on people to identify inconsistencies when software would have done a better job.  We relied on people to create examples when software could have done so more accurately.  We still rely on people to produce the final schematron. We relied on people to produce a change log.  We relied on people when what we wanted to start with was automation.

Let's get smart, and go back and build the automation that we want.  MDHT is a great start.  All you need to do is compare this (HL7 Genetic Testing Report Guide ZIP file using MDHT) to this (CDA Consolidation ballot).

And what should we do in the interim while we develop MDHT?  Well, six years of effort may not have produced the greatest set of implementation guides, but C83 and C80 work for us already today.  More than 250 EHRs support much of it already.  A couple of small changes in Meaningful Use Stage 2 would accomplish many of the documentation goals that the Transitions of Care project has.  C83 already includes templates that support H&P, Progress Notes, Consult Notes, Emergency Department Encounters, Referrals and Discharge Summaries.  These come from the very same source projects that the CDA Consolidation project is working with.  If you cannot wait for harmonized output from MDHT, then the next best thing is to work with the harmonized results that we humans produced over the last six years.