Monday, August 31, 2009

Follow us at #PHIN09

Bookmark this page to see what's being said this week about PHIN 2009 in realtime. Last week I wrote about events at PHIN 2009. This week I'm here and will be joining in the conversation.


Saturday, August 29, 2009

Sushi

My family and I have been enjoying sushi for years. My oldest daughter's favorite is Tobiko (flying fish roe), and my youngest likes Ikura (salmon roe). We used to call these little bubbles and big bubbles, and occasionally Nemo eggs.

One of the traditions that I enjoy about working in healthcare standards is that we make sure there is at least one sushi outing at every standards event. I don't know who first stated it, but the rule is that it isn't an official standards event unless there is sushi. So tonight I thought I'd share some of my favorite sushi places with you.


Last week we went to Maki Sushi in Park Ridge, just a short jaunt from the Hilton Garden Inn where we were meeting for ANSI/HITSP. The folks from the Hilton Garden were gracious enough to drive us over in their bus and pick us up on the way back, which was fortunate. We had 24 people join us for this event, and still needed an additional 3 cars to get us all there. Maki Sushi serves excellent sushi, and also has other things on the menu for those who don't enjoy raw fish. We got out of there for about $45 per person, which is an astonishingly good value.


Tonight I enjoyed a meal with the organizers of the Interoperability Showcase at the PHIN conference. We went to Magic Fingers Sushi Bar in Atlanta, Georgia. Last year we showed up with a party of about 14, and this restaurant managed to add three new signs that we were in the right place to the list below (they're marked with asterisks). Tonight they lived up to the expectations from last year. If you like fresh wasabi, they'll grind it right at your table.


When we are in the DC area, the best by far is Sushi Taro, though I haven't been there in too long. They've been remodeling, and still weren't done the last time I was there, although they are done now. If you go there, I heartily recommend the chef's Omakase (chef's choice), which is always fantastic. It's a bit on the pricey side, but not for the area.


In Oak Brook, Illinois (the home base for many IHE meetings), the best place to go these days is OSYS in the Oak Brook Mall. This is another really good value, and you can leave this restaurant while staying within the very restrictive meal guidelines of many companies. They also have a good Sake menu. Another place to go in Oak Brook is Momotaro, which isn't quite up to what it used to be, but still provides good sushi at a good value. What I used to like about this place was that the staff recognized our faces, but it's under new management and the new staff simply doesn't know us any more.


If you happen to be traveling through New Haven, Connecticut and are a true Sushi lover, you absolutely have to try Miso. Tell Jason that Keith or Lori sent you and he'll know how to treat you (but he's usually off on Sunday and Monday). Every time my family drives through Connecticut to visit family we stop here. I can tell you that this place is about as pricey as anything you can find in a major metropolitan area, but the food is far better than anywhere I've been, with one exception (Japan). I've never looked at a menu here; Jason just serves up what he thinks we will like. I've never been disappointed either, but I also come prepared to pay the bill. It's not really fair to compare them on price because I've never looked at prices when I order here.


Signs that you are in a good sushi bar:

  1. They give you a warm washcloth to wipe your hands with after you are seated.
  2. They have cold sake on the menu (if not, don't bother).
  3. They have unfiltered sake on the menu (a really good sign), better yet, more than one (excellent).
  4. The chopsticks aren't break-apart (not a deal killer), but rather plastic (better), or *lacquered wood (best).
  5. *They give you chopstick rests (really good sign).
  6. They have fresh wasabi (really good sign), *and will grate it at table side (excellent).
  7. If you order Ama Ebi, they will tempura the heads for you if you ask (good sign), or without your even having to ask (excellent sign).
  8. You can order off-menu without frazzling the waiters.
  9. There are things like Ankimo (monkfish liver), fresh Scallop or Kampachi on the menu.
Signs you've been hanging out with us include having eaten at two or more of the restaurants above, having a pair of your own collapsible chopsticks like these from Think Geek, or for the real connoisseur, these (from Hashi-Gallery in Kyoto).



Tell me about your own favorite sushi places, just in case we happen to be there for a meeting. We are always looking for new places to try out.

Service Collaborations and New Toys

If you want an example of what a HITSP service collaboration does, this blog page serves as a simple illustration.

I've been experimenting here with new toys to link up three (actually four) different web services. This blog is where I do my writing on standards. Then I'm using another online service to announce my blog posts to my twitter account. I've also begun using that twitter account to announce new standards activities. Since those announcements always contain certain keywords, I can use a twitter widget to search for them; you can see it on this page, just to the right. Since most of my posts here are already announced on twitter, there's really no need to repeat the announcements on this blog, so I configured the widget to search for posts I make to the twitter account containing certain keywords. It's not as pretty as I'd like, but I haven't found a better widget that lets me include the results of a search on these pages. Finally, I used a fourth service to create a little twitter badge so that you can easily find and follow me there.
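For the curious, the keyword-filtering piece is simple enough to sketch. Here's a rough cut in Python of the same idea the widget implements; the feed URL and keyword list are placeholders, and I'm assuming the freely available feedparser library rather than any of the actual services I use:

    # Sketch of the keyword-filter idea behind the widget -- not the actual
    # services. The feed URL and keyword list below are hypothetical.
    import feedparser

    FEED_URL = "http://example.blogspot.com/feeds/posts/default"  # placeholder
    KEYWORDS = ("HITSP", "IHE", "HL7")

    def announcements(feed_url=FEED_URL, keywords=KEYWORDS):
        """Yield (title, link) for posts whose titles mention a keyword."""
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            title = entry.get("title", "")
            if any(k.lower() in title.lower() for k in keywords):
                yield title, entry.get("link", "")

    if __name__ == "__main__":
        for title, link in announcements():
            print("New standards activity:", title, link)

The glue services just run this sort of filter for me on a schedule, so none of it needs to be managed by hand.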

I can now easily post on my blog OR announce new standards activities on twitter, and both places are updated without me having to manage the exchanges between the different services. To add to that, I've been writing a lot more often, and strategically e-mailing links to relevant posts to different people I know in the industry when I know they will be interested.

The end result: readership this past month was nearly 250% of my previous best month, and about 400% of my overall average; this past week is at 150% of my best week, and about 500% of my previous average; and Friday topped the charts, doubling my previous best day at about 300% of my previous average.

How do I know that? Because I've been using a fifth service to measure the results and check the quality of my posts, and have been doing that since day 2 of this blog. Remember, measurement needs to be baked in.

Friday, August 28, 2009

IHE PCC Profiles to be Tested

IHE - Changing the Way Healthcare Connects

IHE Community,

IHE North American Connectathon Applications Due September 30

The IHE North America Connectathon will take place January 11-15, 2010 in Chicago. Applications for testing participants are due September 30, 2009 and are available online at http://www.ihe.net/north_america/connectathon_2010_participants.cfm.

The Connectathon is the healthcare IT industry's largest face-to-face interoperability testing event. This year IHE profiles in the following domains will be tested:

  • Anatomic Pathology
  • Cardiology
  • IT Infrastructure
  • Laboratory
  • Patient Care Coordination
  • Patient Care Devices
  • Quality, Research and Public Health
  • Radiology

Last year's IHE North America Connectathon drew close to 400 individual testing participants from more than 70 organizations.

In addition, participants will be able to register to test their support of related Healthcare Information Technology Standards Panel (HITSP) Interoperability Specifications. For more information on HITSP Interoperability Specifications, please visit www.hitsp.org.

Thursday, August 27, 2009

Data Mapping and HITSP TN903

I routinely receive requests for information about how to go from an HL7 Version 2 message to a CDA document, and whether HL7 has established such a mapping. To date, I have to answer that they have not. I can tell you that many organizations have established such a mapping, and that I too have done the work. In fact, I have developed mappings from many HL7 Version 2 segments to CDA documents, several times over the past six years. Those of you aware of the history of CDA will realize that means Release 1 as well as Release 2.

These mappings aren't publicly available because people who do this work usually put a lot of thought and effort into this sort of mapping to solve customer problems. It's a piece of intellectual capital that has value to an organization. However, there will shortly be new tools available that will enable others to create this sort of mapping from data that can be freely downloaded. Furthermore, the mapping will not be just from Version 2 to CDA, but will also address mappings between NCPDP Script, HL7 Version 2, HL7 Version 3, CDA, X12N, CDASH and CDISC specifications.

Who is doing this work? The HITSP Data Architecture Tiger Team, and actually, ANSI/HITSP over the past 4 years without even realizing it. During what I call the HITSP ARRA Diversion, where HITSP spent 90 days working on specifications and technical notes specifically to support ONC and HHS requirements for ARRA regulation, we developed the HITSP TN903 Data Architecture Technical Note. We are also being supported by AHRQ-USHIK, which has developed a data element registry that will be pivotal in the deployment of this information.

TN903 describes a new process for HITSP, whereby we:

1. Identify HITSP Data Elements (see section 4.3.1). These are then constrained with respect to precision, vocabulary, length, et cetera at the HITSP Data Element level if such constraints are warranted. A HITSP data element is defined in four columns: the first gives the identifier; the second, a short name; the third, a definition that should be clear enough for an implementer to understand; and the fourth, any specific constraints on that data element.

2. Map these HITSP data elements into specific data elements found within the respective standards (see section 4.3.2). This mapping appears inside the data mapping section of a HITSP Construct (the HITSP way to refer to specifications whose names start with the letters C, T or TP). The mapping is defined by first identifying the data element in the way that is most appropriate to the standard (see Table 4-4).

3. HITSP Data Elements will be loaded into the AHRQ-USHIK Data Element Registry, as will data elements defined by the respective SDOs. I've been given to understand that the most recent versions of X12, NCPDP and HL7 Version 2 are currently in the process of being loaded into USHIK. A data element registry is one of three types of metadata registries identified as being important by HITSP in TN903.

4. HITSP data element mappings can also be loaded into USHIK, so that the relationships between HITSP Data Elements and data elements defined by the standards can be identified.

The end result is that relationships between HITSP data elements and standard data elements will be available in electronic form at some point in the future. You'll be able to see clearly that HITSP Data Element 5.15 (Subscriber ID) maps to X12N 270_2100C_NM109 67, to segment IN1-36 of an HL7 Version 2 message, and finally to /cda:ClinicalDocument//cda:act[cda:templateId/@root='2.16.840.1.113883.10.20.1.26']/cda:participant[@typeCode='HLD']/cda:participantRole/cda:id of a CDA Document that uses CCD templates.
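If you wanted to put that electronic form to work, a lookup could be as simple as the following Python sketch. The dictionary structure is mine, not USHIK's; the entries simply restate the Subscriber ID example above:

    # A sketch of a TN903-style mapping as a data structure. The structure
    # itself is illustrative (not the USHIK schema); the Subscriber ID entry
    # restates the example from the text.
    HITSP_DATA_ELEMENTS = {
        "5.15": {
            "name": "Subscriber ID",
            "mappings": {
                "X12N 270": "270_2100C_NM109",
                "HL7 V2": "IN1-36",
                "CDA/CCD": "/cda:ClinicalDocument//cda:act"
                           "[cda:templateId/@root='2.16.840.1.113883.10.20.1.26']"
                           "/cda:participant[@typeCode='HLD']"
                           "/cda:participantRole/cda:id",
            },
        },
    }

    def locate(element_id, standard):
        """Return where a HITSP data element lives in a given standard."""
        return HITSP_DATA_ELEMENTS[element_id]["mappings"].get(standard)

    print(locate("5.15", "HL7 V2"))  # IN1-36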

What can you do with that data? Well, the first thing that springs to mind is that you can use it to automate transformations from one standard to another. I have to caution you that you will need to be careful here. These mappings are both contextual (e.g., valid in the context defined for use of a specific HITSP construct) and inexact. In order to do the mapping, HITSP defines data elements at a higher level than the standards do. This allows us to identify useful concepts that we can harmonize across the standards. However, because of the inexact nature of the mappings, you will need to carefully check that they remain valid when used in other contexts (e.g., when transforming an NCPDP Script message containing medication history into a C32 representation of the same). Even so, I expect this resource to be extremely valuable for interface developers as we move forward.

The HITSP Data Architecture described in TN903 was developed based on four years of experience harmonizing standards. We didn't always do it this way, and have only just begun integrating this process into our harmonization efforts. We have many months ahead before we've completed identification of data elements across the standards used in existing HITSP specifications, and at the same time, we have new data elements coming in from the fourteen existing extensions and gaps requested from HITSP by ONC.

We made a start on this over the last week during the HITSP Face to Face, integrating it into existing HITSP processes. On Wednesday of this week, we (Care Management and Health Records) worked through the Data Element identification and mapping process with the Clinical Research Tiger Team. Their next round of specifications will be our first trial of the new process with a new specification. Most of the technical effort of data element identification and mapping to standards needed for this harmonization request has been completed. The next step involves actually writing the results down in the new Data Architecture component we will be developing, and in the two new components that the Clinical Research Tiger Team will be writing. I'm very pleased with how the new process is working with this group, and hope to see early results from it in the next two weeks.

Wednesday, August 26, 2009

Public Health Information Network Conference Events

Sunday through Wednesday of next week I'll be participating in yet another IHE Showcase this year, at the Public Health Information Network (PHIN) Conference. We'll be demonstrating more work on interoperability related to public health, using profiles from IHE and specifications from ANSI/HITSP.

Monday (August 31st), I'll be speaking on Lessons Learned Implementing Standards-based Public Health Alerts, and participating in another IHE Interoperability Showcase. We demonstrated public health alerting earlier this year at the HIMSS annual conference. To my surprise and great pleasure, there was a good deal of press on the topic. It promises to be important this year, especially in light of Novel H1N1 flu, which our initial collaboration predated, but which demonstrates the need for it. The alerting demonstration at PHIN will focus on H1N1.

The alerting demonstration uses the ANSI/HITSP T81 specification. This specification relies on the HL7 InfoButton URL implementation guide. The demonstration shows how an EMR can easily use this specification to query an alert repository for relevant alert information. We weren't the only organization to demonstrate this capability with CDC and JHU earlier this year. Developers from Regenstrief were also quickly able to enable their portal to use the same capability.
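To give a flavor of the EMR side, the query amounts to building a URL with the patient's context encoded as parameters. Here's a minimal Python sketch; the repository address is a placeholder, and the parameter names reflect my reading of the InfoButton URL guide, so check T81 and the guide itself for the exact names before relying on them:

    # Minimal sketch of an InfoButton-style knowledge request, per T81.
    # The repository URL is hypothetical; parameter names follow my reading
    # of the HL7 InfoButton URL guide -- verify against T81 before use.
    from urllib.parse import urlencode

    ALERT_REPOSITORY = "http://alerts.example.org/infobutton"  # placeholder

    def alert_query(code, code_system, display_name):
        """Build a URL asking the repository for alerts about a condition."""
        params = {
            "mainSearchCriteria.v.c": code,           # condition code
            "mainSearchCriteria.v.cs": code_system,   # code system OID
            "mainSearchCriteria.v.dn": display_name,  # display name
        }
        return ALERT_REPOSITORY + "?" + urlencode(params)

    # e.g., ask for alerts related to influenza (SNOMED CT)
    print(alert_query("6142004", "2.16.840.1.113883.6.96", "Influenza"))

There really isn't much more to it than that on the client side.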

I had prototyped the EMR side of this in about 4 hours at last year's PHIN conference after spending some time learning about the CDC project. I spent about a week cleaning up the user interface later in the year to show it at HIMSS, and there are some more refinements coming soon. Developers from Johns Hopkins University were able to implement the public health alert repository side of the interface in a matter of weeks in December of last year. Testing at connectathon took all of 20 minutes once we were set up.

At HIMSS, this was one of the smoothest and easiest interoperability demonstrations I ever participated in. If you missed it at HIMSS and happen to be at PHIN next week, drop by the IHE Interoperability showcase to see a demonstration. If you saw it at HIMSS, see how it's been enhanced since then.

If you want more details on how this is being moved forward within CDC, you might be interested in the entire session. I'll provide more details on the session later next week; I don't want to provide any spoilers ahead of time.

On Thursday, I will also be participating in another educational event at PHIN. Lisa Spellman (HIMSS Director of IHE), members of the IHE Quality, Research and Public Health domain, and I will be presenting a longer tutorial session on standards for public health. The second half of the session promises to be very interesting. The participants will be developing a profile proposal in realtime that will be submitted to a relevant IHE Domain dealing with public health issues. In that session, we use a process for developing the submission similar to the one the IHE Planning committees use for selecting proposals. It helps people learn about how IHE works, and gets them immediately engaged. I did this session two years ago at an IHE Canada meeting with two other friends from Toronto, and the results were spectacular. Not only did we wind up with a profile proposal, but it was also accepted by the IHE Patient Care Coordination Domain for development. I won't promise that what we develop will result in the same level of success, but I will promise to support what is produced as a result of this session as best as I'm able.

Tuesday, August 25, 2009

FUD

FUD, or Fear, Uncertainty and Doubt, is a marketing practice that involves undermining competing products. It was a classic strategy used by computer and software manufacturers in the 1970s. In this report on the meeting of the President's Council of Advisors, it appears that the practice hasn't abated.

My own interpretation of the remarks of Eric Schmidt, PhD (chairman and CEO of Google) is somewhat different from that of the report I cited above, but the FUD is still there.

Dr. Schmidt's comments can be found here starting around minute 56 of the webcast. His remarks can also be found in the text that I transcribed from that webcast below. I've added numbered markers [1] to link my own responses back to his remarks.

Eric Schmidt:
"This is obviously very important stuff. I'm concerned you are solving the I.T. problem and not the Web problem[8]. The stereotype is that the I.T. problem ends up with large complex databases which are highly specialized, very difficult to change and they are often very proprietary in nature [1].

Everything I've read [4] indicates that that is an underpinning of what's really going to happen independent of whether the legislation suggests it or otherwise. So in thinking about how do you -- and there are many reasons why this is likely to occur, the nature of the way it's funded, the fact there are very legitimate and real privacy and security concerns and the nature of the public-private partnership in medicine [2].
So I was trying to think about how would you solve this if you were a web person, an internet person? [3] One thing you would have is a rule that people's electronic health record is owned by them and that anybody who has one is required to, at the patient's request and written knowledge and all that kind of stuff, to export it. In other words, there is no closed data. I don't know if that is possible or not [5].
Another criteria that you could look at in your design of the system is ultimately the innovation occurs with graduate students in universities and teaching hospitals and so forth (not in the existing incumbent institutions) who are wild and crazy and have new ideas. That's how innovation in computing always occurs [6].
It is not at all obvious to me that this path will generate thousands of new startups of people who will be using the data that you correctly said is so interesting to come up with new and interesting solutions, new insights, new drugs, et cetera, et cetera.
And the third thing I've noticed is that in medicine, there is an awful lot of owned or not broadly available information that's clinical in nature. Most of the medical databases seem to be copyrighted and not generally available for reuse, although there are some exceptions [7].
So it seems to me if you are a web person and you look at this problem, you say, "The person owns their data." You need to have lots and lots of people who can then take that data and aggregate it again, with the HIPAA laws and so forth and so on, and build applications, and then they have to have a relatively freely available set of information.

So are you all working on that problem? Is that problem a separate problem? I understand the legislation doesn't call for this stuff."

To answer Eric's final question, I'd like to tell you one of my favorite innovation stories of all time.

It started back in 2004 when I was tasked by the company I worked for at the time to make them a leader in standards. I was nominated for this role because I had worked for another company previously that employed many of the developers of the XML and HTML standards, and I had done some standards work in the past. So, I knew a little something about standards.

I joined HL7 and got involved in this weird thing called a connectathon, which was being held jointly by this organization called Integrating the Healthcare Enterprise (which I'd never heard of before) and HL7. At the time I was working on some very innovative technologies using natural language processing, electronic text, and XML. We had developed an NLP system to identify problems, medications, allergies and procedures in electronic text, code it in SNOMED CT, and mark it up in XML very similar to the HL7 CDA Release 1 standard.

IHE had developed a profile called Retrieve Information for Display (RID) which we would be demonstrating at HIMSS. To make the problem more interesting, NIST had developed an ebXML based Registry, and we wanted to work them into the demonstration. So we added a wrinkle whereby instead of just making the information available inside a hospital network, we all added some code to our systems to make the information available through NIST's registry.

All of the vendors invested a great deal of time, effort and money to participate in this demonstration, but much to our chagrin, the highlight of the whole thing turned out to be NIST (so much so in fact that we had to rearrange the booth around them)! Why? Because they could access every document from every EHR in the booth. Cross enterprise document sharing was born the way that many innovations are, completely by accident.

IHE realized that it had a hit on its hands, and made plans to develop this capability as a new profile, Cross Enterprise Document Sharing, or as it is rather well known today: XDS. In that same year, HL7 completed CDA Release 2.0, which was (and still is) the latest and greatest version of its markup language for storing clinical documents. CDA Release 2.0 is to clinical documents what HTML is to the web. I won't claim that XDS registries are the Google of clinical documents, but they are pretty darn close. Subsequent profiling efforts by IHE have radically closed that gap.

These profiles were subsequently adopted by ANSI/HITSP, and additional work with ASTM and HL7 on the CCD (in 2006), and by HL7 and IHE on other CDA specifications now provides us with a rather rich source of clinical data. We don't just have summaries in the HITSP specifications, we have the records.

None of this requires the preconditions of ownership suggested by Dr. Schmidt; it simply requires that the patient has a right to the information contained within his provider's Electronic Medical Record. I believe this to be existing policy under HIPAA, and further expect it to be strengthened under ARRA (based on remarks made by Dr. Blumenthal in response to Eric).

So, yes, we are working on that problem. In fact, it's mostly solved from a technology perspective. What we need now is some good old IT investment to get that technology deployed.

Other feedback for Dr. Schmidt:

1. I don't care what technology you use to manage a business, it's almost certainly going to have a database behind it, and that database will be proprietary in organization, even though it may be built using standard database systems.

2. The knowledge represented in a "proprietary" database organization is a strength when the database structures efficiently address business requirements. I would expect Google to be proud of the proprietary data structures it uses to make web searching efficient. How the technology works is really beside the point. The fact is that healthcare providers need to use technology to manage the care they provide patients.

3. Just because they don't work for Google, don't assume that the people who are working to solve these sorts of problems aren't web people, or that the solutions aren't web based.

4. You aren't reading the right material. Read the standards, implementation guides and specifications, understand them, and then make your assertions. Try Googling XDS, HITSP or CCD.

5. Ownership and access to data are clearly two different things. Google didn't need ownership of all the data on the web to provide the huge value it does today, it just needed access. If you read current HIPAA regulation and ARRA legislation, it's pretty clear that patients have access. That is all that is needed to provide the value you want without even bringing up the issue of ownership. The best way to avoid a problem is, well, to avoid the problem.

6. Sometimes innovation is an accident in disguise (FWIW: My mother told me to always avoid always and never to say never).

7. I think it's a good thing that we have copyright laws to enable people and organizations who develop knowledge to benefit from it, don't you? What would you suggest the government do about that?

8. I think the real problem is an IT problem and not a web problem. The web problem we've got licked. The IT problem is how to get providers to deploy the right healthcare IT applications that enable the web solution. If you look at comparative industry spending on IT, you'll see where healthcare is compared to sectors like technology, banking, et cetera. We need investment in the IT problem to enable the web solution.

Monday, August 24, 2009

I need a new Joke

Today I sat in an hour-long presentation from the Office of the National Coordinator (ONC) in the HITSP Leadership meeting. This meeting is in preparation for the HITSP Face to Face meeting being held this week in the Chicago area. The focus of discussion was on the requirements for exchange of Consumer Preferences that HITSP will be addressing later this year.

I've been greatly impressed with the approach that ONC has been taking in the development of these requirements. If they continue in this vein, I'm going to have to repurpose my rather old joke about needing an office to coordinate those activities nationally. I think I might begin directing it elsewhere (e.g., you should probably be talking to the ...).

Today's meeting was with Dr. Carol Bean, the Acting Director of the Office of Interoperability and Standards (OIS), and Jodi Daniel, Director of the soon to be restructured Office of Policy and Research (OPR).

The process that I'm seeing used for the development of these requirements is much closer to what I commonly see in software development. It involves much closer collaboration between the groups involved in the development of requirements (ONC), designs (HITSP), implementation (NHIN), and verification and validation (certification and testing). I don't expect all of the information barriers to be broken down immediately.

What I most appreciated in today's meeting was the introduction of policy issues into the discussion. About a third of the time was spent on the policy issues, and the ramifications of policy on technology and vice versa. This is a very refreshing approach, because we've typically avoided these issues in the past, given that we had no way to get input. Jodi acknowledged that while these are not directly related, we can certainly better understand the requirements and scope of the needed technology if we have some idea of the direction that policy is headed.

The meeting closed with a great deal of feedback being given to, and being heard by ONC. I saw quite a bit of note-taking by the ONC representatives while others were speaking. I also saw and heard how the ONC staff responded to the messages that were being communicated to them.

This is beginning to be a two-way communication channel, instead of just being an obscure way to pipe my output to /dev/null.

Wednesday, August 19, 2009

The Challenge of a Standard Vocabulary for Laboratory Orders

One of the challenges that HITSP has been asked to address this year is to close gaps in Laboratory workflows. Standards are readily available to support the ordering of labs, and to deal with status updates. Some of the challenges that need to be addressed in the exchange of laboratory orders are related to specimen tracking, CLIA compliance, and managing changes in orders. These are all challenges that I believe are well understood, but a key challenge remains in the identification of vocabulary suitable for ordering laboratory tests.

I've enumerated some of the challenges on order codes below:


Specifying the Method of the Test
Laboratories must perform the tests that a physician orders, not something else. Additional negotiations are needed between physicians and labs when the test that they've specified isn't available. This wastes time and effort on the part of both parties. Allowing a physician to specify the method is necessary for various reasons:
  1. It produces results in the units expected by or familiar to the provider.
  2. It has particular characteristics (for example, the chance of false positives or false negatives) that are important to a particular diagnosis.
  3. It produces results quicker than other methods.
  4. It is more cost effective than other methods.

For example, a physician could order a test to measure the concentration of Hemoglobin A1C (a test commonly used to screen for diabetes, or to check how well it is controlled) in a variety of ways. There are three different methods listed in LOINC that can be used to determine the percentage of Hemoglobin A1C in blood:

  1. Calculated from other results
  2. Electrophoresis
  3. High performance liquid chromatography
If the physician specifies the method, then that specific test method must be used. However, LOINC also contains order codes that allow the test to be ordered without specifying the method, leaving the lab free to use any method that it has available.
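To make that concrete, here's how the choice might look in code. The LOINC codes below are from my own reading of the LOINC database and are shown purely for illustration; verify them against the current LOINC release before using them:

    # Illustrative Hemoglobin A1C order codes -- my reading of LOINC;
    # verify against the current LOINC release before relying on these.
    HBA1C_ORDER_CODES = {
        None:              "4548-4",   # methodless: lab may choose the method
        "calculated":      "17855-8",  # HbA1c by calculation
        "electrophoresis": "4549-2",   # HbA1c by Electrophoresis
        "HPLC":            "17856-6",  # HbA1c by HPLC
    }

    def order_code(method=None):
        """Return the method-specific code, or the methodless one."""
        return HBA1C_ORDER_CODES[method]

    print(order_code())        # 4548-4: any available method will do
    print(order_code("HPLC"))  # 17856-6: this method, and no other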


Additional Services (Reflex Testing)
Many times a lab will automatically initiate additional testing based on the outcome of a laboratory test. For example, in microbiology, the results of a culture could be tested for microbial sensitivity to drug treatments, the identification of a Type A Influenza could result in additional typing tests to determine the virus subtype (e.g., Avian Flu H5N1 or Novel Flu H1N1), or a urinalysis with abnormal results could reflex to include a microscopic examination. Different tests may be performed depending upon the outcome of the original test. In microbiology, the collection of drugs that may be reflex tested could depend on what is cultured.


The collection of rules governing what is actually performed in reflex testing is traditionally identified by the service (order) code. LOINC codes do not currently cover common test orders that include reflex testing.
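Since LOINC has no codes for these orders today, each lab effectively hides rules like the following behind its own proprietary order codes. Everything in this sketch -- the order codes, outcomes, and follow-on tests -- is made up purely for illustration:

    # Entirely hypothetical reflex-testing rules, keyed by a lab's own
    # proprietary order codes. Illustration only.
    REFLEX_RULES = {
        "UA-RFLX": {   # hypothetical: urinalysis with reflex to microscopic
            "initial": "urinalysis",
            "reflex_if": "abnormal",
            "then": ["microscopic examination"],
        },
        "FLU-RFLX": {  # hypothetical: influenza screen with reflex subtyping
            "initial": "influenza A/B screen",
            "reflex_if": "influenza A positive",
            "then": ["influenza A subtyping (e.g., H5N1, H1N1)"],
        },
    }

    def follow_on_tests(order_code, outcome):
        """Return the additional tests a lab initiates for a given outcome."""
        rule = REFLEX_RULES.get(order_code)
        if rule and outcome == rule["reflex_if"]:
            return rule["then"]
        return []

    print(follow_on_tests("UA-RFLX", "abnormal"))  # ['microscopic examination']

Until standard order codes capture these rules, every lab's "urinalysis with reflex" means something slightly different.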

Panels
LOINC contains numerous panels defined for common requests, e.g., Blood Chemistry, Complete Blood Count, Urinalysis Panel, Lipids, Metabolic Panels, et cetera. While these panels have been defined in LOINC, it isn't clear that there is industry agreement on what is required and optional within these panels. Some panels include results reported using a specific method, which brings its own challenges (see above).
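To see why agreement matters, imagine writing a panel definition down explicitly. The required/optional split below is my own invention, and that's precisely the problem: today, every lab would write it differently.

    # A hypothetical panel definition illustrating the required/optional
    # question. The component split is invented for illustration; there is
    # no industry-agreed version to copy from.
    PANELS = {
        "Complete Blood Count": {
            "required": ["WBC", "RBC", "Hemoglobin", "Hematocrit", "Platelets"],
            "optional": ["MCV", "MCH", "MCHC", "RDW"],
        },
    }

    def missing_components(panel, reported):
        """Return required components absent from a reported result set."""
        return sorted(set(PANELS[panel]["required"]) - set(reported))

    print(missing_components("Complete Blood Count", ["WBC", "RBC", "Hemoglobin"]))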

Billing
In some cases, it may be necessary to associate a different code with the same test depending upon the purpose for which it is used. This apparently is because there are different billing requirements for the different uses. I have to admit this is one area I least understand, and if true, one I least appreciate. If it's the same test, why aren't we paying the same price for it, regardless of what it is used for?

The Way Forward
There have been efforts in the past to resolve some of these issues, but none appears to have been successful yet. I honestly believe that one reason these efforts haven't succeeded is that the scope was too broad.

I am certain that HITSP can identify a set of order codes dealing with commonly ordered lab tests if the scope is restricted to avoid some of the complexities identified above. I believe that we can, working with other standards organizations, identify a value set of 50 to 100 common laboratory orders that could be the start of a solution. We needn't boil the ocean here or eat an elephant in one gulp. There is likely a good set of codes that could cover 50% or even more of commonly ordered labs.

What would the benefit of having this value set be? If we can accomplish the goal of standardizing 50% of lab orders, we should be able to reduce the time spent by healthcare professionals and their staff addressing issues around those tests, freeing them up to perform other tasks.

Another benefit would be that EMR systems could be installed using that default set of order codes while supporting the standards for placing and updating the orders. If this were to happen, the time and resources needed to implement a basic laboratory interface between a healthcare provider and a lab could be cut dramatically.

The end result might very well increase competition among the providing labs for these commonly ordered services, which is the elephant presently standing in the room. I certainly understand the challenges brought about by commoditization of a marketplace, having experienced it several times in my own career. However, I see competition as an incentive for labs to find faster, cheaper, better ways to provide these commonly ordered services.

Many years ago, I was a service manager in a computer store. Due to the commoditization of the computer market, I saw eroding margins eat away at the profitability of that store. It eventually closed, and we see few computer stores organized the way they used to be any more. Today you can go online and purchase a computer system for less than $500 that would outperform the $3500 model I was selling and servicing 15 years ago. Wouldn't we love to see that sort of erosion in healthcare costs over the next 15 years?

I realize that not all labs, test methods, or collections services are created equal. I'm certain many providers are happy with the services that they are getting. I don't think that it is necessary to require labs or providers to stop using existing code sets. I simply think that providers should have the option to use a standardized set and get the same level of service as if they used proprietary ones.

Removing interoperability barriers from the cost equation provides more choices for providers, payers, and, in the end, patients. More choices may mean that there are winners and losers in the market. We should also wind up with a more efficient and cost-effective healthcare system. Ultimately, that result best serves me, just one more consumer of healthcare.

Monday, August 17, 2009

Quis custodiet ipsos custodes?

Quis custodiet ipsos custodes? (Who will guard the guardians?)
-- Juvenal

I was reading the reports from the Healthcare Policy Committee meeting held last Friday. One of the recommendations from the certification/adoption workgroup was that:

"ONC should develop an accreditation process and select an organization to
accredit certifying organizations"

I agree wholeheartedly with this notion. I think it will address many of the concerns that have been expressed about CCHIT, both in the media and in reports from the HIT Policy Committee. One point of clarification I'd like to see made is that ONC not "develop an accreditation process", but rather develop a set of requirements for a certification process, leaving development of the process itself to the organization selected. Let's not reinvent the wheel again. One organization that has plenty of experience certifying certifiers is the American National Standards Institute.

On the flip side, the committee also recommended that:

"If necessary, ONC should commission (not just harmonize) the development of
standards."

On this recommendation, I am rather more hesitant -- because of the lack of oversight. The Federal government has been commissioning healthcare standards development for a number of years. There are many challenges in these activities:

  1. They are often based on US specific requirements. US-based requirements are certainly important, but healthcare is a global activity. When US standards are developed in a way that doesn't account for global requirements, we are left with standards that are used in just one of many jurisdictions.
  2. Many of these standards activities will occur within HL7. I've heard a number of challenges over the last few years in HL7 from non-US members about the amount of time HL7 spends on US realm specific implementation guides. If ONC wants to start developing US-based standards and guides, it should consider funding a US-realm affiliate in HL7 to support this process.
  3. Federal initiatives in the development of healthcare standards are not coordinated across the federal agencies. There are numerous projects sponsored by the FDA, CDC and VA which have overlaps.
  4. The selection of contractors to develop standards almost certainly preselects the standards organizations where these activities will occur, and often the approaches used. This may be appropriate when the activity is specific to one SDO, but this will not always be the case.
  5. Contracts to develop consensus based standards need to work within reasonable timelines. Most recently I've been involved with a project that is looking to go from start to finish in just a few short months. This is neither practical, nor an acceptable way to develop a consensus based standard. Good work takes time.

There are three suggestions for ONC that I would make in regard to this recommendation:

  1. Track the healthcare standards activities commissioned by Federal agencies.
  2. Make a call for standards development before funding new work.
  3. Commission standards development activities as a "last resort", after a harmonization process has identified gaps in standards and calls for development through existing channels have failed.

Tracking Standards Activities
I can count at least a double handful of standards activities that have been developed in HL7 or elsewhere over the past two years that have been either commissioned by or supported by Federal government agencies. Some of these include:

  • CDA Implementation Guide for Patient Assessments (ASPE)
  • eMeasures (Indirectly by CMS)
  • Implementation Guide for CDA Release 2 – Level 3 Healthcare Associated Infection Reports (CDC)
  • Implementation Guide for CDA Release 2 – Level 3: Public Health Case Reporting (CDC/NCPHI)
  • EHR Vital Records Functional Profile (CDC)
  • Composite Privacy Consent Directive (SAMHSA)
  • Individual Case Safety Report (FDA)
  • HL7 Version 2.5.1 Immunization Implementation Guide (CDC)
  • Hardened Rules for Clinical Decision Support (AHRQ)

This is an incomplete list by the way. It is based on my own knowledge of these activities, and some cursory review of the HL7 Project planning resources. I know there are more projects going on, but there is no one resource that I can use to access that information. ONC should be responsible for tracking these projects and making that information publicly available.

The information should include all contracts and projects that involve the development of standards, implementation guides, value sets, minimum data sets, clinical decision support rules and similar specifications that have Federal involvement. One use of this resource would be to help coordinate Federal activities in healthcare standards. There should also be a way to readily identify these projects before contracts are awarded similar to what is presently offered on the grants.gov website.

Call for Standards Development
I'd like to see ONC use the processes ANSI/HITSP already has in place for:

  1. Requesting information about standards that might be used to fulfill harmonization requests
  2. Requesting development of standards to fill gaps found during the harmonization process

We've successfully used those processes in ANSI/HITSP to obtain information about standards activities relevant to harmonization requests and to foster the development of new work. These processes have been very effective in pushing forward activities within OASIS, HL7, IHE and elsewhere to develop standards meeting requirements for the Emergency Responder EHR and the Consultations and Transfers of Care harmonization requests (formerly known as Use Cases).

I would recommend that ONC lead with a request for standards development to the relevant standards development organizations. We've seen that they can effectively produce the necessary standards without additional federal spending. It's only when those activities aren't effective that commissioning of standards should be considered. The benefit of this approach is that it doesn't necessarily preselect the SDO, and it opens the process to participation by multiple organizations. It also doesn't precondition the development down one pathway or another, which leads to more innovation in the exchange of healthcare information.


Friday, August 14, 2009

IHE Patient Care Coordination Accepting Profile Proposals

This is indeed a week for announcements. One of the many hats that I am presently wearing is that of co-chair of the IHE Patient Care Coordination (PCC) Technical Committee. I expect to change that hat to co-chair of the PCC Planning Committee in coming weeks.

The Patient Care Coordination domain in IHE was established in 2005 to deal with integration issues that cross providers, patient problems or time. We deal with general clinical care aspects such as document exchange, order processing, and coordination with other specialty domains. The committee also addresses workflows that are common to multiple specialty areas and the integration needs of specialty areas that do not have a separate domain within IHE.

We are presently looking for proposals for new profiles to be developed over the next year. See the details below.

Keith

P.S. If you have an idea that is more appropriate for the IHE IT Infrastructure or Quality, Research and Public Health domains, contact me here and I can put you in touch with the right people.




IHE PCC Call for Proposals:
Interested parties are invited to submit proposals for new Profiles for the Patient Care Coordination (PCC) Domain to be considered for development in the 2010-2011 cycle of the IHE PCC Domain. Feel free to forward this announcement to other interested parties. The deadline for submission of a proposal for a new Profile is September 15th, 2009, at 11:59 pm CST.

Proposals must follow the Brief Proposal Template and address:

  1. What is the problem and how does it look in practice (use case)?
  2. How should things look in practice if it were fixed?
  3. What specific standards could be used to solve the problem?
If you are interested in submitting a proposal to IHE, please complete the submission using the template above. You can send it to me via e-mail and I will forward it to the correct parties. If you need my contact information, direct message me on Twitter here, or leave a comment and I'll follow up with you.

Invitation to attend IHE PCC Planning Committee meeting: October 5-6, 2009
The PCC Planning Committee will hold its 2009-2010 planning meeting on October 5-6, 2009 at the RSNA headquarters in Oak Brook, IL. The first day will run from 10:00am – 5:00pm CDT to allow for early morning travel, and the second day will run 9:00am – 6:00pm CDT. The PCC Planning Committee will vote on the final selection of short-listed profiles to move forward for submission to the PCC Technical Committee for 2009-2010 at this meeting. The meeting will be held at the Radiological Society of North America, 820 Jorie Boulevard, Oak Brook, IL 60523.

IHE membership is required to attend or participate in the PCC Planning face-to-face meeting, and to submit votes. If you are unsure whether your organization is an IHE member, please check the membership list. If you would like to apply for membership, please see this webpage.

Planning Webinars will be scheduled for September and October to allow authors to present their proposals to the PCC Planning Committee in advance of the face-to-face. I will publish the schedule here after the close of the submissions.

Webinar #1: September 22, 2009 • from 1:00 pm-3:00 pm Central Time
Webinar #2: September 24, 2009 • from 10:00 am-12:00 noon Central Time
Webinar #3: September 29, 2009 • from 3:00-5:00 pm Central Time [Joint with QRPH]
Webinar #4: October 1, 2009 • from 11:00 am - 1:00 pm Central Time [if needed]

Thursday, August 13, 2009

HITSP and Meaningful Use

This seems to be a week for announcements. In two weeks HITSP will be holding its second town hall meeting. This one will be worth listening in on, as they will be discussing some of the HITSP work products that impact meaningful use. I would be on this call, but unfortunately, I'll actually be in HITSP meetings all week long ;-)>

Keith


HITSP Free Webinar Series

The Healthcare Information Technology Standards Panel (HITSP) is identifying the standards that will support the exchange of healthcare information across the United States. Learn more about the Panel and how you can engage in shaping the future of HIT interoperability during a series of free webinars - one for each month of 2009.

HITSP and eTown Hall II with Dr. John Halamka

Date: Thursday, August 27, 2009
Time: 2:00-3:30 pm ET
URL: https://ansievents.webex.com/ansievents/onstage/g.php?d=663911895&t=a
Meeting ID: 663 911 895
Audio: 1.866.699.3239 (toll free)
Participant ID: 663 911 895


What you will learn
During this 90-minute webinar, participants will:


  • Get an update on the final definition of meaningful use released by the HIT Policy Committee.

  • Examine the relationship between HITSP work products and meaningful use.

  • Explore how HITSP will continue to work with the Office of the National Coordinator, the HIT Standards Committee, and the HIT Policy Committee to facilitate the development of needed standards and achieve interoperability.
  • Learn more about the next work products for HITSP.

Who should attend
Consumers, government representatives and policy makers, healthcare providers, standards developers, vendors, and any other interested stakeholder.
Presenter


  • John D. Halamka, MD, MS, chair of HITSP

Dr. Halamka is also CIO of the CareGroup Health System, CIO and dean for technology at Harvard Medical School, chair of the New England Health Electronic Data Interchange Network (NEHEN), CEO of MA-SHARE, and a practicing emergency physician.


Registration
Participation is complimentary, but advance registration is required at www.HITSP.org/webinars


Sponsors
HITSP Education, Communications and Outreach Committee


Accessibility
Participation during the live event requires both telephone and Internet access. Sessions will be recorded for playback via the Internet.


Wednesday, August 12, 2009

HL7 Balloting opens on eMeasure Specification

In June of this year HL7 approved a project to create a new standard to define the structure and format of quality measure definitions. The first draft of this standard is now open for review and balloting through the HL7 Ballot web site. I was supposed to be the modeling facilitator for this project, but given the very short time frame in which these materials were put together, Bob Dolin did most of the work on the modeling. I'm impressed with the progress that has occurred with this ballot in a very short period of time.

See the message from the project team below, which I've reproduced without the e-mail addresses (to prevent harvesting by spam-bots). If you want to contact the project team, just leave a comment here and I will forward it along, or you can find e-mail addresses for Bob Dolin and Liora Alschuler on the Structured Documents home page.

Announcement
The HL7 Health Quality Measure Format (HQMF) ballot is now open. Industry stakeholders are encouraged to review and vote on this draft standard.

What is HQMF?
The HQMF is a standard for representing a health quality measure as an electronic document. Through standardization of a measure's structure, metadata, definitions and logic, the HQMF provides for quality measure consistency and unambiguous interpretation. A health quality measure encoded in the HQMF format is referred to as an "eMeasure."

The development of this specification was supported by volunteer efforts and through a National Quality Forum (NQF, http://www.qualityforum.org/) contract with the U.S. Department of Health and Human Services to promote the effective use of Electronic Health Record (EHR) systems. The initiative aims to significantly improve the quality and efficiency of patient care by making possible the capture and reporting of quality measure information for physicians and other health care providers.

The HQMF (eMeasure) specification and supporting documentation is publicly available at http://www.hl7.org/v3ballot/html/domains/uvqm/uvqm.htm .

Ballot Overview
On September 2, 2009, NQF will host a free webinar to provide an overview of ballot content for the eMeasure. You are welcome and encouraged to participate. Please visit the URL below to register for the event.

Ballot Pool Signup
Pool sign-up is currently available and will remain available until September 7, 2009, one week before the ballot close date. To sign up, direct your browser to http://www.hl7.org/ctl.cfm?action=ballots.home.

If you wish to vote on the HQMF specification, you must join the ballot pool. Please note the following important dates.

  • Ballot Pool Signup Period: Now until September 7, 2009
  • Ballot Open Date: August 10, 2009
  • Ballot Close Date: September 14, 2009

Supporting Documentation:

Please feel free to contact any one of us if you have questions during the HQMF ballot cycle.

Sincerely,
HQMF Core Project Team

  • Bob Dolin
  • Crystal Kallem
  • Liora Alschuler
  • Floyd Eisenberg
  • Thomas Murray
  • Gay Giannone
  • Rick Geimer

Tuesday, August 11, 2009

IHE Releases Trial Implementation Profiles

Three domains of Integrating the Healthcare Enterprise recently updated their technical frameworks and published new profile supplements for testing in the 2010 North American Connectathon.

IT Infrastructure
Click on the link above for a complete list and to download the updated Technical Framework. New and updated profiles for this year include:

  • Document Metadata Subscription (DSUB) -- This profile provides a mechanism for organizations (e.g., those supporting medical home) to subscribe to new information for a given patient and be notified when it becomes available in an XDS Registry.
  • Multi-patient Queries (MPQ) -- This option on XDS provides a mechanism to perform queries returning results for multiple patients, enabling clinical research, public health, and quality accreditation organizations to access information on multiple patients in support of their missions.
  • Cross-Community Patient Discovery (XCPD) -- This profile supports the means to locate communities which hold patient relevant health data and to translate patient identifiers across communities holding that patient’s data.
  • Retrieve Form for Data Capture (RFD) -- Newly revised to support XHTML forms, the RFD profile provides a means for the retrieval and submission of forms data between physicians/investigators and electronic data capture systems or other data collection agencies.
Patient Care Coordination
Click on the link above for a complete list and to download the updated Technical Framework. New and updated profiles for this year include:
  • EMS Transfers of Care (ETC) -- This profile supports the exchange of clinically relevant data between EMS providers and emergency care facilities.
  • Patient Plan of Care (PPOC) -- This profile extends the plan of care in the current technical framework to support the exchange of coded plans of care using data elements from nursing processes currently in common use.
  • Labor and Delivery Record (LDR) -- This profile supplement provides comprehensive information regarding the course of labor and delivery to healthcare providers caring for both the mother and the newborn in the postpartum period.
  • Request for Clinical Guidance (RCG) -- This profile supports the integration of clinical decision support services into healthcare IT systems.
  • Also updated for this season are the Antepartum Record (APR), Emergency Department Encounter Summary (EDES), and Immunization Content (IC) profile supplements.
  • Finally, all new content modules added by these supplements can now be found in the Content Modules Supplement.
Quality, Research and Public Health
Click on the link above for a complete list.
New and updated profiles for this year include:
  • Clinical Research Data Capture (CRD) -- This profile describes the content and format to be used within the Prepopulation Data transaction described within the RFD Integration Profile. The purpose of this profile is to support a standard set of data in CCD format which the Form Filler provides for use in Clinical Research. In addition this profile will reference the ability to convert this output into a standard case report form (Standard CRF) consisting of ODM and CDASH.
  • Drug Safety Content (DSC) -- This profile describes the content and format to be used within the Pre-population Data transaction described within the RFD Integration Profile. The purpose of this profile is to support a standard set of data in CCD format which the Form Filler provides for use in reporting adverse events as it relates to Drug Safety. In addition this profile will reference the ability to convert this output into the ICH E2B M Standard.
  • Mother and Child Health (MCH) -- This profile describes the contents to be used in automating the submission of child and maternal health information to public health agencies via the mechanism provided by the Retrieve Form for Data Capture (RFD) integration profile.
  • Retrieve Protocol for Execution (RPE) -- The Retrieve Protocol for Execution Profile (RPE) provides an automated mechanism for an Electronic Health Record (EHR) to retrieve a complex set of clinical research instructions (Protocol Definition) from an Electronic Data Capture (EDC) system to execute within the EHR.
These profiles and others will be tested at the 2010 IHE North American Connectathon being held January 11 - 15 at the Hyatt Regency hotel in downtown Chicago. Applications for this connectathon open August 28 and will close on September 30th.
If you want to see what an IHE Connectathon is about, read some of my previous postings from the connectathon floor:
I hope to see you in Chicago.
Keith

Friday, August 7, 2009

Cool Toys

I've been blogging here for more than a year now. When I started this blog back in June of 2008, my purpose was simply to provide a place where I could talk about some of the ideas I have about healthcare standards. Over the last year, this blog has also become a tool that I use to crystallize some of these ideas. Forcing myself to write about them brings clarity, at least in my own mind, on what I want to say.

One of the cool toys that I've been using since I started this blog is Google Analytics. I track the number of hits I get on this blog on a regular basis. The nice thing about the service is that it is free. I can do all of this tracking without paying a cent, which is good, since this blog is really a personally funded activity. It was extremely easy to hook my blog up to the service, and having done so, I get a great deal of information out of it.

Here are some interesting statistics that I know as a result of using it:
1. People are reading this blog from 64 different countries on six different continents (I don't have any stats for Antarctica, but then, neither does Google).

2. The top five countries are:
  1. The US (70%)
  2. Canada (4%)
  3. India (3%)
  4. Australia (2%)
  5. The UK (2%)

I happen to know that Canada is underreported, since several Canadian readers follow this blog through its LiveJournal syndication (http://motorcycleguy.livejournal.com/).

3. Firefox is gaining in popularity (almost 40% of readers use it, up from 30% a year ago), and IE is losing ground (down to 50% from 70% a year ago).

4. The five most popular postings are:

5. Understanding Genetics (5%)
4. What is HITSP Doing? (5%)
3. Reporting Genetic Test Results (6%)
2. If I had a Hammer (14%)
1. Clinical Decision Support (17%)

5. The five most popular days are (to which I've attached the most likely article):

5. July 8, 2008 (Understanding Genetics)
4. July 17, 2009 (Hello again, it's me, stirring up the pot.)
3. August 6, 2009 (Gozinta and Gozouta yesterday)
2. July 29, 2009 (At the rim of the dam or the edge of a precipice?)
1. August 5, 2009 (I have no clue which article was most viewed)

6. The four most popular weeks are the last four. My fifth most popular week was the first week I wrote in this blog.

Obviously, the more I write, the more I'm read. I've broken several records repeatedly in the last month. Readership has risen from a low of 38 visits in a week a month ago to nearly 200 visits this week ... you may even be the 200th reader. Don't worry, there are no prizes to be awarded or annoying popups if you are.

If you blog, you might be interested in Google Analytics. It's a cool toy. As a result of using it, I can give you this annual report, and I think I'll have to make that a ritual moving forward.

Keith

Wednesday, August 5, 2009

Gozinta and Gozouta

In an engineering career spanning three decades, I’ve been involved in a number of process improvement programs, incorporating ISO 9001, SEI CMM, some home-grown work, and a smattering of Six Sigma. In all of these, I’ve found that three things are necessary in any process improvement program.
  1. A well-documented process
  2. Meaningful measures applied to process outcomes
  3. Deep commitment from organizational leadership on execution
The last is also the most critical. Process improvement cannot be a grassroots initiative. It takes a great deal of resources and commitment to execute well.

As a result of being involved in these activities, I’ve learned quite a bit about software development as an engineering discipline. A key take-away from these initiatives is that measurement needs to be baked into any process. You can begin with a process that doesn’t incorporate measurement, but by the end you will have one that does.

Over the last five years, we’ve seen increasing attention to quality initiatives in healthcare. There are many parallels between these quality improvement initiatives within healthcare and similar ones I’ve experienced in the field of software development. One of the most heartening parallels is the deep commitment to this quality improvement effort at the Federal level.

Unfortunately, the more deeply I look into current initiatives, the more I see some critical gaps in the first two requirements, which I’ll cover in the reverse order.

Lack of standards for communicating well-defined Quality Measures
One of these gaps is in the area of meaningful measurement. In order for a quality measure to be meaningful, it needs to be correlated with outcomes. The better the correlation, the more useful the measure is as an indicator of quality. This may be obvious, but needs to be stated in any case.
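To make that concrete, here’s a minimal sketch in Python (with entirely made-up data and field names) of the sanity check this implies: compare outcomes between the cases that met a proposed measure and the cases that didn’t.

    # Hypothetical records: did care meet the measure, and was the outcome good?
    records = [
        {"met_measure": True,  "good_outcome": True},
        {"met_measure": True,  "good_outcome": True},
        {"met_measure": True,  "good_outcome": False},
        {"met_measure": False, "good_outcome": False},
        {"met_measure": False, "good_outcome": True},
    ]

    def outcome_rate(population):
        """Proportion of a population with a good outcome."""
        return sum(r["good_outcome"] for r in population) / len(population)

    met = [r for r in records if r["met_measure"]]
    unmet = [r for r in records if not r["met_measure"]]

    # If the measure is meaningful, these two rates should differ noticeably.
    print("Outcome rate when measure met:     %.0f%%" % (100 * outcome_rate(met)))
    print("Outcome rate when measure not met: %.0f%%" % (100 * outcome_rate(unmet)))

A measure that fails this kind of check may still be easy to compute from billing data, but it tells you little about quality.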

Presently, we describe many healthcare measures in the US using billing data and codes. However, all discussions I’ve ever had with clinical experts indicate that billing data is insufficient to support refined clinical judgment. In a recently well-publicized case reported on by John Halamka in the Limitations of Administrative Data, billing data may not even provide a good indicator of what is happening to a patient. That doesn’t mean the measures we are using don’t provide some indication of quality, just that there are better measures that could be developed.

The next part of this issue is to ensure that we have a good way to communicate measure definitions so that they can be applied to the process. There is groundbreaking work presently being developed by the HL7 eMeasure Project. This project is developing a standard to specify how the definitions of quality measures will be exchanged. The first draft is out for ballot starting August 10th. One challenge of this project is that it is exploring areas of the HL7 RIM that have been underutilized in prior standards. I personally don’t expect the first draft to pass ballot because of this challenge, but it is a good start.

Lack of standards for communicating Guidelines
There are also gaps in the standards used to communicate process documentation. The issue here is the relationship between clinical guidelines and quality. If you look at clinical guidelines as “process documentation” and quality measures as the measurement to be baked in, you will see that we lack not only standards that allow measurement to be baked into our processes, but also standards that allow the processes themselves to be described. This is another interesting challenge because it often falls into the realm of decision support, which I’ll discuss later in this article.

If we are to implement clinical guidelines in electronic health records, they need to be codified in a way that they are executable and measurable. However, guidelines aren’t written to facilitate their implementation or measurement in EHR systems.
The current process goes something like this:
  • Write the guideline using evidence-based medicine.
  • Write the quality measure based on the guideline.
  • Apply available codes to the quality measure.
  • Apply the measure to an institution.
There are several reasons why the process works this way, but breaking it up into these individual steps is in some ways like playing a game of telephone. Each step introduces an opportunity to deviate from the original intent of the guideline. One of the difficulties today in developing executable clinical guidelines is that there is no standard language for computably describing these guidelines.

For the purpose of illustration, I’ll take a very simple guideline in use today, which I’ve slugged to avoid giving medical advice:
“Patients with a suspected PROBLEM should take DRUG within one hour of the onset of PROBLEM symptoms, unless DRUG is contraindicated.”
This guideline assumes a great deal of medical knowledge. Consider the following terms in it:
  • suspected PROBLEM
  • DRUG is contraindicated
  • onset of PROBLEM symptoms
  • PROBLEM symptoms
A key role of measure developers is to supply definitions of these phrases. There are a lot of subtleties involved in this measure, and while current EHR systems are able to support it, they must be implemented appropriately to do so.
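To sketch what those supplied definitions might look like in computable form, here’s one possibility in Python. Every code, value set, and field name below is hypothetical, invented purely for illustration; a real measure developer would bind these to standard vocabularies.

    from datetime import datetime, timedelta

    # "suspected PROBLEM": any presenting symptom from this value set;
    # no confirmed diagnosis is required.
    PROBLEM_SYMPTOMS = {"SYMPTOM-1", "SYMPTOM-2", "SYMPTOM-3"}

    # "DRUG is contraindicated": more than just allergy -- interactions and
    # comorbidities would each contribute their own codes.
    DRUG_CONTRAINDICATIONS = {"ALLERGY-TO-DRUG", "INTERACTING-DRUG", "COMORBIDITY-X"}

    # "within one hour of onset": the window starts at symptom onset.
    TREATMENT_WINDOW = timedelta(hours=1)

    def guideline_applies(patient):
        """True when the patient has a suspected PROBLEM and no contraindication."""
        suspected = bool(PROBLEM_SYMPTOMS & patient["symptom_codes"])
        contraindicated = bool(DRUG_CONTRAINDICATIONS & patient["condition_codes"])
        return suspected and not contraindicated

    def guideline_met(patient):
        """True when DRUG was given within the window after symptom onset."""
        return patient["drug_given_at"] - patient["symptom_onset_at"] <= TREATMENT_WINDOW

    # A hypothetical patient record, captured at the point of care:
    patient = {
        "symptom_codes": {"SYMPTOM-1"},
        "condition_codes": set(),
        "symptom_onset_at": datetime(2009, 8, 5, 9, 0),
        "drug_given_at": datetime(2009, 8, 5, 9, 40),
    }
    print(guideline_applies(patient), guideline_met(patient))  # True True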

There are two ways to alleviate this problem. The first is to hack the guideline to fit the available data. Here are just a few possible hacks (note that I DO NOT RECOMMEND hacking guidelines; this is just meant to illustrate the problem):
  1. Use the discharge diagnosis to identify patients with PROBLEM.
  2. Exclude patients allergic to DRUG.
  3. Compute the time between admission and administration of DRUG.
Every single one of these hacks results in the measurement of a clinical guideline that is different from what was written. First of all, the phrase “suspected PROBLEM” does not mean a confirmed diagnosis of PROBLEM. It just means that the patient has one or more symptoms of PROBLEM; it doesn’t even require waiting for confirmation by a lab test or diagnostic procedure. Next, there are more contraindications for a DRUG than allergies. Finally, the starting time for the interval measured becomes admission, which is almost assuredly later than the onset of symptoms (unless the PROBLEM was suffered after admission).
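Continuing the hypothetical sketch from above, here is what those hacks look like next to the original definitions; each one quietly measures something different from what the guideline says.

    from datetime import timedelta

    TREATMENT_WINDOW = timedelta(hours=1)  # same hypothetical window as before

    def hacked_applies(patient):
        # Hack 1: discharge diagnosis stands in for "suspected PROBLEM" -- it
        # misses unconfirmed cases and is only known after the encounter ends.
        confirmed = "PROBLEM-DX" in patient["discharge_diagnoses"]
        # Hack 2: allergy stands in for all contraindications -- too narrow.
        allergic = "ALLERGY-TO-DRUG" in patient["condition_codes"]
        return confirmed and not allergic

    def hacked_met(patient):
        # Hack 3: the clock starts at admission rather than symptom onset,
        # which is almost always later than when symptoms began.
        return patient["drug_given_at"] - patient["admitted_at"] <= TREATMENT_WINDOW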

The right way to resolve this is to ensure that the guideline clearly and crisply defines what it means. This takes effort, and it introduces a natural tension between the need to publish the information for human use and the need to codify it for measurement. With just a little bit of training, non-clinical persons can easily follow a guideline similar to the one described above, and as a result many lives can be saved. However, encoding that same clinical guideline takes some effort, and that encoding is needed if you want to automate measures of its implementation. Here the person in the loop, who makes the guideline easy to apply, becomes less effective: measurement is a repetitive process that people aren’t terribly good at, but at which computers excel.

Automation of Clinical Decision Support
The last challenge is perhaps the most difficult of all to address. We’ve been struggling for years to develop standards that will allow us to automate clinical decision support. Much of the attention has been focused on languages (e.g., GELLO, Arden Syntax) for specifying the clinical decision support rules.
As a software engineer, I have a different perspective on this problem. Clinical decision support rules that are executed by a computer system are nothing more than software programs. Any software engineer who tries to tell you that there is one best language for developing computer software is simply ignoring the plethora of computer programming languages available. Programming languages are tools, and if you’ve read what I have to say about tools previously, you know what’s coming. You need to select the right tool for the job. Different decision support problems will require different techniques to solve them. We shouldn’t try to limit decision support by any one language.

The definition of a standard language for decision support rules is neither necessary nor desirable. Rather, we should focus on the development of a standard way to integrate decision support services into healthcare IT. Decision support standards need to be focused upon what the inputs and outputs of a decision process are (the Gozintas and Gozoutas, as one mentor of mine liked to call them).

If we can define what those inputs and outputs are in a standard format, and provide for an arbitrary (perhaps even proprietary) definition of the logic used to make the decision, we’ve developed the core component necessary for a standard guideline. These Gozintas and Gozoutas happen to be the same data points needed for quality measures, and the same ones that providers record during the provision of care.
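Here’s a minimal sketch of that core component in Python, assuming nothing from any actual IHE or HL7 specification: the Gozintas and Gozoutas are standardized, and the decision logic behind the interface is deliberately left arbitrary.

    from dataclasses import dataclass, field
    from typing import Protocol

    @dataclass
    class Gozintas:
        """Standardized inputs: the data providers already record."""
        symptom_codes: set = field(default_factory=set)
        condition_codes: set = field(default_factory=set)
        medication_codes: set = field(default_factory=set)

    @dataclass
    class Gozoutas:
        """Standardized outputs: recommendations, with a rationale."""
        recommendations: list
        rationale: str

    class DecisionService(Protocol):
        """Any service honoring this contract can plug in; how it decides --
        GELLO, Arden Syntax, a rules engine, proprietary code -- is its own
        business."""
        def decide(self, inputs: Gozintas) -> Gozoutas: ...

    class SimpleRuleService:
        def decide(self, inputs: Gozintas) -> Gozoutas:
            if "SYMPTOM-1" in inputs.symptom_codes:
                return Gozoutas(["Give DRUG"], "Suspected PROBLEM")
            return Gozoutas([], "No applicable guideline")

    service: DecisionService = SimpleRuleService()
    result = service.decide(Gozintas(symptom_codes={"SYMPTOM-1"}))
    print(result.recommendations, "--", result.rationale)

Swapping in a different service changes the logic without touching the interface, which is exactly the property a standard needs.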

Two profiles developed by Integrating the Healthcare Enterprise can help here. The Care Management (CM) Profile developed last year describes how Gozintas to a clinical decision support process can be codified in a Guideline using the HL7 Care Provision Draft Standard for Trial Use. This profile is desperately in need of a document format for clinical guidelines that better supports what it is attempting to accomplish.
The Request for Clinical Guidance (RCG) profile will be published for Trial Implementation here in the week of August 10th. The public comment version of it can be found here until then. This profile defines how a Clinical Decision Support Service can be integrated into a Healthcare IT environment using the same HL7 DSTU listed above. It presently needs to have the Gozintas and Gozoutas defined for it by a separate content profile, but ideally these would also appear in computable documents. I plan on testing this profile out later this month and will tell you the outcome in a subsequent post.

If we piece together definitions of quality measures, the inputs and outputs for clinical decision making, and arbitrary languages and algorithms for decision making, we can build a process definition with measurement baked in. We omit the contentious step of defining a single standard language for decision support, but move healthcare quite some distance toward having a repeatable quality process.
Those Gozintas produce a Gozouta that I want.