
Thursday, November 16, 2017

Data Philosophy and Game Theory

It's Dirk Stanley's fault.  He asked on FB:

#CMIO #CNIO #Informatics #HealthIT and other #ClinicalJedi, need your help : Philosophically, which is more important? 
1. Data in
2. Data out 
3. Both are equally important 
All opinions welcome.

My response was pretty straightforward:
Data in, else we would not be where we are today. We need data to act, and can always get it ourselves, and trust it better when we do. But, if all are playing for optimal payoff, it could be viewed as a zero sum game, and so best view is both.

But at the same time it needs a heck of a lot more explanation.

Data sharing as presently practiced by healthcare organizations is in some ways a zero-sum game, and in some ways not.

It's not a zero-sum game for healthcare organizations. The lack of accessible data for use in healthcare decision making results in efforts to obtain that data. Under current practice (fee-for-service), the healthcare organization will do what is necessary to obtain it (order the test, do the assessment, et cetera).  And it will get paid to do so, so there is little to no additional cost.  For some of the testing there is little gain (many lab tests are low-margin, commodity items), and for others a substantial one.  For example, an MRI can cost $1-3K or even more, and when all is said and done there is definite value going to the creators of the data.

The data, thus gathered, also becomes an asset of the healthcare organization which gathered it. It has value to the organization when they use it, but could help a competitor in some way if they share it.  Sharing that data with a competitor therefore has a cost: the organization has little to gain by sharing it, and (potentially) something to lose when another party accesses the data it shared.

Victor Dubreuil, 'Money to Burn', oil on canvas, 1893

There's money to burn in this system. Between healthcare organizations, it isn't a zero-sum game. The patient (or their payer) is continuously pumping cash into the system to fill the data needs of the healthcare organization, cash which also supports the care of that patient when they stay within the same organization.  As soon as the patient moves to a different healthcare organization, they lose the value of that data, which ultimately they (or their employer) paid for.

Where the zero-sum part comes in is that what the provider gains with regard to acquiring data, the patient (or their payer) loses. As long as that remains the case, there's little incentive to reduce duplicate testing.  I watched this recently as a young person I know spent a month repeating almost the same tests she had previously gone through to diagnose a chronic illness, because more than 3/4 of the way through that process she was forced to change healthcare providers (due to a forced change in health insurance).  There was NO reason for her to refuse the additional testing (getting the diagnosis is critical to her health), and no value to her new healthcare provider in reusing the previously performed tests.  Her payer was out the money for the repeated testing, even though those tests had been done previously.

In the rest of the world of business, when I pay for something to be done, I own that thing. If it is a work of intellectual property: I paid for it, I own it, and I have easy access to use it. In healthcare, when a patient pays for a test (or a payer pays on their behalf), the data is treated as if it is owned by the organization that ordered the test (gathered the data), and as the patient I don't always have easy access to it.  I can only benefit from it as long as I maintain a relationship with that healthcare organization.

I can see how game theory could be applied to this situation, such that a system of value-based care could be designed where the greatest value is when there are incentives for data sharing.


Wednesday, November 15, 2017

Understanding and addressing technical debt

Architects and accountants have something in common: they need to understand their organization's assets and liabilities.  For an accountant, these are fairly well understood.  For an architect, one might think that they are as well.  Your assets are your IP and the processes that add value, that enable your organization to out-pace its competition.  And your liabilities are those that don't.  We have a special word in architecture for IP liabilities: it's called technical debt.

Technical debt is a great opportunity for architects to benefit their organization, and here's why: it's something that is already costing your organization in terms of resources and credibility.  You can probably count the defects in the package, the tech support calls raised, the number of open customer issues that are due to technical debt. You can put a very clear value on it, which makes it a great candidate for cost reduction.  It isn't free, but it is often quite worthwhile.

How do you do it? It's pretty simple -- pick a mess and clean it up.  I don't just mean pick the stuff up off the floor either, like your teen would clean their room.  At the very least, polish it like a fourth-year recruit at West Point would.  At best, remodel, and I mean completely remodel or rebuild -- like Grahame Grieve did for HL7 Version 3 in creating FHIR.  The corollary to "if it ain't broke, don't fix it" should be "if it keeps breaking, stop fixing it and replace it."  When car repairs exceed the cost of payments, it makes sense to get a new car (unless you are talking about something like a '69 Pontiac LeMans*).

It's painstaking work.  Usually messes like this accrue because code becomes fragile, knowledge gets lost, and nobody knows quite how that works (or doesn't).  And yet there is still some underlying value to the code, because it does something important and cannot be otherwise expunged, so some extra effort is needed.  It's like the antique in the attic that just needs the right refinishing to become an awesome heirloom.  This is frustrating work, often risky, and sometimes it's downright boring and tedious (ever read through nearly a thousand different logging messages?).  On the other hand, the value of the work can be made very clear and well defined.

The biggest challenge you will run into in trying to take on work like this is people who are concerned about the risks you are taking on. The biggest tool you have to combat risk is knowledge, and sometimes that means making the time to obtain more. The most fragile software components are usually the ones about which the least is known. Go learn it. In the end, you'll be glad you did, even though getting it finished wasn't the most glorious thing you've ever done.

  -- Keith

P.S. As a teen, I spent the better part of a winter replacing an engine in a '69 Pontiac. It was cold, it was hard, it sucked.  It was my ride to school, and I learned a ton.  It looked something like the picture below, but was black.






Tuesday, November 14, 2017

HL7 FHIR Proficiency Exam

Take the HL7 FHIR Proficiency Exam and get acknowledged by HL7:

Prove your proficiency with the HL7 FHIR STU3 specification.  Become identified as an individual with FHIR proficiency.  Employers, vendors, and providers: help HL7 influence the quality of the FHIR workforce.

Note: This is a proficiency exam rather than a professional implementation credential.  HL7 is in the planning stages of a full professional certification.

Competencies Tested

It's about breadth rather than depth:

  • FHIR fundamentals
  • Resource Concepts
  • Exchange Mechanisms (includes RESTful API)
  • Conformance and Implementation Guidance
  • Terminology
  • Representing healthcare concepts using FHIR resources
  • Safety and Security
  • The FHIR Maintenance Process
  • FHIR licensing and IP

Do you want to be part of the pilot?

The test is currently being piloted, and is available for a limited time to a limited number of individuals.  Space is very limited, so act fast.

  • Help HL7 improve the test
  • Be one of the first to be certified
Registration and logistics

Pilot Test: What to Expect

  • Online, at test centers or remote
  • Closed book
  • 2 hours to complete 50 questions
  • Multiple choice, multi-select, and true/false
  • No penalty for guessing
  • Passing score (for the pilot): 70%
  • Cost: $20 for members, $40 for non-members (for the pilot only)

How to prepare

Obtain the HL7 FHIR Proficiency Study Package

Study FHIR STU3!

This test was made possible by:
Grahame Grieve
Brett Marquard
Brian Postlethwaite
Bryn Rhodes
David Hay
Ewout Kramer
Eric Haas
Virginia Lorenzi
James Agnew
Josh Mandel
Lloyd McKenzie
Rob Hausam
Simone Heckmann
Viet Nguyen
Mel Grieve


Monday, November 13, 2017

On evaluating abilities

When you ponder the various evaluations of interoperability, you need to look at multiple factors.  A key component of this word, as for many other non-functional requirements of systems, is the suffix "ability".  It denotes a capability, or capacity to achieve some desired goal with some level of effort.

The same is true for other non-functional requirements: reliability, securability, accessibility, usability, affordability.  In each of these, the measure is one of degree, rather than a "yes vs. no" evaluation.

When headlines make claims about the existence or non-existence of "interoperability", they most often assume that it either exists or it does not.  However, when other non-functional requirements are evaluated elsewhere in industry, there's an assumption of degree, where achieving a particular score might assess a product as having one of these "abilities".  Consider the term "drivability" in the automotive industry, for example.

When you hear that group believes that product isn't ...able, does that mean that it isn't?  In my world, no.  What it actually means is that product doesn't meet group's goals with respect to ...ability.  Unfortunately, without stating what group's goals are, there's precious little that can be done with that reporting other than to investigate further.

Were early cell phones usable? It depends.  Did you live in an area where you had coverage?  Could you afford to use them?  Did they make and receive the calls that were important to you?  If your answers to those questions were yes, moderately, and mostly, you might say that they were somewhat usable.  If the answers were no, yes, and no, you would say no.  When I worked in the city, my answer was yes.  When I had to travel to a rural destination, my answer was no.  These were the goals that warranted my purchase of a bag phone two decades ago.

Determine the goals.  Assess whether the capability meets those goals.  Only then can you assess whether the capability is sufficiently present or not.  TODAY.  Tomorrow the expectations will be different.

The bag phone I evaluated above would certainly be considered barely usable today, even though twenty years ago it was more than moderately useful.

   Keith

Thursday, November 2, 2017

Shifting into Sixth Gear

  1. Standards are like toothbrushes.  Everyone needs one, and everyone wants to use their own.
  2. Standards are like potato chips.  You cannot have just one.
  3. And then there's simply XKCD 927 (a well worn, perhaps even "standard" image in standards circles).



And if you look back to late 2012 and early 2013, you can see some of the discussions I had in this space around a battle between two competing standards from the same organization, one for Clinical Decision Support and the other for Quality Measurement.

What rarely happens in this space is that something new arises from the mess that actually solves two different problems ... in this case, though, they were two different sides of the same coin. The conditional: if (X) then (Y); and the measure: [patients for whom Y is relevant]/[patients for whom X is true].

What happened?  Clinical Quality Language is what happened.  And in the words of its inventor, "we started with an evaluation environment ... we already had the ELM infrastructure ... and we added an execution language".
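To make the two-sides-of-one-coin point concrete, here's a minimal CQL sketch (my own illustration, not anything from a balloted measure; the value set OIDs are placeholders):

library Example version '1.0'
using FHIR version '1.0.2'

// Placeholder value sets; the OIDs below are not real.
valueset "Condition X": '2.16.840.1.113883.0.0.1'
valueset "Therapy Y": '2.16.840.1.113883.0.0.2'

context Patient

define "Has X": exists ([Condition: "Condition X"])
define "On Y": exists ([MedicationStatement: "Therapy Y"])

// The CDS conditional: if (X) then recommend (Y)
define "Recommend Y": "Has X" and not "On Y"

// The measure: the same logic, split into denominator and numerator
define "Denominator": "Has X"
define "Numerator": "Has X" and "On Y"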

Yes, I'm crediting one person for the invention because I watched how this played out, and while every standards effort is a corporate (little-c) one, this one was very much driven by one person with assistance from a cast of dozens and input from many more.  Much in the same way as FHIR was originally driven forward by Grahame Grieve, but became an effort backed by many.

In fact, CQL was recently recognized by CMS in the following fashion:

CMS Announces Transition of Electronic Clinical Quality Measures to Clinical Quality Language for the CY2019 Reporting/Performance Periods

So, for changing the paradigm in a big way, in fact, for being to CDS and Quality Measurement what Grahame Grieve was to FHIR, I'm awarding this Ad Hoc Harley as follows:

This certifies that  
Bryn Rhodes


Has hereby been recognized for changing the paradigm in Clinical Decision Support and Quality Measurement

P.S. Bryn and I worked two different tracks yesterday and today at the Digital Quality Summit in DC hosted by HL7 and NCQA.  It's no accident that I chose today to award this particular accolade, but Bryn's award was pretty much in the bag last month when I realized how long it had been since I issued one of these, and looked back at who I had missed.

Monday, October 9, 2017

Where do I find the Medication Generic Name in a CCD Document

The answer is, it depends on your CCD version:

CCDA 2.1 has this to say:
 4. SHALL contain exactly one [1..1] manufacturedMaterial (CONF:1098-7411).
     Note: A medication should be recorded as a pre-coordinated ingredient + strength + dose form (e.g., “metoprolol 25mg tablet”, “amoxicillin 400mg/5mL suspension”) where possible. This includes RxNorm codes whose Term Type is SCD (semantic clinical drug), SBD (semantic brand drug), GPCK (generic pack), BPCK (brand pack).
     1. This manufacturedMaterial SHALL contain exactly one [1..1] code, which SHALL be selected from ValueSet Medication Clinical Drug urn:oid:2.16.840.1.113762.1.4.1010.4 DYNAMIC (CONF:1098-7412).
         1. This code MAY contain zero or more [0..*] translation, which MAY be selected from ValueSet Clinical Substance urn:oid:2.16.840.1.113762.1.4.1010.2 DYNAMIC (CONF:1098-31884).

CCDA 1.1 has this to say:
 4. SHALL contain exactly one [1..1] manufacturedMaterial (CONF:81-7411).
     Note: A medication should be recorded as a pre-coordinated ingredient + strength + dose form (e.g., “metoprolol 25mg tablet”, “amoxicillin 400mg/5mL suspension”) where possible. This includes RxNorm codes whose Term Type is SCD (semantic clinical drug), SBD (semantic brand drug), GPCK (generic pack), BPCK (brand pack).
     1. This manufacturedMaterial SHALL contain exactly one [1..1] code, which SHALL be selected from ValueSet Medication Clinical Drug urn:oid:2.16.840.1.113762.1.4.1010.4 DYNAMIC (CONF:81-7412).
         1. This code SHOULD contain zero or one [0..1] originalText (CONF:81-7413).
             1. The originalText, if present, SHOULD contain zero or one [0..1] reference (CONF:81-15986).
                 1. The reference, if present, SHOULD contain zero or one [0..1] @value (CONF:81-15987).
                     1. This reference/@value SHALL begin with a '#' and SHALL point to its corresponding narrative (using the approach defined in CDA Release 2, section 4.3.5.1) (CONF:81-15988).
         2. This code MAY contain zero or more [0..*] translation (CONF:81-7414).
             1. Translations can be used to represent generic product name, packaged product code, etc (CONF:81-16875).

HITSP C32 has this to say (you can actually find this in the HITSP C83 specification):
2.2.2.8.13 Free Text Product Name Constraints
C83-[DE-8.15-CDA-1] The product (generic) name SHALL appear in the <originalText> element beneath the <code>

It's pretty clear that the preferred way to handle this changed in between CCD 1.0 (HITSP C32) and CCDA 1.1, and also that some critical information loss occurred with regard to how to record generic name between CCDA 1.1 and 2.1.  I think as industry understanding of CDA expanded, the need to express the detail about generic name probably changed, but not necessarily for the better.

If you want to include the generic name, you would do it in a translation -- when you don't already list the drug using an RxNorm code from the Semantic Clinical Drug value set (generic codes) (e.g., when you use a Semantic Branded Drug code).
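For example, a sketch of a manufacturedMaterial carrying a branded code with a generic translation might look like this (the RxNorm code values below are placeholders, not real codes):

<manufacturedMaterial>
  <!-- Branded drug (an RxNorm SBD code); values are illustrative only -->
  <code code="999999" codeSystem="2.16.840.1.113883.6.88"
        codeSystemName="RxNorm" displayName="BrandName 10 MG Oral Tablet">
    <!-- Generic equivalent (an RxNorm SCD code) carried as a translation -->
    <translation code="888888" codeSystem="2.16.840.1.113883.6.88"
                 codeSystemName="RxNorm" displayName="genericname 10 MG Oral Tablet"/>
  </code>
</manufacturedMaterial>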

I have two statements to make about this:

  1. Not all implementers are informaticists or would understand the distinction between types of RxNorm codes.  We (HL7) need to remember to speak to who is doing the work, not to ourselves.
  2. Brand and Generic information is already represented as relationships embedded in the RxNorm terminology itself.  The simultaneous transmission of a brand code and a generic code for that same drug simply repeats what is already present in RxNorm.  The advice I give these days would be to trust RxNorm before you trust your trading partner, and if what your trading partner tells you CONFLICTS, someone needs to go raise a red flag about inconsistent data.

   Keith



Wednesday, September 27, 2017

Security and Privacy: Where are we headed

So, the new iPhones are here, along with new security features.  Combine that with this recent bit in my inbox and I have a few predictions.

A study published in Healthcare Informatics Research finds 73 percent of medical professionals have used another staff member's password to access a patient's electronic health record at work, HealthITSecurity reports.


Facial recognition will be used to solve this problem.  Patient safety advocates will jump in to take advantage of the technology, which will be followed shortly thereafter by the computer saying: you look tired, are you sure you should be caring for patients …

At some point in time, this will move into the commercial domain (e.g., software developers, others creating IP).  It will expand into eavesdropping protection, which will lead to DOS attacks by small children popping their heads up in the seat behind you while you are trying to get work done on the plane or train or subway.

At some point at an IHE Connectathon, all testing work will stop as we all have to get exceptions to have competitors in the same room with our code, but cannot complete the process with them standing too close. This will lead to an eventual revolt against security and privacy altogether as similar challenges pop up across the business spectrum.

Eventually we will give up altogether on having any sort of privacy or security, and the world will live peacefully together.

   Keith

P.S. And then the aliens come and wipe us all out because we couldn't even hide from them properly.

Restricted Includes

Call me stupid. I spent the last 12 hours working on a performance challenge before I realized what the real solution was.  The issue was that I was using a FHIR _include parameter on an existing query to get included resources that needed to be displayed.  The performance was absolutely miserable.

To explain a bit, MedicationStatement and MedicationOrder reflect two different sides of an intention that a patient be given or be taking certain medications.  The MedicationStatement resource is (quoting DSTU2):

A record of a medication that is being consumed by a patient. A MedicationStatement may indicate that the patient may be taking the medication now, or has taken the medication in the past or will be taking the medication in the future. The source of this information can be the patient, significant other (such as a family member or spouse), or a clinician.

Whereas MedicationOrder is:

An order for both supply of the medication and the instructions for administration of the medication to a patient. 

And while neither MedicationOrder nor MedicationStatement reference each other, the MedicationStatement does provide for "supportingInformation" as a Reference to any resource.  I wanted to link the two to show the physician intention with the actual prescriptions and refills given.

But then when querying for MedicationStatement for a time period, I also wanted MedicationOrder, so I just grabbed the included references.  Needless to say, this was a MISTAKE, because a patient may have been taking a medication for years and had literally hundreds of refills (I'm not kidding here: 3 years of monthly refills on three meds is > 100, and hell, that could even be me).
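The query looked something like this (values illustrative; the include name presumes a search parameter defined over supportingInformation, which isn't in the DSTU2 core set):

GET [base]/MedicationStatement?patient=123
    &effectivedate=ge2017-06-01&effectivedate=le2017-08-31
    &_include=MedicationStatement:supportingInformation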

The first sign of this was some icky performance.  But see, the MedicationOrder stuff is there not because I have an immediate use for it, but rather because I'm following the CCD/CCDA pattern long established, and I KNOW it will be used in something I have to work with downstream, so I included it.  So, it is kinda hidden and took a while to track down.  AND then I spent about 8 hours trying to improve the performance of the MedicationOrder retrieval instead of asking about the quantity of data.

It might have been advantageous to go after MedicationOrder in the _include because of my data model and processing flow, but FHIR query syntax doesn't cross into _included resources in DSTU2 (I get to play with STU3 soon; maybe they've solved the problem there).  I cannot in DSTU2 say: give me these MedicationStatement resources, with ONLY the _included MedicationOrder resources that look like that.  Yeah, I'm sure I could use the extended query syntax to get to this, but I'm looking for a bit more elegance here (that's what engineers call complexity that looks cool).

So, here's my thought on syntax:

_include=MedicationStatement:supportingInformation:MedicationOrder(setName)&_restrict:setName.searchParam=value

This would name the set of included fields, AND allow me to set an inclusion restriction on them.
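For instance (purely hypothetical syntax, implemented nowhere), the medication case above might read:

MedicationStatement?patient=123&effectivedate=ge2017-06-01
    &_include=MedicationStatement:supportingInformation:MedicationOrder(orders)
    &_restrict:orders.datewritten=ge2017-06-01

pulling back only the MedicationOrder resources written during the period of interest, rather than every refill ever recorded (datewritten is a real DSTU2 MedicationOrder search parameter; the rest is the proposal).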
If we only had that, AND if I implemented it, my problem would be solved.  A simple matter of programming, what? Yeah.

Nah.  FHIR query syntax is complicated enough.  But here is a use case for something we haven't thought of, and the nice thing about it is that it seems simple enough to understand (even if I don't yet really know how to implement it).  Is it in the 80%?  Maybe.  I have ONE use case for this.  I could probably find others.  I'm not going to spend much more time on this; I still have to fix that performance problem now that I've found it.

   Keith





Monday, September 25, 2017

In my Inbox

This morning I received a long, not necessarily relevant announcement to an email list I don't remember subscribing to, followed by 30 replies. The replies are all from relatively educated people, many of whom know better, and are summarized below for your reading amusement:

R1: Please remove me from this list
R2: Hi R2, R3 did not send this to you ...
R3: I am not R2
R4: Please respond to the person/s directly and not send a reply to all
R5: Please remove me from all future emails concerning this program
R6: I find reply all useful when unsure who the admin is.
R7: Must you use "reply to all"
R8: Meme "Reply All"
R9: For God's sake everybody -- quit hitting 'reply all' ...
R10: Please remove me as well.
R11: The same here.
R12: This is officially ridiculous. Can everyone stop replying to all these emails?
R13: Same
R14: I don’t know what this email is either and I certainly did not send it out. Please remove me as well.
R15: Hitting reply on the original message only sends the message to the person who sent the email which should be the admin of the list.
R8: Good luck, R3! Keep me posted on the outcome.
R17: Please remove me from your list...
R8: Who's on first?
R20: You guys realize by replying all and asking people to stop replying all that you're just part of the problem, right?...
R21: I just became an Ohio State fan…
R22: I don’t know why I am on this list, so please remove me as well, whoever the admin is.
R23: And good Lord, people, there’s a contact email in the body of the original message:______
Although I must say this has been highly entertaining and a big improvement over the typical Monday.
R24: Please remove me from this list.
R25: Please remove me from this list. Thank you!
R26: Dear whomever, I already have <degree>.  I need <job>...
R27: 
R28: Me too (in reply to me too).
R29: It appears the original email came from ____. Please direct your request to her alone...
R30: Sorry R?, but hitting reply to all just fills our inboxes with garbage.

    ... and still going ...

P.S. My e-mail is simply going to point to this blog post and ask everyone to comment here.

Monday, September 18, 2017

Comparing Dynamically Generated Documents

Sometimes, to see if two things are similar, you have to ignore some of the finer details.  When applications dynamically generate CDA or FHIR output, a lot of details are necessary, but you cannot always control all the values.  So, you need to ignore the detail to see the important things.  Is there a problem here?  Ignore the suits, look at the guns.

Creating unit tests against a baseline XML can be difficult because of this detail. What you can do in these cases is remove the stuff that doesn't matter, and enforce some rigor on other stuff in ways you control, rather than leaving it to your XML parser, transformer or generation infrastructure.

The stylesheet below is an example of just such a tool.  If you run it over your CDA document, it will do a few things:

  1. Remove some content (such as the document id and effective time) which are usually unique and dynamically determined.
  2. Clean up ID attributes such that every ID attribute is numbered in document order in the format ID-1. 
  3. Ensure that internal references to those ID attributes still point to the thing that they originally did.

This stylesheet uses the identity transformation with some little tweaks to "clean up" the things we don't care to compare.  It's a pretty simple tool, so I won't go into great detail about how to use it.

   Keith


<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:cda="urn:hl7-org:v3">

  <!-- Identity transformation: copy everything as-is unless a more
       specific template below applies. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Renumber every ID attribute in document order as ID-1, ID-2, ...
       level="any" makes the numbering run across the whole document,
       matching the reference fix-up below. -->
  <xsl:template match='@ID'>
    <xsl:attribute name="ID">
      <xsl:text>ID-</xsl:text>
      <xsl:number level="any" count='*[@ID]'/>
    </xsl:attribute>
  </xsl:template>

  <!-- Blank out values that are unique to each generated instance:
       the document id, effective time, and author/authenticator times. -->
  <xsl:template match='/cda:ClinicalDocument/cda:id|/cda:ClinicalDocument/cda:effectiveTime|/cda:ClinicalDocument/cda:*/cda:time'>
    <xsl:copy>Ignored for Comparison</xsl:copy>
  </xsl:template>

  <!-- Rewrite internal references (#originalId) to point at the renumbered
       ID of the same element: its position in document order among elements
       carrying an ID attribute (preceding elements plus ancestors plus one). -->
  <xsl:template match="cda:reference/@value[starts-with(.,'#')]">
    <xsl:attribute name="value">
      <xsl:text>#ID-</xsl:text>
      <xsl:value-of select='count(//*[@ID=substring-after(current(),"#")]/preceding::*/@ID)
                            + count(//*[@ID=substring-after(current(),"#")]/ancestor::*/@ID) + 1'/>
    </xsl:attribute>
  </xsl:template>

</xsl:stylesheet>
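If you want to try it, a minimal way to use it in a test (assuming you've saved the stylesheet as normalize-cda.xsl and have an XSLT 1.0 processor such as xsltproc handy) is to normalize both documents and diff the results:

xsltproc normalize-cda.xsl expected.xml > expected-norm.xml
xsltproc normalize-cda.xsl actual.xml > actual-norm.xml
diff expected-norm.xml actual-norm.xml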


Wednesday, September 13, 2017

Matt the Mighty, A Precision Medicine Super Hero (Dad)

Every year in September, HL7 has its "Plenary" session. This is a half day where we hear from folks outside of the working groups on important topics related to what we do.

This year we heard from Matt Might, whom I would now christen Matt the Mighty for his Super-Dad precision medicine powers.  Either that, or as close in real life as one could come to a Doctor McCoy.

You really have to hear him tell the whole story, because A) he is an awesome storyteller, and B) there's simply so much more depth to it.

The long and short of it, though: not only did he figure out how to identify a rare (n=1?) disease, develop a diagnostic test for it, and identify other possible sufferers, but he also found a treatment (not a complete cure, but one addressing some effects) among already FDA-approved substances (lucking out on an OTC drug), developed model legislation that his state passed to allow "Right to Try" use of medications in these cases, and built a process by which other n=1? disease patients can benefit, starting with his own son.

That's Mighty powerful application of precision medicine (pun fully intended).  If you weren't here, I'm sorry you missed it, and urge you to listen to him speak elsewhere.

   Keith

Thursday, September 7, 2017

Demand Driven Pricing

One of the things we've seen from early warnings about Hurricane Irma is a significant increase in airline fares from some airlines.  Some of this, I'm sure, is due to automated demand-based fare pricing algorithms, with little or no human intervention.

That got me to thinking about how demand driven pricing AND demand driven reimbursement could have an interesting impact on prices for healthcare services IF it were possible to apply them more interactively and faster.

In the battle of algorithms, the organization with the best data would most likely win.  I see four facets to that evaluation of "Best": Breadth, Expression, Savvy, and Treatment (see what I did there?).

  • Breadth
    More bigger data is better.
  • Expression
    If your data is organized in a way that makes correlations more obvious, then you can gain an advantage.
  • Savvy
    If you know how A relates to B, you also gain an advantage.  Organization is related to comprehension.
  • Treatment
    Can you execute?  Does the data sing to you, or do you have to filter signal from a vast collection of white noise?

In the 5P model of healthcare system stakeholders, Polity (Government), Payer, Provider, Patient, and Proprietor (Employers):
  1. Who has the largest breadth of data? The smallest?
  2. Who has the best expression of data? The worst?
  3. Who has the greatest savvy for the data? The least?
  4. Who will be most able to treat the data to their best advantage? The least?

It seems pretty clear that the patient has the short end of the stick on most of this, except perhaps on their "personal" collection of data.

Payers are probably in better shape than others with regard to breadth, followed closely by Polity. The reason I say that is because government data is dispersed ... the left hand and the right hand can barely touch in some places.  Providers rarely have the breadth unless they begin to take on the Payer role as well (e.g., Kaiser, Intermountain, et cetera).

Providers have a better chance of having better expression, being able to tie treatment to condition in more detail, and have some chance at understanding outcomes as well.

It's not clear that employers are THAT much better off than patients, although frankly I don't know how much information they really have.

Treatment is where it all comes together, and right now in the US, it seems that nobody has yet found the right treatment ...

Anyway, it's an interesting place to explore further.

   Keith


Wednesday, September 6, 2017

The Good, the bad and the ugly (HL7 Ballots)

HL7 Balloting just closed this last hour.  Here's my recap of what I looked at, how I felt about it, and where I think the ballot will wind up, from worst to best.  Note: My star ratings aren't just about the quality of the material; it's a complex formula involving the quality of the material, the likelihood of it being implemented, the potential value to end users, and the phase of the moon on the first Monday in the third week of August in the year the material was balloted.

VMR (Virtual Medical Record) 
  1. HL7 Implementation Guide: Decision Support Service, Release 1 - US Realm (PI ID: 1018)
  2. HL7 Version 2 Implementation Guide: Implementing the Virtual Medical Record for Clinical Decision Support (vMR-CDS), Release 1 (PI ID: 184)
  3. HL7 Version 3 Standard: Decision Support Service (DSS), Release 2 (PI ID: 1015)
  4. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) Logical Model, Release 2 (PI ID: 1017)
  5. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) Templates, Release 1 - US Realm (PI ID: 1030)
  6. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) XML Specification, Release 1 - US Realm (PI ID: 1016)

This had a total of six artifacts on the ballot.  Together they get 1 star for being able to pass muster to go to ballot.  As a family of specifications, this collection of material looks like it was written by a dozen different people across multiple workgroups with three different processes. What is sad here is that the core group of people who have been working on this material for some time (including me) is the same across much of this work, and it all comes out of the same place.  VMR was always an ugly stepchild in HL7, and these specifications don't make it much better.  Don't lose hope though, because QUICK and CQL are significant improvements, and the FHIR-based clinical decision support work such as CDS Hooks is much more promising. All appear to have achieved quorum and seem likely to pass once through reconciliation.

Release 2: Functional Profile; Work and Health, Release 1 - US Realm (PI ID: 1202)   
Yet another functional model.  Decent stuff if that is what excites you.  I find functional models boring mostly because they aren't being used as intended where it matters.  Pretty likely to pass.

HL7 Version 2.9 Messaging Standard (PI ID: 773) 
The last? of a dying breed of standard.  Maybe? Please? Not enough votes to pass yet, but could happen after reconciliation (which is where V2 usually passes).

Pharmacist Care Plan  
  1. HL7 CDA® R2 Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm (PI ID: 1232)
  2. HL7 FHIR® Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm (PI ID: 1232)
Another duo, missing the overweight architectural structure of VMR, but certainly adequate for what it is trying to accomplish.  The question I have here is about its relevance.  Except in inpatient settings, I find the notion of a pharmacist care plan for a patient to be of very little value at this stage.  In fact, we need more attention on care planning in the ambulatory setting.

These are for-comment-only ballots, and the voting reflects it.  While not likely to "pass", the comment-only status guarantees that these will go back through another cycle.  Based on the voting, the material needs it.

HL7 Guidance: 
Project Life Cycle for Product Development (PLCPD), Release 2 (PI ID: 1328)  
HL7 continues to ballot its own processes.  What makes this one funny is that this particular ballot comes out of a workgroup in the Technical and Support Services steering division, which previously rejected another group in that division's balloting of a document because T3SD (their acronym) doesn't do ballots (BTW: that's a completely inadequate summary of what really happened; some day if you buy me a beer I'll get _ and _ to tell you the story.  Better yet, buy them beers).

It's a decent document, and likely to "pass".

HL7 CDA® R2 Implementation Guide: 
International Patient Summary, Release 1 (PI ID: 1087) 
I could get more excited about this particular piece of work if it weren't for the fact that it's all about getting treatment internationally, rather than being an international standard that would eliminate some of the need to deal with cross border issues.  But, it's the former rather than the latter, so only three stars.  A lot of the work spends time dealing with all the tiny little details about making everyone happy on every end instead of getting someone to make some decent decisions that enable true international coordination.

This one is tight, will likely pass in reconciliation, and is getting a lot of international eyes on it.  It's good stuff.

UDI Implementation: 

  1. HL7 Domain Analysis Model: Unique Device Identifier (UDI) Implementation Guidance, Release 1 (PI ID: 1238)
  2. HL7 CDA® R2 Implementation Guide: Consolidated CDA Templates for Clinical Notes; Unique Device Identifier (UDI) Templates, Release 1 - US Realm

By itself, neither one of these might have gotten four stars.  Together they do.  UDI needs a lot of explaining for people.  These documents help.

While the balloting looks tough (the second document is "failing" to pass by a 2/3 majority), it's all about doing what DOD, VA, and others want in order to ensure interoperability between them.

HL7 CDA® R2 Implementation Guide: 
Consolidated CDA Templates for Clinical Notes; Advance Directives Templates, 
Release 1 - US Realm (PI ID: 1323)  

This is a useful addition to what we can do today with Advance Directives, and a great example of how to deal with backwards compatibility right, and they almost nailed it perfectly (my one negative comment on this item is a fine point).

Not a lead-pipe cinch but surely the issues in this one will be resolved during reconciliation.

HL7 CDA® R2 Implementation Guide: 
Quality Reporting Document Architecture Category I (QRDA I) Release 1, 
STU Release 5 - US Realm (PI ID: 210) 
Useful, necessary, and boring, but of great value.  Sometimes it pays to be boring.
Definitely a lead-pipe cinch to pass.  Third highest in positive votes, with 0 negatives.

HL7 Cross-Paradigm Specification: 
Allergy and Intolerance Substance Value Set(s) Definition, Release 1 (PI ID: 1272) 
ABOUT. DAMN. TIME. An allergy value set we can all use. Nuf said.
The interesting back story here is who is voting negative (who cares) about this.  It looks like a lot of VA/DOD interoperability is going to get decided through standards. I'm pretty certain this stuff is going to get worked out, which has tremendous value to the rest of us.

HL7 FHIR® IG: SMART Application Launch Framework, Release 1 (PI ID: 1341) 
I spent the most time commenting on this one.  I'm looking forward to seeing this published as an HL7 standard and to getting some overall improvements to what I've been implementing for the past year or so.

There's definitely some good feedback on this ballot (which means likely to take a while in reconciliation), even though it seems very likely to pass.

HL7 Clinical Document Architecture, Release 2.1 (PI ID: 1150) 
This was the surprise of the lot for me.  I expected to be bored, having said CDA is Dead not quite four years ago.  I was, pleasantly so.  There was only one contentious issue for me (the new support added for tables in tables). They got to four stars by making sure all the issues we've encountered over the past decade and more were addressed. They got an extra star by making it easy to find what had changed in the content since CDA R2.  All in all, a pleasant surprise. CDA R2 still reigns supreme, but I think CDA R2.1 might very well become regent until CDA on FHIR is of age.
Oh yeah.  It passed, so very likely to go normative, which will make discussions about the standard in the next round of certification VERY interesting.

   Keith










Thursday, August 31, 2017

URGENT HELP NEEDED (Humor)

Originally found in my personal inbox from a software developer still using a Commodore 64. I thought I'd share it today.



If you're reading this, then you are already part of a chain that goes
back to the early 1980s. Early in the morning on June 4th in 1982,
software engineer Dwayne Harris sat down to write a BLISS module for
the then-new VMS operating system. Little did he know, but a
radioactive bug had crawled into his VAX 11/785 prototype, shorted a
power supply capacitor and opened a worm-hole into another dimension.

A small type-2 semi-demonic entity emerged from this dimension and
took up residence in the VMS source repository. Fragments of the
semi-demonic entity's consciousness were also embedded in Dave
Cutler's subconscious (thus explaining the WindowsNT video driver
interface.)

Every 2^20 seconds, a secret society of software engineers gathers in
an unused USENET news group to ritually banish this semi-demonic
entity. Things have been going fine, but the old guard is retiring and
moving on to other projects. We are in desperate need of new software
engineers to carry on the work of this once mighty society of software
engineers.

If we fail to achieve a quorum of 0x13 participants in the banishment
ritual, the semi-demonic entity will be released and any number of
modern plagues will fall upon the online public.

In 1995, we started our ritual late and internet explorer was released
upon the world. Only through fast action was complete disaster averted
and MS Bob coaxed back into a vault underneath Stanford University.

Because of the recent influx of former redditors into the remains of
the USENET backbone systems, we can no longer perform our rituals. As
an alternative, we have developed this chain letter.

At EXACTLY 7:48:12PM PDT 22 July 2015 (10:48:12PM EDT) and every 2^20
seconds afterwards, we ask you to email a copy of this letter to five
software engineers in your address book. The flux of mystical
representational energy through MAE WEST and MAE EAST should be
sufficient to ward off the evil that now faces us.

Remember, for this spell to work, you must be a software engineer and
send the email to other software engineers.

Wednesday, August 30, 2017

An Ongoing Problem in FHIR

How do you find a problem that was occurring during a particular time span?  This is relevant if you are doing a search for Conditions (problems) that are active within a particular time period for something like a quality measure or clinical decision support rule. As I've previously discussed here, temporal searching is subtle.

So, suppose you have a time period with start and end points, and you want to find those conditions which were happening in that time period.  There are only two rules you need to care about:

  1. You can rule out anything where the onset was after the end of the time period.
  2. You can rule out anything where abatement was before the time period started.
What's left?  In the following analysis, I'm ignoring "things that happen at the boundary points". For the sake of argument, we'll assume that time is infinitely divisible and that no two things occur at "exactly the same time".  Obviously we quantize time, and boundary conditions are inevitable.  But they aren't IMPORTANT to this discussion.
  1. Things that had an onset within the time period, or before the time period started.
    1. For those items that had an onset within the time period, clearly it's in the time period!
    2. For those items that had an onset before the time period started, one of three things must have occurred:
      1. The problem abated before the time period started (which is ruled out by rule #2 above).
      2. The problem abated during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem abated after the time period ended, in which case the time period is wholly contained within the period in which the problem is active, and therefore it was occurring during the time period.
  2. Things that abated within the time period, or after the time period ended.
    1. For those items with an abatement within the time period, they are clearly within the time period.
    2. For those items that abated after the time period ended, one of three things must have occurred:
      1. The problem onset was after the time period ended, in which case it is ruled out by rule #1 above.
      2. The problem had an onset during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem onset was before the time period started, in which case the time period is wholly contained within the period in which the problem is active, and therefore it was occurring during the time period.

So your FHIR query is Condition?onset=le$end&abatement=ge$start

Done.  Simple ... err yeah, I'm going to stand by that.
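To make that concrete: a search for conditions active at some point during calendar year 2017 might look like the following (parameter names follow the usage above and vary by FHIR version; note also that conditions with no abatement recorded may need separate handling, since a search on a missing element doesn't match):

GET [base]/Condition?patient=123&onset=le2017-12-31&abatement=ge2017-01-01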

   Keith

P.S. Yeah, so easy I had to come back and reverse the le/ge above.  Duh.

Tuesday, August 29, 2017

Or Leave the Tricky Bits to Me


One of the things that I really enjoy about my job is when I get to play with something particularly challenging, and as a result come away from the experience with a better understanding of how things work, or a better process model.

Often times, code gets away from us as developers (same is true with standards).  If you've ever had one of those situations where, as an engineer, you found yourself in the position of having developed a piece of software from the middle-out, you know what I mean.

Middle out solutions are where you have a particular problem, and basic principles are simply too basic to provide much help ... and details are sometimes rather nebulous.  I just need to fix this one problem with ... fill in the blank.  And so you find a way to fix that one problem.  Except that later you find an odd ball exception that doesn't quite fit.  And then there's another issue in the same space.

After a while you find you have this odd mess of code that just doesn't quite work, because you came at things the wrong way.  And then some thread comes unwoven and it stops working altogether ... at least for that thing you cared about right now.  That thing somehow was important enough (unlike the rest of the work) to make you take a step back and try a different approach.

Somewhere along the line you took the lenses and flipped them around so that now you can see the forest instead of the trees, or vice versa.  And now that strange jumble of code begins to make sense all over again, fitted together in a different way, to your new model of understanding.

That's what I like about my job.  When that happens.


   Keith






Monday, August 28, 2017

Skip the Tricky Bits When You Can

Someone asked me for an OID today.  I have an OID root (or seven), and needed to assign a new OID in one space to represent a particular namespace.  The details aren't important.

I considered several choices.  One of them was someoid.0 and the other was someoid.2 (since someoid.1 was already assigned).  While, had I been assigning these OIDs in a meaningful order, it would have made sense to make this OID appear before the someoid.1 I was already using, I chose to assign someoid.2 instead, even though someoid.0 is perfectly legal.

Why?  Because not everyone understands that an OID can contain a singular 0 digit in one of its positions.  And choosing an OID that some might argue with is just going to create a headache for me later, where I'm going to have to explain the rules about OIDs to them.  I can avoid that by simply choosing a different OID.  Not only have I avoided a future support call, but I've also avoided a potential issue where someone else's incorrect interpretation of a standard could cause me or my customers problems somewhere down the line.

It would be nice if standards skipped the tricky bits, but we know they don't.  So, when you have a choice, think about your end-user's experience, and keep it simple.  Not every decision you make will let you do that, but for those that do, simply make it a point to think about it.  You'll be glad you did.

   Keith

Friday, August 25, 2017

Four Reasons Why Blockchain isn't the next big disruptor in HealthIT

Don't get me wrong, Block Chain is cool technology, but it is probably NOT the next big disruptor in healthcare.  It's certainly a hammer in search of a nail, but there are so many fasteners in healthcare we're working with that simply aren't nails.

Fundamentally, Block Chain is a way to securely trace (validate) transactions.  For digital currency, the notion of a transaction is fairly simple: I exchange with you some quantity of stuff ... Bitcoin, for example. The block chain becomes the evidence of transfer of the stuff.  It's a public ledger of "exchanges".  The value add of the block chain is that it becomes a way to verify transactions.

1. The Unit of Exchange is Different

What's the transaction unit in healthcare?  In my world, it is knowledge-related, rather than monetary.  The smallest units of knowledge are akin to data types: a medication (code), a condition (code), a lab result (code and value), a procedure (code), an order, an attachment, an address.  Larger units are like FHIR resources, associating data together into meaningful assertions.

2. The Scale of the Problem is Different

Today, there are about 200,000 Bitcoin transactions a day.  If we look at the unit of exchange I mentioned above, a typical CCDA document embodies something on the order of 100 knowledge units.  Let's say there are 150,000 physicians in the US, and each one sees 20 patients a day.  Multiply 150,000 x 20 x 100 = 300 million transactions per day.  To put that number in perspective, Amazon sold about 36 million items on Cyber Monday in 2013.

3. Transactions are Private

When the unit of exchange is an association of an individual (the patient) with a problem, medication or allergy, asserted by another individual (the provider), it's not the same as when the exchange is of a disclosed public quantity of stuff between two pseudonymous addresses.  Public ledgers, even with some level of protection behind them, still contain a persistent record of all transactions. After an assertion is made, the effects are pretty permanent, including any damage done, all future assertions to the contrary notwithstanding.  Ask any patient who's ever been falsely accused of drug-seeking behavior.

4. The Fundamental Problem is Different

The challenge in health IT is not "verification" of knowledge exchanges (transactions), but rather, "enabling" knowledge exchanges between two parties.  With block chain, the question of where to go to "get the ledger" isn't an issue.  In healthcare today, it is.

Block chain is cool tech, no doubt.  Surely there is a use for it in healthcare.  But also, it isn't the answer to every problem, nor specifically the answer to the "Interoperability" problem.  Though right now, you can be assured that it is effectively a free square in your next Interoperability buzzword bingo session.

   -- Keith




Thursday, August 24, 2017

Interoperability and HealthIT: Are we there yet?



Are we there yet?  The short answer, as I quoted from a speaker last week, is: "There is no done with this stuff".  The longer answer comes below.

If you are as old as I am, you remember having to have a case full of WordPerfect printer drivers, Centronics and serial cables, and you might even have had a serial breakout box to help you work out problems setting up printers.  Been there, done that.

What's happened since then?  Well, first we standardized port configurations based on the "IBM PC Standard".  Except that then we had to move to 9 pin serial cables.  And then USB.  And today, wireless.  Drivers were first distributed on disk, then diskette, then CD.  And now you can download them from the manufacturer, or your operating system will do that for you.

If you happen to have a printer that isn't supported, well, if it supports a standard like Postscript, we've got a default driver for that, and for PCL printers, and several dot matrix protocols.  So, today you can buy a printer, turn it on, autoconfigure it, and it just works, right?  Mac users had it a bit easier, but they still went from the old-style Mac universal cables to USB to ...

I upgraded my network infrastructure the other day, and come to find out my inkjet printer that had been working JUST fine on all the computers in the house, and iPhones and iPads, no longer worked on my various Apple devices.  I tracked it down to a compatibility issue between new features of my WiFi router and my old printer.  As a consumer, my expectations of interoperability were definitely NOT met.

Which brings us back to my main point.  The expectation of users with regard to interoperability still isn't being met, even if the situation is improving.  It took us twenty some years to get from where we were then to where we are now, and some configurations still aren't "Plug and Play" with respect to printing.

To figure out how to measure where we are with regard to interoperability, we first need to figure out what it is we want to measure.  And then we need to figure out how to measure the distance to that goal.  When "where we want to go" is an obscure location, figuring out how far we have to go is a huge challenge.

Let's assume we want "Plug and Play" interoperability.  What does that actually mean?  We probably want to start with a basic platform and set of capabilities.  You have to define that, first functionally, and then in detail so that it can be implemented.  Then we have to talk about how things are going to connect to each other.  Connecting things (even wirelessly) is hard to do right.  Just ask anyone who's ever failed to connect their Bluetooth headset to their cell phone.  Do you have any clue how much firmware (software embedded in hardware) and software is necessary to do that right?  We've actually gotten that down to a commodity item at this stage.

If we look at the evolution of interoperability in hardware spaces such as the above, we can see a progression up the chain of interoperability.

1. Making a connection between components.
This is a progression from wires and switches to programmable interfaces to systems that can automate configuration of a collection of components.
2. Securing a connection over the same.
This is a progression from internal physical security, to technical implementations of electronic security, to better technical implementations, with progressions advancing as technology makes security both easier and harder depending on who owns it.
3. Authenticating/authorizing interconnected components.
We start from just establishing identities, to doing so securely, and from complex manual configurations, to more user friendly configurations, and finally to policy based acceptance.  At some point, some human still has to make a decision, but that's getting easier and easier to accomplish.
4. Integrating via common APIs or protocols.
Granularities start out at a gross level (e.g., CDA document), and get more refined as time goes by and communication speed and response times get better, and drive from data (a set of bits) to functional (a function to produce a set of bits to understand) and back to data again (finer-grained data) and algorithms (functional instructions again on how to produce data).  This is a never ending cycle.
5. Adapting to capabilities of connected components.
This starts at the level of try and see if it works and respond gracefully to errors, to declaration of optional feature sets, to negotiations between connected components about how they will work together.
6. Discovering things that one can connect to.
We first start by making a list for a component, then by pointing components to lists of things, then by pointing components to places where they can find pointers to lists, and finally, by broadcast protocols where basically all you need to know is how to look around your environment.  Generally, there will always need to be a first place to look, though (it might be a radio bandwidth, a multicast address, or a search engine location).
7. Intelligently interconnecting to the environment one is in.
The final destination.  We don't know what this really looks like for the most part.

Where we want to go is that final stage, and arguably, that's what we have finally begun to reach with the end user experience installing a printer (with some bobbles).  There are still some hardware limitations on Bluetooth devices because those are mostly small things, but even that has reached stage 6.  For healthcare, we are somewhere around stage 4 with FHIR.  CDS Hooks is arguably stage 5. Directories and networks like Carequality or Commonwell or Surescripts RLS will be progress towards stage 6.

The progression through this stack takes time, and the more complex the system, the longer it takes. Consider that printers, headsets, and even cell phones and laptops aren't enterprise-class computing systems. The IT industry in general is making progress, but we aren't at a stage yet where enterprise-level ERP, CRM and FMS systems are much further along than level 5 or 6, even in multi-million dollar industries.  The enterprise-level EHR, RCM and EDI systems used in similar-sized businesses are moving a bit slower (a classic issue in HCIT).

So, back to measurement.  "Are we there yet?" has a context.  If your goal is to get to stage 7, be prepared to wait a while and continue to be frustrated.  In 2010, my family drove nearly 5000 miles to get sushi. There were plenty of stops along the way, and getting to each was exciting.  If you want to have fun along the journey, identify the waypoints, and make a point that this IS your NEXT destination. Otherwise, sushi is still a very long way off.

   -- Keith