Monday, March 31, 2014

Could we do HQMF Using the FHIR DSTU Today?

I struggle sometimes coming up with content to write here, especially when I'm working on internal vs. external projects.  So just for fun I've come up with a little external project for myself that I can talk about here.  We'll see how it progresses, and eventually it may become a real project in HL7.

The project that I'm looking at is how to create a Measure Specification in FHIR.  Interestingly, some of the available components I need are already present in the specification.

Looking over the HQMF Release 2 Standard, I need resources to support the following major components of a measure definition:

HQMF Measure Document                                  FHIR Resource or Resource Component
/QualityMeasureDocument                                Composition
./author                                               Composition.author
./custodian                                            Composition.custodian
./verifier                                             Composition.attester
./participant                                          N/A
./controlVariable/measurePeriod                        Observation
./subjectOf/measureAttribute                           Observation
./definition/ValueSet                                  ValueSet
/QualityMeasureDocument//MeasureDescriptionSection     Composition.section
/QualityMeasureDocument//DataCriteriaSection           Composition.section
./definition                                           N/A
./*Criteria                                            Query
/QualityMeasureDocument//PopulationCriteriaSection     Composition.section
  Numerator, Denominator ... Criteria                  N/A
/QualityMeasureDocument//MeasureObservationSection     Composition.section
./measureObservationDefinition                         N/A

The four missing pieces deserve some discussion.
./participant is a way in HQMF to describe the contributions of participants who aren't acting in the role of author, verifier, or custodian.  I'd handle that as an extension.

./definition isn't needed because of the way that Query works.  A criterion in HQMF uses Query By Example.  In FHIR, we'd simply describe the query that was supposed to be executed using the Query resource.  Because of the way that resource works, model definitions aren't necessarily needed, or can be virtualized in the way that the Query URI is specified, so that the URI indicates what set of resources is being queried.  Each query results in the production of a list of resources which is used in either the PopulationCriteriaSection or the MeasureObservationSection.
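To make that concrete, here's a minimal sketch (in Python) of a data criterion virtualized as nothing more than a FHIR search URL.  The base URL, value set OID, and LOINC code below are invented for illustration, and the code:valueset parameter is the capability mentioned in the P.S., not something in the DSTU:

```python
# Sketch only: each HQMF data criterion stands in as a FHIR query URL.
# The base URL, value set OID, and LOINC code are illustrative values,
# not taken from any real measure.
BASE = "http://example.org/fhir"

criteria = {
    # "Patients with a diabetes diagnosis", as a Condition search
    "diabetes_dx": BASE + "/Condition?code:valueset=2.16.840.1.113883.3.464.1003.103",
    # "Patients with an HbA1c result", as an Observation search
    "hba1c_result": BASE + "/Observation?name=4548-4",
}

def criterion_query(name):
    """Return the search URL standing in for an HQMF data criterion."""
    return criteria[name]
```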

The various ways to configure counters like InitialPopulationCriteria needs a new resource to perform AND/OR/NOT Boolean operations over the results of executing the various Query resources to produce a count.  These are essentially set operations, like union, intersection and difference.  There are a couple of ways to approach this.  Today, I could cheat this using the List resource and specifying codes that indicate how to compute a count using codes like Union, Intersection, and Difference, with the list references pointing to the queries to be performed.  In fact, there's no reason NOT to do this since, as far as I can tell, List codes aren't fixed by FHIR.
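Here's a minimal sketch of what that cheat would compute.  The codes union, intersection, and difference are my own invention (not fixed FHIR List codes), the patient IDs are made up, and each input set stands for the result of executing one Query resource:

```python
# Combine the member lists of a List resource according to an invented
# set-operation code. Each member set stands for the patients returned
# by one Query resource.
def combine(code, *member_sets):
    sets = [set(m) for m in member_sets]
    if code == "union":
        return set().union(*sets)
    if code == "intersection":
        result = set(sets[0])
        for s in sets[1:]:
            result &= s
        return result
    if code == "difference":
        result = set(sets[0])
        for s in sets[1:]:
            result -= s
        return result
    raise ValueError("unknown combination code: " + code)

# Numerator = (initial population AND diabetic) NOT excluded
ipp = {"p1", "p2", "p3", "p4"}
diabetic = {"p2", "p3", "p4"}
excluded = {"p4"}
numerator = combine("difference", combine("intersection", ipp, diabetic), excluded)
# numerator is {"p2", "p3"}, so the count for this population is 2
```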

Finally, we come to the tricky bit, MeasureObservationDefinition, which specifies how to compute a measure such as average wait time in the ED.  There's no existing way in FHIR to specify computations.  We had the same problem in HQMF Release 2.0, and sort of punted on it by adding Appendix C on Expression Languages to HQMF R2.  I'm going to take a pass on figuring that out right now, as it may require a FHIR-based RESTful service, a Computation resource, or something else entirely.
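For a sense of what such a resource or service would need to express, here's a sketch of the ED example: a computation (mean wait in minutes) applied over query results.  The timestamps are invented, and nothing here corresponds to an actual FHIR resource:

```python
# Sketch of the computation a MeasureObservationDefinition would have
# to express: mean ED wait in minutes over a set of encounters.
from datetime import datetime

def average_wait_minutes(encounters):
    """encounters: list of (arrival, seen_by_provider) ISO timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    waits = [
        (datetime.strptime(seen, fmt) - datetime.strptime(arrived, fmt)).total_seconds() / 60
        for arrived, seen in encounters
    ]
    return sum(waits) / len(waits)

ed_visits = [("2014-03-31T10:00", "2014-03-31T10:30"),   # 30-minute wait
             ("2014-03-31T11:00", "2014-03-31T11:10")]   # 10-minute wait
# average_wait_minutes(ed_visits) is 20.0
```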

What this tells me is that I could build an implementation guide today to express MOST of what would show up in an HQMF Release 2.0 based measure definition TODAY.  Expletive Deleted!  Did I just say that?  Yes.  I did.

I also think it would be possible to build a transform from an existing HQMF R2 measure into that form.  It might not handle every last bit; transforming from Data Criterion into Query resources would be the hardest part, and is probably worth investigating first.

Oh, and BTW, this also means that FHIR could be used to support Query Health.

So the surprising answer is yes, we probably could do at least 80%.

    Keith

P.S.  One caveat is that the code:valueset=[id|uri] would need to be supported by a FHIR server.  This capability isn't in the DSTU, but I've asked for it.

Sunday, March 30, 2014

The Atomic Optimization Problem

A decade and more ago I worked on XML software instead of healthcare standards.  One of the projects I watched fail (but didn't work on) suffered from a major optimization failure.  Overall, the system design seemed OK, but it had one big problem: the fundamental mechanism by which information was stored was based on entities that were too small to optimize in the storage system.

Now if you know XML, you know that there are a lot of different ways by which you can identify each node in the XML tree, including storing the ancestor/child relationships, using XPath or XPointer to index nodes, et cetera. In this particular case, the XML tree was indexed by one of these mechanisms, and so was the schema generated for the XML. The beauty of the solution was that it worked for every XML tree, and for arbitrarily complex schemas. The failure of this system was that all nodes in the XML tree were treated as equal, and there was NO way to optimize behaviors for different kinds of nodes or different schema structures which applied to each node.  So, there were huge performance problems, and the database needed a major redesign, which resulted in a huge amount of rework and, eventually, project cancellation.

Why this results in implementation failure is something that not everyone gets.  You see, optimization relies on being able to discern differences between different kinds of entities, and allows you to make implementation choices between representations that make some operations more efficient than they would be when other choices are made. For example, denormalization is something that database architects sometimes do to trade between storage space and speed.  The whole notion of a star schema is a specialized denormalization of data that supports hyper-fast query at the cost of storage.  However, when everything is the same, and the system has no way of identifying things that could be dealt with more efficiently, it becomes very difficult to optimize.

Systems based on simple atoms (doubles, triples, or quadruples) can be very efficient, because each atom is the same and you can always build bigger concepts on smaller ones. This makes for very powerful systems, but not always ones which handle the bigger concepts as efficiently as they could be handled.  Consider LISP (the CAR/CDR pair) or RDF (the subject-object-predicate triple) or even the HL7 RIM (act, participation, role, entity).  These models make for very powerful systems that are built on small (usually very small), quite efficiently handled atoms.  However, each large concept is built up on smaller concepts which are built up on smaller ones and so on.  So even though your "atom" is very efficient, the larger concept you have to deal with can rely on hundreds, or even in some cases, thousands of atoms.  And that is where the performance problem becomes nasty.
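A toy example of how the atom count blows up: flattening even one small (invented) clinical record into subject-predicate-object triples already takes seven atoms, and every one of them is handled identically by the store:

```python
# Toy illustration of why tiny atoms get expensive: one nested record,
# flattened into subject-predicate-object triples, becomes many units
# the store must touch, each handled the same way.
def to_triples(subject, record):
    triples = []
    for key, value in record.items():
        if isinstance(value, dict):
            node = subject + "/" + key      # anonymous intermediate node
            triples.append((subject, key, node))
            triples.extend(to_triples(node, value))
        else:
            triples.append((subject, key, value))
    return triples

encounter = {
    "patient": {"name": "Jane", "mrn": "12345"},
    "reason": {"code": "E11.9", "system": "ICD-10"},
    "start": "2014-03-30",
}
# One small clinical object already needs 7 triples; a full document
# needs hundreds, with no way to treat any of them specially.
```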

When the "schema" for the object is also represented in doubles, triples or quads, and the storage system always handles each one of those units the same way, you've lost all chance at optimization.  And while I know it can be done, I've seen few products that even try because this sort of optimization is truly a black art.

Even moving higher up the tree, as in the RIM, to act, participation, role, and entity doesn't always cut it.  There's just too much variation in the details of act or entity (or any other class), and which details matter varies depending on what you are trying to do.

One of the things I like about FHIR is the fact that the atomic level is the Resource, and that each resource can be separately optimized to meet specific implementation needs.  I think this is the right granularity for efficient implementations.  Granted, I know the existing open source FHIR servers primarily use machine-generated code built from the resource definition tables in FHIR, but that doesn't stop someone from doing some decent optimizations on specific resources to meet an implementation requirement.  I would expect a number of uses of FHIR are likely to start from existing database stores which are already (or hopefully so) optimized to meet specific implementation requirements.

Friday, March 28, 2014

IHE Patient Care Coordination White Paper Published



IHE Patient Care Coordination White Paper Published for Public Comment

The IHE Patient Care Coordination Technical Committee has published the following white paper for public comment in the period from March 28 through April 27, 2014:

* A Data Access Framework Using IHE Profiles

The document is available for download at http://www.ihe.net/Public_Comment/#pcc. Comments submitted by April 27, 2014 will be considered by the IHE Patient Care Coordination Technical Committee in developing the final version of the white paper. Comments can be submitted at http://www.ihe.net/PCC_Public_Comments/

Wednesday, March 26, 2014

FHIR & SOA Discovery Day

This showed up in my inbox today. I think it will be an excellent opportunity for early implementers of FHIR to learn more from each other ....

With the tremendous amount of implementation interest in FHIR, this specification has taken both HL7 and the broader health IT community by storm, exposing resources via REST and providing easy access to healthcare data.  Over the past several meetings, in discussions with key FHIR community stakeholders and members of the FHIR Governance board, what has surfaced is an awareness that the behavioral side of FHIR has not settled in on a consistent pattern.  In fact, there are many ways of applying FHIR in a behavioral context, and the SOA Workgroup has been approached to take a look at the alternatives and to relate FHIR to our SOA services and architecture.

The Discovery Day is part of an overall effort to identify an approach that exposes the benefits of a consistent behavior when implementing FHIR, and documents an approach for doing so.  This one-day event is intended to provide background information as due diligence as we undertake efforts such as:     
  • Defining approaches to expose services in FHIR
  • Determining the best approach to apply existing SOA patterns using FHIR
  • Aligning how SOA can be combined with FHIR to allow for consistent implementation patterns
To understand “the art of the possible”, you are invited to attend or participate in the first-ever “FHIR and SOA Discovery Day”, to occur on Tuesday, May 6, during the Phoenix Working Group Meeting.   This special event has the following objectives:
  • To identify early-adopters implementing FHIR with interest in shared services and SOA (particularly for more complex behavior)
  • To review multiple case studies to identify common challenges and/or implementation patterns
  • To provide an environment for community input/feedback on implementations (informal peer review)
  • Provide to the FHIR community a better understanding of the benefits and touchpoints with defined SOA services
  • To collect an evidence basis to inform and shape SOA work that relates to FHIR

Agenda

Qtr    Time     Topic
Q1     9:00a    SOA Workgroup Meeting with FHIR Governance Board (Working Session)
                Note:  This is not technically part of the “discovery day”, but all are welcome to attend.
Q2     11:00a   Welcome & Introductions
       11:10a   Context Setting, Goals, SOA Workgroup Macro-View
       11:30a   Presentation 1 (Short)
       11:45a   Presentation 2 (Long)
       12:15p   Presentation 3 (Short)
Lunch  12:30p   LUNCH
Q3     1:45p    Reconvene and Recap
       2:00p    Presentation 4 (Long)
       2:30p    Presentation 5 (Long)
Break  3:00p    Break
Q4     3:30p    Worksession & Discussion – What did we learn and how do we apply it?

How To Present:

Please send the following information to Ken Rubin (ken dot rubin at hp dot com) , including:

o   Project name / Presenter(s) / Organization(s)
o   Timeslot desired (short or long)
o   Brief Description of the Project
o   Brief Description of the Business Problem You were Addressing
o   Brief Description of the Technical Aspects of your project
o   Relationship to SOA (Optional)
o   Your recommendations to us, or challenges/concerns for us (Optional)

Submissions will be reviewed by the SOA workgroup and slotted based upon the total number of responses received and fitness to the objectives.   Ad-hoc presentations will be taken from the floor, time permitting.  All HL7 meeting attendees are welcome to join us.


Note:  Timeslots are intended to include time for Q&A.  We suggest 5 minutes of Q&A for short sessions and 10 minutes for long sessions.  Presentation allocation times may adjust based upon abstracts received.

Tuesday, March 25, 2014

Concerns in HL7

Quite a bit of time has been spent recently on many HL7 lists discussing the Concern act found in CCDA and prior specifications including the CCD, the IHE PCC Technical Framework and the HITSP C32.

This structure is based on the Patient Care Concern act, which went through "Harmonization" (the RIM modification process) in 2005/2006, but probably finished being implemented properly in the RIM sometime in 2010 or thereabouts, the delay being due to administrative rather than technical issues (mostly dropped balls on the administrative side).

The Concern Act that the Patient Care Workgroup developed has a great deal of capability.  The current "concern" among CCDA implementers and HL7 developers is that it is overkill for many uses, e.g., handling allergy lists.

Back in the day (2006) when it was first added to a CDA Implementation Guide (the IHE PCC Technical Framework was first to adopt it), the rationale for including it was to address two issues:

  1. Tracking who added the item to the current (allergies or problems) list.
  2. Tracking when it was added or removed.
Several have noted that this information also appears in audit logs, which is certainly a true statement. However, audit logs are not information items that are typically made available to normal users.

These two pieces of data have clinical significance.  The who tells you who added (or removed) the item from the list, so you can track down what happened.  This may be distinct from who made the observation (e.g., a DX of a disease may have come from another provider).

The question arose from Grahame's work on supporting CCDA capabilities in FHIR.  In looking through FHIR, there are two Resources of interest for Concern.  One is Condition, and the other is AllergyIntolerance.

In examining these two resources, they both support capture of "who recorded" the information and when, in the Condition case, called the Asserter, and in the Allergy case, the Recorder.  Condition also includes onset and abatement dates.  So it already has the necessary components needed to support the capabilities of Concern that are used in the wild to MY knowledge.  The allergy case is missing the onset/abatement dates, likely because of the [mistaken in my opinion] assumption that this is either not important, or that allergies never go away (Never and always are two dangerous words to use in Healthcare or in any other endeavor).  I can live without this capability in allergies because it can be added back through the extension mechanism.

So, to handle what is needed for CCDA in FHIR, applying the 80/20 rule, I'd just use Condition and AllergyIntolerance, and push all the concern pieces NOT present into extensions.

    Keith

NOTE: In CDA and elsewhere in HL7 Version 3, Allergy is treated simply as a specialization of problem. While this makes sense to engineers, it makes little sense to many physicians.  This is in part why Concern is used in both places, but also because it supports the capture of information not found in Observation.



Wednesday, March 19, 2014

The next Ad Hoc Harley ...

The point of the Ad Hoc Harley awards is to recognize people who have accomplished something significant. In this particular case, I'm recognizing someone who has in a large part taken over one of what I have long considered to be "my job", and done so in a way that is both more complete, and better than the work that I do here on this blog.

Recently I added his work to my auto-tweet list, which is still a pretty short list. Anything he writes winds up getting distributed immediately.  What it takes to get on that list is the production of consistently high quality information that I think will be important and relevant to my followers, mostly people who are interested in what is going on in Health IT, especially HL7 and IHE.

His work is also featured on the HL7 Help Desk for Meaningful Use, and he has developed much of the content for that in HL7.  He started listening to the Structured Documents list about 21 months ago, and his first post to the list came about 18 months ago, with a very telling summary of the discussion on open versus closed templates.  This was our first example of his ability to summarize a long and involved discussion and engage with experts to make it easy for implementers to understand what is going on.

Without further ado, let me award ...

This certifies that
Brian Zvi Weiss of CDAPRO
has been awarded the Ad Hoc Harley
for outstanding explanations of CDA and contributions to the development of the HL7 Help Desk

Tuesday, March 18, 2014

A Final Post from Riyadh (for now)

Yesterday was a fairly busy day for me, being the last day of a 10-day trip to Saudi Arabia.  Last week I spoke at KSU on eHealth Standards (I'll update the link for the slides when I get it).  Yesterday I attended a seminar organized by the Saudi Association for Health Informatics held at King Fahad Medical City.  The topic of the lecture was Innovation, and the presenter was Dr. Bassam Al Hemsi, an OHSU alumnus.  Dr. Al Hemsi had invited me to the lecture that afternoon, and it was a tight squeeze, but thanks to the kindness of an MOH employee (Dr. Ayman BafaQeeh), I was able to attend.  The kindness and hospitality of the Saudis I have met here is quite overwhelming.

Dr. Al Hemsi has an extensive history in Healthcare and IT, including 5 major facilities in Saudi and as CIO of the National Guard's health system.  He is a surgeon, informaticist and educator, and an innovator in his own right.  The key point of his presentation was to describe what it would take for Saudi to become an innovator in Health IT.  He talked about how People + Processes + Information + Technology become products and services, and illustrated some of his own work in the field.  He emphasized the difference between a Leader and a Boss, and emphasized to his audience the importance of being the former.  He also emphasized the need to do, and even to make mistakes, as being important to developing the skills necessary to succeed.  I quite enjoyed his presentation, even as I recalled the many mistakes I have made in my career that taught me so much.

I got to spend some time with Dr. Al Hemsi at his clinic before the presentation, where he kindly showed me some of the innovative things he had designed for use in his Hemodialysis Clinic.  It was pretty cool stuff. The most impressive item was a 110" multi-touch display he had built, including the software his team had designed to make use of it, but he has also designed several medical devices and innovative software using a digital pen for data capture that he also uses in his clinic.

Dr. Al Hemsi is a good speaker, and his audience was quite engaged in the topic.  It was very clear to me, especially in the educational settings that I have been in over the past ten days that the current crop of Health Informatics students and practitioners in Saudi are quite enthusiastic about eHealth and Standards.

   Keith

Friday, March 14, 2014

IHE IT Infrastructure Technical Framework Supplement and Handbook Published

IHE IT Infrastructure Technical Framework Supplement Published for Public Comment

The IHE IT Infrastructure Technical Committee has published the following supplement to the IHE IT Infrastructure Technical Framework for public comment in the period from March 14 through April 13, 2014:
  • Data Segmentation for Privacy (DS4P)
The document is available for download at http://ihe.net/Public_Comment/#IT. Comments submitted by April 13, 2014 will be considered by the IHE IT Infrastructure Technical Committee in developing the trial implementation version of the supplement. Comments can be submitted at http://ihe.net/ITI_Public_Comments.

IHE IT Infrastructure Handbook Published

The IHE IT Infrastructure Technical Committee has also published the following handbook as of March 14, 2014:
  • De-Identification
The document is available for download at http://ihe.net/Technical_Frameworks/#IT. Comments are invited at any time and can be submitted at http://ihe.net/ITI_Public_Comments



Thursday, March 13, 2014

Seven Hours ahead and Way Behind ...

on Blog posts.

This has been a very busy week.  Once again I'm in Saudi Arabia, and as usual, my days are packed with day-job work on developing Saudi standards, my evening hours are reserved for conference calls with the US, my late hours are spent preparing for the next day, and my way-too-late hours go to school work.

I was privileged to be a member of the faculty presenting at King Saud University for the closing of their inaugural AMIA 10x10 program, teaching alongside my academic adviser Dr. William (Bill) Hersh. You can see a picture of the distinguished faculty and students involved in the program in the image below.

In the back row is Dr. Hersh in the blue suit and tie.  Next to him on the left is Dr. Amr Jamal, head of the program for King Saud.  And then me, again in my thobe (which was quite a hit with students and faculty both).  I think a few took me for a Saudi national until I put on my motorcycle vest ;-)

I attended the entire day and was quite interested to hear from other presenters about the Saudi Ministry of Health's eHealth program from several different perspectives.  Leading off the day was Russel Gann, formerly involved with CDC programs in the US, and just recently finishing a stint with the Saudi CDC.  He spoke to the students about the complexities of population health and the need for informatics understanding in the country.

I followed him, and spoke about the standards program in Saudi Arabia and its use of HL7 standards and IHE profiles.  The standards presently under development include HL7 Version 2, Version 3, and Clinical Document Architecture Release 2, and also include IHE profiles.  This I presented with my OHSU student hat on.  I figuratively switched to my HL7 Ambassador hat, presenting the CDA Ambassador presentation to the students and faculty so they would understand what CDA and CCD could do for eHealth.  Then I put my IHE hat on to present what IHE was.  Lastly, I returned to my student hat and put it all together to explain why these standards and organizations were important to the students.  Many of the students expressed a deep interest in the formation of an HL7 Saudi Affiliate, and were also interested in participating in the work of the MoH supporting healthcare interoperability in the Kingdom.

Dr. Hersh gave the students a brief overview of the AMIA 10x10 history, talking about how the program came about and how the international edition emerged as a result.  We also heard several award winning student presentations after his talk.  Afterwards, there was a presentation of awards and certificates.

The day was completed by Dr. Ahmed Balkhair, who spoke about the Saudi eHealth strategy.  He also presented on this topic at the most recent HIMSS conference in Orlando last month.  Dr. Balkhair's department is leading the standards efforts in Saudi Arabia, and he also has the honor to be the HL7 Saudi chair.

Later in the week I joined Dr. Hersh, Dr. Jamal, and several other distinguished faculty from the University for a dinner at Najdi Village Restaurant.  I felt quite privileged to attend the dinner, and hope to have a meal there again.  Dr. Jamal, our host was quite gracious with his attention.  I hope to have the opportunity to return again to King Saud to teach again in the future, Insh'allah.

Signing off from Riyadh (but still here for a few more days).  I'll be taking my OHSU finals from here again this weekend, as I did in December, so I must go study now.

   Keith

Friday, March 7, 2014

Killing a Bad Standards Project

This term I'm studying project management, and as we are finishing up the term, the current discussion is around killing projects.

I've previously noted how hard it is to kill a standards project in the past.

The easiest way to kill a bad standards project is before it starts.  But how do you know that a standards project is a bad one?  There are a few signs that you can look for that mean you need to do more investigation:

  1. Does the project leadership come from within the consensus body or share the same values as it, or does it come from outside expecting the consensus body to adopt its values?  If the latter, ensure that the values of the leadership are aligned with the values of the consensus body.  If they aren't, investigate further.  It may be that you are experiencing Not-Invented-Here syndrome, but it could also be a sign of a misfit project.
  2. Is the project one-sided or designed principally to benefit a single organization or type of organization? Do benefits of the standard accrue to only one of the consensus body's stakeholder groups and not others?  There's almost never a complete balance, but if both parties don't get something, it needs further investigation.  I wrote about this idea briefly with regard to how HQMF can help take those costs out of the system 5 years ago.  I find it interesting that we are still expecting EHRs to do the heavy lifting with regard to computation, but I also see many organizations centralizing that capability to make life easier.
  3. Is the project within the main area of expertise of the consensus group, or is it stretching the edges so that it would fit better elsewhere?  This can be a sign of the proposer cherry-picking the consensus body that would approve the project.  This happens all over.  Sometimes it happens within an SDO, and other times it happens across SDOs.
One sign that I always look for is who is getting paid to do the work, by whom, how much influence they have, and how well the schedule fits with my own.  This is part and parcel of how much standards work gets done, but it can also be a significant clue.

Once a project has started, it becomes harder to kill.  Both IHE and HL7 have some processes in place to make sure that ongoing work receives necessary approvals before it continues on to the next step, and that causes projects that aren't really making progress to die.  I've seen several projects die in both organizations even though they may have started with very good reasons.  For example, the original CDA Release 3.0 project died due to lack of participation. It has now been replaced by a new project (previously named CDA Release 2.1, but changed to 3.0 because of HL7 rules about major and minor versions).

Even after they have died, dead projects still have something to contribute, if only as a teaching experience. The "CCR wars" eventually prevented the ASTM XML (defined in an adjunct) from being the dominant format, but a CCD is, by definition, still conformant to the ASTM CCR standard (which functionally described the data being exchanged), even in the 1.1 version now present in the C-CDA.

A last note about failure.  If you never have failed projects, you aren't trying hard enough!  Some projects are destined to fail, and others to succeed.  You cannot predict which ones will do which.  If we could, the world might be a better place, but it would also be more boring.

Tuesday, March 4, 2014

Medical Decision Making with Hope

Suppose you had a disease that was in some way debilitating, so that it limited the things you could do in life. Now suppose that there are two treatments for this disease, one of which alleviates some, but not all of the debilitation, and only restores some of your original expected lifespan, and the other which would completely cure you, but also had a 1 in 3 chance of killing you (essentially Russian Roulette with 2 bullets).

Would you prefer a guaranteed 18 years of life at 2/3 of your "capacity of living", or would you take the 2/3 chance of living 18 years at full capacity?

Are these equivalent in your mind, or not?

As a risk equation, the two appear to have equivalent value assuming that 2/3 capacity of living value is assigned by you, and not by anyone else.
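The arithmetic behind that claim, using a simple utility model of years multiplied by capacity, weighted by probability:

```python
# Expected "quality-years" under a simple capacity-times-years model.
years = 18

option_a = 1.0 * ((2 / 3) * years)   # certain: 18 years at 2/3 capacity
option_b = (2 / 3) * (1.0 * years)   # risky: 2/3 chance of 18 full years

# Both options work out to 12 expected quality-years.
```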

However, what this equation fails to account for is adjustment to one's limitations over time, and continuing improvements in our healthcare system.

I don't think I'd ever take the high risk treatment option simply because I live in hope, a) that as I adjust to my limitations, I will still find ways to remain happy, even though that will take time, and b) that in the 18 years that I have at 2/3 capacity, the medical system may find things to improve that to 3/4, or 4/5 or even to a complete cure.

It is interesting in looking at medical decision making models that this "Hope factor" hasn't necessarily been considered in many of the models I'm looking at this term.