Thursday, September 27, 2012

Level setting on Level of Assurance

Level of Assurance is NOT simple and can't be boiled down to 4 values. Renewed excitement on the topic of Level of Assurance brings me back to the need to describe the fundamentals. The S&I Framework has projects on it, and it also came up at the HL7 Security WG meeting. Looking back in my blog, I find I covered this best in March of 2011. The concepts are also covered in various articles (see References).

The important fundamental is that Level of Assurance applies at two abstractions, and these two abstractions must be recognized and independently managed:
  1. The Level of Assurance provided by the User Provisioning. That is, how sure can we be that the identity represents the individual it claims to represent.
  2. The Level of Assurance of the authenticated session. That is, how sure can we be that the user logged on right now is the user that the identity represents.
In the case of (2) it is far easier to make a technical assessment of the level of assurance of the session authentication step. It generally follows a typical graph of authentication mechanism types and how well those mechanisms are managed.

In the case of (1) there are very few technical aspects; the Level of Assurance is almost completely a Policy and Procedure issue. High-level guidance is provided by NIST Special Publication 800-63, which sets up the 4 different classes of level-of-assurance. But it does NOT set specific level-of-assurance values. The reason is that an actual level-of-assurance can only be defined when bound to specific Policies, Procedures, Physical environment, as well as technology. NIST recognizes this, which is why they didn't define a vocabulary. It is also why you will not find a vocabulary anywhere.

In the world of PKI, the Level of Assurance is defined by the Certificate Authority's published "Certificate Policy" document. This is not very scalable or reusable, so IANA has now been assigned to host a registry of Level of Assurance profiles. In this way there is a registered URI that can be used as a Level of Assurance vocabulary.

RFC-6711 - An IANA Registry for Level of Assurance (LoA) Profiles
This document establishes an IANA registry for Level of Assurance (LoA) Profiles. The registry is intended to be used as an aid to discovering such LoA definitions in protocols that use an LoA concept, including Security Assertion Markup Language (SAML) 2.0 and OpenID Connect.
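As a sketch of how such a registered profile URI might travel in practice, here is a hypothetical OpenID Connect ID Token payload carrying a LoA profile URI in the `acr` claim. The issuer, subject, and the LoA URI itself are all invented placeholders; a real deployment would use a URI actually present in the IANA registry.

```python
import json

# Hypothetical ID Token claims; "acr" carries the LoA profile reference.
# The URI below is an invented placeholder, not a registered value.
id_token_claims = {
    "iss": "https://idp.example.org",
    "sub": "24400320",
    "aud": "client-app",
    "acr": "urn:example:loa:policy-2",  # would be a URI from the IANA LoA registry
}
print(json.dumps(id_token_claims, indent=2))
```

The point is only that the LoA "vocabulary" is a registry of URIs, not a fixed enumeration of four values.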
Conclusion:
Don't expect the NIST level-of-assurance model to be executable, and don't expect a standard to develop a simple vocabulary.

References:

Tuesday, September 25, 2012

Presentations from HL7 WGM - Baltimore - Intro to Security and Privacy

At the HL7 meeting in Baltimore, the Security workgroup offered a half-day of free tutorial on security and privacy. We have had this tutorial available for a couple of years, but it either doesn't get selected by the tutorial committee or not enough people sign up. As a co-chair of the security workgroup I really want to get our message out, so we used our own workgroup meeting room and advertised that we would be teaching. For the Baltimore meeting we advertised this in the HL7 workgroup meeting brochure, and I also pushed it on my blog.
This session will focus on how to apply security and privacy to the health IT standards. It will cover the basics of security and privacy using real-world examples. The session will explain how each phase of design needs to consider risks to security and privacy to best design security and privacy in; and mechanisms for flowing risks down to the next phase of design. In addition, it will cover the security and privacy relevant standards that HL7 has to offer including: Role-Based-Access-Control Permissions, Security/Privacy ontology, ConfidentialityCode, CDA Consent Directive, Access Control Service, Audit Control Service, and others. These standards and services will be explained in the context of providing a secure and privacy protecting health IT environment.

First Quarter
Second Quarter

The good news is that we had about 15 people for both quarters. We are planning the same thing for Phoenix in January. I need to adjust the agenda to make sure we cover everything as Don and Trish didn't get much time to cover their slides. 

Friday, September 21, 2012

Advanced Access Controls to support sensitive health topics

I hope everyone who works on Privacy and Security in Health Information Exchanges felt the advance in potential. What the VA and SAMHSA showed at the HL7 meeting was big. Not only was it big, but it was well done. Please review the News on this demonstration, the presentation given by Mike Davis, the Pilot specification, and the S&I Framework Data Segmentation for Privacy work. Pretty impressive stuff, right? YES it is. I am proud to be a part of the S&I Framework overall Data Segmentation for Privacy workgroup.

This demonstration really showed the potential. However I think it highlights the potential without exposing the realities of a demonstration. This is a demonstration that has not been evaluated from a Clinical Practice perspective. This demonstration modifies CDA documents without the Author knowing. This demonstration abuses some functionality of CDA. This demonstration abuses XDM metadata. This demonstration uses complex technology, leveraging highly advanced subject-matter experts. This demonstration uses all greenfield technology. This demonstration is simply a demonstration, and when viewed in this light, this demonstration was fantastic.

I will describe my concerns in a section below, but first I must describe my proposal. I have a proposal that needs no changes to standards, works with any document type including CDA, works with current technology, does not require high-technology on the receiver side, and gets the same job done.

Easier Way: 
There is an easier way. One that does not require any changes to standards, although I will admit that it is less elegant and more blunt. The use-case space we are trying to solve is that many HIEs simply recommend against sharing sensitive topics. Thus for a population with sensitive health topics, their data is excluded from the HIE, and possibly also the normal information that is tightly coupled with the sensitive information. I have proposed a more blunt solution, one that IHE built into the XDS metadata, yet one that has been rejected by some in the S&I Framework and ONC leadership. My proposal is somewhat represented by section 3.5 in the S&I Framework specification (4 pages of text).

My solution is to keep the original CDA document in full form and mark it as "Restricted". Then create a Transform of the CDA document that has the sensitive data removed, mark this as "Normal", and use the metadata 'association' relationship to indicate that this is a "Transform". Where the documents are managed in something like XDS, both are registered this way. Being registered does not mean it is accessible; Retrieval utilizes the metadata to control access. In the case of a PUSH using XDR or XDM (Direct), one would include both forms if the recipient has the authority for both, or just the normal form if the recipient has authority only for normal data (clearly, don't send it if the recipient has no authority). In both the PUSH and PULL cases this is a simple Access Control rule applied to well-established metadata.
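To make the shape of this concrete, here is a rough sketch, in Python dictionaries, of the two DocumentEntries and the Transform association described above. The UUIDs are invented and this is not a normative ebXML encoding; only the XFRM association type URN and the N/R confidentialityCode values come from the XDS/HL7 vocabularies.

```python
# Full original document: marked Restricted
original = {
    "entryUUID": "urn:uuid:aaaaaaaa-0000-0000-0000-000000000001",  # invented
    "confidentialityCode": "R",
}

# Transform with sensitive data removed: marked Normal
transform = {
    "entryUUID": "urn:uuid:aaaaaaaa-0000-0000-0000-000000000002",  # invented
    "confidentialityCode": "N",
}

# Association indicating the Normal document is a Transform of the original
association = {
    "associationType": "urn:ihe:iti:2007:AssociationType:XFRM",
    "sourceObject": transform["entryUUID"],
    "targetObject": original["entryUUID"],
}

def may_retrieve(entry, user_has_restricted_authority):
    """The simple access-control rule applied to the metadata."""
    return entry["confidentialityCode"] == "N" or user_has_restricted_authority
```

A recipient with only normal-data authority sees just the transform; a recipient with restricted authority can retrieve both and follow the association back to the full document.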

I would encourage the use of the Clinical Decision Support (CDS) that the demonstration used to detect sensitive topics. Detecting sensitive topics is NOT something that security experts are good at. So I would still suggest that a tool like this be used to flag the Author with the need to create a transform, only when one is needed, along with a suggested transform. The method of using CDS to detect the presence of sensitive data and XSLT to transform the CDA is a fantastic idea. I just want the Author of the document to agree that it is still a worthwhile document that they want their name on. The main difference is that I just want the sensitive data removed, whereas the VA Demo used an extension to apply tags at the element level, removing some sensitive data and leaving other sensitive data in the document.
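A crude sketch of the transform step using Python's standard XML library: drop whole CDA sections whose section code appears on a sensitive-code list. The code list is an invented placeholder, and real detection (the CDS piece) is far richer than a simple code match.

```python
import xml.etree.ElementTree as ET

HL7 = "{urn:hl7-org:v3}"
SENSITIVE_SECTION_CODES = {"X-SENSITIVE"}  # invented placeholder code list

def redact(cda_xml):
    """Remove whole sections whose code is on the sensitive list.
    A blunt whole-section removal, as the proposal above suggests,
    rather than element-level tagging."""
    root = ET.fromstring(cda_xml)
    for body in root.iter(HL7 + "structuredBody"):
        for component in list(body.findall(HL7 + "component")):
            code = component.find(HL7 + "section/" + HL7 + "code")
            if code is not None and code.get("code") in SENSITIVE_SECTION_CODES:
                body.remove(component)
    return ET.tostring(root, encoding="unicode")
```

The output would then go back to the Author for review and signature before being published as the "Normal" transform.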

In my solution, using the CDA document requires no new technology; it is just a CDA document. It doesn't have extended metadata or element tagging that current technology would not understand. A system that is allowed to get both Restricted and Normal data should be careful to import the data marked Normal first, as it doesn't come with Restrictions; then inspect the Restricted document and, if necessary, import anything new found AND track the restrictions.

This solution works with ANY document, including a document that uses FHIR encoding, or Green-CDA encoding, or CCR, or PDF, or DICOM-SR, or anything else. I like CDA, but if the only solution we have forces CDA, then we are cutting off documents from the past and cutting off new document types in the future.

None of this requires new standards or abuse of standards, YET it delivers the same functionality for the use-case need. Put up against the one demonstrated by the VA/SAMHSA, this solution makes their demo look like a Rube Goldberg machine.

Yes, the solution requires that within the USA there be definitions of 'why would you mark a document Restricted', 'who has legitimate access to documents marked Restricted', and 'what are the receiver behaviors required when using Restricted documents'. All things that must be done no matter what technology we use.

Demonstration Dirty Details: 
I don't want to sound like the demonstration was a complete farce. IT WAS GREAT. I only want to point out that there are some important details to recognize, and that the solution is not close at hand and easy to implement.

I already pointed out the CDA entry-level tagging above, which current systems would not understand. It would also not be possible to get some systems to understand it. Further, the topic was brought up with the HL7 Structured Documents Workgroup (owners of CDA), and they indicated that there might be a better way to do it; they needed more time to think it through. Yes, making a mistake on a technology like CDA can mean long-term problems. CDA documents are intended to be used over the life of the patient; they are not transitory messages.

I also addressed the need to present the transform to the Author. It is very possible that the transform is of no clinical value. More important is that a CDA document is 'Authored'; it is a legal representation by the Author. This is not true if the Author never sees the content with their name on it. Keith has covered this far better than I have in Data Classification, Redaction and Clinical Documentation.

My biggest concern is the abuse of XDM metadata. These additions are allowed by the specification, but they are expected to be processed without fail by the recipient. It is not acceptable to extend a specification and expect compliance. Extensions are allowed by many standards, but extension behaviors are not forced upon the recipient. I don't disagree with the need to convey the Obligations; I just want a reasoned and standards-based solution. This is an important advancement, and we need to do it the right way, not force a hack upon the world.

Lastly, they did magic using very talented subject-matter experts and greenfield technology. It is important to recognize the non-trivial work necessary to build this functionality into a real workflow with real users, sensitive to patient safety.

Conclusion:
I have been involved in standards development for a very long time; I know that it takes many years for a reasoned solution to be fully 'standardized'. This period does not mean no one is using the method; indeed, pilots and implementations are important to proving viability. Then there is the typical 5 years it takes the market to pick up and use the standard. Thus 7+ years is not unusual. I would rather see my Easier Way used today than continue to simply forbid sensitive topics from entering a Health Information Exchange.

So, why was my solution not part of a Pilot? Because it is already available, and thus doesn't need to be tested. I have been involved in the development and deployment of HIE technology, and I know of multiple exchanges that were working on Policy solutions that would have enabled sensitive topics, for example Connecticut. These HIE solutions were put in mothballs as ONC forced everyone to implement Direct first; I know of multiple states where progress was put on hold. I like Direct for what it was intended to solve, but I hate what the ONC mandates on Direct have done to HIE progress. It has not been a net positive.

The Easier Way can be used with Direct TODAY, and XDS, and NwHIN-Exchange. My solution does not require any changes to Standards. My solution works with any document type, not just CDA (e.g. Green-CDA, FHIR, DICOM). I ask that the Future-Nirvana not get in the way of the Good.

UPDATE:

I just remembered that the Easier Way has been demonstrated multiple years at HIMSS. The scenario is fully explained in this article on the HIMSS 2008 Privacy and Security Use-case. This is what inspired the HIE like Connecticut to put this into their infrastructure.

See also:

Saturday, September 15, 2012

The Magic of FHIR

As I look back on this week of HL7, it was like no week of HL7 I have ever had before. Not only did we have some fantastic and productive discussions on the Privacy and Security front, but there was such openness and excitement about FHIR. Grahame has really stepped in something here. I personally think it has much to do with Grahame himself: the excitement he has is infectious. His personal background with all of HL7, and, as I found out on Thursday, having implemented DICOM in his distant past, gives him great perspective. He has surrounded himself with a highly competent team. He has enforced openness and transparency, and strong governance. However, that alone would not be enough to make it as big as it is.

At the beginning of the week at the FHIR Connectathon someone asked if the reason why 'this' was so easy is because of REST. The answers given were very positive toward the REST approach. There was discussion that this REST focus should not be seen as purely HTTP-REST. It was pointed out strongly that although the initial pilots are using HTTP-REST, there is no reason that the FHIR Resources can't be serialized into a Document, or communicated using PUSH technology, or even transmitted using HL7 v2, v3, SOAP, etc. The FHIR "Resources" are truly transport agnostic. The REST approach simply focuses the design on the concept of "Resources". These other forms need quite a bit of discussion before they are interoperable, but it is powerful. I thought about the question of why this was so big throughout the week, and although the REST philosophy is very much a contributing factor, it is not alone enough to make it as big as it is.

Part of the FHIR approach is to address the most mainstream use-cases and leave the edge-cases for extensions. Indeed, the concept of extending a FHIR Resource is encouraged. What is discouraged is abusing the specification, such as carrying the same information in an extension when there is a legitimate FHIR way to do it. Extensions allow the FHIR core to stick to the basics. The basic philosophy also recognizes that MANDATED values are likely bad. That is not to say that there would not be Profiles that do mandate, but the core leaves as much optional as possible. This is in theory at the base of any standard, but it is stated boldly in FHIR. A contributing factor, but not alone enough to make it as big as it is.
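The extension idiom can be sketched like this. The extension URL is an invented placeholder, and the resource content is illustrative rather than quoted from the FHIR specification: the point is only that edge-case data rides in a URL-identified extension instead of overloading a core element.

```python
# Core resource content stays mainstream; the edge-case data rides in an
# extension identified by a URL (the URL below is invented for illustration).
patient = {
    "resourceType": "Patient",
    "extension": [{
        "url": "http://example.org/fhir/extensions/preferred-pharmacy",
        "valueString": "example value",
    }],
    # ... core elements such as name, gender, birthDate ...
}
```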

Back to Grahame: his documentation tooling is amazing. The whole FHIR specification is documented in an XML spreadsheet. This core table of truth is processed by a Java application that he fully publishes, which spits out EVERYTHING. I can't claim that I can prove this, but everything that I heard about is generated by this application from this spreadsheet: XML schema, test tools, documentation, Java objects, C# objects, JSON, examples, etc. I would not be surprised if this thing spit out stuff that we don't yet know we need. This tooling is what many standards organizations have said they want. Grahame has made it happen for FHIR, but this alone is not enough to make it as big as it is.

A factor that I heard spoken of, but never identified as a factor, is the ready access to programming tools that hide the grunt work entirely. I am not a programmer; I really want to get my fingers back into programming but never find the time. Even if I found the time, it is something one needs to do often; I think this is why Keith continues to write all kinds of demo code that shows this or that. It is really hard to be in the standards development world and yet also have responsibilities for programming. I think this is the sleeper HUGE factor. The array of readily available tooling makes processing simple-XML, JSON, and Atom feeds super easy. I heard and saw lots of people tell me just how easy it is. This was also the feedback I got on the IHE MHD Profile, though that didn't really take advantage of this power, yet... I know this factor is more powerful than the factors above, likely more powerful than all of them combined. We all know that the use of a standard is what makes that standard powerful, and this tooling factor will make FHIR easy to use. This surely vindicates Arien Malec and John Halamka; they did tell us so. Yet as big as this factor is, it is not enough to make it as big as it is.
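To illustrate the tooling point, here is how little code stock libraries need to pull apart JSON and an Atom feed. The sample content is made up; the libraries are Python's standard library.

```python
import json
import xml.etree.ElementTree as ET

# JSON: one call and you have native objects
resource = json.loads('{"resourceType": "Patient", "id": "example"}')

# Atom: the standard XML parser handles the feed directly
ATOM = "{http://www.w3.org/2005/Atom}"
feed = ET.fromstring(
    '<feed xmlns="http://www.w3.org/2005/Atom">'
    '<entry><title>Patient/example</title></entry></feed>')
titles = [entry.findtext(ATOM + "title") for entry in feed.findall(ATOM + "entry")]
```

That is the whole parse; no generated stubs, no toolkit install, no schema compiler.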

I have worked to coordinate FHIR, hData, and IHE MHD. I have had detailed discussions with Grahame on the concept of pulling the IHE work into FHIR; we are going to see what this might look like. I have worked with the FHA/S&I effort on RHEx as well. At this meeting I worked to pull into the tent the upcoming DICOM WADO work. Each of these efforts is independent, and can choose to cooperate, simply align, or compete. I am amazed at how cooperative they all have been. It is early in these efforts themselves, and even earlier in the cooperation phase. I am still hopeful that we each can add value and thus the result is more powerful than any one project could be. This was also jokingly referred to in discussions of how to pronounce FHIR --> "FOUR".

There are many challenges that will need to be addressed. We just touched upon Security and Privacy this week. The actual problem is far bigger than a security layer like RHEx. It includes object classification and tagging. It includes an understanding of the difference between an object classification and the meaning of a communication, things like obligations. These are areas that we are working to develop even in the abstract, much less in a medium like FHIR that wants to keep everything simple. Related to this are data provenance, aggregation and disaggregation, and de-identification and re-identification. There are areas like clinical accuracy and usefulness. There are concerns around patient safety, specifically cases where not all the data was displayed to a treating doctor because that data was not understood. What does it mean to understand, and what does mustUnderstand mean?

I am worried that success for the intended use-cases will be abused for non-intended use-cases. This is of course the problem any standard has. But I see it rampant in Healthcare, mostly the abuse is government mandated.

As I write this, I am indeed listening to “The Firebird” by Stravinsky. There was joking on Twitter that one must read the FHIR specification while listening to “The Firebird”. Somehow it is working.

The excitement is not due to any one thing, nor any specific combination. It is not REST. It is not simple-XML or JSON. It is not Grahame. The excitement is driven by all of these factors converging at just the right time and place. Time will tell if this turns into something that can survive for a long time. We must be very careful to keep this in perspective.

Monday, September 10, 2012

IHE Mobile access to Health Documents - Trial Implementation


The Mobile Access to Health Documents (MHD) Profile is now in Trial Implementation. This blog post is going to be in the form of a bloginar: a webinar given as a blog post. The source presentation is published and accessible.

Executive Summary of changes:
For those that saw the Public Comment version, the big changes are: (a) the resource is now a DocumentDossier, which is more complete than a DocumentEntry; (b) Queries now include parameters for Document, SubmissionSet, and Folder; (c) Queries now return a list of DocumentDossier references in JSON and Atom; and (d) the PatientID is a parameter rather than a URL element, allowing for more flexibility.

The Profile Explained
The profile's goal is to provide an interface to XDS appropriate for a resource-constrained environment such as a mobile device. It is not constrained to use only by mobile devices, nor to serving documents only from XDS; it includes descriptions for XCA as well as other environments. The client requirements were to use interface technologies that are readily available on mobile platforms, and thus could be used with a small, or no, footprint on the client side.

This interface clearly needs to work when the Document Sharing infrastructure is indeed an XDS environment. The MHD profile is described in IHE Actor/Transaction terms like this:



This could be implemented natively on XDS infrastructure or could be supported using proxy services.




Now the MHD profile is not exactly one-for-one capable with XDS. We eliminated complexity, but in doing so we made tradeoffs on the side of simplicity that eliminated some XDS capability:
  • The MHD Put Document Dossier can only publish one new document at a time into a new SubmissionSet.
  • The MHD Put Document Dossier cannot be used to replace an existing document or provide a transform.
  • The MHD Get Document Dossier can get metadata about only one document at a time.
  • The MHD Get Document can only pull one document at a time.
  • The MHD Find Document Dossiers supports only the OR operator within parameters.
  • The MHD Find Document Dossiers returns only references to Document Entries, requiring a MHD Get Document Dossier to retrieve the metadata.
  • The MHD Find Document Dossiers does not support the XDS Registry Stored Query GetRelatedDocuments stored query.
Or, if you need to provide RESTful access to a much larger network of HIEs, through XCA federation:




We also recognized that those that would be interested in this profile, or those that we want to be interested in this profile, are more used to looking at interfaces in terms of the HTTP REST methods. So we also published a cross-walk table that maps the REST methods to the IHE Actor/Transaction method:


HTTP Method | Transactions on Document Dossier | Transactions on Document
GET         | Get Document Dossier [ITI-66]    | Get Document [ITI-68]
PUT         | Prohibited                       | Prohibited
POST        | Put Document Dossier [ITI-65]    |
DELETE      | Prohibited                       | Prohibited
UPDATE      | Prohibited                       | Prohibited
HEAD        | Not Specified                    | Not Specified
OPTIONS     | Not Specified                    | Not Specified
TRACE       | Not Specified                    | Not Specified

One of the big changes was to encode the XDS Metadata in JSON. To do this we flattened it as much as possible, although we did preserve much of the value encodings (a future enhancement). When we flattened the XDS Metadata we also collapsed the references between objects, representing SubmissionSets, Folders, and Associations inline with the DocumentEntry. Thus we created a new object type, a DocumentDossier. It can be seen as:
documentDossier:{documentEntry:{…},submissionSet:[{…},{…}],folder:[{…}],association:[{…},{…}]}
Now this is a rather quick jump into JSON encoding, and I am not going to run a JSON encoding school here. But even if you don't understand JSON completely, you can see that a DocumentDossier is made up of the DocumentEntry plus any SubmissionSets, Folders, and Associations that reference it. Each of these is simply folded into the one object/resource.
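The folding can be pictured as a small helper. This is an illustrative Python sketch, not profile text; the registry lookup that finds which SubmissionSets, Folders, and Associations reference the entry is assumed to have already happened, and the sample values are invented.

```python
def build_document_dossier(document_entry, submission_sets, folders, associations):
    """Fold the objects that reference a DocumentEntry inline,
    producing the flattened DocumentDossier shape."""
    return {
        "documentEntry": document_entry,
        "submissionSet": submission_sets,
        "folder": folders,
        "association": associations,
    }

dossier = build_document_dossier(
    {"entryUUID": "urn:uuid:example-1"},      # invented entry
    [{"uniqueId": "ss-1"}],                    # SubmissionSets referencing it
    [{"title": "folder-1"}],                   # Folders referencing it
    [{"associationType": "XFRM"}],             # Associations referencing it
)
```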

Next is an example of a DocumentEntry. This is more carefully explained in the profile, but I provide it here simply because looking at it is very educational. The XDS Metadata that we are used to is simply encoded, and flat. An example DocumentEntry is:

documentEntry:{
  patientID: "144ba3c4aad24e9^^^&1.3.6.1.4.1.21367.2005.3.7&ISO",
  classCode: {code:"34133-9", codingScheme:"2.16.840.1.113883.6.1", codeName:"Summary of Episode Note"},
  confidentialityCode: {code:"N", codingScheme:"2.16.840.1.113883.5.25", codeName:"Normal sensitivity"},
  formatCode: {code:"urn:ihe:lab:xd-lab:2008", codingScheme:"1.3.6.1.4.1.19376.1.2.3", codeName:"XD-Lab"},
  typeCode: {code:"", codingScheme:"", codeName:""},
  author: {…},
  practiceSettingCodes: {code:"394802001", codingScheme:"2.16.840.1.113883.6.96", codeName:"General Medicine"},
  title: "document title",
  creationTime: "20061224",
  hash: "e543712c0e10501972de13a5bfcbe826c49feb75",
  size: "35",
  languageCode: "en-us",
  serviceStartTime: "200612230800",
  serviceStopTime: "200612230900",
  sourcePatientId: "89765a87b^^^&3.4.5&ISO",
  mimeType: "text/xml",
  uniqueId: "1.2009.0827.08.33.5074",
  entryUUID: "urn:uuid:14a9fdec-0af4-45bb-adf2-d752b49bcc7d"
}

The API
So the RESTful interface for a DocumentDossier all operates on a core URL made up of a local part, a fixed resource type identification, a case-specific entryUUID identifier, and the Patient Identifier. The Patient Identifier is a parameter so that the patient ID can remain flexible, supporting Patient ID lookup, Cross-References, Master Patient Indexes, and partially specified values.

documentDossierSectionURL := http://<location>/net.ihe/DocumentDossier/<entryUUID>?PatientID=<PatientID>
To create a new object one uses the POST method against the root, without an entryUUID. This allows the service to create a unique UUID that will identify this object from then on. To retrieve a document's Dossier, you simply use the GET method on the root URL with the entryUUID filled out. To retrieve the document itself is a similar GET method on a similar root URL for the Document. Yes, I am being short in my description because the Profile has the details, and this is just an introductory bloginar.
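As a sketch, composing the Get Document Dossier URL from the pattern above. The host name is invented, and a real client would also layer in TLS and authentication as described in the Security Considerations below.

```python
from urllib.parse import quote, urlencode

BASE = "http://hie.example.org/net.ihe"  # invented service location

def dossier_url(entry_uuid, patient_id):
    """Compose the Get Document Dossier URL per the pattern above,
    with the PatientID carried as a query parameter."""
    return (BASE + "/DocumentDossier/" + quote(entry_uuid)
            + "?" + urlencode({"PatientID": patient_id}))

url = dossier_url("urn:uuid:14a9fdec-0af4-45bb-adf2-d752b49bcc7d",
                  "144ba3c4aad24e9^^^&1.3.6.1.4.1.21367.2005.3.7&ISO")
# fetching would then be: urllib.request.urlopen(url).read()
```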

To discover resources there are search methods that generally follow the XDS FindDocuments, FindSubmissionSets, and FindFolders queries, all combined. Essentially the client can request that any of the parameters found in all of these be matched in an OR relationship. The result returned can be a list of DocumentDossiers in JSON or Atom format, thus allowing for a RESTful subscription model.
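A sketch of such a find request: the query parameters OR together, and the Accept header would select JSON or Atom results. The endpoint host and the exact parameter spellings here are illustrative assumptions, not quoted from the profile.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Any matching parameter qualifies a result (OR semantics per the profile)
params = {
    "PatientID": "144ba3c4aad24e9^^^&1.3.6.1.4.1.21367.2005.3.7&ISO",
    "classCode": "34133-9",
}
query = "http://hie.example.org/net.ihe/DocumentDossier?" + urlencode(params)

# The Accept header chooses between the JSON list and the Atom feed
req_json = Request(query, headers={"Accept": "application/json"})
req_atom = Request(query, headers={"Accept": "application/atom+xml"})
```

The Atom form is what enables the feed-style, RESTful subscription model mentioned above.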

Security Considerations
There are many security and privacy concerns with mobile devices, simply because they are harder to physically control. Many common information technology uses of HTTP, including the RESTful pattern, access far less sensitive information than health documents. These factors present an especially difficult challenge for the security model. It is recommended that application developers utilize a Risk Assessment in the design of the applications, and that the operational environment utilize a Risk Assessment in its design and deployment.

There are many reasonable methods of securing the interoperability transactions. These security models can be layered in without modifying the characteristics of the MHD profile transactions. The use of TLS is encouraged, specifically the use of the ATNA profile. User authentication on mobile devices is typically handled by a more lightweight authentication system such as HTTP Authentication, OAuth, or OpenID Connect. IHE does have a good set of profiles for the use of Enterprise User Authentication (EUA) on HTTP-based devices, with bridging to Cross-Enterprise User Assertion (XUA) for the backend. In all of these cases the network communication security, and user authentication are layered in at the HTTP transport layer thus do not modify the interoperability characteristics defined in the MHD profile.

The Security Audit logging (e.g., ATNA) is recommended. Support for ATNA-based audit logging on the mobile health device may be beyond the ability of this constrained environment. This would mean that the operational environment must choose how to mitigate the risk of relying only on the service side audit logging.

The Resource URL pattern defined in this profile does include the Patient ID as a mandatory argument. The advantage of this is to place clear distinction of the patient identity on each transaction, thus enabling strong patient-centric privacy and security controls. This URL pattern does present a risk when using typical web server audit logging of URL requests, and browser history. In both of these cases the URL with the patient identity is clearly visible. These risks need to be mitigated in system or operational design.

Relationship to other RESTful content standards and Security definitions is shown graphically below. The IHE MHD profile is just one effort underway right now to define RESTful content access methods. Each of these methods is being carefully coordinated so that they respect the space that the other standards fill while adding their own value. Each of these requires that the Security layer can be provided independently; for this we are looking toward and working with efforts like the FHA/S&I Framework RHEx.

More Information
I have said enough for now. The source presentation is published and accessible; it has some more details, but the profile is the right place to go.

Sunday, September 9, 2012

Meaningful Use Stage 2 - Transports Clarified

I got the answer to the question I have been hoping to ask. The question I asked is: What does ONC think they have specified with the three transports, with the options of (a)+(b) or (b)+(c)? For more detail on how this question comes about see my blog articles: Meaningful Use Stage 2: Transports, Minimal Metadata, and Karen's Cross or just Minimal Metadata. Essentially I wanted to ask the question so that I could understand the desired state. Once I know the desired state, I can help get clarification.

The good news is that the ONC goal is very reasonable and very progressive. The goal is to recognize that basic Direct transport is a good start, but that it is insufficient to support more complex and robust workflows. They recognize that there are many Exchanges using XDR/XDS/XCA, including CCC, Connecticut, Texas, and NwHIN-Exchange. Thus they want to encourage, by naming specific options, EHR vendors to reach beyond the minimal e-mail transport down the pathway of a more robust interaction. I speak of this stepping-stone approach in a couple of blog posts, so I am happy about this: Stepping stone off of FAX to Secure-Email, What is the benefit of an HIE, and HIE using IHE.

I am not going to fully define Karen's Cross, but the whole Karen's Cross specification recognizes that although one might be able to take an inbound Direct message that is not in an XDM content package and invent hard-coded metadata for the XDR outbound link, if the inbound Direct message comes in with XDM content packaging then it comes in with exactly the metadata that the XDR outbound transaction needs. Note that an inbound XDR that needs to be converted to Direct is easy: the XDR content is just encapsulated in XDM and sent over Direct.

Which does remind everyone that Direct requires that you support receiving XDM content; which minimally means you have the ability to read a ZIP file and display the INDEX.HTM file that will be at the root of the ZIP file. Really easy, and almost hard to get wrong.
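To show just how minimal that requirement is, here is a sketch in Python using only the standard library. The toy package contents are invented for the demonstration; only the "ZIP with INDEX.HTM at the root" shape comes from the spec.

```python
import io
import zipfile

def read_xdm_index(zip_bytes: bytes) -> str:
    """Return the INDEX.HTM found at the root of an XDM ZIP package."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as z:
        for name in z.namelist():
            # Root entries have no "/" in their path; match INDEX.HTM case-insensitively
            if "/" not in name and name.upper() == "INDEX.HTM":
                return z.read(name).decode("utf-8", errors="replace")
    raise FileNotFoundError("No INDEX.HTM at the root of the XDM package")

# Build a toy XDM-like package in memory to demonstrate
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("INDEX.HTM", "<html><body>Submission contents</body></html>")
    z.writestr("IHE_XDM/SUBSET01/DOC0001.XML", "<ClinicalDocument/>")

print(read_xdm_index(buf.getvalue()))
```

A receiving system that can do this much can at least render the human-readable index; everything beyond that is incremental.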

So what ONC wants to encourage is the sending of Direct messages with the XDM content packaging. This is a part of the Direct specification, but is in there with conditions that make it only required IF your sending system has the ability to send using the XDM content packaging. So, ONC came up with the first option [(a)+(b)] to encourage the use of XDM content packaging with the Direct specification.

The third option - what they call (b)+(c) - is trying to encourage the use of XDR with the specific secure SOAP stack. They could have done this with simply XDR+XUA+ATNA, because that is what the interoperability specification turns out to equal. I would say that the Secure SOAP Transport -- (c) -- is indeed more constrained than ATNA and XUA, as it forces only TLS and has some vocabulary bindings. The advantage of including (c) as a standalone transport is that it sets the stage for future transports such as a secure HTTP/REST (e.g. RHEx); and by being just a secure SOAP stack it encourages experimentation with XCA/XCPD.

So we now know that the testing for (a)+(b) would be testing the Direct Transport using XDM content packaging; and the testing for (b)+(c) would be testing XDR with mutually-authenticated TLS and XUA user assertions. This is a good path that I can help ONC get clarified. The diagram above is originally from EHRA and has been used by IHE. I have co-written the IHE white paper on this topic and presented it along with Karen Witting. This is a fantastic result; now comes the hard part of getting an ONC-written FAQ, and test plans that hit the right things.

Friday, September 7, 2012

MU2 Wave 1 of Draft Test Procedures -- Integrity Problem

The first wave of Draft Test procedures is out: 
For more information, and the Wave One 2014 Edition draft Test Procedures, please visit http://www.healthit.gov/policy-researchers-implementers/2014-edition-draft-test-procedures
This is an opportunity to see if your interpretation of the Final Meaningful Use Stage 2 rules matches the Testers' interpretation. I looked at three of the test procedures that fall into my scope.
  • §170.314(d)(5)  Automatic log-off  Test Procedure
    • I think they correctly changed this to reflect the various ways that are used and are appropriate. It will be interesting to see specific types of EHR technology tested against this procedure; it is possible someone might still be confused.
  • §170.314(d)(8) Integrity Test Procedure
    • I think they are way off base, or too aggressively focused on the detail and losing sight of the overall goal. They continue to have the language in their test procedure that caused me to write my most popular article of all time, "Meaningful Use Encryption - passing the tests". I am not happy about that article, but it gets to the point. The requirement for Integrity, just like the requirement for Encryption, is there to assure that wherever Integrity or Encryption technologies are utilized, legitimate and approved algorithms are used. Quite often this is next to impossible to prove. The best place to prove it is where interoperability protocols are used: the Direct Project and the Secure SOAP Transport have these algorithms built in, so testing these for interoperability will have the effect of testing the Integrity and Encryption lines. Thus a standalone procedure should focus ONLY on uses of Hashing or Encryption other than those specified in the Transports section -- and nothing but Transports is required. This procedure should therefore start with "The EHR vendor shall identify how they utilize Integrity other than through defined Transports," and then focus the testing on those uses. This is not going to be easy, as the place where this is going to happen is transparently in Databases and Data-At-Rest; thus there is nothing that the EHR vendor can possibly show. I think this item should be… not tested outside of its integration as part of Transport.
  • §170.314(d)(9) Optional—accounting of disclosures Test Procedure
    • I think they got this one in good shape too. It is now clear that the interpretation of this optional criteria is a User Interface where the user can indicate that a "Disclosure" has happened. Thus this is not any automated accounting, but it does provide a way to identify disclosures using readily available technology at the fingertips of those that might be involved in a legitimate disclosure. The test procedure seems reasonable as well.

Thursday, September 6, 2012

On The Meaningful Use Stage 2 Rules

I have written many articles on the US Centric - Meaningful Use Stage 2. Some are very deep analysis of specific problems. It may seem that all I have to say is negative and nitpicky things. I want to make it clear that I am very happy with how Stage 2 came out. I just find that others have done a fantastic job of outlining the rules, so I have focused only on deep analysis in my area of expertise where I see potential confusion.

First, the ONC and CMS summaries are great stuff. They do leave out details that end up being the subject of my blog articles, but they are really good summaries. Keith has clarified much as well. I have seen plenty of other summaries; I wish I had a catalog of them. There is just no good reason for me to pile on.

Meaningful Use:
More Privacy/Security Topics are available on my blog.

Tuesday, September 4, 2012

Meaningful Use Stage 2 : Transports

There are two perspectives:
1) The Transport standard is clear. It is Direct. Everything else said in the regulation about transports is optional and therefore meaningless.
2) There is still the problem of the (b) Transport, which pulls in (b)+(c) and also (a)+(b)

Simple View – Minimum work to get Certified
There is no question what is minimally required, it is the Direct Project. So, test to this and be done. This is the easiest way to get through certification and get your CEHRT stamp.

Note that just because the Direct Project is the only required Transport, does not mean that it is the only Transport that can be used. I heard Steve Posnack reiterate this on many of the webinars. The CMS rule doesn't differentiate between transports, it is more concerned with content and outcomes (AS IT SHOULD BE).

The Direct Project is a pointed, simple message transport, but not necessarily without issues; here are a bunch of my blog articles:
Problems in Extended Transports
Let's just say that you don't like doing the minimum work for Certification, or that you want to go above-and-beyond, or that your EHR is so far from supporting Direct that you need alternatives. On the last point, sorry, but you must support Direct. The problem with the Transports other than Direct is, simply, confusion.

I cover these problems in detail in Karen's Cross or just Minimal Metadata. The diagram at the right is informative; it is Karen's Cross. The GREEN arrows are the (a) transport, Direct. The BLACK arrows are the (c) transport, Secure SOAP. The RED arrows are the (b) transport, a proxy service that converts (a) to (c) and (c) to (a). The (b) transport is not a transport and thus can't be called upon as a transport. Add to that that the MU2 specification always grouped the (b) transport with either (a) or (c), and one has something that simply doesn't compute. I guessed that ONC really just wants the Minimal Metadata, but am not sure. I think they are actually asking for CEHRT to somehow certify that they can work in an operational environment where someone else provides the (b) proxy service. The use of the (b) proxy service is an operational aspect that should have been placed upon the CMS side.

Besides the Karen's Cross or just Minimal Metadata issue, one can just look at the (c) transport and treat it as I outlined in Meaningful Use Stage 2 -- 170.202 Transport. Essentially the Secure SOAP stack is simply the lower half of all of the SOAP-based profiles found in IHE. ONC has chosen to chop horizontally, where IHE builds vertically. This is shown in the eye chart to the left, which is not intended to be readable. Either way you slice it, you have a secure SOAP transport stack that is carrying some SOAP content.

Thus it matters little if you use any of the Data Sharing profiles from IHE (XDR, XDS, XCA) or the Patient Management profiles from IHE (XCPD, PIXv3, PDQv3). What does matter is that you MUST be using ATNA secure communications, and XUA user assertions. YES the IHE profiles are the parent of the NwHIN-Exchange specification and are compatible. It is not that I work hard to propagate my view of the world, I work hard to keep divergence from happening when it is not necessary. I am very willing to entertain necessary divergence, and have lots of evidence that I support Direct.

But what about Encryption and Hashing?
The MU2 requirement gets specific about Encryption and Hashing, but don’t worry.
§170.210(f) Encryption and hashing of electronic health information. Any encryption and hashing algorithm identified by the National Institute of Standards and Technology (NIST) as an approved security function in Annex A of the FIPS Publication 140-2 (incorporated by reference in § 170.299).
This Encryption and Hashing requirement is important but not hard to meet. The important part is that proprietary encryption is unacceptable and old encryption algorithms are unacceptable; modern algorithms (AES for encryption, the SHA family for hashing) are acceptable. The use of FIPS Publication 140-2 allows HHS and CMS to benefit from the intelligence community's assessment of cryptographic algorithms, thus moving up automatically when the intelligence community does. The use of Annex A rather than the core FIPS 140-2 specification allows for relaxed rules around certification; this doesn't change the technical aspect, but it does greatly reduce the invasive code-inspection requirements of actual FIPS certification. Annex A is very short, 6 pages long. The summary: Encryption: AES or 3DES; Hashing: SHA-1 or higher.
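In practice, meeting the hashing side of this requirement is often one line of standard-library code; the work is in refusing the unapproved algorithms. A minimal sketch in Python, where the `APPROVED_HASHES` set is my own shorthand for the Annex A hashing summary above (not an official list from NIST):

```python
import hashlib

# Shorthand for "SHA-1 or higher" per the Annex A summary above (illustrative only)
APPROVED_HASHES = {"sha1", "sha256", "sha384", "sha512"}

def integrity_digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hash data with an approved algorithm; reject others (e.g. MD5)."""
    if algorithm.lower() not in APPROVED_HASHES:
        raise ValueError(f"{algorithm} is not an approved hash algorithm")
    return hashlib.new(algorithm, data).hexdigest()

print(integrity_digest(b"clinical document bytes"))  # SHA-256 hex digest
# integrity_digest(b"...", "md5") would raise ValueError
```

The rejection branch is the point: a product that quietly falls back to MD5 or a proprietary scheme is exactly what this criterion is meant to rule out.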

All of the Transports fully include security as part of the specification, so they are by definition already compliant with the Encryption and Hashing requirements.
  • Direct – S/MIME authenticated using X.509 Certificates/Private Keys, Encrypted with AES128 or AES256, and Hashed with SHA1 or SHA256.
  • Secure SOAP – secured with Mutual-Authenticated-TLS using X.509 Certificates/Private Keys, Encrypted with AES, and hashed with HMAC-SHA1; for more details see: Moving to SHA256 with TLS requires an upgrade
  • Secure SOAP – End-to-End - This is in IHE ATNA, but not in MU2 – There is an option to use WS-Security end-to-end security, but this requires also an update of common SOAP stacks and is administratively harder to achieve. Risk Assessment needs to drive the cost benefit.
  • Secure HL7 v2 – There is no mention of this dirty little secret, but all of those HL7 v2 requirements in the regulation would also need to meet the Encryption and Hashing requirement. The solution here is to use the Mutual-Authenticated-TLS as is used in the Secure SOAP stack. Many toolkits support this, but not all of them. At IHE Connectathon we run into people who have forgotten to test this, they usually get going quickly.
  • Patient Engagement - Secure Messaging – There is no guidance on what Secure Messaging is, and I think this is the right solution. But whatever is used for Secure Messaging must also meet the § 170.210(f) requirements. Given that the requirements are just focused on Encryption and Hashing; this is easily met with a typical HTTPS web-portal.
  • Data at Rest – End-user device encryption. -- Okay, this isn't a transport, but whatever solution is used to protect data at rest must also meet the Encryption and Hashing requirements. A good commercial solution, or even the solutions built into operating systems, covers this. What they don’t cover is KEY MANAGEMENT. If you don’t protect the key, then it doesn't matter how well the data is encrypted.
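For the mutually-authenticated TLS that shows up in several of the items above (Secure SOAP, Secure HL7 v2), here is a minimal server-side configuration sketch using Python's standard `ssl` module. The certificate and CA file paths are hypothetical placeholders; they are commented out so the sketch runs without real key material.

```python
import ssl

# Server-side context for mutually-authenticated TLS
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED            # mutual auth: client must present a cert
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
# ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # hypothetical paths
# ctx.load_verify_locations("trusted-partner-cas.pem")      # hypothetical path

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The `verify_mode = CERT_REQUIRED` line is the one toolkits most often leave at its default, which is exactly the "forgot to test this" failure we see at Connectathon: the channel encrypts, but the server never demands the client's certificate.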
Summary:
The transport to certify is clear: just get Direct done somehow. If you can’t do Direct, then you are going to struggle with trying to figure out what is going to be required of you. The Test Tools will likely answer this eventually; there certainly is nothing clear today to start developing toward. Stick with XDR, which is a subset of XDS. This solution is highly reusable.

Four years of blogging

So what? I was pushed into blogging because I found that I was explaining things over and over. The sad part is that I still tell the same stories over and over, hopefully to new people. I hope that each time I add something and truly am communicating and educating. I know that I am learning as I go, so I suspect the articles get better.

In September of 2009 I posted these articles. I am not excited that this list could be produced today; I hope that I am making progress:
I have a "Topics" button on my blog that contains pointers to the most useful articles from my blog, arranged by topic. I keep that up-to-date. Here it is as of today:

Security/Privacy Bloginar: IHE - Privacy and Security Profiles - Introduction

User Identity and Authentication
Directories
Patient Privacy controls (aka Consent, Authorization, Data Segmentation)
Access Control (Consent enforcement)
Audit Control
Secure Communications
Signature
De-Identification
Risk Assessment/Management
Document Management (HIE)
Patient Identity
The Direct Project


Other