Wednesday, June 27, 2012

Leap Second, yes it has security and privacy relevance


There is a leap second on June 30th. The security relevance is: how will your software deal with this leap second? Will events that happened during the extra second be properly accounted for? Will the seconds count show 60, or will 59 show up for 2 seconds? -- the 'accountability' side of Security.

Will your timers handle a request to delay by 60 seconds when the minute actually contains 61? Will a deadlock occur? -- the 'availability' side of Security.

Will your software adjust the clock at all? Or will it end up permanently behind by a second -- likely many seconds, since we have accumulated almost half a minute of leap seconds. This is what the GPS system does, rather than deal with the accounting mess.
Of course, on the other side of GMT they see it differently, and businesses care too. A good quality implementation of NTP will simply smooth the second out, so that there never is a leap second at all, but rather a bunch of leap microseconds. But not all time-sync implementations are that advanced.
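That smoothing idea can be sketched numerically. Here is a minimal Python sketch of a linear "leap smear," where the extra second is absorbed gradually over a fixed window; the 20-hour window and the linear ramp are my assumptions for illustration, not part of any standard.

```python
# Instead of inserting a whole leap second at once, spread it as tiny
# offsets over a window, so no displayed second ever repeats or reads 60.
SMEAR_WINDOW_S = 20 * 3600  # assumed 20-hour smear window

def smear_offset(seconds_into_window: float) -> float:
    """Portion of the extra leap second applied so far (0.0 .. 1.0),
    added to the displayed clock."""
    frac = seconds_into_window / SMEAR_WINDOW_S
    return min(max(frac, 0.0), 1.0)

# Halfway through the window, half of the leap second has been absorbed.
assert abs(smear_offset(10 * 3600) - 0.5) < 1e-9
```

Each individual adjustment is microseconds in size, which is what keeps timers and event accounting well-behaved.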
----------------------------------
Update: July 2, 2012 -- Fantastic analysis done by Rob Horn. Not just what the problem was, but why we find ourselves in this strange space where this matters yet doesn't really matter.

Monday, June 25, 2012

Constructive comments on Data Segmentation for Privacy

Comments on the first draft of the S&I Framework Data Segmentation for Privacy Reference Implementation Guide are due today. As usual I left this exercise to the very end. I spent 10 hours on Sunday reviewing it in fine detail, providing constructive comments on everything from simple typos to fundamental mistakes. I came up with 144 comments, 6 of which I consider major show-stoppers. These 6 are not hard to fix, but it is critical that they be fixed.

I have marked up the PDF and produced a 22 page extract of my 144 individual and detailed comments.

The 6 items can actually be summarized into THREE.

Use XD* family as the Document Level control, not CDA:
The draft today proposes that even for whole document control one must use CDA. This means that if you have a DICOM object, PDF document, text document, CCR, Blue-Button, or some form of workflow (such as XDW); that this object MUST be encapsulated inside of a CDA document. This is fundamentally wasteful, as the exact same functionality can be achieved simply through the XD* family of transactions, using the rich XD* metadata. Indeed this seems to be the message everywhere except for specific sections of chapter 3.

Not only is it unnecessary to encapsulate everything in CDA, but you still MUST support XD* metadata as an external embodiment of the metadata. Let me explain this another way: if the controls live only inside the CDA, then you MUST open up the CDA document only to discover that you should NOT have. This is why there are security layers built into the XD* family of profiles that place the minimal but important metadata in the transaction, where the access control service can prevent the opening of the CDA document without first invoking the proper controls.

The XD* mechanism is needed to define the whole document level control, and even if the CDA document contains section or entry controls, the XD* mechanism is still needed to convey the high-water-mark (the highest confidentiality code contained within the content).
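As an illustration of the high-water-mark rule, here is a minimal Python sketch. The ordering of the HL7 Confidentiality codes (U < L < M < N < R < V) comes from the code system itself, but the dictionary and function names are mine:

```python
# Rank HL7 Confidentiality codes from least to most restrictive.
CONF_ORDER = {"U": 0, "L": 1, "M": 2, "N": 3, "R": 4, "V": 5}

def high_water_mark(entry_codes):
    """Return the most restrictive confidentialityCode found in the content,
    which becomes the document-level metadata value."""
    return max(entry_codes, key=lambda code: CONF_ORDER[code])

# A document with one Restricted entry is Restricted as a whole.
assert high_water_mark(["N", "R", "N"]) == "R"
```

The document-level code is what the access control service evaluates before the content is ever opened.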

Thus we MUST define the XD* family mechanism anyway, so the additional functionality inside the CDA for document level control is free, and we enable entry level control.
  • Direct – shall use the Direct specified XDM attachment to carry document level controlling metadata
  • Exchange – shall use the XDR and XCA mechanisms to carry document level controlling metadata
  • HIE – shall use the XDS mechanisms to carry document level controlling metadata

There is concern within the community that the HIT Standards committee had recommended the CDA Header as the proper metadata, and my recommendation is consistent with this. Not in letter, as I disagree that the CDA header should be the primary mechanism, but in spirit, as the XD* metadata is purpose-specific metadata that is highly influenced by the CDA header. The difference is that the XD* metadata is true metadata, whereas the CDA header is documentation.

Sensitivity coding is for Policy, not for communications
There are statements in Section 3.7.5 around sensitivity coding that are misguided and wrong. We have provided expert testimony in both healthcare and military-intelligence to express why this is a bad idea. It is true that the HIT Standards committee did include a recommendation along these lines, but they were misguided and wrong too; they didn't have the benefit of the expert testimony that we had. Therefore we should inform the HIT Standards committee that we have learned information that they didn't know. We must not regress and ignore decades of advancement.

Sensitivity codes are needed: they are needed in privacy policy rules as tags that identify which rules apply to specific types of data, and what types of data should be handled differently. They can even be used inside of a system in proprietary and non-exposed ways (inside the black box). But sensitivity codes are not appropriate as metadata on clinical content. The use of confidentialityCodes, which represent larger chunks, is the appropriate and sufficient metadata.
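To illustrate the distinction, a hypothetical Python sketch: sensitivity tags drive rules inside the policy engine (the "black box"), while only the coarser confidentialityCode is exposed as metadata. All tag names and the mapping are made up for illustration:

```python
# Internal policy table: sensitivity tags -> required handling.
# This table stays inside the policy engine and is never sent on the wire.
SENSITIVITY_RULES = {
    "SUBSTANCE_ABUSE": "R",  # handled as Restricted
    "BEHAVIORAL": "R",
    "GENERAL": "N",          # handled as Normal
}

def confidentiality_for(sensitivities):
    """Map internal sensitivity tags to the exposed confidentialityCode."""
    if any(SENSITIVITY_RULES.get(tag) == "R" for tag in sensitivities):
        return "R"
    return "N"

# Only "R" leaves the black box as metadata; the reason why stays internal.
assert confidentiality_for(["GENERAL", "BEHAVIORAL"]) == "R"
```

The receiving system sees only "R" and applies restricted handling; it never learns which sensitivity triggered it.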

Entry level functionality is NON-Standard and should be identified as a gap for HL7 CDA R3
The mechanism for providing entry level tagging in Section 3.6 is not standards based. To promote this method will forever force all systems to implement this non-standards based approach. It is true that the method leverages extensions built into the standard, but it describes a mechanism that no tooling supports today, and it would be very difficult to get tooling to support it. Further, it leaves many things completely unspecified, as there are no underlying standards.

We should identify this as a gap for HL7 CDA R3 to resolve.
  
Conclusion
I was asked, when I tweeted my progress Sunday afternoon, if I was coming up with a typical number of comments for a review. At the time I thought that this was an excessive amount of comments, but looking back at them I must say that the text is in good shape. The majority of my comments are simple constructive change requests. Even the 6 (or 3) big issues are very easy to resolve. I don't think that my recommendations for resolving these big issues are controversial to anyone other than the politically connected. They are well founded in experience and international standards efforts. Yes, I am passionate that they be fixed, as I am convinced that without these fixes the result will be unusable. I have spent too much time on this project to have it fail due to politics and whim.

Monday, June 11, 2012

What User Authentication to use?

This question is the first topic for the new RESTful-Health-Exchange (RHEx) workgroup that is starting under the S&I Framework 'affiliation'. I don't know what it means to be 'affiliated' with the S&I Framework, but it is clear from the way it is listed in a different place that it is not like other workgroups. One thing is that they seem to be using Google Groups and doing more discussions in e-mail. I think this is a plus, as it helps the group take care of simple discussions through e-mail. It also has a cool acronym.

Specifically the question I responded to was:
"What is the reasoning behind using OAuth and/or OpenID instead of PKI/certificates? While PKI is most certainly complex, it has proven to be much stronger technologically than both OAuth and OpenID."
There are many different solutions, proving the space is rich with imperfect solutions. Each potential solution has strengths and weaknesses.

PKI (Public Key Infrastructure) is the workhorse of Security technology, made up primarily of X.509 certificates and the infrastructure used to prove that a certificate should be trusted. PKI is actually at the basis of most security technology, and thus almost everything can claim to be using PKI. What each of the other solutions does is try to move the hard part of PKI -- the management of certificates -- further and further away. To actually do PKI is very hard work, not because the technology is hard, but because the operational and management aspects are hard. PKI is the center of the Direct Project trust infrastructure, and it works really well for e-mail. But PKI for end-user devices is much too hard for consumers to manage. See Healthcare use of X.509 and PKI is trust worthy when managed and SSL is not broken, Browser based PKI is.

SAML (Security Assertion Mark-up Language) is a wonderful technology for organization-to-organization user assertions. It supports more dynamic content, and is thus better able to capture the current security context rather than just identity. It can be noted that PKI tried to do this with attribute certificates; SAML is simpler to deal with and has advanced beyond what attribute certificates could do. BUT, SAML is really heavyweight for internet consumers to use, or even for some organizational use on the internet. The IHE XUA profile is a profile of SAML identity assertions.

OpenID is similar to SAML but much lighter weight, including only the capabilities typically needed for consumer authentication to web services. It can't quite do everything that SAML can, and it is harder to fully support organization-to-organization federation of current transactional context. OpenID is very easy to use, both as a consumer and as a service that relies on it. OpenID is very well positioned to support mobile devices, and internet consumers on fully capable home machines. It is at the core of many common Internet web services.

OAuth is unique in that it is used to delegate the authority of one identity to a service. This is very helpful for the types of service mashups that mobile, tablet, and Web 2.0+ envision: authorizing one internet-facing service to act as if it was YOU when interacting with a different internet-facing service. You see this often today when hitting a new service and they ask you if you want to use your Facebook or Google account rather than create a local account. (Some of these are actually using OpenID first, then OAuth.)
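For a concrete feel of the client side, here is a minimal Python sketch of an OAuth-protected call once a bearer access token has been obtained; the endpoint URL and token value are made up, and the request is only constructed, not sent:

```python
import urllib.request

# Token would come out of the OAuth authorization flow; this value is fake.
token = "example-access-token"

# Build a request to a hypothetical record endpoint, presenting the token
# in the standard Bearer Authorization header.
req = urllib.request.Request(
    "https://phr.example.org/records/123",
    headers={"Authorization": f"Bearer {token}"},
)

assert req.get_header("Authorization") == "Bearer example-access-token"
```

The service behind that URL never sees the user's password; it sees only a token it can validate and revoke, which is the whole point of the delegation.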

I prefer SAML, but do agree that OAuth is magical. Each of these should be leveraged where they best fit, but none of them fit perfectly. Note that WS-Trust (and other mechanisms) can convert any security token into another token type when you use a bridging service that lives in both domains. What this means is that we don't have to choose ONE technology. We can choose OAuth for applications and OpenID for consumers (Patients), while using SAML for organizational individuals (Providers, Clerks, Billing, etc).

The fantastic thing is that Healthcare is not in this quandary alone; all the industries using the Internet today are in this same position. This means that there are already solutions that offer ALL THE ABOVE. One example that I have looked at is: Hybrid Auth -- http://hybridauth.sourceforge.net/index.html. This means that we don't even need to choose when developing a RESTful interface, service, or application. We can leverage this open-source solution; there is no need to re-invent. In this way we can focus on what the healthcare industry needs to focus on: leveraging this technology.

The hard part is not choosing between these technologies. The hard part is the Policy and Operational choices of identity authorities (ask the Direct Project about this; even when the technology is chosen, one will still struggle with trust).

Tuesday, June 5, 2012

IHE ITI mHealth Profile - Public Comment

The IHE IT Infrastructure domain has published one new supplement for Public Comment. The supplement is formally “Mobile access to Health Documents (MHD)”, but is often referred to as the mHealth profile.

The Mobile access to Health Documents (MHD) profile defines a simplified RESTful interface to an XDS-like environment. It defines transactions to a) submit a new document from the mobile device to a document receiver, b) get the metadata for an identified document, c) find document entries containing metadata based on query parameters, and d) retrieve a copy of a specific document.

These transactions leverage the document content and format agnostic metadata concepts from XDS, but simplify them for access by mobile devices. The MHD profile does not replace XDS. It can be used to allow mobile devices constrained access to an XDS health information exchange. The following figure shows one possible way to implement MHD with a document sharing environment (that may, but is not necessarily, XDS based). This implementation choice is not mandatory and other architectures will be implemented.


Figure 1: Mobile access to a Document Sharing environment.

The XDS profile is specifically designed to support the needs of Cross-Enterprise security, privacy, interoperability, and includes characteristics to support this level of policy and operational needs. The MHD profile has simplified the interactions in ways that are more consistent with a single policy domain use. The MHD transactions are not specifically tied to XDS, and some of the system implementations envisioned would interface directly to an organizational EHR, or a multi-national PHR.

The following lists a few examples of the environments which might choose to use the MHD profile instead of the XDS profile. The MHD profile supports a broad set of the XDS use cases and functionality while keeping the technology as simple as possible.

  • Medical devices such as those targeted by the Patient Care Devices (PCD) domain or Continua organization, submitting data in the form of documents.
  • Kiosks used by patients in hospital registration departments, where it is anticipated that a hospital staff member will review, edit, and approve the document before it is allowed into the hospital system. 
  • PHR publishing into a staging area for subsequent import into an EHR or HIE.
  • Patient or provider application that is configured to securely connect to a PHR in order to submit a medical history document.
  • Electronic measurement device participating in an XDW workflow and pulling medical history documents from an HIE.
  • A General Practitioner physician’s office with minimal IT capabilities using a mobile application to connect to an HIE or EHR.

Technical Details

The technology choices are simple: HTTP (using a RESTful pattern) and JSON for encoding.

RESTful Fundamentals

In order to fit into a RESTful model, we needed to determine what the “Resource” was that would be operated on. We naturally first thought about the Document, but eventually realized that the Resource that is fundamental to XDS is the DocumentEntry, the metadata about the document. Once we determined that this is the fundamental resource, the profile fell very quickly into place.
  • The HTTP POST operation is used to create a new instance of a Document Entry (metadata). 
  • The HTTP GET operation is used to get a copy of an instance of a Document Entry (metadata).
Thus we needed to define the URL in a way that works with these operations. We looked at hData and found a general pattern with the patientID followed by types of objects. IHE already has a unique ID for a DocumentEntry, so the entryUUID was a natural fit. Although we haven’t folded hData into the specification, it is likely to happen at the Trial Implementation stage, simply because hData brings along already-written foundational concepts.
http://<location>/<patientID>/DocumentEntry/<entryUUID>/
This works great for DocumentEntry as the resource, but we also need to be able to pull the document itself. At this point it became clear how to modify our URL to return the Document itself.
http://<location>/<patientID>/Document/<entryUUID>/
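Putting the two patterns together, a small Python sketch of the URL construction; the host name, patient ID, and function names are illustrative, not from the profile text:

```python
# Build MHD resource URLs following the <location>/<patientID>/... pattern.
BASE = "http://mhd.example.org"

def document_entry_url(patient_id: str, entry_uuid: str) -> str:
    """URL of the DocumentEntry (metadata) resource."""
    return f"{BASE}/{patient_id}/DocumentEntry/{entry_uuid}/"

def document_url(patient_id: str, entry_uuid: str) -> str:
    """URL of the document content itself, same identifiers."""
    return f"{BASE}/{patient_id}/Document/{entry_uuid}/"

uuid = "urn:uuid:14a9fdec-0af4-45bb-adf2-d752b49bcc7d"
assert "/DocumentEntry/" in document_entry_url("pat123", uuid)
assert "/Document/" in document_url("pat123", uuid)
```

Only the resource-type segment changes between the two, which is what makes the pattern easy for a mobile client to implement.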

Not RESTful

The last bit of work is NOT RESTful, as it doesn’t really follow the same pattern. It is HTTP based, and it is simple. We needed to bring in an XDS Stored Query, specifically FindDocuments. This was brought in one way, but it might change in Trial Implementation. I propose that this is just a special case of the DocumentEntry URL, without an entryUUID and with parameters. The result would not be a single DocumentEntry, but I think that difference is minor.
http://<location>/<PatientID>/FindDocumentEntries?<parameters>
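A sketch of building such a query URL in Python; the parameter names here are illustrative placeholders, not the actual FindDocuments parameter names:

```python
from urllib.parse import urlencode

# Hypothetical query parameters; real ones would mirror FindDocuments.
params = {"status": "Approved", "creationTimeFrom": "20120101"}

url = "http://mhd.example.org/pat123/FindDocumentEntries?" + urlencode(params)

assert "status=Approved" in url
```

Percent-encoding the parameters matters here, since XDS values routinely contain characters (like `^` and `&` in identifiers) that are not URL-safe.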

DocumentEntry encoded in JSON

We gained much of our simplification by making the XDS metadata flat, and we chose JSON encoding as it is favored by many in the mobile space. Plus, JSON is different from XML, and thus it will be easy to discuss JSON encoding in the context of the MHD profile while XML continues to be the domain of XD*.

Here is an example of a DocumentEntry encoded in JSON (I am sure there are mistakes given that I hand coded it)
{ "patientID": "144ba3c4aad24e9^^^&1.3.6.1.4.1.21367.2005.3.7&ISO",
  "classCode": {"code": "34133-9", "codingScheme": "2.16.840.1.113883.6.1", "codeName": "Summary of Episode Note"},
  "confidentialityCode": {"code": "N", "codingScheme": "2.16.840.1.113883.5.25", "codeName": "Normal sensitivity"},
  "formatCode": {"code": "urn:ihe:lab:xd-lab:2008", "codingScheme": "1.3.6.1.4.1.19376.1.2.3", "codeName": "XD-Lab"},
  "typeCode": {"code": "", "codingScheme": "", "codeName": ""},
  "author": {…},
  "practiceSettingCodes": {"code": "394802001", "codingScheme": "2.16.840.1.113883.6.96", "codeName": "General Medicine"},
  "title": "document title",
  "creationTime": "20061224",
  "hash": "e543712c0e10501972de13a5bfcbe826c49feb75",
  "size": "350",
  "languageCode": "en-us",
  "serviceStartTime": "200612230800",
  "serviceStopTime": "200612230900",
  "sourcePatientId": "89765a87b^^^&3.4.5&ISO",
  "mimeType": "text/xml",
  "uniqueId": "1.2009.0827.08.33.5074",
  "entryUUID": "urn:uuid:14a9fdec-0af4-45bb-adf2-d752b49bcc7d" }
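As a quick sanity check, a shortened DocumentEntry along these lines (with the elided author field omitted) round-trips through Python's json module; the field subset is my choice for brevity:

```python
import json

# A trimmed-down DocumentEntry, quoted and comma-separated as JSON requires.
entry = json.loads("""{
  "patientID": "144ba3c4aad24e9^^^&1.3.6.1.4.1.21367.2005.3.7&ISO",
  "confidentialityCode": {"code": "N",
                          "codingScheme": "2.16.840.1.113883.5.25",
                          "codeName": "Normal sensitivity"},
  "mimeType": "text/xml",
  "entryUUID": "urn:uuid:14a9fdec-0af4-45bb-adf2-d752b49bcc7d"
}""")

assert entry["confidentialityCode"]["code"] == "N"
```

Note that a parser enforces what hand-coding does not: every key quoted, every pair comma-separated, and plain ASCII quote characters throughout.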

OPEN Issues

As a Public Comment version, there are many issues that have come up during development that are not fully locked down. Most of them are due to the learning curve of the committee. Thus I really want constructive comments on the whole Profile, but specifically on these Open Issues. The open issues are described in far more detail in the document; briefly, they are:
  • Restricted “Create” to ONE document, with derived SubmissionSet
  • No access to SubmissionSet, Folders, and Associations
  • Patient ID is needed as part of URL
  • Bring in hData as framework and thus ATOM in GET path for multiple entries?
  • Conditional get is not supported due to the differences between the semantics of HTTP and XDS concepts of resource age.
  • Do we need more on Security, specifically Audit?
  • JSON use of anonymous objects or not?

How to Comment?

The IHE IT Infrastructure Technical Committee has published the supplement for public comment for the period from June 5 through July 5, 2012. Comments submitted by July 5, 2012 will be considered by the IT Infrastructure Technical Committee in developing the trial implementation version of the supplement. On the same web site are the instructions for submitting comments.

Updates:
I have covered Security in some past articles Securing RESTful services and Securing mHealth - the role of IHE profiles.

Monday, June 4, 2012

Introduction to IHE Connectathon and Projectathon

There is a nice video that explains IHE, Interoperability, Connectathon, and how Europe - epSOS -  is extending the Connectathon concept to a Projectathon. A projectathon tests your project specific configurations (vocabulary, document types, workflows, etc) in the context of the IHE profiles working together.  This video is well worth the four minutes.