Server processing of FHIR documents: Connectathon

In this post we’re going to think a bit about how the server might process a document that it receives. This is a massive topic, and certainly not one that can be covered in a single post – or by any one individual! Apart from anything else, there are lots of different possibilities – all legitimate in specific circumstances.

This is an important concept for FHIR – it doesn’t seek to drive any particular design or architecture – rather it attempts to support how data is moved around now, and how it may be done in the future. It’s up to individual implementations to decide the details, but using common ‘building blocks’.

At the highest level there are a couple of options:

  • The server can simply store the document, and make it available to consumers on request (subject to privacy concerns of course)
  • It can use the information within the document to update internal stores – eg add a new condition to the patient’s list of conditions.

And this opens another can of worms – how do these relate to the ‘Documents vs Messages’ debate? There’s a point of view that says if you want to update other stores then use a Message (or a REST interface for real time) and keep the Document simply as a summary at a point in time. However, the reality is that people are using documents (especially CDA documents) for local store update, so FHIR needs to support that – it is up to each implementation to be explicit about what the rules are when doing so.

With that out of the way, let’s get back to what we need for connectathon.

If we look at the description of scenario 2.1 of connectathon, it’s actually very explicit about what is expected: it requires that the server receives the document at the /mailbox endpoint and stores it as a ‘blob’, but that it should also construct a DocumentReference resource from the contents of the Composition resource that can be used for later querying.

Now, we’ve already discussed the DocumentReference resource in some detail during the posts on XDS, so you can refer to those discussions for more detail (there’s an XDS Category). Briefly, XDS provides a way for a server to maintain an ‘index’ (or registry) of documents that a consumer can search against, so it is not surprising to see DocumentReference being used for FHIR documents as well as CDA documents – and doubly nice that it has been designed to ‘play nice’ with XDS.

And just a brief discussion on document end points. The spec defines a number of options (and also points out that they are only recommendations – it’s up to the implementer to choose the best architecture for their requirements). This topic is worth another post at some point (and it is also one that gets active discussion on the implementers chat), but in brief:

  • /Document takes a whole bundle and stores it (presumably as a blob). It exposes search parameters that are the same as the Composition resource. The spec doesn’t talk about any other processing, but it would be quite feasible to use this endpoint in the same way as the /Mailbox endpoint (see below), but restricted to document processing.
  • /Composition is used to manage the Composition resource as any other resource. This would be useful when thinking about a ‘dynamic document’ – or as more of an ‘object graph’ than a blob – we’ll come back to this topic at a later time.
  • /Binary is the generic endpoint for binary blobs. You can store anything here and later retrieve it by its assigned ID, but there are no query parameters possible. When thinking about documents, this endpoint would generally be used in conjunction with a DocumentReference resource for searching – either created separately by the client, or automatically by the server (as we are going to do here, in fact).
  • / – ie the root of the server. This endpoint will simply treat the document as if it were a collection of resources and process them as a transaction. It’s not the best candidate for document processing as it (by default) doesn’t really understand the concept of documents.
  • /Mailbox. This endpoint processes Document and Message bundles, applying the business logic that is appropriate to each bundle. This is the one that is specified for connectathon – but the server should check that it is dealing with a document (by the bundle tag) before applying document processing.
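As a concrete illustration of that last check, here is a minimal sketch of how a server might decide whether an incoming bundle is a document before applying document processing. It assumes a JSON bundle whose first entry is the Composition (later FHIR versions also carry an explicit bundle type of "document") – the exact shape will depend on the FHIR version you target.

```python
def is_document_bundle(bundle: dict) -> bool:
    # A document bundle carries its Composition as the first entry;
    # later FHIR versions also mark the bundle with an explicit
    # type of "document".
    if bundle.get("type") == "document":
        return True
    entries = bundle.get("entry", [])
    return bool(entries) and \
        entries[0].get("resource", {}).get("resourceType") == "Composition"
```

A message bundle (whose first entry would be a MessageHeader) would fail this check and be routed to message processing instead.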

And just to emphasise: the spec does not specify what the processing of a document or message should be – it is up to the server and the ‘Affinity Domain’ to specify this. FHIR simply provides the ‘hooks’ and the capability to support the processing.

So, finally, to scenario 2.1 of connectathon 5.

The following is the process that the server will follow:

  1. Receive the document via a POST to the /mailbox endpoint
  2. Extract the identifier from the composition resource. Note that the identifier is assumed to be unique within the ‘affinity domain’ (to use the XDS concept) that the server is participating in.
  3. See if there is already a document stored with that identifier. If there is, then see the logic below for managing updates. We’ll assume that this is a new document as the scenario does not include updating (maybe another bonus point Lloyd?)
  4. Create and store a DocumentReference resource from the contents of the Composition. The table below gives some notes on this. As part of this process, the server will assign its own ID to the document, which will become part of the location property of the DocumentReference resource. It is quite feasible that specific server implementations perform additional particular logic – for example extracting other document identifiers from extensions in the Composition.
  5. Save the document in an internal store as a blob (i.e. exactly as it was received). It can subsequently be retrieved using a call to the /binary endpoint of the server – ie GET /binary/{ID}
  6. Return a 202 (Accepted) status code.
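The steps above can be sketched as follows. This is a toy in-memory implementation, not the real server: the stores, the bundle shape (JSON, with the Composition as the first entry) and the minimal DocumentReference are all assumptions for illustration, and the update branch is left out, as in the scenario.

```python
import json
import uuid

# Hypothetical in-memory stores standing in for the server's persistence layer.
documents_by_identifier = {}   # Composition.identifier -> server-assigned id
blobs = {}                     # server-assigned id -> raw document bytes

def handle_mailbox_post(raw: bytes):
    """Steps 1-6 above, for a new document only."""
    bundle = json.loads(raw)                         # 1. receive the document
    composition = bundle["entry"][0]["resource"]
    ident = composition["identifier"]["value"]       # 2. extract the identifier
    if ident in documents_by_identifier:             # 3. duplicate check
        raise NotImplementedError("update handling is discussed later")
    doc_id = uuid.uuid4().hex                        # server-assigned ID
    document_reference = {                           # 4. (minimal) DocumentReference
        "resourceType": "DocumentReference",
        "masterIdentifier": composition["identifier"],
        "status": "current",
        "location": f"/binary/{doc_id}",
    }
    documents_by_identifier[ident] = doc_id
    blobs[doc_id] = raw                              # 5. store as a blob
    return 202, document_reference                   # 6. 202 Accepted
```

A real server would of course persist these resources properly and populate the DocumentReference far more fully, as the table further down suggests.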

You might wonder if the call to the /mailbox should return something more than simply an ‘acknowledged’ response – maybe the location of the DocumentReference resource? I did too, and so put a question out in the implementers chat. It turns out that the concept behind the mailbox is that it’s really up to the server (and local agreements of course) about what the server will actually do with the document, and so there’s no consistent response that would be appropriate beyond the simple acknowledgement. It’s best for the client to think of the mailbox as a one-way exchange pattern from client to server (though with an acknowledgement that the server did, in fact, receive and process the document).

(as an aside, the skype chat and HL7 list servers are great for this sort of question. It’s unusual not to get a response within an hour – and more usually it’s only a couple of minutes.)

The following table gives some suggestions about how the server might construct the DocumentReference resource from the composition properties. In most cases it’s reasonably straightforward.

DocumentReference | Composition | Notes
masterIdentifier | identifier |
identifier | | A document has only a single identifier, so this field is generally not used. Of course, implementations can add other identifiers to the document using extensions if there are business requirements to do so.
subject | subject |
type | type |
class | class |
author | author |
custodian | custodian |
authenticator | attester |
created | instant |
indexed | | Generated by the server when the DocumentReference resource is created
status | | This is the status of the DocumentReference resource itself – not the underlying document. For a new document it will have the value ‘current’ – see below for notes on updating documents.
docStatus | status |
relatesTo | | Most often used to link DocumentReference resources for updated documents
description | title |
confidentiality | confidentiality |
mimeType | | This will be either application/xml+fhir or application/json+fhir, depending on the encoding of the document.
format | | The format property is used when the mimeType is not sufficient to understand the structure of the document. It’s a URI, so an example of use would be when the document conforms to a document profile. At the moment there is no way of indicating this in the document, so an extension would be needed.
size | | Calculated by the server
hash | | Calculated by the server
location | | Calculated by the server – the URI to the document
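The mapping in the table above is mechanical enough that it can be sketched directly. The property names below follow the table (DSTU1-era resources), so check them against the FHIR version you are actually targeting; the function name, the SHA-1 choice for the hash, and the hard-coded JSON mime type are illustrative assumptions.

```python
import hashlib
from datetime import datetime, timezone

def build_document_reference(composition: dict, raw: bytes, location: str) -> dict:
    # Property names follow the mapping table; verify them against
    # the FHIR version in use.
    attesters = composition.get("attester") or [{}]
    return {
        "resourceType": "DocumentReference",
        "masterIdentifier": composition.get("identifier"),
        "subject": composition.get("subject"),
        "type": composition.get("type"),
        "class": composition.get("class"),
        "author": composition.get("author"),
        "custodian": composition.get("custodian"),
        "authenticator": attesters[0].get("party"),
        "created": composition.get("instant"),
        "indexed": datetime.now(timezone.utc).isoformat(),  # server timestamp
        "status": "current",                                # new document
        "docStatus": composition.get("status"),
        "description": composition.get("title"),
        "confidentiality": composition.get("confidentiality"),
        "mimeType": "application/json+fhir",                # assumed JSON encoding
        "size": len(raw),                                   # calculated by the server
        "hash": hashlib.sha1(raw).hexdigest(),              # calculated by the server
        "location": location,                               # assigned by the server
    }
```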

Updating a document is a little bit trickier, as it will involve updating of some resources and creation of others. When a document is updated, the Composition.identifier remains the same, which is how the server can detect that it already has a version of the document stored. Here’s my take on how updating could work (assuming that the document is being treated as a blob – i.e. the individual resources within the document are not being extracted).

  1. The server receives a document via a POST and determines that it is an update as it has already got a document with that composition.identifier stored.
  2. The ID for the document is retrieved, and the new document saved as a new version against that ID. (This means that a query to /Binary/{ID}/_history would return the version history of the document – as it would for any resource).
  3. The DocumentReference resource that references the original document is retrieved, and its status changed to ‘superseded’
  4. A new DocumentReference resource is created, whose location property points to the new version of the document (i.e. it is version specific). The ‘relatesTo’ property of the new resource points back to the one that is being replaced, with a code of ‘replaces’.
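Steps 3 and 4 can be sketched like so. The function name and the dictionary shapes are illustrative assumptions – only the ‘superseded’/‘current’ statuses and the relatesTo link come from the steps above.

```python
def supersede(old_ref: dict, new_location: str) -> dict:
    # Step 3: mark the existing DocumentReference as superseded.
    old_ref["status"] = "superseded"
    # Step 4: build its replacement, pointing at the new (version-specific)
    # document location and back at the resource it replaces.
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "location": new_location,
        "relatesTo": [{"code": "replaces", "target": old_ref.get("location")}],
    }
```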

So, in summary, the document itself is versioned (and retrievable by calls to the /Binary endpoint), while the DocumentReference resource is replaced (ie there are now 2 DocumentReference resources pointing to different versions of the same document).

A consequence of this is that the location property of a DocumentReference resource should always be version specific – otherwise retrieving it will automatically return the most recent version of what it is referencing – which (as you can see) is not always correct.

Next, we’ll think about finding and retrieving a document, and how to render it to the user.

About David Hay
I'm an independent contractor working with a number of Organizations in the health IT space. I'm an HL7 Fellow, Chair Emeritus of HL7 New Zealand and a co-chair of the FHIR Management Group. I have a keen interest in health IT, especially health interoperability with HL7 and the FHIR standard. I'm the author of a FHIR training and design tool - clinFHIR - which is sponsored by InterSystems Ltd.
