FHIR: Securing an ecosystem

In our previous discussions on OAuth2 and OpenID Connect, we’ve talked about how the Authorization Server can authenticate a user and provide an ‘Access Token’ that a Resource Server (e.g. a FHIR server) can use as the basis for deciding what information it will supply and what services it will allow. (As an aside, a reminder that the FHIR spec does talk about how this could be implemented – and also points out the IHE IUA profile (formal spec) as work in this area that is worth following.)

Things are – relatively – straightforward when each server supports its own authentication, but they get a lot more complex when you try to support an ‘ecosystem’ of different servers. Let’s have a look at a couple of the issues:

  • How does a Resource Server know that a particular Access Token is valid (both that it’s a genuine token and that it hasn’t expired)?
  • How can the Authorization Server pass the ‘scope’ of access to the Resource Server (where the scope is what the patient – also called the Resource Owner – has agreed to)?

To provide a backdrop for our discussion, let’s paint a picture of a simple ecosystem – i.e. one where there are multiple separate servers performing the various roles.

To keep things (relatively) simple, we’re going to assume that we want a single Authorization Server for our ecosystem. In fact, this is kind of like the IHE XDS Affinity Domain concept, where you have:

 “a group of healthcare enterprises that have agreed to work together using a common set of policies and share a common infrastructure”

The difference is that the Affinity Domain has a single Registry – in our case we have a single Authorization Server. It isn’t always going to work out this way, but it’s a good starting point. (btw this picture comes from a presentation I’m giving to our National Standards Organization this week to show how these standards could work together in New Zealand).

Here’s our ecosystem:

[Diagram: the ecosystem – the Authorization Server, registries and Resource Servers described below]

So we have:

  • An Authorization Server backed by databases for identifying patients and providers. (This is so that the Authorization Server can also serve up Identity (Patient and Provider) – as we talked about last time. It’s not significant in the context of this discussion.)
  • A FHIR Profile and ValueSet registry. This is a FHIR-based ecosystem, and we’re bound to want to define profiles, ValueSets and extensions. Of course, we will very likely use other registries as well, especially for the HL7 defined extensions, but one of our own is likely to be required.
  • A Record Locator Service. This is serving the role of an XDS Registry and holds DocumentReference resources that can be queried as we’ve previously discussed (there’s a query sketch just after this list).
  • A couple of Resource Servers, one directly holding resources and the other ‘proxying’ to an existing Medical Records system.
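By way of illustration, here’s a minimal sketch (in Python, using the requests library) of a client asking that Record Locator Service for a patient’s DocumentReference resources, passing the Access Token as a Bearer token. The base URL, token and patient id are made up for the example.

    import requests

    # Hypothetical endpoint and identifiers - purely illustrative
    RECORD_LOCATOR_BASE = "https://rls.example.org/fhir"
    ACCESS_TOKEN = "eyJhbGciOi..."     # obtained from the Authorization Server
    PATIENT_ID = "Patient/12345"

    # Query the Record Locator Service (acting as an XDS Registry) for the
    # patient's DocumentReference resources
    response = requests.get(
        f"{RECORD_LOCATOR_BASE}/DocumentReference",
        params={"patient": PATIENT_ID},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            # FHIR JSON (older DSTU servers used application/json+fhir)
            "Accept": "application/fhir+json",
        },
    )
    response.raise_for_status()

    bundle = response.json()
    for entry in bundle.get("entry", []):
        doc_ref = entry["resource"]
        # Each DocumentReference points at the document via content.attachment.url
        print(doc_ref["content"][0]["attachment"]["url"])

The interesting part for this post is, of course, the Authorization header – how the Resource Server decides whether to honour that token is what we look at next.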

Trusting the Access Token.

The question is simple. When one of the Resource Servers gets an Access Token, how does it know it’s valid and how does it know what scope of disclosure the User (Resource Owner) has agreed to?

When the Authorization Server and the Resource Server are the same application, this is straightforward – in the process of issuing the Access Token, the Authorization Server can save the details in a local cache, which it can reference when the Resource Server receives the token in a request. But when the Token needs to be used by multiple different servers it becomes a lot harder.

A couple of options come to mind (and I’m sure there are others of course).

  • The Authorization Server could make its cache of valid Tokens available to Resource Servers via a secure ‘back channel’ of some sort – e.g. a Redis server holding valid tokens in memory, keyed by the Token, with the expiry set to the token expiry. When the Resource Server receives a request it queries the cache to confirm validity and retrieve the scope information. (There’s a sketch of this approach after the list.)
  • The Access Token could actually be a more complex structure than a simple Bearer Token – for example it could be a JWT (JSON Web Token) that contains the expiry time and the scope, and is signed by the Authorization Server. This would require that the Resource Servers have the Authorization Server’s public key – but as there are unlikely to be that many Resource Servers, this probably isn’t a big problem. (This one is also sketched below.)
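Here’s a rough sketch of the back-channel cache option: the Authorization Server stashes each token it issues in a shared Redis cache with a TTL, and a Resource Server looks the token up to confirm validity and pick up the scope. The host name and scope strings are just assumptions for illustration.

    import json
    from typing import Optional

    import redis  # assumes the redis-py client; the cache is a shared, secured service

    cache = redis.Redis(host="cache.internal", port=6379)  # hypothetical host

    # --- Authorization Server side: record the token when it is issued ---
    def store_token(token: str, scope: str, expires_in: int) -> None:
        # Key the entry by the token itself and let Redis expire it with the token
        cache.setex("token:" + token, expires_in, json.dumps({"scope": scope}))

    # --- Resource Server side: validate an incoming token via the back channel ---
    def validate_token(token: str) -> Optional[dict]:
        raw = cache.get("token:" + token)
        if raw is None:
            return None            # unknown or expired token - refuse the request
        return json.loads(raw)     # e.g. {"scope": "medications.read allergies.read"}

    # Illustrative usage
    store_token("abc123", "medications.read allergies.read", expires_in=3600)
    print(validate_token("abc123"))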
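And a sketch of the JWT option, using the PyJWT library: the Authorization Server signs a JWT that carries the expiry and scope, and a Resource Server verifies it locally with the Authorization Server’s public key – no back channel required. The claim names and issuer URL are assumptions, not a defined profile.

    import time
    import jwt  # the PyJWT library

    # --- Authorization Server side: issue a signed Access Token ---
    def issue_token(private_key_pem: str, subject: str, scope: str) -> str:
        claims = {
            "iss": "https://auth.example.org",  # hypothetical Authorization Server
            "sub": subject,                     # the Resource Owner (e.g. the patient)
            "scope": scope,                     # e.g. "medications.read allergies.read"
            "exp": int(time.time()) + 3600,     # expiry is baked into the token
        }
        return jwt.encode(claims, private_key_pem, algorithm="RS256")

    # --- Resource Server side: verify with the Authorization Server's public key ---
    def verify_token(public_key_pem: str, token: str) -> dict:
        # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError if anything is wrong
        return jwt.decode(
            token,
            public_key_pem,
            algorithms=["RS256"],
            issuer="https://auth.example.org",
        )

    # claims = verify_token(public_key_pem, access_token)
    # granted_scopes = claims["scope"].split()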

Which to choose (or some other option)? Well, that will be up to the implementation (or the agreements in the “Affinity Domain”). In either case, the bulk of the complexity sits with the server rather than the client – which is where we want it to be. (In the diagram above, the dotted lines represent this relationship – whether a direct query or indirect via the key).

Defining the scope.

We’ve glossed over this a bit in previous discussions.

When the user initially authenticates to the Authorization Server, they can supply the details of what information they are allowing the application to retrieve from the Resource Server/s on their behalf. This is called the ‘scope’ of the request and is defined in the OAuth2 spec as follows:

The value of the scope parameter is expressed as a list of space-delimited, case-sensitive strings.  The strings are defined by the authorization server.  If the value contains multiple space-delimited strings, their order does not matter, and each string adds an additional access range to the requested scope.
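To make that concrete, here’s a sketch of how a client might assemble the authorization request with a space-delimited scope parameter. The endpoint, client id and scope strings are invented for the example – exactly what those strings should be is the question we’re about to discuss.

    from urllib.parse import urlencode

    # Hypothetical values - every ecosystem will define its own scope strings
    AUTHORIZE_ENDPOINT = "https://auth.example.org/authorize"

    params = {
        "response_type": "code",
        "client_id": "my-fhir-app",
        "redirect_uri": "https://app.example.org/callback",
        # Space-delimited, case-sensitive strings, as per the OAuth2 spec
        "scope": "medications.read allergies.read conditions.read",
        "state": "xyz123",
    }

    # The user is sent here to authenticate and agree to (or narrow) the scope
    print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")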

So, we need to decide what level of access granularity we want to support, how we’re going to represent it, and then make sure that all participants in our ecosystem support it. No small task, as there is no agreed standard for this.

Some possibilities from Grahame:

  • Read any data
  • Read any data not labelled confidential
  • Read/write
    • Medications
    • Allergies
    • Conditions
    • Other clinical

But you can see that this is going to require some thought. (Incidentally, there are a number of people looking into this – if you’re interested then drop me a line).
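To show what enforcement might look like at the Resource Server end, here’s one possible sketch that maps agreed scope strings to the FHIR resource types they cover and checks each request against the scope carried in the token. The scope names and the mapping are pure assumptions – exactly the sort of thing an Affinity Domain would need to agree on.

    # Hypothetical mapping from agreed scope strings to FHIR resource types
    SCOPE_RULES = {
        "medications.read": {"MedicationOrder", "MedicationStatement"},
        "allergies.read": {"AllergyIntolerance"},
        "conditions.read": {"Condition"},
        "clinical.read": {"Observation", "Procedure", "DiagnosticReport"},
    }

    def is_allowed(granted_scope: str, resource_type: str) -> bool:
        """Return True if any of the granted, space-delimited scopes
        covers the requested resource type."""
        for scope in granted_scope.split():
            if resource_type in SCOPE_RULES.get(scope, set()):
                return True
        return False

    # e.g. the token (however we validated it) carried this scope:
    granted = "medications.read allergies.read"
    print(is_allowed(granted, "AllergyIntolerance"))  # True
    print(is_allowed(granted, "Condition"))           # False - refuse the request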

It should also be apparent why having a single Authorization Server simplifies things. In principle there is no reason why you can’t have more than one – but ideally they will all support the specific implementation requirements, such as the type of Access Token and the codes used for scope. There are doubtless ways of accommodating ones that can’t, but it does add complexity – and therefore risk – to the overall solution.

I have to say I kind of like the analogy between these questions and the IHE Affinity Domain concept. Although there will likely be some standardization in the future, there are always going to be differences between specific implementations, so this concept is a nice way of wrapping up all the details for a particular implementation (or, in the case of New Zealand – maybe the country!).

6 Responses to FHIR: Securing an ecosystem

  1. You say “The Access Token could actually be a more complex structure than a simple Bearer Token – for example it could be a JWT (JSON Web Token) that contains the expiry time and the scope and signed by the Authorization Server” – and that’s what IHE IUA does – it defines an alternative token type instead of a bearer token, which is a JWT. But it would be a lot simpler to say that the token type is still a bearer token, but internally it’s a JWT. The client doesn’t need to know anything about this, which is why it’s a better approach than IUA.

  2. Your HISO's on FHIR says:

    It sounds to be a problem for the reader to define the list of scoping keywords that the authorisation server deals in. Combinations of keywords (ie unordered) define the “access range” of each request and you could say “medications read” or “read-write medications” or “medications allergies conditions read-write” or possibly “medications allergies conditions read write” etc (following Grahame’s ideas). And your affinity domain or community might agree that at least one of “read” or “write” is required per request and that one of “medications” and “allergies” etc is required (or maybe there are defaults). And a request for access to especially confidential information might include another keyword, such as “confidential” or “break-glass”. There’s a bit to be figured out here. Will FHIR itself have anything to say about the matter?

  3. Trond Elde says:

    One problem with the OAuth2 protocol is that it seems to assume that the resource owner is the same as the end user. When a patient is accessing his own EHR data, then there will be no restrictions on what he/she can read (at least here in Norway). But when the user is a health care worker, then consent given (or not given) by the patient must be obeyed. As far as I can see, the authorization part, i.e. what a health care worker can do with a patient’s EHR data (resource), must be decoupled. I think XACML and the IHE Secure Retrieve (SeR) draft are the way to go.

    • David Hay says:

      Hi Trond,

      Thanks for your comment – I’ll let others more knowledgeable about security matters comment on those standards! But as a point of clarification, are you suggesting replacing OAuth2 altogether for clinical access, or continuing to use it for Authenticating to the system, and the other standards for Authorization?

      cheers…

      • Trond Elde says:

        Hello David,

        OAuth2 is definitely important for authentication (and federation) scenarios, but it does not solve the authorization issues in health care ICT (at least in Norway).

  4. Pingback: SMART on FHIR – adding OAuth2 | Hay on FHIR
