Wednesday, August 17, 2011

What Is The Problem With E-Health - Could It Just Be Complexity or Is It Something Else?

There have been some interesting discussions in the last little while asking 'what is wrong with e-health?' or 'why are we not having more success with e-health implementation?'

Some commentary is here:

Studies point to complexity of HIT transition

August 10, 2011 | Jeff Rowe, HITECH Watch

Like it or not, spending the public’s money on the HIT transition is a Catch-22.

On the one hand, billions of dollars are being spent on a promise. On the other hand, there’s no way of knowing for sure whether the promise will come true until those billions are spent.

There are, of course, many reasons to believe we’re heading in the right direction, but skepticism remains, and skeptics seem to be getting a little help from researchers.

According to this article, “more and more studies are questioning the efficacy of electronic health records, and the U.S. Food and Drug Administration has begun collecting reports involving electronic health and IT errors, some of which have resulted in death.”

More here:

http://www.healthcareitnews.com/blog/studies-point-complexity-hit-transition

Here is the article that was being discussed.

Electronic records no panacea for health care industry

Studies show errors, inefficiencies still occur in medical services

Sunday, August 07, 2011

By Bill Toland, Pittsburgh Post-Gazette

It has become health care industry dogma that electronic records can help improve efficiency. Reduce errors. Save lives. And -- just maybe -- put the brakes on runaway health costs, by allowing better sharing of patient information and eliminating duplicative services.

It's why hospitals and physicians' practices across the country want a piece of the $27 billion in federal stimulus incentive money to help doctors move their systems away from papers and manila file folders and toward computerization.

It's why Highmark and West Penn Allegheny Health System recently announced a partnership with Allscripts and Accenture to provide Pittsburgh's independent physicians with electronic health records.

And it's why, starting in 2015, hospitals and doctors face cuts to their Medicare and Medicaid reimbursements if they haven't adopted "meaningful" health information technology hardware, electronic prescribing systems and other elements of President Barack Obama's Health Information Technology for Economic and Clinical Health act, known as HITECH.

Moving to a fully electronic system, Mr. Obama told Congress in February 2009 -- citing a 2005 Rand Corp. study -- could net $80 billion annual savings for the health system.

But do electronic records systems fully deliver on their promise? It's not uncommon for doctors, especially those from smaller practices, to complain about the computerization process itself -- it takes time and money to overhaul operations. Change is often unwelcome.

But it's also becoming more common to question whether the measures themselves will meet their lofty expectations. More and more studies are questioning the efficacy of electronic health records, and the U.S. Food and Drug Administration has begun collecting reports involving electronic health and IT errors, some of which have resulted in death.

"I don't think that we are getting our money's worth from all this treasury that we are spending," said Jaan Sidorov, Harrisburg-based health care consultant.

"The thing about these systems is that it doesn't really look like they're getting any cheaper," he said. "And the upgrades and the upkeep represents a very significant cost, especially in outpatient clinics."

Most clinics and hospital systems will say the return on investment for big IT projects is minimal in the short and medium terms.

And in some ways they can contribute cost to the medical system -- some software systems, for example, have auditing components that allow practices to uncover billable services that the practice had been missing.

In other words, the "efficiencies" that are realized may benefit the provider but not necessarily the insurer.

But that's just the cost side -- what about quality of care?

The hope is that computerized decision support systems will warn a physician if a drug dosage is too high or too low; digital health records can be transmitted more quickly among practices and specialists; computers can use algorithms to flag patients who are at risk for high-cost conditions.

The proposed benefits are tantalizing.

But lots of experts say we're just not there yet.

Overwhelming complexity

"Health information technology can meet the goals that are talked about," said Scot M. Silverstein, a medical IT expert and adjunct professor at Drexel University, College of Information Science and Technology, in Philadelphia. "But only if done well. And the amount of complexity behind that simple phrase -- 'if done well' -- is enormous and largely unrecognized and ignored."

The Journal of the American Medical Informatics Association published a report this summer suggesting electronic health records aren't as error-proof as advertised.

Having analyzed 3,850 computer-generated prescriptions received by a commercial outpatient pharmacy chain, a clinical panel found that 452 of the prescriptions, or about 12 percent, contained errors. (A "computerized" prescription is one that is typed into a computer, rather than a note pad; an "electronic" prescription is one that has been transmitted by email or wireless to a pharmacy.)

Of those, 163 contained mistakes that could have led to "adverse drug events." Most errors were mistakes of omission -- a doctor left out an important piece of data.

Notably, this "is consistent with the literature on manual handwritten prescription error rates," the report said. Also, the number and severity of errors varied by the type of computerized prescribing system, which suggests that some systems may be better designed than others.

Lots more here:

http://www.post-gazette.com/pg/11219/1165767-114-0.stm?cmpid=nationworld.xml

Some local contributors have also had a recent say on the matter.

What makes Healthcare different?

Posted on August 10, 2011 by Grahame Grieve

Tom Beale has picked up on a thread about what makes healthcare different (and kindly cited my earlier post on the subject).

I’m going to pick up on something Tom says, because it’s very much in my mind at the moment:

why can’t the health sector get its act together with ICT?

The implication here is that some other sectors have. Apparently.

Well, I'd like to know which ones have. When I look at the other industries, I see a pattern:

  • Businesses develop new services (sometimes based on new technologies)
  • As the use of the service coalesces, the variability of the service becomes a tax, not a benefit
  • A bunch of industry big wigs decide to make it a commodity instead (sometimes external prompt from government is needed)
  • They create a consortium, gather a bunch of engineers, come up with a partially/mostly bespoke solution, and call it a standard
  • Then they pass it over to the operational guys, who run it into the ground with adoption and conformance etc.

A bunch of piecemeal standards. What is healthcare supposed to learn from this?

More here:

http://www.healthintersections.com.au/?p=465

We also have a recent contribution from Tom Beale.

Why e-health really is hard

Every so often, someone asks: why can’t the health sector get its act together with ICT? Tell me why health is ‘different’?

Every so often a new and interesting answer to this question pops up…John Halamka just published an excellent list of 7 things that make healthcare (and by extension, health-related computing) hard in this post. Given his day job, this list can be taken as something very close to reality rather than being purely speculative. I mentioned a few of these things peripherally in an old blog post on the e-health standards crisis. Halamka's comments just make me think that The Innovator's Prescription (Clayton M Christensen, Jerome H Grossman, Jason Hwang) really does provide an excellent analysis of how to think about economics and health care.

For a bit of history on the economic analysis of healthcare, including the amoral view on health of right-wing US commentator Rush Limbaugh, see here.

For a philosophical point of view, see these posts by Colin Farrelly (Professor and Queen’s National Scholar in the Dept of Political Studies at Queen’s University) – part 1, part 2.

Grahame Grieve recently put up his list of why healthcare is special, which touches on computing, sociology and economics.

In 2005 I wrote a paper for IMIA called ‘Why is the EHR so hard‘, in which I took a biomedical/social complexity viewpoint (more or less ignoring Halamka’s points above), and used EHR requirements as a way of looking at health complexity:

  • information and efficient user interface reflecting multiple levels of hierarchical biological and social organisation;
  • mobile patients;
  • longevity of information (e.g. 100 years);
  • multi-lingual;
  • data shared and authored by multiple users simultaneously;
  • integrated with knowledge bases such as terminology and clinical guidelines;
  • wide geographical availability of a given record to multiple carers and applications;
  • consent-based, potentially finegrained privacy rules on information use (with exceptions for emergency access);
  • multiple sources of constant change to requirements including medical technology, clinical procedures and guidelines, genomic/proteomic medicine;
  • reliable medico-legal support for all users.

More from Tom Beale here:

http://wolandscat.net/2011/08/10/why-e-health-really-is-hard/

The original post from Grahame Grieve is here:

Healthcare is Special

Posted on May 21, 2011 by Grahame Grieve

Healthcare is special. Things that work in other industries won’t work in healthcare.

If I had a dollar for every time I've heard that… well, I could be sitting off a beach somewhere, surfing. Though usually, this statement is immediately followed by its denial: that healthcare is not actually special, that every industry thinks it's special (and, if every other industry thinks it's special, doesn't that make healthcare special all by itself?). But for every person who says that, who wishes to claim it's not true, there are ten people who, whether they believe it or not, act like it's true and behave as if its truth is one of the founding principles of their lives.

But is healthcare so special? In fact, just what is healthcare?

There is a wide scope of IT systems and/or applications that may be included under the banner of “Healthcare”:

  • Patient Administration systems
  • Clinical Tracking and Reporting software
  • Clinical Decision Support Systems
  • Financial transactions for payments related to healthcare
  • Population statistics and forecasting software
  • Specialized variants of standard IT infrastructure
  • Patient-centric healthcare data tracking software
  • Bioanalytical programs or frameworks, both in research and in diagnostics

Within this wide scope, several different factors combine to make healthcare different, and potentially special.

More here:

http://www.healthintersections.com.au/?p=279

Note: to get the full flavour of all this, the links really need to be followed and each read in full.

Having read all this, I am stimulated to add my two cents' worth.

For me the issue is one of how we represent clinical information in electronic form, and how that actually relates to the way clinicians think about clinical information in the process of delivering care.

I recognise all that the various writers are saying, and there is much truth there. What I think is missing is what I would term the 'subtlety' and the non-binary way in which clinical information is gathered and processed by a clinician in the process of formulating a diagnosis and then deciding on treatment.

Underpinning most EHRs is a data model and, ultimately, an information model and, like all models, this can only represent one version of 'truth' and is only able to handle a part of reality. As they say, no model is complete, but some are useful. When it comes to modelling an individual expression of illness residing in an individual patient, the complexity is really quite daunting.

Most diagnoses are not utterly clear cut, and each will usually carry - in the clinician's mind - a degree of uncertainty. This uncertainty is typically not well documented, with a diagnosis being described in words that provide only a limited range of shades of grey ('probable', 'possible', 'highly likely' or 'unlikely' being the most common). Many physicians also have a final section in their letter headed 'Impression' which tries to capture the level of uncertainty.
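To make the point concrete, here is a deliberately minimal, purely hypothetical sketch (not drawn from any real EHR schema - the names and enumeration are my own) of how a diagnosis with an attached certainty qualifier might be modelled. Note how crude the four-bucket enumeration is compared with the shades of grey a clinician actually holds; the real nuance has to fall back into free text:

```python
from dataclasses import dataclass
from enum import Enum

class Certainty(Enum):
    """The handful of coarse qualifiers clinicians typically write down."""
    UNLIKELY = "unlikely"
    POSSIBLE = "possible"
    PROBABLE = "probable"
    HIGHLY_LIKELY = "highly likely"

@dataclass
class Diagnosis:
    """A hypothetical, deliberately minimal diagnosis record."""
    description: str      # free-text label, e.g. "community-acquired pneumonia"
    certainty: Certainty  # one of only four buckets - the model's 'shades of grey'
    impression: str = ""  # free-text 'Impression' note carrying the real nuance

dx = Diagnosis(
    "community-acquired pneumonia",
    Certainty.PROBABLE,
    impression="CXR changes equivocal; would reconsider if fever persists",
)
print(dx.certainty.value)  # probable
```

Everything the model cannot express ends up in the `impression` string, which is exactly the part that is hard to compute over.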

Similarly, extra complexity and difficulty arise when you start to realise that most of the patients we think can be most helped by a clinical record have multiple, interacting illnesses, which lead to all sorts of issues as treatment is planned.

As I said to one correspondent: "Tell me how we can handle a specialist letter - subtle, complex, unique to the patient and reflecting insights from patient and doctor?" Inevitably there is more that could be said; some things may be highlighted while others are de-emphasised - at the conscious decision of a clinician trying to communicate what they see as important. Coding and processing this subtle, often sub-textual, information is really just not possible as far as I can tell.

With SNOMED seemingly having issues, HL7 V3 requiring a 'fresh look' after 18 years, and the complexity issues I have raised above, I really think the strategy should be to approach record sharing (where the clinicians don't know each other etc.) as sharing of the absolute basics - until these more 'existential' issues about EHRs can be worked through. (I don't expect to see that in my lifetime, I have to say!)

We should aim to do the absolute basics well - using a very constrained information model and recognising all its limitations - and when, and only when, that is all working reliably with broad coverage, then start on Phase 2!

I would love to know what others think. Are there other ways to skin the transfer of complex, subtle health information cat?

David.

Postscript:

After finishing this I came upon even more relevant commentary.

The previously mentioned Grahame Grieve has written up some thoughtful comments on the value and fate of HL7 Version 3, the latest Health Level 7 standard, from both pro and con perspectives.

HL7 V3 has Failed: www.healthintersections.com.au/?p=476

And

HL7 V3 has Succeeded: www.healthintersections.com.au/?p=482

All this is starting to make my head hurt!

D.

10 comments:

Grahame Grieve said...

hi David

> Tell me how we can handle a specialist letter

The obvious answer is that the letter itself remains as narrative. It's accompanied by the relevant system details that can be and already are represented structurally - adverse reactions, medications, history (diagnoses and procedural detail), diagnostic reports.

As you say, the structured data is limited in its subtlety and sophistication (particularly with regard to coding), and therefore its clinical utility is limited. But the clinical utility is good enough that most GP systems are already collecting, tracking, and presenting this information (somewhat at the clinician's discretion). If they're doing that, then it makes sense to share the information as that patient is referred between care providers.

The real concern is the degree to which this crude data is suitable for decision support. You've previously been a fan of decision support...
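Grieve's suggestion - keep the letter itself as narrative, and attach only the structured atoms that systems already capture - can be sketched roughly as follows. This is an illustrative shape only, with field names of my own invention; it is not the actual HL7 or CDA structure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SharedLetter:
    """Hypothetical container: the untouched narrative plus the structured
    'absolute basics' most GP systems already record."""
    narrative: str                                       # the specialist letter, verbatim
    adverse_reactions: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)
    diagnoses: List[str] = field(default_factory=list)

letter = SharedLetter(
    narrative="Thank you for referring Mrs X... (full subtle prose preserved here)",
    adverse_reactions=["penicillin - rash"],
    medications=["metformin 500 mg bd"],
    diagnoses=["type 2 diabetes mellitus"],
)
# The narrative carries the nuance; only the lists are expected to be computable.
print(len(letter.diagnoses))  # 1
```

The design choice this illustrates is the division of labour: decision support runs over the lists, while the subtlety stays in the narrative for a human reader.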

Dr David More MB PhD FACHI said...

Re Decision Support: I am a fan, as there is evidence that, done well, it can improve safety and reduce errors (the trick is to do it well). As the clinical information becomes more subtle, it becomes harder and harder, I suspect.

And on this:

"The real concern is the degree to which this crude data is suitable for decision support."

Not quite - my concern is that such objects may not be sufficiently 'computable' and might be too subtle to be useful in decision support.

David.

Enrico Coiera said...

This little discussion echoes a few things I said at HIC recently, so let me reiterate a 'heretical' viewpoint.

Google seems to find that 'crude data' is just fine for the many wonderful things it does in determining what the concepts of interest are, what the terms for them are, and where they occur in a document set. Not a hint of SNOMED or UMLS to be seen - it's all done with statistical language processing.

My increasingly strongly held view is that health informatics has missed the last 20 years of computer science and is still building 'semantic' towers of Babel, when we can achieve an awful lot (in a much more lightweight and sustainable way) with more modern approaches to determining meaning in text.

Do you want to find scientific papers that match your patient? That's a statistical text matching question. Do you want to know your patient's BP to trigger a decision algorithm? Well, I'd probably prefer that to be a structured data element if at all possible, but text processing would do better at extracting it from free text than you might imagine.

Personally, I think we need to understand that there are a core set of things we want to 'type' semantically, because getting them wrong is dangerous, and getting them right is worth the effort. Trying to make everything we do fit that bill is not feasible or sustainable, which is where statistical text processing neatly moves in to fill the gap.

And waiting for everything to be semantically typed means we miss doing useful decision support today - more unnecessary delayed gratification, waiting for the semantic 'building blocks' (which by definition will never be complete) to be ready. Decision support today, I say!!!
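As a toy illustration of the kind of lightweight statistical matching Coiera is describing, here is a bag-of-words cosine similarity in plain Python - no terminology, no SNOMED, just word statistics. This is a sketch only (real systems use stemming, TF-IDF weighting and much richer language models), and the note and paper texts are invented for the example:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# A (fabricated) clinical note and two (fabricated) paper abstracts.
note = "elderly patient with poorly controlled type 2 diabetes and hypertension"
papers = {
    "glycaemic control in type 2 diabetes": "trial of intensive glycaemic control in type 2 diabetes",
    "paediatric asthma management": "inhaled corticosteroids in paediatric asthma management",
}

# Rank the papers by similarity to the note - purely statistical matching.
best = max(papers, key=lambda title: cosine(note, papers[title]))
print(best)  # glycaemic control in type 2 diabetes
```

Crude as it is, the shared tokens ("type", "2", "diabetes") are enough to surface the relevant paper, which is the essence of the argument: useful signal without any semantic coding.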

Grahame Grieve said...

hi Enrico

I missed hearing you at HIC, I'm afraid. Google finds crude data is fine because they pass the signal/noise problem on to humans. And it doesn't work so well for me. Am I wrong to use the goodness (or not) of Google search as a yardstick for the state of the art here?

Enrico Coiera said...

Hi Grahame

Google is one example only - and I hear you about the requirement for large volumes of text to build up statistical profiles for the context in which terms occur. But we are exactly moving into that world right now as EHRs become ubiquitous. Look up i2b2 for the state of the art in text mining medical records - not yet 'perfect' but rapidly getting there.

The better yardstick for this argument is the whole modern discipline of natural language processing, starting with speech recognition systems, and modern text understanding systems. All are statistically based.

Today if you are an MIT undergrad in comp sci and take language processing, all you learn for weeks and weeks is Hidden Markov models, conditional random fields, and when they get to Chomsky grammars, it is more as a historical nod. A lot has changed in 15 years.

As an 'old school' AI type myself I find I have to fight with my wonderful NLP engineers to even get them to consider knowledge-based approaches.

E

Grahame Grieve said...

Hi Enrico

It all sounds so good. But I've used speech recognition software...

Andrew McIntyre said...

I am sure that NLP could be used for population-based decision support, and it is desirable to aid automatic coding of clinical notes, as long as a human gets to confirm that it's correct. What Google does is population-based, and that obviously works.

While I am sure you could do some really useful decision support with a single patient and NLP, I doubt it is safe enough to make strong recommendations about an individual's treatment, because even a 95% success rate means you could be suggesting inappropriate treatment in 5% of patients, and that's pretty concerning.

I don't think it's that hard to encode data in an atomic form now, but we have no cohesion with respect to direction coming from the top, so nothing ever happens. If we could just get a basic patient summary specified, with definite codes and structures, and make sure it's done properly by current systems, we would have the basis of good decision support. The industry has been going around in circles because of a lack of leadership, so we have nothing much to show for the last 10 years. There is, however, no doubt that it is doable with current technology in use today, i.e. HL7 V2. Instead we are building sandcastles in the air that, so it is stated, are not even designed for decision support.

Peter Jordan said...

There is nothing in HL7 V2, or any other interoperability standard, that is going to solve the issue of coding clinical notes using Primary Care Practice Management Software.

To quote one of the more sagacious posts (from Lloyd McKenzie) on the HL7 Forums, relating to the 'Fresh Look' Taskforce':

"Fully encoded SNOMED with detailed and nuanced data models is great in terms of semantic interoperability. But the combination of users and vendors haven't liked it much. How much of the problem is "Users won't enter that level of detail" vs "Vendors won't build interfaces that make it easy to enter that level of detail" isn't clear, but in the end for interoperability to fly you need vendors to create a system which has an interface that users will use to capture the data desired and view the data received. HL7 standards have tended to focus more on what data is needed than on what users/vendors are willing to do."

Look at the length of the average GP consultation and consider that there is no clinical coder on hand to subsequently translate any of the notes... and does anyone really think that moving to Web/Cloud-based PMS is going to make this any easier?!

Andrew McIntyre said...

I would agree that the focus on inventing new standards to solve the problem is not required. What is required is some direction so that vendors can invest in the things you speak about. We have SNOMED-CT but almost nothing has been done to encourage its real use. I don't think the policy makers understand the space.

Anonymous said...

Andrew McIntyre said "I don't think the policy makers understand the space."

The fact is the policy makers DO NOT UNDERSTAND the space. When MediConnect and HealthConnect were first conceived, it was evident from the way they were approaching the problem(s) that they did not understand the space. After a decade of floundering about, there is no evidence that I can see to indicate that anything has changed - except the people who are the policy makers.

The problems have remained whilst the politicians and bureaucrats have changed many times, and their approach to solving the problems has been consistently flawed, reflecting the seductive patter of the big system vendors eager to sell their nuts and bolts and lots of consulting. Now we have Wave 1 and Wave 2 and a Singaporean-model PCEHR, leading to an impending tsunami of failed effort on multiple fronts, because "they DO NOT UNDERSTAND the space".

The closest anyone has come to demonstrating they do is reflected in the Deloitte National eHealth Strategy - very much a voice in the wilderness.