Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Thursday, March 28, 2024

Do You Think What Is Discussed Here Will Provide Any Real Worthwhile Outcome(s)?

This popped up a few days ago:

MEDIA STATEMENT

March 21 2024

In a significant stride towards advancing digital health leadership across the nation, the Australasian Institute of Digital Health (AIDH) and the Digital Health Cooperative Research Centre (DHCRC) proudly announce the release of the Clinical Informatics Fellowship Stakeholder Engagement Report. This pivotal document outlines the proposed framework for Australia’s Clinical Informatics Fellowship (CIF) Program, highlighting a collective commitment to develop a recognised clinical informatics fellowship pathway.

The report emphasises the critical need to establish a valued and accredited career path for clinicians and clinical informaticians, aiming to bolster the evolution of digital health leadership within Australia. This initiative stems from the collaborative efforts of the AIDH and DHCRC, who, with the support of DHCRC’s funding, embarked on a journey through 2022 and 2023 to conceive a Clinical Informatics Fellowship pathway. This endeavour involved research into international models and consultations with a diverse group of stakeholders, including healthcare leaders, educational institutions, and professional associations.

Feedback gathered over 18 months from a wide array of contributors has been instrumental in shaping the program.

The Report details the governance structure, analytical processes, stakeholder engagement, and consultation efforts, culminating in a summary of stakeholder feedback and adjustments made following these discussions. Additionally, it identifies critical areas requiring further exploration before the initiation of a pilot program for the new fellowship pathway.

Future Directions and Objectives

As we move into 2024 the AIDH and DHCRC are working collaboratively on planning the forthcoming stages of the fellowship program, including a pilot phase with a select group of candidates. This step is crucial for refining the program based on participant feedback, paving the way for its official launch to potential digital health community candidates.

Both organisations reiterate their commitment to continuous engagement with the digital health community, healthcare professionals, educational institutions, and other key stakeholders as they progress through this developmental phase.

About the Clinical Informatics Fellowship

The CIF Program is designed to recognise clinical informatics as a distinguished profession in Australia and internationally. It aims to cultivate a broad and diverse pool of skilled clinical informaticians, aligning with AIDH’s Australian Health Informatics Competency Framework (AHICF). The program will offer clinicians a path to achieve a nationally recognised Fellowship, highlighting their specialised knowledge, skills, and credentials in informatics.

Media contact: media@digitalhealth.org.au

Here is the link:

https://digitalhealth.org.au/career-building/clinical-informatics-fellowship/

Also we have the Media Statement

AIDH MEDIA STATEMENT

An engagement report informing the proposed model for a Clinical Informatics Fellowship (CIF) Program in Australia was released today, with a commitment to progress the build of a clinical informatics fellowship pathway through 2024.

The report emphasises the importance of establishing a recognised and valued professional trajectory for clinicians and clinical informaticians, fostering the growth of digital health leadership in Australia.

During 2022 and 2023, the Australasian Institute of Digital Health (AIDH) and the Digital Health Cooperative Research Centre (DHCRC) partnered on a project to build a Clinical Informatics Fellowship pathway. The project, funded by DHCRC, explored international models and engaged key stakeholders and experts in the design of a new fellowship pathway for Australia.

The Clinical Informatics Fellowship Stakeholder Engagement Report details the project’s governance, analysis undertaken, stakeholder involvement and consultation, which took place on the project during 2022 and 2023. Stakeholder input was received over 18 months from leadership of AIDH and DHCRC, clinicians, healthcare executives, universities, clinical colleges and associations.  AIDH and DHCRC would like to restate their gratitude for the comprehensive feedback that was contributed by the healthcare community.

The Report summarises stakeholder feedback, illustrating the refinements following consultation, and noting the outstanding matters to be worked through prior to commencement of a pilot of the new fellowship pathway.

“We are delighted to publish the Stakeholder Engagement Report in partnership with DHCRC and inform the broader health community on progress towards our goal of a widely recognised career pathway for emerging leaders in digital health”, said AIDH’s Interim CEO, Mark Nevin FAIDH.

Progressing Work through 2024

The DHCRC and the AIDH are discussing the next stages of the program of work to complete the build of the clinical informatics fellowship pathway and undertake a pilot. The pilot is intended to evaluate the new fellowship pathway and provide insights into further improvements. The pathway will then be officially launched to potential candidates within the digital health community.

Annette Schmiede, CEO of DHCRC said, “We are proud to collaborate with AIDH and support the progression of this ambitious program of work. A clinical informatics pathway, open to all health professions, would be a global first and support the digital transformation of our healthcare system”.

AIDH and DHCRC are committed to ongoing consultation with the digital health community, clinicians, healthcare leaders, universities, clinical colleges and other peak bodies as we advance through the next stage of development of the program.

Notes to Editors

The objectives of the CIF Program are to:

  • Establish clinical informatics as an acknowledged and recognised profession in Australia, with international credibility and standing
  • Build and foster a large and diverse workforce of skilled and well-networked clinical informaticians who are included in leadership in the digital transformation of the health and social care sectors
  • Align with AIDH’s Australian Health Informatics Competency Framework (AHICF), which outlines essential domains of expertise and corresponding competencies required for proficiency in health informatics.

The Clinical Informatics Fellowship Stakeholder Engagement Report is available here
Further information about AIDH is available here
Further information about DHCRC is available here

Here is a link:

https://digitalhealth.org.au/blog/clinical-informatics-fellowship-stakeholder-engagement-report-released/

The rather sad thing about all this is the pursuit of status, rather than just getting on with things with clarity of purpose and letting status come as a natural by-product!

Who knew that digital leadership needed to be "advanced"?

Come on team – show you are useful and worthwhile and you will get the recognition and status you seem to be obsessed by! I note the recognised experts from the US and the UK – and our own as well – don’t seem to need all this recognition. Those who matter know who they are, and that is all that is needed!

David.

In passing I wonder is the plan to scrap all the present Fellows and Associate Fellows of the AIDH and start again? Might not be a bad idea? Can we also get rid of all those silly "badges" while we are at it?

See here:

https://digitalhealth.org.au/digital-badges/

What madness!

D.

Wednesday, March 27, 2024

Now It Has Become The Third Largest Company On The Planet We Need To Take Some Notice!

This appeared a few days ago:

Nvidia: what’s so good about the tech firm’s new AI superchip?

US firm hopes to lead in artificial intelligence and other sectors – and has built a model that could control humanoid robots

Alex Hern

Wed 20 Mar 2024 02.11 AEDT. Last modified on Wed 20 Mar 2024 02.17 AEDT

The chipmaker Nvidia has extended its lead in artificial intelligence with the unveiling of a new “superchip”, a quantum computing service, and a new suite of tools to help develop the ultimate sci-fi dream: general purpose humanoid robotics. Here we look at what the company is doing and what it might mean.

What is Nvidia doing?

The main announcement of the company’s annual developer conference on Monday was the “Blackwell” series of AI chips, used to power the fantastically expensive datacentres that train frontier AI models such as the latest generations of GPT, Claude and Gemini.

One, the Blackwell B200, is a fairly straightforward upgrade over the company’s pre-existing H100 AI chip. Training a massive AI model, the size of GPT-4, would currently take about 8,000 H100 chips, and 15 megawatts of power, Nvidia said – enough to power about 30,000 typical British homes.

With the company’s new chips, the same training run would take just 2,000 B200s, and 4MW of power. That could lead to a reduction in electricity use by the AI industry, or it could lead to the same electricity being used to power much larger AI models in the near future.

What makes a chip ‘super’?

Alongside the B200, the company announced a second part of the Blackwell line – the GB200 “superchip”. It squeezes two B200 chips on a single board alongside the company’s Grace CPU, to build a system which, Nvidia says, offers “30x the performance” for the server farms that run, rather than train, chatbots such as Claude or ChatGPT. That system also promises to reduce energy consumption by up to 25 times, the company said.

Putting everything on the same board improves the efficiency by reducing the amount of time the chips spend communicating with each other, allowing them to devote more of their processing time to crunching the numbers that make chatbots sing – or talk, at least.

What if I want bigger?

Nvidia, which has a market value of more than $2tn (£1.6tn), would be very happy to provide. Take the company’s GB200 NVL72: a single server rack with 72 B200 chips set up, connected by nearly two miles of cabling. That not enough? Why not look at the DGX Superpod, which combines eight of those racks into one, shipping-container-sized AI datacentre in a box. Pricing was not disclosed at the event, but it’s safe to say that if you have to ask, you can’t afford it. Even the last generation of chips came in at a hefty $100,000 or so apiece.

What about my robots?

Project GR00T – apparently named after, though not explicitly linked to, Marvel’s arboriform alien – is a new foundation model from Nvidia developed for controlling humanoid robots. A foundation model, such as GPT-4 for text or StableDiffusion for image generation, is the underlying AI model on which specific use cases can be built. They are the most expensive part of the whole sector to create, but are the engines of all further innovation, since they can be “fine-tuned” to specific use cases down the line.

Nvidia’s foundation model for robots will help them “understand natural language and emulate movements by observing human actions – quickly learning coordination, dexterity, and other skills in order to navigate, adapt, and interact with the real world”.

GR00T pairs with another piece of Nvidia tech (and another Marvel reference) in Jetson Thor, a system-on-a-chip designed specifically to be the brains of a robot. The ultimate goal is an autonomous machine that can be instructed using normal human speech to carry out general tasks, including ones it hasn’t been specifically trained for.

Quantum?

One of the few buzzy sectors that Nvidia doesn’t have its fingers in is quantum cloud computing. The technology, which remains at the cutting edge of research, has already been incorporated into offerings from Microsoft and Amazon, and now Nvidia’s getting into the game.

More here:

https://www.theguardian.com/business/2024/mar/19/nvidia-tech-ai-superchip-artificial-intelligence-humanoid-robots

There is little doubt this is a company we all have to keep a close eye on.

I fear they are seeking “global domination” or similar!!!

David.

Tuesday, March 26, 2024

Telstra Seems To Think It Is Leading The Digital Healthcare Transition.

19/03/2024

422 - Leading Digital Transformation in Healthcare: Elizabeth Koff, Telstra Health

In the wake of the digital age, the healthcare industry finds itself at the crossroads of opportunity and challenge, poised to enhance patient care and bridge gaps in the system through the power of technology. 

In this episode of Talking HealthTech, Elizabeth Koff, Managing Director of Telstra Health, shared invaluable insights into the critical focus areas and transformative potential of digital health. From safeguarding data security to leveraging AI, the conversation shed light on the pivotal role of digital health in revolutionising healthcare delivery.

The Ripple Effect of Connected Care

The vision for connected care has taken centre stage, with a strong emphasis on elevating patient and clinician experiences while steering towards improved health outcomes. There is a need for real-time information sharing among stakeholders to facilitate seamless care delivery. 

"Connected care makes for a better patient experience and a better clinician experience, ultimately leading to superior health outcomes."

The collaborative nature of healthcare highlights the critical importance of partnerships and ecosystems in fostering the connected care vision. No single provider can meet the full spectrum of connectivity, necessitating a cohesive ecosystem involving clinicians, healthcare providers, digital health solutions, and government bodies. This collaborative approach aims to bridge existing gaps in the healthcare system.

Navigating the Cybersecurity Terrain

The rising prominence of cybercrime in the digital healthcare landscape prompts a dedicated focus on safeguarding both solutions and data. This calls for a robust clinical governance framework, as seen with Telstra Health's commitment to upholding clinical safety across all aspects of digital health.

The gravity of cybersecurity risks in healthcare stretches beyond mere breaches of personal information. The often-understated risks of denial of service, whereby digital solutions become ineffective, can pose clinical harm and compromise patient safety.

While acknowledging the omnipresence of cyber threats, there is an imperative to propel innovation responsibly, leveraging the transformative potential of data and AI in healthcare.

"The benefits far outweigh the risks, but we have to proceed responsibly and cautiously."

Unlocking the Potential of AI in Healthcare

The transformative power of AI in healthcare is at the forefront, with its far-reaching potential across the patient care continuum, from preventive measures and personalised medicine to clinical administrative tasks. AI's role in empowering patients to understand and manage their healthcare needs is highlighted as a pivotal advancement.

The burden of administrative tasks on clinicians forms a significant pain point within healthcare, further underscoring the potential of AI in streamlining clinical workflows. AI can be a catalyst in liberating the workforce to focus on what they do best: providing clinical care.

Digital Healthcare: Unveiling New Horizons

There is uncharted potential in digital health, and we need to continue to raise the bar on digital maturity within healthcare, echoing international benchmarks and paving the way for unparalleled advancements in healthcare connectivity.

The innate need for a patient-controlled health record resonates, reflecting a pivotal shift towards empowering consumers to engage with their health information. The dissemination of data across various healthcare settings, driven by an agnostic approach to digital solutions, is poised to strike a harmonious balance between access and security.

The era of superficial digital conversions gives way to the dawn of truly transformative digital health solutions. A future is possible where digital health forms the backbone of a connected, efficient, and patient-centred healthcare ecosystem.

Embracing Innovation: A Collaborative Voyage

The journey ahead for Telstra Health is illuminated by the focus towards platform-centric solutions, with the Telstra Health Cloud poised to harness the wealth of data and information aggregated within the business. This platform-centric approach aligns with the future outlook of connected care and transformative digital health solutions.

The landscape of healthcare is evolving at an exponential pace, fuelled by a myriad of players within the digital health arena. The promise of a healthcare landscape where platform solutions lay the groundwork for connected care initiatives is real, which will propel healthcare into uncharted territory.

Here is the link:

https://www.talkinghealthtech.com/podcast/422-leading-digital-transformation-in-healthcare-elizabeth-koff-telstra-health

I will leave it to others to assess what Ms Koff has to say and how Telstra Health fits into the Digital Health ecosystem.

Anyone with real-world experience of their offerings or activities should feel free to comment. I would have to say I have never really seen Telstra as a significant Digital Health player.

David.


Sunday, March 24, 2024

The Australian Institute Of Digital Health Is Moving To Raise Its Profile – Will It Matter?

This appeared last week:

Opinion: Advancing artificial intelligence in healthcare in Australia

21 March 2024

By Mark Nevin

The rapid acceleration of artificial intelligence (AI) technologies is increasing the urgency to ensure Australia’s digital health community is equipped to lead the adoption of AI in a safe, responsible and effective way.

This AI revolution is already impacting healthcare pathways from screening to diagnosis, treatments and personalised therapeutics, with implications for ethics, patient safety, workforce and industry.

As Australasia’s peak body for digital health, the Australasian Institute of Digital Health (AIDH) has an ambitious program of work planned for 2024 to continue to support the safe adoption of AI in healthcare.

At our board strategy day in February, it was agreed that AI is a key priority for this year and that AIDH should forge ahead with significant plans to promote responsible use of AI in healthcare, including collaborating with other stakeholders to achieve this important goal.

There are multiple reasons why more needs to be done to advance AI in healthcare in Australia, with health and financial benefits at the top of the list. Unlike other developed nations, Australia lags behind in co-ordinated support for the healthcare AI sector.

This is despite achievable healthcare benefits including reducing national health expenditure and improving the quality of life of Australians by contributing to long-term health system safety and effectiveness. A review of the English NHS found productivity improvements from AI of £12.5 billion a year were possible (9.9 per cent of that budget). Similar impacts should be possible in Australia.

The limited penetration of AI into Australian healthcare is due to an unprepared ecosystem.

Supporting roadmap implementation

One pivotal development for AI in Australia is the publication of the National Policy Roadmap for Artificial Intelligence in Healthcare. Founder of the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) Enrico Coiera launched the roadmap at the AIDH’s AI.Care conference in Melbourne in November 2023.

AIDH has been a central player in the alliance since its inception. It is an invaluable platform for collaboration between academia, industry, healthcare organisations and government.

AIDH is proud to have contributed to the roadmap’s development and will help progress its recommendations. It shows the way for the digital health community, industry and governments to establish guardrails for the implementation of AI so it benefits, rather than harms, patients.

The roadmap identifies gaps in Australia’s capability to translate AI effectively and safely into clinical services. It identifies what is needed to build capabilities of industry, support implementation, enhance regulation and provides guidance on how to address these gaps holistically.

AIDH will support its implementation through workforce upskilling, community engagement and our policy and advocacy work as the peak body for digital health.

Readers will be aware that Australia has several regulatory and government agencies responsible for different aspects of AI, but as the roadmap argues, a coordinated system-wide approach is needed to ensure patients are protected, our health workforce is optimised and Australia can foster a healthcare-specific AI industry.

A critical frontier for the AI industry in healthcare is in accessing the necessary knowhow, technology, curated data and resources to develop, attain approval for and implement real-world AI scalable systems that enhance patient care. That is easier said than achieved. Those capability gaps hinder the successful adoption of recent and anticipated AI advances into tools that enhance frontline health services.

Implementation and scalability of AI in health have also been limited by the complexities of care delivery and the limited availability of an AI-literate workforce. Where AI has been adopted successfully, it has involved close collaboration between developers, clinical experts and health providers to address challenges in safety and ethics, and access to IP and quality, curated data.

Preparing the health sector and industry for AI

AI has been a central and recurring theme at digital health conferences and events in recent years, laying important baseline understanding of the opportunity and challenges.

Our inaugural AI.Care conference last year aimed to progress the dialogue to how the digital health community can implement AI safely. Enduring demand for insights in this area means we will reconvene this conference again this year and in future years. The format brings together health experts in AI and industry, with the potential for shared learnings, workshops and fostering of partnerships between small and medium enterprises, larger technology players and the digital health community.

Partnerships are critical in a rapidly changing world of new technologies, allowing firms and providers to quickly source external talent and capabilities, thereby accelerating their ability to solve problems, improve service offerings and achieve scale. AIDH is at the centre of a large and vibrant digital health community. We play a key role in bringing people together and allowing them to share ideas, establish and deepen links with a wide array of experts.

Many of our fellows (FAIDH), certified health informaticians (CHIAs) and other members have many years of experience working with AI, providing an invaluable pool of expertise for new partnerships and to progress the roadmap’s plans and recommendations.

AIDH is also working to deepen our relationships with the alliance and government to place our members at the heart of the health sector’s journey into AI. We look forward to leveraging our collective expertise to support Australia’s use of these new technologies.

Workforce upskilling in digital health and AI

AIDH is also instrumental in delivering national digital health workforce projects that will benefit AI. These were recently highlighted in Australia’s Digital Health Blueprint 2023-2033 and Action Plan and also in Australia’s National Digital Health Strategy 2023-2028 and an accompanying Strategy Delivery Roadmap to support implementation.

One foundational action was the release of an online Digital Health Hub developed by the AIDH in partnership with the Australian Digital Health Agency. The hub is built to assist clinical and non-clinical professionals in building their career pathways, digital health capability and confidence, preparing the health workforce to use digital technologies including AI.

This year, the hub will be enhanced with additional curated digital health workforce content and resources for digital health learning, education and practice. The hub can assess both individual capabilities and organisation-wide workforce readiness for digital adoption.

The adoption of AI technologies at scale requires leadership skills, robust clinical governance and advanced AI expertise in the health workforce. Clinical expertise in AI is required to apply ethical principles to specific use cases and prepare for on-site deployment. Experts will train their colleagues to become competent users of AI: understanding the shortcomings of an AI tool, their clinical responsibilities, how to mitigate risks and explain findings to patients.

In 2024, AIDH will progress work to build and pilot new fellowship pathways for clinical and non-clinical professionals, which include post graduate study, mentorship, practical application of skills and an exit assessment. This work will allow digital health professionals to acquire valuable skills and experience, while tailoring their own journey to become deep subject matter experts. We anticipate many choosing to specialise in the field of AI.

Leadership will be essential to manage the changes ahead as we adopt AI at scale. This year will also see the return of our Digital Health Executive Leadership Program (alongside the HIC conference in August) and Women in Digital Health Leadership. The latter has just opened for applications until mid-April. Both programs support participants to enhance their leadership skills overall and apply those to the unique challenges of digital health.

AIDH looks forward to playing a key role in establishing guardrails and building capabilities for AI in healthcare. We will be very reliant on partnerships, our membership and the whole digital health community to do so. By working together, we can position Australia as a global leader in ethical and safe deployment.

Mark Nevin FAIDH is Interim CEO, Australasian Institute of Digital Health until 31 March, 2024. Mark was awarded a fellowship by AIDH in 2020 in recognition of his inaugural work on telehealth and AI. He has developed frameworks for the safe deployment of AI in clinical care, including standards of practice to establish parameters for governance and quality and safety and guide providers. Mark has been an active contributor to AAAIH since its inception, providing strategic policy input to its projects.

Here is the link:

https://www.pulseit.news/opinion/opinion-advancing-artificial-intelligence-in-healthcare-in-australia/

I guess the question with all this is just what measurable / observable outcomes or improvements are being seen from this.

I find there is a lot of ‘management speak’ here but not much concrete progress, noting that RMIT is expecting to:

“Our strategy is to grow market share and leadership with impactful industry collaboration across the Hub’s key program areas by 2027 through offering our key stakeholders of multinationals and SMEs a university ‘one stop shop’ via a thematic multi-sectorial collaborative ecosystem for complex, interdisciplinary inquiry and innovation with national and international impact pathways.”

Here is the link:

https://digitalhealth.org.au/cm-business/rmit-university/

I have not seen quite so much aggregate “management speak” in a fair while!

I really wonder what all this ‘verbiage’ actually adds. Does anyone really think anything would change with or without the AIDH contribution?

Maybe someone from the AIDH could explain to us all what this all actually means in a comment, and what they are actually contributing that will make a difference. Or am I being too hard, or have missed it?

David.

AusHealthIT Poll Number 739 – Results – 24 March, 2024.

Here are the results of the recent poll.

Does The Government Have Any Sensible Idea On What To Do To Manage And Reduce The Vaping Epidemic?

Yes                                                                              3 (9%)

No                                                                             29 (88%)

I Have No Idea                                                           1 (3%)

Total No. Of Votes: 33

People seem to think that the Government is a bit short of workable ideas on how to best manage and reduce vaping!

Any insights on the poll are welcome, as a comment, as usual!

A good number of votes. But also a very clear outcome! 

1 of 33 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 22, 2024

It Seems They Are Still Looking For Something Useful To Do With The myHR.

This appeared last week:

Moving toward a more connected aged health system with My Health Record

By Sean McKeown

By using My Health Record, care providers can gain access to health information that aims to improve continuity of care across the spectrum, from aged care nurses to GPs.

The Aged Care Registration Project, coordinated by the Australian Digital Health Agency, offers support for residential aged care homes to connect to My Health Record.

The project emphasises several key points, including the benefits for providers, carers, and consumers, the availability of extensive records that include vaccination information, diagnostic imaging, advance care plans and GP summaries.

As of February 2024, 35% of residential aged care homes in Australia are connected to My Health Record, a notable increase from 12% just 18 months ago when the project was established.  This growth is attributed to the growing benefits of accessing My Health Record, with a continuing stream of comprehensive health information being added. The capability to upload advance care plans to My Health Record is a significant development, facilitating better-coordinated care in both residential aged care and home care settings.

The Agency has collaborated with numerous software vendors to develop systems that seamlessly integrate with My Health Record. Currently, over 13 software vendors have systems supporting this integration, with plans to engage with additional vendors in the future. This integration enables authorised staff members to access a resident’s comprehensive health record, including vital information such as discharge summaries, pathology results, and medication history.

A share-by-default approach for pathology and diagnostics information would continually add to the current records held by almost 24 million Australians in My Health Record.

Speaking with Inside Ageing, Laura Toyne from the Agency, highlighted My Health Record as the digital solution for streamlining the information transfer from aged care to acute care settings.

“The Aged Care Transfer Summary (ACTS) within My Health Record facilitates the transfer of essential health information when a resident is transferred to acute hospital care. This includes details such as reasons for transfer, current medications, and other relevant records, thereby improving the efficiency and safety of care transitions,” Ms Toyne added.

“Helping providers into the digital sphere has the potential to save them time and money. There are some initial investments to build digital literacy, and once this is done, considerable gains across efficiency and improved care outcomes can be realised.”

Laura Toyne, Branch Manager, National Program Delivery, Australian Digital Health Agency

The Agency is actively engaged in promoting the benefits of digital health and supporting providers in adopting these technologies.

Registration support is available to help you connect

Through tailored registration support and educational resources, the Agency will help aged care providers navigate the transition to digital health solutions.

A registration support team is available to connect residential aged care homes, offering tailored, one-on-one support via e-learning modules, webinars, training simulators and more.

Don’t miss this opportunity to join the digital health revolution. Visit the Australian Digital Health website to register your interest and the team will contact you with further information and next steps.

Here is the link:

https://insideageing.com.au/moving-toward-a-more-connected-aged-health-system-with-my-health-record/

This really is one of those instances where connecting the Aged Care Home to the myHealth record clearly leads to the next question of what the Aged Care Home(s) would do with the record – given they already have their own record-keeping systems and are run off their feet providing necessary, and rather more relevant, care than posting to the myHR!

Love this quote from the article:

"The Agency is actively engaged in promoting the benefits of digital health and supporting providers in adopting these technologies."

I have not heard of a huge level of adoption in response to the ADHA and their efforts to date - or have I missed it?

I am looking forward to a post from an Aged Care Provider telling us all just how useful and relevant they are finding the myHR for their patients – but I guess they may be too busy!

Hope springs eternal!

David.

Thursday, March 21, 2024

I Think There Is An Important Message Here About The Application Of AI

This appeared last week:

John Halamka on the risks and benefits of clinical LLMs

At HIMSS24, the president of Mayo Clinic Platform offered some tough truths about the challenges of deploying genAI – touting its enormous potential while spotlighting patient-safety dangers to guard against in provider settings.

By Mike Miliard

March 13, 2024 11:13 AM

ORLANDO – At HIMSS24 on Tuesday, Dr. John Halamka, president of Mayo Clinic Platform, offered a frank discussion about the substantial potential benefits – and very real potential for harm – in both predictive and generative artificial intelligence used in clinical settings.

Healthcare AI has a credibility problem, he said. Mostly because the models so often lack transparency and accountability.

"Do you have any idea what training data was used on the algorithm, predictive or generative, you're using now?" Halamka asked. "Is the result of that predictive algorithm consistent and reliable? Has it been tested in a clinical trial?"

The goal, he said, is to figure out some strategies so "the AI future we all want is as safe as we all need."

It starts with good data, of course. And that's easier discussed than achieved.

"All algorithms are trained on data," said Halamka. "And the data that we use must be curated, normalized. We must understand who gathered it and for what purpose – that part is actually pretty tough."

For instance, "I don't know if any of you have actually studied the data integrity of your electronic health record systems, and your databases and your institutions, but you will actually find things like social determinants of health are poorly gathered, poorly representative," he explained. "They're sparse data, and they may not actually reflect reality. So if you use social determinants of health for any of these algorithms, you're very likely to get a highly biased result."

More questions to be answered: "Who is presenting that data to you? Your providers? Your patients? Is it coming from telemetry? Is it coming from automated systems that extract metadata from images?"

Once those questions are answered satisfactorily, that you've made sure the data has been gathered in a comprehensive enough fashion to develop the algorithm you want, then it's just a question of identifying potential biases and mitigating them. Easy enough, right?

"In the dataset that you have, what are the multimodal data elements? Just patient registration is probably not sufficient to create an AI model. Do you have such things as text, the notes, the history and physical [exam], the operative note, the diagnostic information? Do you have images? Do you have telemetry? Do you have genomics? Digital pathology? That is going to give you a sense of data depth – multiple different kinds of data, which are probably going to be used increasingly as we develop different algorithms that look beyond just structured and unstructured data."

Then it's time to think about data breadth. "How many patients do you have? I talked to several colleagues internationally that say, well, we have a registry of 5,000 patients, and we're going to develop AI on that registry. Well, 5,000 is probably not breadth enough to give you a highly resilient model."

And what about "heterogeneity or spread?" Halamka asked. "Mayo has 11.2 million patients in Arizona, Florida, Minnesota and internationally. But does it offer representative data for France, or for a Nordic population?"

As he sees it, "any dataset from any one institution is probably going to lack the spread to create algorithms that can be globally applied."

In fact, you could probably argue that no one can create an unbiased algorithm in one geography that will work seamlessly in another.

What that implies, he said, is "you need a global network of federated participants that will help with model creation and model testing and local tuning if we're going to deliver the AI result we want on a global basis."

On that front, one of the biggest challenges is that "not every country on the planet has fully digitized records," said Halamka, who was recently in Davos, Switzerland for the World Economic Forum.

"Why haven't we created an amazing AI model in Switzerland?" he asked. "Well, Switzerland has extremely good chocolate – and extremely bad electronic health records. And about 90% of the data of Switzerland is on paper."

But even with good digitized data – and even after accounting for that data's depth, breadth and spread – there are still other questions to consider. For instance, what data should be included in the model?

"If you want a fair, appropriate, valid, effective and safe algorithm, should you use race/ethnicity as an input to your AI model? The answer is to be really careful doing that, because it may very well bias the model in ways you don't want," said Halamka.

"If there was some sort of biological reason to have race/ethnicity as a data element, OK, maybe it's helpful. But if it's really not related to a disease state or an outcome you're predicting, you're going to find – and I'm sure you've all read the literature about overtreatment, undertreatment, overdiagnosis – these kinds of problems. So you have to be very careful when you decide to build the model, what data to include."

Even more steps: "Then, once you have the model, you need to test it on data that's not the development set, and that may be a segregated data set in your organization, or maybe another organization in your region or around the world. And the question I would ask you all is, what do you measure? How do you evaluate a model to make sure that it is fair? What does it mean to be fair?"

Halamka has been working for some time with the Coalition for Health AI, which was founded with the idea that, "if we're going to define what it means to be fair, or effective, or safe, that we're going to have to do it as a community."

CHAI started with just six organizations. Today, it's got 1,500 members from around the world, including all the big tech organizations, academic medical centers, regional healthcare systems, payers, pharma and government.

"You now have a public-private organization capable of working as a community to define what it means to be fair, how you should measure it, and what a testing and evaluation framework looks like – so we can create data cards (what data went into the system) and model cards (how does it perform)."

It's a fact that every algorithm will have some sort of inherent bias, said Halamka.

That's why "Mayo has an assurance lab, and we test commercial algorithms and self-developed algorithms," he said. "And what you do is you identify the bias and then you mitigate it. It can be mitigated by retuning the algorithm on different kinds of data, or just by an understanding that the algorithm can't be completely fair for all patients. You just have to be exceedingly careful where and how you use it.

"For example, Mayo has a wonderful cardiology algorithm that will predict cardiac mortality, and it has incredible positive predictive value for a body mass index that is low and really poor performance for a body mass index that is high. So is it ethical to use that algorithm? Well, yes, on people whose body mass index is low – you just need to understand that bias and use it appropriately."
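Halamka's BMI example can be made concrete with a small subgroup audit. The sketch below is purely illustrative (synthetic data, not Mayo's actual assurance-lab tooling): compute a discrimination metric such as AUC separately for each subgroup and flag large gaps.

```python
# Illustrative subgroup audit of a predictive model, in the spirit of the
# BMI example above. All data here is synthetic and invented.

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def subgroup_auc(records, group_key):
    """AUC per subgroup; a large gap between groups flags a bias to investigate."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: auc([r["score"] for r in rs], [r["label"] for r in rs])
            for g, rs in groups.items()}

# Synthetic example: the model discriminates well for low-BMI patients only.
records = [
    {"bmi": "low",  "score": 0.9, "label": 1},
    {"bmi": "low",  "score": 0.8, "label": 1},
    {"bmi": "low",  "score": 0.2, "label": 0},
    {"bmi": "low",  "score": 0.1, "label": 0},
    {"bmi": "high", "score": 0.4, "label": 1},
    {"bmi": "high", "score": 0.7, "label": 1},
    {"bmi": "high", "score": 0.6, "label": 0},
    {"bmi": "high", "score": 0.3, "label": 0},
]
print(subgroup_auc(records, "bmi"))  # AUC 1.0 for "low", 0.75 for "high"
```

A gap like this is exactly what an assurance process would surface before deciding where the model can ethically be deployed.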

Halamka noted that the Coalition for Health AI has created an extensive series of metrics and artifacts and processes – available at CoalitionforHealthAI.org. "They're all for free. They're international. They're for download."

Over the next few months, CHAI "will be turning its attention to a lot of generative AI topics," he said. "Because generative AI evaluation is harder."

With predictive models, "I can understand what data went in, what data comes out, how it performs against ground truth. Did you have the diagnosis or not? Was the recommendation used or helpful?"

With generative AI, "It may be a completely well-developed technology, but based on the prompt you give it, the answer could either be accurate or kill the patient."

Halamka offered a real example.

"We took a New England Journal of Medicine CPC case and gave it to a commercial narrative AI product. The case said the following: The patient is a 59-year-old with crushing, substernal chest pain, shortness of breath – and left leg radiation.

"Now, for the clinicians in the room, you know that left leg radiation is kind of odd. But remember, our generative AI systems are trained to look at language. And, yeah, they've seen that radiation thing on chest pain cases a thousand times.

"So ask the following question on ChatGPT or Anthropic or whatever it is you're using: What is the diagnosis? The diagnosis came back: 'This patient is having myocardial infarction. Anticoagulate them immediately.'

"But then ask a different question: 'What diagnosis shouldn't I miss?'"

To that query, the AI responded: "'Oh, don't miss dissecting aortic aneurysm and, of course, left leg pain,'" said Halamka. "In this case, this was an aortic aneurysm – for which anticoagulation would have instantly killed the patient.

"So there you go. If you have a product, depending on the question you ask, it either gives you a wonderful bit of guidance or kills the patient. That is not what I would call a highly reliable product. So you have to be exceedingly careful."

At the Mayo Clinic, "we've done a lot of derisking," he said. "We've figured out how to de-identify data and how to keep it safe, the generation of models, how to build an international coalition of organizations, how to do validation, how to do deployment."

Not every health system is as advanced and well-resourced as Mayo, of course.

"But my hope is, as all of you are on your AI journey – predictive and generative – that you can take some of the lessons that we've learned, take some of the artifacts freely available from the Coalition for Health AI, and build a virtuous life cycle in your own organization, so that we'll get the benefits of all this AI we need while doing no patient harm," he said.

More here:

https://www.healthcareitnews.com/news/john-halamka-risks-and-benefits-clincial-llms

It is well worth reading this article and following up the ideas offered. A really high-value talk I reckon!

David.

Wednesday, March 20, 2024

I Suspect We Are Only At The Beginning Of The Changes That Are Coming With AI.

This appeared last week:

New AI tools can record your medical appointment or draft a message from your doctor

By CARLA K. JOHNSON

Updated 1:43 AM GMT+11, March 14, 2024

Don’t be surprised if your doctors start writing you overly friendly messages. They could be getting some help from artificial intelligence.

New AI tools are helping doctors communicate with their patients, some by answering messages and others by taking notes during exams. It’s been 15 months since OpenAI released ChatGPT. Already thousands of doctors are using similar products based on large language models. One company says its tool works in 14 languages.

AI saves doctors time and prevents burnout, enthusiasts say. It also shakes up the doctor-patient relationship, raising questions of trust, transparency, privacy and the future of human connection.

A look at how new AI tools affect patients:

IS MY DOCTOR USING AI?

In recent years, medical devices with machine learning have been doing things like reading mammograms, diagnosing eye disease and detecting heart problems. What’s new is generative AI’s ability to respond to complex instructions by predicting language.

Your next check-up could be recorded by an AI-powered smartphone app that listens, documents and instantly organizes everything into a note you can read later. The tool also can mean more money for the doctor’s employer because it won’t forget details that legitimately could be billed to insurance.

Your doctor should ask for your consent before using the tool. You might also see some new wording in the forms you sign at the doctor’s office.

Other AI tools could be helping your doctor draft a message, but you might never know it.

“Your physician might tell you that they’re using it, or they might not tell you,” said Cait DesRoches, director of OpenNotes, a Boston-based group working for transparent communication between doctors and patients. Some health systems encourage disclosure, and some don’t.

Doctors or nurses must approve the AI-generated messages before sending them. In one Colorado health system, such messages contain a sentence disclosing they were automatically generated. But doctors can delete that line.

“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that began: “Hello, Tom, I’m glad to hear that your neck pain is improving. It’s important to listen to your body.” The message ended with “Take care” and a disclosure that it had been automatically generated and edited by his doctor.

Detner said he was glad for the transparency. “Full disclosure is very important,” he said.

WILL AI MAKE MISTAKES?

Large language models can misinterpret input or even fabricate inaccurate responses, an effect called hallucination. The new tools have internal guardrails to try to prevent inaccuracies from reaching patients — or landing in electronic health records.

“You don’t want those fake things entering the clinical notes,” said Dr. Alistair Erskine, who leads digital innovations for Georgia-based Emory Healthcare, where hundreds of doctors are using a product from Abridge to document patient visits.

The tool runs the doctor-patient conversation across several large language models and eliminates weird ideas, Erskine said. “It’s a way of engineering out hallucinations.”
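The cross-model check Erskine describes can be sketched as a simple consensus filter. This is purely hypothetical – Abridge's actual pipeline is not public – but it illustrates the idea: statements produced by only a minority of independent models are treated as likely hallucinations and dropped.

```python
# Hypothetical consensus filter illustrating the cross-model check described
# above. The model drafts and statements below are invented for illustration.
from collections import Counter

def consensus_filter(model_outputs, threshold=0.5):
    """Keep only statements asserted by more than `threshold` of the models."""
    n_models = len(model_outputs)
    counts = Counter(s for output in model_outputs for s in set(output))
    return [s for s, c in counts.items() if c / n_models > threshold]

# Three hypothetical model drafts of the same visit; the hallucinated
# allergy appears in only one draft, so the filter discards it.
drafts = [
    ["neck pain improving", "no sulfa allergy"],
    ["neck pain improving", "no sulfa allergy"],
    ["neck pain improving", "allergy: sulfa"],  # hallucination
]
kept = consensus_filter(drafts)
print(sorted(kept))  # the hallucinated allergy is dropped
```

Real systems would compare normalized clinical assertions rather than raw strings, but the engineering principle – agreement across independent generations – is the same.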

Ultimately, “the doctor is the most important guardrail,” said Abridge CEO Dr. Shiv Rao. As doctors review AI-generated notes, they can click on any word and listen to the specific segment of the patient’s visit to check accuracy.

In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she didn’t have an allergy to sulfa drugs. The AI-generated note said, “Allergies: Sulfa.”

The tool “totally misunderstood the conversation,” said Bruckner, chief medical information officer at Roswell Park Comprehensive Cancer Center. “That doesn’t happen often, but clearly that’s a problem.”

WHAT ABOUT THE HUMAN TOUCH?

AI tools can be prompted to be friendly, empathetic and informative.

But they can get carried away. In Colorado, a patient with a runny nose was alarmed to learn from an AI-generated message that the problem could be a brain fluid leak. (It wasn’t.) A nurse hadn’t proofread carefully and mistakenly sent the message.

“At times, it’s an astounding help and at times it’s of no help at all,” said Dr. C.T. Lin, who leads technology innovations at Colorado-based UC Health, where about 250 doctors and staff use a Microsoft AI tool to write the first draft of messages to patients. The messages are delivered through Epic’s patient portal.

The tool had to be taught about a new RSV vaccine because it was drafting messages saying there was no such thing. But with routine advice — like rest, ice, compression and elevation for an ankle sprain — "it's beautiful for that," Lin said.

Also on the plus side, doctors using AI are no longer tied to their computers during medical appointments. They can make eye contact with their patients because the AI tool records the exam.

The tool needs audible words, so doctors are learning to explain things aloud, said Dr. Robert Bart, chief medical information officer at Pittsburgh-based UPMC. A doctor might say: “I am currently examining the right elbow. It is quite swollen. It feels like there’s fluid in the right elbow.”

Talking through the exam for the benefit of the AI tool can also help patients understand what’s going on, Bart said. “I’ve been in an examination where you hear the hemming and hawing while the physician is doing it. And I’m always wondering, ‘Well, what does that mean?’”

WHAT ABOUT PRIVACY?

U.S. law requires health care systems to get assurances from business associates that they will safeguard protected health information, and the companies could face investigation and fines from the Department of Health and Human Services if they mess up.

Doctors interviewed for this article said they feel confident in the data security of the new products and that the information will not be sold.

More here:

https://apnews.com/article/chatgpt-ai-health-doctors-microsoft-f63d7fcc4b361cf8073406bf231e2b92

All I can say is don’t say you have not been warned!

David.

Tuesday, March 19, 2024

I Find This A Rather Compelling Case For Not Being A TikTok User And Encouraging Others To Be The Same!

This appeared a few days ago:

TikTok made me write this – and it’s time for it to go

Tiktok’s influence on young Australians goes beyond free speech and into sinister realms of undue influence.

The Parrhesian Columnist

This week the US House of Representatives voted in favour of a bill banning TikTok in the US unless Chinese parent company ByteDance divests the app.

India banned TikTok in 2020. TikTok is also inaccessible in China, along with Facebook, Instagram and Google.

It’s time for Australia to join the bans, too. Every month, 8.5 million Australians are active on TikTok, spending an average of 58 minutes per day on the platform – more than in any other country.

This skews towards young people who use it as a source of entertainment, news, advice, and commercial recommendations. It is designed to be addictive, with algorithms that feed people more and more of what they crave.

#TikTokMadeMeBuyIt is a trend where young people justify purchases – from the latest haircare products to trips to Bali – based on the influence of TikTok.

The app’s powerful algorithm identifies the most compelling and sensational content, and surfaces it with a frequency and reach that make its recommendations very hard to resist. TikTok says four in 10 users buy a product after seeing it on TikTok, boasting that “investment can be instant with 41 per cent of users immediately purchasing a product after discovering it on TikTok. 

“The user shopping experience doesn’t stop at purchase with 79 per cent of users creating videos ... This triggers more users to shop with 92 per cent saying they take action after watching a TikTok video.”

Those numbers are staggering if you compare them with any other form of promotional content, where response and recall rates – let alone action rates – are much lower, and 2 to 5 per cent would be considered outperformance.

More than a third, or 34 per cent, of Gen Z also say they get their news from TikTok (it’s unclear if the other 66 per cent get it elsewhere or just do not care to read news at all).

With this level of pervasiveness and persuasiveness, are we doing enough to understand the real influence this platform and its algorithms have on young Australians?

Anthony Goldbloom is an Australian data scientist living in Silicon Valley who founded Kaggle and sold it to Google, and who formerly represented Australia in sailing and worked for the Reserve Bank of Australia.

He has written an analysis of TikTok’s algorithms that proves the app does not reflect prevailing attitudes of its users but skews viewership to suit what he argues is a Communist Party agenda.

The analysis shows that content consistent with Chinese geopolitical goals, for example #StandWithKashmir, which could undermine stability in India, is amplified relative to other platforms, while content unfriendly to the Chinese agenda, for example #FreeTibet, #FreeUighurs and #FreeHongKong, is disproportionately suppressed.

Another example is that despite an evenly split opinion on the Israel-Hamas war in the US, #FreePalestine content outweighs Israel-supportive content by 80 to 1.

Goldbloom has also exposed how many posts and comments are generated by bots originating in other countries, such as Indonesia, Malaysia, Pakistan, Egypt and Saudi Arabia, which raises the question of whether our children are being unknowingly influenced by an imported worldview or hidden agenda.
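Goldbloom's exact methodology isn't spelled out in the column, but the core idea behind an amplification claim – comparing a hashtag's relative prevalence on TikTok against a reference platform – can be sketched as follows. All the counts here are invented for illustration.

```python
# Hypothetical sketch of a cross-platform hashtag amplification comparison.
# Post counts are invented; they are not Goldbloom's actual figures.

def amplification_ratio(tiktok_counts, reference_counts, tag):
    """Ratio of a hashtag's share of posts on TikTok to its share on a
    reference platform. Values well above 1 suggest amplification; values
    well below 1 suggest suppression."""
    tiktok_share = tiktok_counts[tag] / sum(tiktok_counts.values())
    reference_share = reference_counts[tag] / sum(reference_counts.values())
    return tiktok_share / reference_share

tiktok = {"#StandWithKashmir": 800, "#FreeTibet": 5, "#other": 9195}
instagram = {"#StandWithKashmir": 100, "#FreeTibet": 150, "#other": 9750}

# With these invented numbers, #StandWithKashmir looks amplified and
# #FreeTibet looks suppressed relative to the reference platform.
print(round(amplification_ratio(tiktok, instagram, "#StandWithKashmir"), 1))
print(round(amplification_ratio(tiktok, instagram, "#FreeTibet"), 2))
```

A serious analysis would control for platform demographics and bot activity, but the comparison of relative shares is the essence of the argument being made.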

It is telling that the TikTok we see in the West is not available in China itself, so one could argue the Chinese are serving Western kids addictive, digital heroin they wouldn’t serve up to their own children.

In the US, with the experience of Russian Facebook influence in a prior election, and an impending one, this topic is the subject of urgent debate.

In Australia, are we just too happy that our kids are safe in their rooms spending hours scrolling what, we believe, are harmless dance videos to pay attention to the real data and demand action?

Even if you set aside the arguments about China dictating content about how our young people engage and what they buy as mere conspiracy theories, it is still problematic that silent bot armies with unknown foreign agendas produce content that normalises ideas in ways that go far beyond free speech.

This is moving into undue influence. And how much concentration of power should one platform have when it commandeers so much time and has demonstrated much higher levels of addictiveness and persuasion than other forms of media and influence that have preceded it?

According to the eSafety Commissioner, a high proportion of young people in Australia have encountered inappropriate or hateful content online: 57 per cent have seen real disturbing violence, and 33 per cent have seen images or videos promoting terrorism.

There are big questions to be considered. How does a country protect its sovereignty when it comes to values, ideals and culture? And what about safeguarding our children?

We have strict regulations about how sensitive topics, such as violence and death, are depicted and referenced in traditional media, and codes of conduct governing news reporting accuracy and truth in advertising, but none of those seem to apply to the 8 million hours a day Australians are on TikTok.

The fine lines between truth and propaganda, influence and credibility, reality and deep-fakes blur more each day. And the algorithms determining what to serve up are opaque, designed for addiction, and controlled by a non-Australian organisation, possibly influenced by foreign entities who aren’t even willing to consume that content themselves. That doesn’t sound like a recipe that bodes well for Australia’s future.

Are we in the Orwellian fog of 1984 where we are so mollified by the screens that entertain us and tell us how wonderful life is with just one more product and one more like-minded opinion, that we are happy to ignore a future reality where the opinions that form the basis of our social fabric, and the values and ideals of future generations may look very different from what we anticipate?

Or like Winston’s act of rebellion in the book, must we be compelled to say “DOWN WITH TIKTOK”?

Australia cannot ignore the data that has emerged, especially when we spend more time on the platform per person than any other country.

We may not have an immediate election to protect, but we do have our children and our future to consider, and for them TikTok may be a ticking time bomb.

So tick-tock, tick-tock, it’s time for real debate on calling time on TikTok.

The full article is here:

https://www.afr.com/politics/federal/tiktok-made-me-write-this-and-it-s-time-for-it-to-go-20240312-p5fbnv

All this makes me feel we would all be better off without this particular app in our lives – but then I would say that, given my dislike of the present array of social media, which all seem way too exploitative for my liking. The days of simplicity have passed, with the current generation of social media all working hard to exploit us. As has often been said: if the product is free, it is you who are the price being paid!

David.