Large Language Models and Generative AI in Hospital Medicine
Rounds meet generative AI: critique real LLM use cases for data security, patient safety, and equity, and learn how training and fine-tuning shape performance. We discuss clinical workflows, describe tasks LLMs can perform with published examples, and review barriers and research directions so you can design safer, smarter applications in hospital medicine.
Availability
On-Demand
Expires on Apr 30, 2027
Cost
Member: $0.00
Non-Member: $55.00
Credit Offered
1 CME Credit
1 Participation Credit
Learning Objectives
After completing this activity, learners should be able to:
  1. Critique a Large Language Model (LLM) clinical application from the perspectives of data security, patient safety, and equitable care among diverse patient populations.
  2. Explain how large language models work, including what is meant by training and fine-tuning a language model.
  3. Describe the tasks LLMs can perform and cite current published examples.
Faculty
  • Thomas Savage
  • Ashwin Nayak
  • Oluseyi Fayanju
  • Karl Swanson

Faculty Disclosures
The individuals in control of content for this activity have no relevant relationships with ACCME-defined ineligible companies to disclose unless listed here. Any relevant relationships were mitigated prior to the start of this activity.
Accreditation Statement
The Society of Hospital Medicine is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.

CME Credit Statement
The Society of Hospital Medicine designates this enduring material for a maximum of 1.00 *AMA PRA Category 1 Credit(s)*™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.
