Assessing artificial intelligence-generated patient discharge information for the emergency department: a pilot study
Dublin Core
Title
Assessing artificial intelligence-generated patient discharge information for the emergency department: a pilot study
Subject
Large Language model, Patient discharge information, Artificial intelligence, Emergency department, Readability
Description
Abstract
Background Effective patient discharge information (PDI) in emergency departments (EDs) is vital and often more
crucial than the diagnosis itself. Patients who are well informed at discharge tend to be more satisfied and experience
better health outcomes. The combination of written and verbal instructions tends to improve patient recall. However,
creating written discharge materials is both time-consuming and costly. With the emergence of generative artificial
intelligence (AI) and large language models (LLMs), there is potential for the efficient production of patient discharge
documents. This study aimed to investigate several predefined key performance indicators (KPIs) of AI-generated
patient discharge information.
Methods This study focused on three common patient complaints in the ED: nonspecific abdominal pain,
nonspecific low back pain, and fever in children. To generate the brochures, we used an English query for ChatGPT
with the GPT-4 LLM, and DeepL software to translate the brochures into Dutch. Five KPIs were defined to assess these
PDI brochures: quality, accessibility, clarity, correctness and usability. The brochures were evaluated for each KPI by 8
experienced emergency physicians using a rating scale from 1 (very poor) to 10 (excellent). To quantify the readability
of the brochures, frequently used indices were employed: the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple
Measure of Gobbledygook, and Coleman-Liau Index on the translated text.
Results The brochures generated by ChatGPT/GPT-4 were well received, scoring an average of 7 to 8 out of 10 across
all evaluated aspects. However, the results also indicated a need for some revisions to perfect these documents.
Readability analysis indicated that brochures require high school- to college-level comprehension, but this is likely an
overestimation due to context-specific reasons as well as features inherent to the Dutch language.
Conclusion Our findings indicate that AI tools such as LLMs could represent a new opportunity to quickly produce
patient discharge information brochures. However, human review and editing are essential to ensure accurate and
reliable information. A follow-up study with more topics and validation in the intended population is necessary to
assess their performance.
Keywords Large Language model, Patient discharge information, Artificial intelligence, Emergency department,
Readability
Creator
Ruben De Rouck, Evy Wille, Allison Gilbert and Nick Vermeersch
Source
https://doi.org/10.1186/s12245-025-00885-5
Date
2025
Contributor
Peri Irawan
Format
pdf
Language
English
Type
text
Files
Collection
Citation
Ruben De Rouck, Evy Wille, Allison Gilbert and Nick Vermeersch, “Assessing artificial intelligence-generated patient discharge information for the emergency department: a pilot study,” Repository Horizon University Indonesia, accessed April 11, 2026, https://repository.horizon.ac.id/items/show/12772.