| Name | Norikazu Matsutomo (松友 紀和) | 
| Affiliation | Kawasaki University of Medical Welfare, Faculty of Medical Technology, Department of Radiological Technology | 
| Position | Specially Appointed Associate Professor | 
| Article type | Original article | 
| Language | English | 
| Peer review | Peer-reviewed | 
| Title | Can interactive artificial intelligence be used for patient explanations of nuclear medicine examinations in Japanese? | 
| Journal | Full title: Annals of Nuclear Medicine; Abbreviation: Ann Nucl Med; ISSN: 1864-6433 / 0914-7187 | 
| Publication category | Domestic | 
| Authors / Co-authors | Norikazu Matsutomo, Mitsuha Fukami, Tomoaki Yamamoto | 
| Publication date | 2025/04 | 
| Abstract | OBJECTIVE: This study aimed to evaluate the accuracy and validity of patient explanations about nuclear medicine examinations generated in Japanese using ChatGPT-3.5 and ChatGPT-4. METHODS: ChatGPT was used to generate Japanese-language explanations for seven single-photon emission computed tomography examinations (bone scintigraphy, brain perfusion imaging, myocardial perfusion imaging, dopamine transporter scintigraphy [DAT scintigraphy], sentinel lymph node scintigraphy, lung perfusion scintigraphy, and renal function scintigraphy) and 18F-fluorodeoxyglucose positron emission tomography. Nineteen board-certified nuclear medicine technologists evaluated the accuracy and validity of the responses on a 5-point scale. RESULTS: ChatGPT-4 demonstrated significantly higher accuracy and validity than ChatGPT-3.5, with 77.9% of its responses rated as above average or excellent for accuracy, compared with 36.3% for ChatGPT-3.5. For validity, 73.1% of ChatGPT-4's responses were rated as above average or excellent, compared with 19.6% for ChatGPT-3.5. ChatGPT-4 outperformed ChatGPT-3.5 in all examinations, with notable improvements in bone scintigraphy, lung perfusion scintigraphy, and DAT scintigraphy. CONCLUSION: These findings suggest that ChatGPT-4 can be a valuable tool for providing patient explanations of nuclear medicine examinations. However, its application still requires expert supervision, and further research is needed to address potential risks and security concerns. | 
| DOI | 10.1007/s12149-025-02047-2 | 
| PMID | 40234374 |
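
As an illustrative sketch only (not the authors' actual protocol), a Japanese patient explanation such as those evaluated in the study could be requested from ChatGPT through the OpenAI Python SDK. The model name, prompt wording, and generation parameters below are assumptions introduced for illustration.

```python
# Minimal sketch: requesting a Japanese patient explanation for bone
# scintigraphy from a ChatGPT-4-class model via the OpenAI Python SDK.
# Model name, prompt text, and temperature are illustrative assumptions,
# not the settings reported in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "骨シンチグラフィ検査を受ける患者さん向けに、"
    "検査の目的、流れ、注意事項を分かりやすい日本語で説明してください。"
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed ChatGPT-4-class model
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,  # assumed setting
)

print(response.choices[0].message.content)
```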