
dc.contributor.author: Jaiswal, Nikhil
dc.contributor.author: Ma, Yuanchao
dc.contributor.author: Lebouché, Bertrand
dc.contributor.author: Poenaru, Dan
dc.contributor.author: Pomey, Marie-Pascale
dc.contributor.author: Achiche, Sofiane
dc.contributor.author: Lessard, David
dc.contributor.author: Engler, Kim
dc.contributor.author: Montiel, Zully
dc.contributor.author: Acevedo, Hector
dc.contributor.author: Gameiro, Rodrigo R.
dc.contributor.author: Celi, Leo A.
dc.contributor.author: Osmanlliu, Esli
dc.date.accessioned: 2026-01-12T20:06:44Z
dc.date.available: 2026-01-12T20:06:44Z
dc.date.issued: 2025-12-22
dc.identifier.uri: https://hdl.handle.net/1721.1/164518
dc.description.abstract: Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-based, accountable, accurate, and patient-centred. This conceptual, practice-informed Perspective reflects on engaging patients and non-academic partners for the responsible integration of LLMs, grounded in the co-construction of MARVIN (for people living with HIV) and in an emerging collaboration with MIT Critical Data. Organised by the Software Development Life Cycle, we describe: conception/needs assessment with patient partners to identify use cases, acceptable trade-offs, and privacy expectations; development that prioritises grounding via vetted sources, structured human feedback, and data-validation committees including patient partners; testing and evaluation using patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) chosen in collaboration with patients to capture usability, acceptability, trust, and perceived safety, alongside task performance and harmful-output monitoring; and implementation via diverse governance boards, knowledge-mobilisation materials to set expectations, and risk-management pathways for potentially unsafe outputs. Based on our experience with MARVIN, we recommend early and continuous engagement of patients and non-academic partners, fair compensation, shared decision-making power, transparent decision logging, and inclusive, adaptable governance that can evolve with changing models and standards. These lessons highlight how patient partnership can directly shape chatbot design and oversight, helping teams align LLM-enabled tools with patient-centred goals while building accountable, safe, and equitable systems. (en_US)
dc.publisher: BioMed Central (en_US)
dc.relation.isversionof: https://doi.org/10.1186/s40900-025-00804-1 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: BioMed Central (en_US)
dc.title: Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Jaiswal, N., Ma, Y., Lebouché, B. et al. Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots. Res Involv Engagem 11, 143 (2025). (en_US)
dc.contributor.department: MIT Critical Data (Laboratory) (en_US)
dc.relation.journal: Research Involvement and Engagement (en_US)
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2025-12-28T04:20:02Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dspace.date.submission: 2025-12-28T04:20:02Z
mit.journal.volume: 11 (en_US)
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)

