
dc.contributor.advisor: Kagal, Lalana
dc.contributor.advisor: Liang, Paul Pu
dc.contributor.author: Jung, Minseok
dc.date.accessioned: 2025-03-24T18:50:49Z
dc.date.available: 2025-03-24T18:50:49Z
dc.date.issued: 2025-02
dc.date.submitted: 2025-02-20T14:30:33.791Z
dc.identifier.uri: https://hdl.handle.net/1721.1/158904
dc.description.abstract: Recent advances in generative AI, particularly in producing human-like text, have blurred the lines between human and AI authorship. Since these AI tools rely on stochastic generation rather than traditional scientific reasoning, concerns about misinformation and reliability have emerged, highlighting the need for AI detection tools and policy guidelines. In response, this study proposes a dual approach: (1) the application of adaptive thresholds to improve the use of AI text detectors and (2) an AI policy framework based on user patterns and opinions. The commonly used detection method relies on a single universal threshold, which leads to inconsistent results across text types whose detector scores follow different probability distributions. To address this shortcoming, we present a threshold optimization algorithm that tailors thresholds to diverse subgroups, such as those based on text length and stylistic features, thereby reducing discrepancies in error rates across groups. In parallel, the study examines the pressing need for comprehensive AI guidelines, given the rise of misinformation and academic integrity issues. While a few institutions have introduced comprehensive policies, many lack approaches grounded in user patterns and opinions. To remedy this problem, we propose a policy framework based on a user study. The findings of this research provide practical solutions for more effective AI text classification and a reliable foundation for AI writing policies.
dc.publisher: Massachusetts Institute of Technology
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Responsible Computational Text Generation: AI Content Classification and Policy Framework
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Institute for Data, Systems, and Society
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Technology and Policy
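
The abstract describes replacing a single universal detection threshold with thresholds calibrated per subgroup (for example, by text-length bucket). As a rough illustration of that idea only, the Python sketch below calibrates one threshold per group so that each group's human-written texts are flagged at roughly the same target false-positive rate; the function names, the NumPy-based implementation, and the fixed-FPR criterion are illustrative assumptions, not the thesis's actual algorithm.

import numpy as np

def calibrate_group_thresholds(scores, labels, groups, target_fpr=0.05):
    """Pick one detector threshold per subgroup (illustrative sketch).

    scores: detector scores in [0, 1], higher = more likely AI-generated
    labels: 1 = AI-generated, 0 = human-written (calibration data)
    groups: subgroup id for each text, e.g. a text-length bucket
    Each group's threshold is the (1 - target_fpr) quantile of that
    group's human-written scores, so roughly the same fraction of human
    texts is misflagged in every group, instead of one universal cutoff.
    """
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    thresholds = {}
    for g in np.unique(groups):
        human = scores[(groups == g) & (labels == 0)]
        thresholds[g] = np.quantile(human, 1.0 - target_fpr)
    return thresholds

def classify(scores, groups, thresholds):
    # Flag a text as AI-generated if its score exceeds its group's threshold.
    return [s >= thresholds[g] for s, g in zip(scores, groups)]

Calibrating each group separately is what keeps false-positive rates comparable across, say, short and long texts whose detector scores follow different distributions; a single universal threshold generally cannot meet the same error target for all groups at once.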

