dc.contributor.advisor    Peter Szolovits.    en_US
dc.contributor.author    Trepetin, Stanley    en_US
dc.contributor.other    Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.    en_US
dc.date.accessioned    2007-07-18T13:19:28Z
dc.date.available    2007-07-18T13:19:28Z
dc.date.copyright    2006    en_US
dc.date.issued    2006    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/37975
dc.description    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.    en_US
dc.description    Includes bibliographical references (leaves 131-150).    en_US
dc.description.abstract    The American public continues to be concerned about medical privacy. Policy research continues to show people's demand for health organizations to protect patient-specific data. Health organizations need personally identifiable data for unhampered decision making; however, identifiable data are often the basis of information abuse if they are improperly disclosed. This thesis shows that health organizations may use deidentified data for key routine organizational operations. I construct a technology adoption model and investigate whether a for-profit health insurer could use deidentified data for key internal software quality management applications. If privacy-related data are analyzed without rigor, little support is found for incorporating more privacy protections into such applications. Legal and financial motivations appear lacking. Adding privacy safeguards to such software programs apparently does not improve policy-holder care quality. Existing technical approaches do not readily allow for data deidentification while permitting key computations within the applications. A closer analysis of the data reaches different conclusions. I describe the bills currently passing through Congress to mitigate abuses of the identifiable data that exist within organizations.    en_US
dc.description.abstract    (cont.) I create a cost and medical benefits model demonstrating the financial losses to the insurer and the medical losses to its policy-holders caused by weaker privacy protection within the routine software applications. One component of the model describes the Predictive Modeling application (PMA), used to identify an insurer's chronically ill policy-holders. Disease management programs can enhance the care of such individuals and, by improving their health, reduce costs to the paying organization. The model quantifies the decline in care and the rise in the insurer's claim costs as the PMA must work with suboptimal data owing to policy-holders' privacy concerns about the routine software applications. I create a model for selecting variables to improve data linkage in software applications in general. An encryption-based approach, which allows records to be linked securely despite errors in the linkage variables, is subsequently constructed. I test this approach, as part of a general data deidentification method, on an actual PMA used by health insurers. The PMA's performance is found to be the same as when it executes on identifiable data.    en_US
dc.description.statementofresponsibility    by Stanley Trepetin.    en_US
dc.format.extent    150 leaves    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    Privacy in context : the costs and benefits of a new deidentification method    en_US
dc.type    Thesis    en_US
dc.description.degree    Ph.D.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc    148046911    en_US
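
The abstract above refers to an encryption-based approach that links records despite errors in the linkage variables. The sketch below is a hypothetical illustration of that general idea, not the construction developed in the thesis: it assumes two made-up linkage variables (surname and birth year), uses a Soundex encoding so that small spelling errors map to the same code, and derives a keyed HMAC token so that the raw identifiers are never shared. The function names and the shared key are illustrative assumptions only.

```python
# Hypothetical sketch of error-tolerant, keyed record linkage (not the
# thesis's actual method). Linkage variables are normalized and phonetically
# encoded (Soundex) so minor spelling errors yield the same code, then
# protected with an HMAC under a secret key so raw identifiers stay hidden.
import hmac
import hashlib

def soundex(name: str) -> str:
    """Classic Soundex: first letter plus three digits; absorbs many typos."""
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return "0000"
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            out += digit
        if ch not in "HW":          # H/W do not reset the previous code
            prev = digit
    return (out + "000")[:4]

def linkage_token(surname: str, birth_year: str, key: bytes) -> str:
    """Keyed (HMAC-SHA256) token over error-tolerant encodings of the
    linkage variables; only parties holding `key` can reproduce it."""
    payload = f"{soundex(surname)}|{birth_year.strip()}"
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    key = b"shared-secret-held-by-authorized-linker"   # illustrative only
    # A misspelled surname still yields the same token, so the two records
    # link without exposing the name itself.
    print(linkage_token("Trepetin", "1970", key) ==
          linkage_token("Trepettin", "1970", key))     # True
```

Under these assumptions, two data holders that share the key compute identical tokens for slightly misspelled versions of the same person's record, so their deidentified records can be joined on the token alone.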