Show simple item record

Exploring the use of natural language systems for fact identification: Towards the automatic construction of healthcare portals

dc.contributor.author  Peck, Frederick A.  en_US
dc.contributor.author  Bhavnani, Suresh K.  en_US
dc.contributor.author  Blackmon, Marilyn H.  en_US
dc.contributor.author  Radev, Dragomir R.  en_US
dc.date.accessioned  2006-04-19T13:40:00Z
dc.date.available  2006-04-19T13:40:00Z
dc.date.issued  2004  en_US
dc.identifier.citation  Peck, Frederick A.; Bhavnani, Suresh K.; Blackmon, Marilyn H.; Radev, Dragomir R. (2004). "Exploring the use of natural language systems for fact identification: Towards the automatic construction of healthcare portals." Proceedings of the American Society for Information Science and Technology 41(1): 327-338. <http://hdl.handle.net/2027.42/34561>  en_US
dc.identifier.issn  0044-7870  en_US
dc.identifier.issn  1550-8390  en_US
dc.identifier.uri  https://hdl.handle.net/2027.42/34561
dc.description.abstract  In prior work we observed that expert searchers follow well-defined search procedures in order to obtain comprehensive information on the Web. Motivated by that observation, we developed a prototype domain portal called the Strategy Hub that provides expert search procedures to benefit novice searchers. The search procedures in the prototype were entirely handcrafted by search experts, making further expansion of the Strategy Hub cost-prohibitive. However, a recent study on the distribution of healthcare information on the Web suggested that search procedures can be automatically generated from pages that have been rated based on the extent to which they cover facts relevant to a topic. This paper presents the results of experiments designed to automate the process of rating the extent to which a page covers relevant facts. To automatically generate these ratings, we used two natural language systems, Latent Semantic Analysis and MEAD, to compute the similarity between sentences on the page and each fact. We then used an algorithm to convert these similarity scores to a single rating that represents the extent to which the page covered each fact. These automatic ratings are compared with manual ratings using inter-rater reliability statistics. Analysis of these statistics reveals the strengths and weaknesses of each tool, and suggests avenues for improvement.  en_US
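The pipeline the abstract describes lends itself to a short illustration. The sketch below is a minimal approximation, assuming scikit-learn's TF-IDF plus truncated SVD as a stand-in for an LSA system (MEAD is not reproduced here), the maximum sentence-to-fact cosine similarity as a page's score for each fact, a simple threshold to convert that score into a binary coverage rating, and Cohen's kappa as one common inter-rater reliability statistic. The threshold value, the binary rating scale, and the toy data are assumptions for illustration, not the paper's actual algorithm or corpus.

```python
# A minimal sketch of the rating pipeline, not the paper's implementation:
# TF-IDF + truncated SVD stands in for LSA, the best sentence-fact cosine
# similarity becomes the page's score for each fact, and a threshold (an
# assumption here) turns that score into a binary coverage rating.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics.pairwise import cosine_similarity


def rate_fact_coverage(sentences, facts, n_components=100, threshold=0.5):
    """Return one binary rating per fact: 1 if some sentence covers it."""
    docs = sentences + facts
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Project into a low-dimensional latent semantic space (the LSA step).
    k = min(n_components, tfidf.shape[1] - 1, len(docs) - 1)
    vecs = TruncatedSVD(n_components=k).fit_transform(tfidf)
    sent_vecs, fact_vecs = vecs[: len(sentences)], vecs[len(sentences):]
    # One row of similarities per fact; a fact counts as covered when its
    # best-matching sentence clears the (assumed) threshold.
    sims = cosine_similarity(fact_vecs, sent_vecs)
    return [int(row.max() >= threshold) for row in sims]


# Hypothetical page sentences and facts, for illustration only.
sentences = ["Aspirin thins the blood and reduces clotting.",
             "Regular exercise lowers blood pressure."]
facts = ["aspirin affects blood clotting",
         "a low-fat diet reduces cholesterol",
         "exercise helps control blood pressure"]

automatic = rate_fact_coverage(sentences, facts)
manual = [1, 0, 1]  # hypothetical ratings from a human judge

# Cohen's kappa as one example of an inter-rater reliability statistic.
print("automatic ratings:", automatic)
print("kappa vs. manual:", cohen_kappa_score(manual, automatic))
```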
dc.format.extent  1107047 bytes
dc.format.extent  3118 bytes
dc.format.mimetype  application/pdf
dc.format.mimetype  text/plain
dc.language.iso  en_US
dc.publisher  Wiley Subscription Services, Inc., A Wiley Company  en_US
dc.subject.other  Computer Science  en_US
dc.title  Exploring the use of natural language systems for fact identification: Towards the automatic construction of healthcare portals  en_US
dc.type  Article  en_US
dc.rights.robots  IndexNoFollow  en_US
dc.subject.hlbsecondlevel  Information and Library Science  en_US
dc.subject.hlbtoplevel  Social Sciences  en_US
dc.description.peerreviewed  Peer Reviewed  en_US
dc.contributor.affiliationum  School of Information, University of Michigan, Ann Arbor, MI 48109-1092  en_US
dc.contributor.affiliationum  School of Information, University of Michigan, Ann Arbor, MI 48109-1092  en_US
dc.contributor.affiliationum  School of Information and Department of EECS, University of Michigan, Ann Arbor, MI 48109-1092  en_US
dc.contributor.affiliationother  Institute of Cognitive Science, University of Colorado, Boulder, CO 80309-0344  en_US
dc.description.bitstreamurl  http://deepblue.lib.umich.edu/bitstream/2027.42/34561/1/1450410139_ftp.pdf  en_US
dc.identifier.doi  http://dx.doi.org/10.1002/meet.1450410139  en_US
dc.identifier.source  Proceedings of the American Society for Information Science and Technology  en_US
dc.owningcollname  Interdisciplinary and Peer-Reviewed


