On the handling of continuous-valued attributes in decision tree generation
dc.contributor.author | Fayyad, Usama M. | en_US |
dc.contributor.author | Irani, Keki B. | en_US |
dc.date.accessioned | 2006-09-11T18:26:31Z | |
dc.date.available | 2006-09-11T18:26:31Z | |
dc.date.issued | 1992-01 | en_US |
dc.identifier.citation | Fayyad, Usama M.; Irani, Keki B.; (1992). "On the handling of continuous-valued attributes in decision tree generation." Machine Learning 8(1): 87-102. <http://hdl.handle.net/2027.42/46972> | en_US |
dc.identifier.issn | 0885-6125 | en_US |
dc.identifier.issn | 1573-0565 | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/46972 | |
dc.description.abstract | We present a result applicable to classification learning algorithms that generate decision trees or rules using the information entropy minimization heuristic for discretizing continuous-valued attributes. The result serves to give a better understanding of the entropy measure, to point out that the behavior of the information entropy heuristic possesses desirable properties that justify its usage in a formal sense, and to improve the efficiency of evaluating continuous-valued attributes for cut value selection. Along with the formal proof, we present empirical results that demonstrate the theoretically expected reduction in evaluation effort for training data sets from real-world domains. | en_US |
dc.format.extent | 853996 bytes | |
dc.format.extent | 3115 bytes | |
dc.format.mimetype | application/pdf | |
dc.format.mimetype | text/plain | |
dc.language.iso | en_US | |
dc.publisher | Kluwer Academic Publishers; Springer Science+Business Media | en_US |
dc.subject.other | Computer Science | en_US |
dc.subject.other | Computing Methodologies | en_US |
dc.subject.other | Artificial Intelligence (Incl. Robotics) | en_US |
dc.subject.other | Simulation and Modeling | en_US |
dc.subject.other | Language Translation and Linguistics | en_US |
dc.subject.other | Automation and Robotics | en_US |
dc.subject.other | Induction | en_US |
dc.subject.other | Empirical Concept Learning | en_US |
dc.subject.other | Decision Trees | en_US |
dc.subject.other | Information Entropy Minimization | en_US |
dc.subject.other | Discretization | en_US |
dc.subject.other | Classification | en_US |
dc.title | On the handling of continuous-valued attributes in decision tree generation | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Computer Science | en_US |
dc.subject.hlbsecondlevel | Science (General) | en_US |
dc.subject.hlbtoplevel | Engineering | en_US |
dc.subject.hlbtoplevel | Science | en_US |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | Artificial Intelligence Laboratory, Electrical Engineering and Computer Science Department, The University of Michigan, 48109-2110, Ann Arbor, MI; AI Group, M/S 525-3660, Jet Propulsion Lab, California Institute of Technology, 4800 Oak Grove Drive, 91109, Pasadena, CA | en_US |
dc.contributor.affiliationum | Artificial Intelligence Laboratory, Electrical Engineering and Computer Science Department, The University of Michigan, 48109-2110, Ann Arbor, MI | en_US |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/46972/1/10994_2004_Article_BF00994007.pdf | en_US |
dc.identifier.doi | http://dx.doi.org/10.1007/BF00994007 | en_US |
dc.identifier.source | Machine Learning | en_US |
dc.owningcollname | Interdisciplinary and Peer-Reviewed |
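The abstract's central idea — choosing a cut value for a continuous attribute by minimizing the weighted class-information entropy of the induced partitions, and the paper's result that only boundary points (midpoints between adjacent examples of different classes) need be evaluated — can be sketched as follows. This is a minimal illustration under our own naming, not the paper's implementation:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a multiset of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the cut value minimizing the weighted average class entropy
    of the two partitions it induces. Per the paper's result, candidates
    between identically labeled neighbors are skipped: an optimal cut of
    the entropy heuristic always falls on a boundary point."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_score, best_value = float("inf"), None
    for i in range(1, n):
        # Non-boundary candidate: both neighbors share a class label,
        # so this midpoint can never minimize the entropy measure.
        if pairs[i - 1][1] == pairs[i][1]:
            continue
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        if score < best_score:
            best_score, best_value = score, cut
    return best_value
```

Skipping non-boundary midpoints is what yields the reduction in evaluation effort the abstract reports: the number of candidates drops from one per adjacent pair to one per class boundary in the sorted attribute.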