
INCREMENTAL LEARNING OF PROCEDURAL PLANNING KNOWLEDGE IN CHALLENGING ENVIRONMENTS

dc.contributor.author: Pearson, Douglas J.
dc.contributor.author: Laird, John E.
dc.date.accessioned: 2010-06-01T22:40:25Z
dc.date.available: 2010-06-01T22:40:25Z
dc.date.issued: 2005-11
dc.identifier.citation: Pearson, Douglas J.; Laird, John E. (2005). "INCREMENTAL LEARNING OF PROCEDURAL PLANNING KNOWLEDGE IN CHALLENGING ENVIRONMENTS." Computational Intelligence 21(4): 414-439. <http://hdl.handle.net/2027.42/75646>
dc.identifier.issn: 0824-7935
dc.identifier.issn: 1467-8640
dc.identifier.uri: https://hdl.handle.net/2027.42/75646
dc.format.extent: 795100 bytes
dc.format.extent: 3109 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: text/plain
dc.publisher: Blackwell Publishing, Inc.
dc.rights: 2005 Blackwell Publishing, Inc.
dc.subject.other: Procedural Knowledge
dc.subject.other: Incremental Learning
dc.subject.other: Error Detection
dc.subject.other: Error Recovery
dc.subject.other: Planning
dc.subject.other: Symbolic
dc.subject.other: Operators
dc.subject.other: Theory Revision
dc.subject.other: Machine Learning
dc.title: INCREMENTAL LEARNING OF PROCEDURAL PLANNING KNOWLEDGE IN CHALLENGING ENVIRONMENTS
dc.type: Article
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.description.peerreviewed: Peer Reviewed
dc.contributor.affiliationum: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
dc.contributor.affiliationother: ThreePenny Software, Seattle, WA
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/75646/1/j.1467-8640.2005.00280.x.pdf
dc.identifier.doi: 10.1111/j.1467-8640.2005.00280.x
dc.identifier.source: Computational Intelligence
dc.identifier.citedreference: Baffes, P., and R. Mooney. 1993. Symbolic revision of theories with m-of-n rules. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1135–1140, Chambéry, France.
dc.identifier.citedreference: Booker, L. B., D. E. Goldberg, and J. H. Holland. 1989. Classifier systems and genetic algorithms. Artificial Intelligence, 40: 234–282.
dc.identifier.citedreference: Fikes, R. E., and N. Nilsson. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2: 189–208.
dc.identifier.citedreference: Gil, Y. 1992. Acquiring domain knowledge for planning by experimentation. Ph.D. Thesis, Carnegie Mellon University.
dc.identifier.citedreference: Gil, Y. 1993. Efficient domain-independent experimentation. In Proceedings of the International Conference on Machine Learning, pp. 128–134, Amherst, MA.
dc.identifier.citedreference: Gil, Y. 1994. Learning by experimentation: Incremental refinement of incomplete planning domains. In Proceedings of the International Conference on Machine Learning, pp. 87–95, New Brunswick, NJ.
dc.identifier.citedreference: Holland, J. H. 1986. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In Machine Learning: An Artificial Intelligence Approach, volume II. Morgan Kaufmann, Los Altos, CA.
dc.identifier.citedreference: Laird, J. E., A. Newell, and P. S. Rosenbloom. 1987. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1): 1–64.
dc.identifier.citedreference: Miller, C. M. 1991. A constraint-motivated model of concept formation. In The Thirteenth Annual Conference of the Cognitive Science Society, pp. 827–831, Hillsdale, NJ.
dc.identifier.citedreference: Miller, C. M. 1993. A model of concept acquisition in the context of a unified theory of cognition. Ph.D. Thesis, The University of Michigan.
dc.identifier.citedreference: Ourston, D., and R. J. Mooney. 1990. Changing the rules: A comprehensive approach to theory refinement. In Proceedings of the National Conference on Artificial Intelligence, pp. 815–820, Boston, MA.
dc.identifier.citedreference: Pazzani, M. J. 1988. Integrated learning with incorrect and incomplete theories. In Proceedings of the International Machine Learning Conference, pp. 291–297, Ann Arbor, MI.
dc.identifier.citedreference: Pazzani, M. J., C. A. Brunk, and G. Silverstein. 1991. A knowledge-intensive approach to learning relational concepts. In Proceedings of the Eighth International Workshop on Machine Learning, pp. 432–436, Ithaca, NY.
dc.identifier.citedreference: Pearson, D. J. 1996. Learning procedural planning knowledge in complex environments. Ph.D. Thesis, University of Michigan.
dc.identifier.citedreference: Pearson, D. J., and S. B. Huffman. 1995. Combining learning from instruction with recovery from incorrect knowledge. In Machine Learning Conference Workshop on Agents that Learn from Other Agents. Available from http://www.sunnyhome.org/pubs/mlw95.html.
dc.identifier.citedreference: Pearson, D. J., and J. E. Laird. 1999. Toward incremental knowledge correction for agents in complex environments. In Machine Intelligence, volume 15. Oxford University Press, New York.
dc.identifier.citedreference: Quinlan, J. R. 1990. Learning logical definitions from relations. Machine Learning, 5(3): 239–266.
dc.identifier.citedreference: Rumelhart, D. E., G. E. Hinton, and R. J. Williams. 1986. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1. MIT Press, Cambridge, MA.
dc.identifier.citedreference: Samuel, A. L. 1959. Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3: 210–229.
dc.identifier.citedreference: Shen, W., and H. A. Simon. 1989. Rule creation and rule learning through environmental exploration. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 675–680, Detroit, MI.
dc.identifier.citedreference: Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning, 3: 9–44.
dc.identifier.citedreference: Tesauro, G. 1992. Temporal difference learning of backgammon strategy. In Proceedings of the Ninth International Conference on Machine Learning, pp. 451–457, Aberdeen, Scotland.
dc.identifier.citedreference: Wang, X. 1995. Learning by observation and practice: An incremental approach for planning operator acquisition. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 549–557, Tahoe City, CA.
dc.identifier.citedreference: Wang, X. 1996. Learning planning operators by observation and practice. Ph.D. Thesis, Carnegie Mellon University.
dc.identifier.citedreference: Watkins, C. J. C. H., and P. Dayan. 1992. Technical note: Q-learning. Machine Learning, 8: 279–292.
dc.owningcollname: Interdisciplinary and Peer-Reviewed

