
Relating Two Image-Based Diagrammatic Reasoning Architectures

dc.contributor.author: Furnas, George
dc.contributor.author: Anderson, Michael
dc.date.accessioned: 2010-03-24T00:23:41Z
dc.date.available: 2010-03-24T00:23:41Z
dc.date.issued: 2010
dc.identifier.uri: https://hdl.handle.net/2027.42/65116
dc.identifier.uri: http://www.ncbi.nlm.nih.gov/sites/entrez?cmd=retrieve&db=pubmed&list_uids=19818971&dopt=citation
dc.description: Additional supportive materials, most notably video versions of many of the figures, are included in this item.
dc.description: This paper is in press and will appear in the proceedings of the "Diagrams 2010" conference, published as a volume in the Springer series Lecture Notes in Computer Science.
dc.description.abstract: To advance the understanding of different diagrammatic reasoning architectures that reason directly with images, we examine the relationship between Anderson's Inter-Diagrammatic Reasoning (IDR) architecture and Furnas' BITPICT architecture using the technique of cross-implementation. Implementing substantial functionality of each in the other, and noting what is easy and what is difficult, yields insights into the two architectures and such systems in general.
dc.description.sponsorship: Supported in part by the National Science Foundation under grant number IIS-9820368
dc.format.extent: 998914 bytes
dc.format.extent: 1434969 bytes
dc.format.extent: 1090417 bytes
dc.format.extent: 468800 bytes
dc.format.extent: 559670 bytes
dc.format.extent: 526699 bytes
dc.format.extent: 316137 bytes
dc.format.extent: 1017024 bytes
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.format.mimetype: video/quicktime
dc.language.iso: en_US
dc.subject: Diagrammatic Reasoning
dc.subject: Depictive Representations
dc.subject: InterDiagrammatic Reasoning
dc.subject: Bitpict
dc.title: Relating Two Image-Based Diagrammatic Reasoning Architectures
dc.type: Article
dc.type: Video
dc.subject.hlbsecondlevel: Information and Library Science
dc.subject.hlbtoplevel: Social Sciences
dc.description.peerreviewed: Peer Reviewed
dc.contributor.affiliationum: Information, School of
dc.contributor.affiliationum: Department of Computer Science and Engineering, College of Engineering
dc.contributor.affiliationother: Department of Computer Science, University of Hartford, West Hartford, CT 06117 USA
dc.contributor.affiliationumcampus: Ann Arbor
dc.identifier.pmid: 19818971
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/9/bitpictIDR-Figure-13-MAP-MAX-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/8/bitpictIDR-Figure-12-FILTER-LT2-v2-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/7/bitpictIDR-Figure-11-AccumulateOverlay-v2-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/6/bitpictIDR-Figure-10-NULLpredicate-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/5/bitpictIDR-Figure-9B-Overlay-NOT-AND-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/4/bitpictIDR-Figure-9A-MoveDataLeft-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/3/bitpictIDR-FIgure-8B-AND-annotated.mov
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/65116/2/bitpict-IDR-Figure-8A-NOT-annotated.mov
dc.owningcollname: Information, School of (SI)

