Show simple item record

Function representation and approximation by neural networks.

dc.contributor.author: Chen, David Shi (en_US)
dc.contributor.advisor: Jain, Ramesh C. (en_US)
dc.date.accessioned: 2014-02-24T16:29:27Z
dc.date.available: 2014-02-24T16:29:27Z
dc.date.issued: 1991 (en_US)
dc.identifier.other: (UMI)AAI9208509 (en_US)
dc.identifier.uri: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9208509 (en_US)
dc.identifier.uri: https://hdl.handle.net/2027.42/105665
dc.description.abstract: In this dissertation, we have investigated the representational power of multilayer feedforward neural networks both analytically and constructively. Our study reveals that function representation by multilayer networks is flexible and general, since it includes polynomial, spline, kernel-based, projection-based, and Fourier-analysis representations. In essence, the representation by a multilayer network is a linear combination of nonlinear basis functions. These basis functions exhibit complex and flexible shape-descriptive power. The precise shape, orientation, and location of each basis function are functions of the network weights; therefore, they are adaptively estimated via network learning algorithms. We have provided a new proof of the Universal Approximation Theorem for multilayer feedforward neural networks, based upon tools from the theory of measure and probability. The Theorem states that any multivariate (real-valued) Borel function can be approximated to any desired accuracy by a network with three hidden layers. We have developed a robust back-propagation algorithm that can train a network in the presence of erroneous data. The novelty of the new algorithm is its use of a new objective function whose shape varies with learning time. Knowledge of the noise model, which reveals the data quality, is encoded into the shape of the time-varying objective function. The effect of each training datum on the overall learning is weighted according to its quality. It is this strategy that allows the new algorithm to be stable under small noise perturbations and robust against gross errors in the training data. We have successfully applied the method of function approximation via multilayer feedforward neural networks to approximate a wide variety of univariate and bivariate functions, and to solve problems involving multivariate functions. In particular, we have demonstrated that the new method is capable of performing discontinuity-preserving and outlier-removing surface reconstruction from noisy data, a challenging task in computer vision. (en_US)
dc.format.extent: 187 p. (en_US)
dc.subject: Artificial Intelligence (en_US)
dc.subject: Computer Science (en_US)
dc.title: Function representation and approximation by neural networks. (en_US)
dc.type: Thesis (en_US)
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Computer Science and Engineering (en_US)
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies (en_US)
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/105665/1/9208509.pdf
dc.description.filedescription: Description of 9208509.pdf : Restricted to UM users only. (en_US)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
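The abstract describes two ideas that a short sketch can make concrete: a network output that is a linear combination of nonlinear (sigmoidal) basis functions, and a robust training objective whose shape varies with learning time so that gross errors in the data are progressively discounted. The code below is an illustrative sketch only, not the dissertation's actual algorithm; the one-hidden-layer architecture, the Cauchy-style influence weights, and the shrinking-scale schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data contaminated with a few gross errors (outliers).
x = np.linspace(-1.0, 1.0, 80).reshape(-1, 1)
y = np.sin(np.pi * x)
outliers = rng.choice(len(x), size=8, replace=False)
y_noisy = y.copy()
y_noisy[outliers] += rng.choice([-3.0, 3.0], size=(8, 1))

# One hidden layer: the output is a linear combination of sigmoidal basis
# functions whose shape and location are set by the weights (W1, b1).
H = 12
W1 = rng.normal(scale=1.0, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, h @ W2 + b2

lr, epochs = 0.5, 3000
for t in range(epochs):
    h, pred = forward(x)
    r = pred - y_noisy
    # Time-varying robust objective (hypothetical schedule): the scale c
    # shrinks as training proceeds, so large residuals -- likely gross
    # errors -- exert less and less influence on the gradient.
    c = 3.0 * (1.0 - t / epochs) + 0.3
    w = 1.0 / (1.0 + (r / c) ** 2)      # Cauchy-style influence weights
    g = w * r / len(x)                  # quality-weighted residual gradient
    dW2 = h.T @ g
    db2 = g.sum(axis=0)
    dh = (g @ W2.T) * h * (1.0 - h)     # back-propagate through the sigmoids
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Evaluate against the clean target: a robust fit should largely ignore
# the planted outliers and track sin(pi * x).
_, fit = forward(x)
clean_rmse = float(np.sqrt(np.mean((fit - y) ** 2)))
print(f"clean-target RMSE: {clean_rmse:.3f}")
```

The same weighting idea extends directly to the bivariate case, which is how the abstract's discontinuity-preserving surface reconstruction from noisy data can be framed: outlying depth samples receive vanishing influence as the objective's scale anneals.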


