
Random Features Methods in Supervised Learning

dc.contributor.author: Sun, Yitong
dc.date.accessioned: 2019-10-01T18:27:28Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2019-10-01T18:27:28Z
dc.date.issued: 2019
dc.date.submitted: 2019
dc.identifier.uri: https://hdl.handle.net/2027.42/151635
dc.description.abstract: Kernel methods and neural networks are two important approaches to supervised learning. The theory of kernel methods is well understood, but their practical performance, particularly on large datasets, lags behind that of neural networks. Neural networks, in contrast, are the most popular method in machine learning today, with many fascinating applications, yet even their basic theoretical properties, such as universal consistency and sample complexity, remain poorly understood. Random features methods approximate kernel methods and resolve their scalability issues, while also having a deep connection with neural networks. Empirical studies demonstrate the competitive performance of random features methods, but theoretical guarantees on that performance have been lacking. This thesis presents theoretical results on two aspects of random features methods: generalization performance and approximation properties. We first study the generalization error of random features support vector machines using tools from statistical learning theory. We then establish a fast learning rate for random Fourier features corresponding to the Gaussian kernel, with the number of features far smaller than the sample size, which justifies the computational advantage of random features over kernel methods from a theoretical standpoint. To explore the possibility of designing new random features, we study the universality of random features and show that the random ReLU features method is universally consistent for supervised learning tasks. Finally, a depth separation result and a multi-layer approximation result point out the limitations of random features methods and shed light on the advantage of deep architectures.
dc.language.iso: en_US
dc.subject: random features
dc.subject: kernel methods
dc.subject: statistical learning
dc.title: Random Features Methods in Supervised Learning
dc.type: Thesis
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Applied and Interdisciplinary Mathematics
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Gilbert, Anna Catherine
dc.contributor.committeemember: Tewari, Ambuj
dc.contributor.committeemember: Balzano, Laura Kathryn
dc.contributor.committeemember: Hero III, Alfred O.
dc.contributor.committeemember: Rudelson, Mark
dc.subject.hlbsecondlevel: Statistics and Numeric Data
dc.subject.hlbtoplevel: Science
dc.description.bitstreamurl: https://deepblue.lib.umich.edu/bitstream/2027.42/151635/1/syitong_1.pdf
dc.identifier.orcid: 0000-0002-6715-478X
dc.identifier.name-orcid: Sun, Yitong; 0000-0002-6715-478X (en_US)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
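
As a brief illustration of the random Fourier features construction discussed in the abstract, below is a minimal Python/NumPy sketch of the standard Rahimi-Recht approximation to the Gaussian kernel. It is not taken from the thesis; the function name, bandwidth sigma, and feature count D are illustrative choices.

import numpy as np

def random_fourier_features(X, D=500, sigma=1.0, seed=0):
    # Bochner's theorem: the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))
    # equals the expectation of cos(w'(x - y)) over w ~ N(0, sigma^-2 I), so
    # averaging over D sampled frequencies gives a Monte Carlo approximation.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)      # n x D feature matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Z = random_fourier_features(X, D=2000, sigma=1.0)
# Exact Gaussian kernel matrix (sigma = 1) for comparison.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-sq_dists / 2.0)
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())  # error shrinks as D grows

Training a linear model (for instance, a linear SVM) on the feature matrix Z in place of a kernel machine yields the kind of random features method whose generalization performance the thesis analyzes; the number of features D needed for good performance relative to the sample size is exactly the quantity its learning-rate results address.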

