Random Features Methods in Supervised Learning
dc.contributor.author | Sun, Yitong
dc.date.accessioned | 2019-10-01T18:27:28Z
dc.date.available | NO_RESTRICTION
dc.date.available | 2019-10-01T18:27:28Z
dc.date.issued | 2019
dc.date.submitted | 2019
dc.identifier.uri | https://hdl.handle.net/2027.42/151635
dc.description.abstract | Kernel methods and neural networks are two important schemes in supervised learning. The theory of kernel methods is well understood, but their practical performance, particularly on large datasets, lags behind that of neural networks. In contrast, neural networks are the most popular method in machine learning today, with many fascinating applications, yet even basic theoretical properties of neural networks, such as universal consistency and sample complexity, are not well understood. Random features methods approximate kernel methods and resolve their scalability issues; meanwhile, they have a deep connection with neural networks. Empirical studies demonstrate the competitive performance of random features methods, but theoretical guarantees on that performance have been lacking. This thesis presents theoretical results on two aspects of random features methods: generalization performance and approximation properties. We first study the generalization error of random features support vector machines using tools from statistical learning theory. We then establish a fast learning rate for random Fourier features corresponding to the Gaussian kernel, with the number of features far less than the sample size; this justifies the computational advantage of random features over kernel methods from a theoretical perspective. As an effort toward designing new random features, we then study the universality of random features and show that the random ReLU features method is universally consistent for supervised learning tasks. Finally, a depth separation result and a multi-layer approximation result point out the limitations of random features methods and shed light on the advantage of deep architectures.
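The abstract refers to random Fourier features for the Gaussian kernel. The following is a minimal illustrative sketch of that technique (in the style of Rahimi and Recht's construction), not code from the thesis; the variable names, bandwidth, and feature count are assumptions chosen for the example.

```python
# Sketch of random Fourier features approximating the Gaussian (RBF) kernel
# k(x, y) = exp(-||x - y||^2 / 2) with bandwidth 1. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000  # input dimension, number of random features (both assumed)

# For the Gaussian kernel with bandwidth 1, frequencies are drawn from N(0, I)
# and phases uniformly from [0, 2*pi).
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    """Random Fourier feature map: z(x) = sqrt(2/D) * cos(W @ x + b)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx = float(phi(x) @ phi(y))
# As D grows, the inner product phi(x) @ phi(y) concentrates around the
# exact kernel value at rate O(1/sqrt(D)).
```

Training a linear model on the D-dimensional features then approximates the corresponding kernel method at far lower cost, which is the computational advantage the abstract analyzes.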
dc.language.iso | en_US
dc.subject | random features
dc.subject | kernel methods
dc.subject | statistical learning
dc.title | Random Features Methods in Supervised Learning
dc.type | Thesis
dc.description.thesisdegreename | PhD | en_US
dc.description.thesisdegreediscipline | Applied and Interdisciplinary Mathematics
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember | Gilbert, Anna Catherine
dc.contributor.committeemember | Tewari, Ambuj
dc.contributor.committeemember | Balzano, Laura Kathryn
dc.contributor.committeemember | Hero III, Alfred O
dc.contributor.committeemember | Rudelson, Mark
dc.subject.hlbsecondlevel | Statistics and Numeric Data
dc.subject.hlbtoplevel | Science
dc.description.bitstreamurl | https://deepblue.lib.umich.edu/bitstream/2027.42/151635/1/syitong_1.pdf
dc.identifier.orcid | 0000-0002-6715-478X
dc.identifier.name-orcid | Sun, Yitong; 0000-0002-6715-478X | en_US
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |