
MuSP: A multistep screening procedure for sparse recovery

dc.contributor.author Yang, Yuehan
dc.contributor.author Zhu, Ji
dc.contributor.author George, Edward I.
dc.date.accessioned 2021-04-06T02:09:44Z
dc.date.available 2023-01-05 22:09:41
dc.date.available 2021-04-06T02:09:44Z
dc.date.issued 2021-12
dc.identifier.citation Yang, Yuehan; Zhu, Ji; George, Edward I. (2021). "MuSP: A multistep screening procedure for sparse recovery." Stat 10(1): n/a-n/a.
dc.identifier.issn 2049-1573
dc.identifier.uri https://hdl.handle.net/2027.42/167020
dc.publisher Wiley Periodicals, Inc.
dc.subject.other high-dimensional data
dc.subject.other iterative algorithm
dc.subject.other Lasso
dc.subject.other multistep method
dc.title MuSP: A multistep screening procedure for sparse recovery
dc.type Article
dc.rights.robots IndexNoFollow
dc.subject.hlbsecondlevel Mathematics
dc.subject.hlbtoplevel Science
dc.description.peerreviewed Peer Reviewed
dc.description.bitstreamurl http://deepblue.lib.umich.edu/bitstream/2027.42/167020/1/sta4352_am.pdf
dc.description.bitstreamurl http://deepblue.lib.umich.edu/bitstream/2027.42/167020/2/sta4352.pdf
dc.identifier.doi 10.1002/sta4.352
dc.identifier.source Stat
dc.identifier.citedreference Huang, J., Ma, S., & Zhang, C.-H. (2008). Adaptive lasso for sparse high-dimensional regression models. Statistica Sinica, 18(4), 1603–1618.
dc.identifier.citedreference Bühlmann, P., & Meier, L. (2008). Discussion: One-step sparse estimates in nonconcave penalized likelihood models, by Zou, H. and Li, R. Annals of Statistics, 36, 1534–1541.
dc.identifier.citedreference Bühlmann, P., & van de Geer, S. (2011). Statistics for high-dimensional data: Methods, theory and applications. New York: Springer Science & Business Media.
dc.identifier.citedreference Candès, E., & Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35(6), 2313–2351.
dc.identifier.citedreference Candès, E., & Plan, Y. (2009). Near-ideal model selection by ℓ1 minimization. Annals of Statistics, 37(5A), 2145–2177.
dc.identifier.citedreference Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–451.
dc.identifier.citedreference Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
dc.identifier.citedreference Fan, J., & Lv, J. (2010). A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1), 101.
dc.identifier.citedreference Fan, J., & Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. Annals of Statistics, 32(3), 928–961.
dc.identifier.citedreference Fan, J., Xue, L., & Zou, H. (2014). Strong oracle optimality of folded concave penalized estimation. Annals of Statistics, 42(3), 819–849.
dc.identifier.citedreference Friedman, J., Hastie, T., & Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1–22.
dc.identifier.citedreference Hastie, T., Tibshirani, R., & Wainwright, M. (2015). Statistical learning with sparsity: The lasso and generalizations. London: Chapman and Hall/CRC.
dc.identifier.citedreference Breheny, P., & Huang, J. (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. Annals of Applied Statistics, 5, 232–253.
dc.identifier.citedreference Huang, J., & Zhang, C.-H. (2012). Estimation and selection via absolute penalized convex minimization and its multistage adaptive applications. Journal of Machine Learning Research, 13(1), 1839–1864.
dc.identifier.citedreference Liu, H., Yao, T., & Li, R. (2016). Global solutions to folded concave penalized nonconvex learning. Annals of Statistics, 44(2), 629.
dc.identifier.citedreference Meade, N., & Salkin, G. R. (1989). Index funds: Construction and performance measurement. Journal of the Operational Research Society, 40(10), 871–879.
dc.identifier.citedreference Meinshausen, N., & Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3), 1436–1462.
dc.identifier.citedreference Meinshausen, N., & Yu, B. (2009). Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1), 246–270.
dc.identifier.citedreference Negahban, S., Ravikumar, P., Wainwright, M. J., & Yu, B. (2012). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4), 538–557.
dc.identifier.citedreference Raskutti, G., Wainwright, M. J., & Yu, B. (2010). Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11, 2241–2259.
dc.identifier.citedreference Ročková, V., & George, E. I. (2018). The spike-and-slab lasso. Journal of the American Statistical Association, 113(521), 431–444.
dc.identifier.citedreference Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1), 267–288.
dc.identifier.citedreference Tibshirani, R. J. (2013). The lasso problem and uniqueness. Electronic Journal of Statistics, 7, 1456–1490.
dc.identifier.citedreference Wainwright, M. J. (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5), 2183–2202.
dc.identifier.citedreference Wang, L., Kim, Y., & Li, R. (2013). Calibrating non-convex penalized regression in ultra-high dimension. Annals of Statistics, 41(5), 2505.
dc.identifier.citedreference Zhang, C.-H. (2010a). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics, 38(2), 894–942.
dc.identifier.citedreference Zhang, C.-H., & Huang, J. (2008). The sparsity and bias of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4), 1567–1594.
dc.identifier.citedreference Zhang, C.-H., & Zhang, T. (2012). A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4), 576–593.
dc.identifier.citedreference Zhang, T. (2010b). Analysis of multi-stage convex relaxation for sparse regularization. Journal of Machine Learning Research, 11, 1081–1107.
dc.identifier.citedreference Zhao, P., & Yu, B. (2006). On model selection consistency of lasso. Journal of Machine Learning Research, 7, 2541–2563.
dc.identifier.citedreference Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
dc.identifier.citedreference Zou, H., & Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Annals of Statistics, 36(4), 1509–1533.
dc.identifier.citedreference Belloni, A., & Chernozhukov, V. (2013). Least squares after model selection in high-dimensional sparse models. Bernoulli, 19, 521–547.
dc.working.doi NO
dc.owningcollname Interdisciplinary and Peer-Reviewed


