Sparsity and smoothness via the fused lasso

dc.contributor.author: Tibshirani, Robert
dc.contributor.author: Saunders, Michael
dc.contributor.author: Rosset, Saharon
dc.contributor.author: Zhu, Ji
dc.contributor.author: Knight, Keith
dc.date.accessioned: 2010-06-01T20:29:41Z
dc.date.available: 2010-06-01T20:29:41Z
dc.date.issued: 2005-02
dc.identifier.citation: Tibshirani, Robert; Saunders, Michael; Rosset, Saharon; Zhu, Ji; Knight, Keith (2005). "Sparsity and smoothness via the fused lasso." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67(1): 91-108. <http://hdl.handle.net/2027.42/73606>
dc.identifier.issn: 1369-7412
dc.identifier.issn: 1467-9868
dc.identifier.uri: https://hdl.handle.net/2027.42/73606
dc.format.extent: 304273 bytes
dc.format.extent: 3109 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: text/plain
dc.publisher: Blackwell Publishing Ltd
dc.rights: 2005 Royal Statistical Society
dc.subject.other: Fused Lasso
dc.subject.other: Gene Expression
dc.subject.other: Lasso
dc.subject.other: Least Squares Regression
dc.subject.other: Protein Mass Spectroscopy
dc.subject.other: Sparse Solutions
dc.subject.other: Support Vector Classifier
dc.title: Sparsity and smoothness via the fused lasso
dc.type: Article
dc.subject.hlbsecondlevel: Statistics and Numeric Data
dc.subject.hlbtoplevel: Science
dc.description.peerreviewed: Peer Reviewed
dc.contributor.affiliationum: University of Michigan, Ann Arbor, USA
dc.contributor.affiliationother: Stanford University, USA
dc.contributor.affiliationother: IBM T. J. Watson Research Center, Yorktown Heights, USA
dc.contributor.affiliationother: University of Toronto, Canada
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/73606/1/j.1467-9868.2005.00490.x.pdf
dc.identifier.doi: 10.1111/j.1467-9868.2005.00490.x
dc.identifier.source: Journal of the Royal Statistical Society: Series B (Statistical Methodology)
dc.identifier.citedreference: Adam, B.-L., Qu, Y., Davis, J. W., Ward, M. D., Clements, M. A., Cazares, L. H., Semmes, O. J., Schellhammer, P. F., Yasui, Y., Feng, Z. and Wright, G. L. W., Jr (2003) Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy men. Cancer Res., 63, 3609-3614.
dc.identifier.citedreference: Boser, B., Guyon, I. and Vapnik, V. (1992) A training algorithm for optimal margin classifiers. In Proc. Computational Learning Theory II, Philadelphia. New York: Springer.
dc.identifier.citedreference: Chen, S. S., Donoho, D. L. and Saunders, M. A. (2001) Atomic decomposition by basis pursuit. SIAM Rev., 43, 129-159.
dc.identifier.citedreference: Donoho, D. and Johnstone, I. (1994) Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81, 425-455.
dc.identifier.citedreference: Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2002) Least angle regression. Technical Report. Stanford University, Stanford.
dc.identifier.citedreference: Geyer, C. (1996) On the asymptotics of convex stochastic optimization. Technical Report. University of Minnesota, Minneapolis.
dc.identifier.citedreference: Gill, P. E., Murray, W. and Saunders, M. A. (1997) User's guide for SQOPT 5.3: a Fortran package for large-scale linear and quadratic programming. Technical Report NA 97-4. University of California, San Diego.
dc.identifier.citedreference: Golub, T., Slonim, D., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J., Coller, H., Loh, M., Downing, J., Caligiuri, M., Bloomfield, C. and Lander, E. (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286, 531-536.
dc.identifier.citedreference: Hastie, T., Tibshirani, R. and Friedman, J. (2001) The Elements of Statistical Learning: Data Mining, Inference and Prediction. New York: Springer.
dc.identifier.citedreference: Hoerl, A. E. and Kennard, R. (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12, 55-67.
dc.identifier.citedreference: Knight, K. and Fu, W. (2000) Asymptotics for lasso-type estimators. Ann. Statist., 28, 1356-1378.
dc.identifier.citedreference: Land, S. and Friedman, J. (1996) Variable fusion: a new method of adaptive signal regression. Technical Report. Department of Statistics, Stanford University, Stanford.
dc.identifier.citedreference: Lee, Y., Lin, Y. and Wahba, G. (2002) Multicategory support vector machines, theory, and application to the classification of microarray data and satellite radiance data. Technical Report. University of Wisconsin, Madison.
dc.identifier.citedreference: Petricoin, E. F., Ardekani, A. M., Hitt, B. A., Levine, P. J., Fusaro, V., Steinberg, S. M., Mills, G. B., Simone, C., Fishman, D. A., Kohn, E. and Liotta, L. A. (2002) Use of proteomic patterns in serum to identify ovarian cancer. Lancet, 359, 572-577.
dc.identifier.citedreference: Rosset, S. and Zhu, J. (2003) Adaptable, efficient and robust methods for regression and classification via piecewise linear regularized coefficient paths. Stanford University, Stanford.
dc.identifier.citedreference: Rosset, S., Zhu, J. and Hastie, T. (2004) Boosting as a regularized path to a maximum margin classifier. J. Mach. Learn. Res., 5, 941-973.
dc.identifier.citedreference: Stein, C. (1981) Estimation of the mean of a multivariate normal distribution. Ann. Statist., 9, 1131-1151.
dc.identifier.citedreference: Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58, 267-288.
dc.identifier.citedreference: Tibshirani, R., Hastie, T., Narasimhan, B. and Chu, G. (2001) Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natn. Acad. Sci. USA, 99, 6567-6572.
dc.identifier.citedreference: Tibshirani, R., Saunders, M., Rosset, S., Zhu, J. and Knight, K. (2004) Sparsity and smoothness via the fused lasso. Technical Report. Stanford University, Stanford.
dc.identifier.citedreference: Vapnik, V. (1996) The Nature of Statistical Learning Theory. New York: Springer.
dc.identifier.citedreference: Wold, H. (1975) Soft modelling by latent variables: the nonlinear iterative partial least squares (NIPALS) approach. In Perspectives in Probability and Statistics, in Honor of M. S. Bartlett, pp. 117-144.
dc.identifier.citedreference: Zhu, J., Rosset, S., Hastie, T. and Tibshirani, R. (2003) L1 norm support vector machines. Technical Report. Stanford University, Stanford.
dc.owningcollname: Interdisciplinary and Peer-Reviewed