
Search results

  1. May 17, 2021 · Supervised dimensionality reduction for big data. Joshua T. Vogelstein, Eric W. Bridgeford, Minh Tang, Da Zheng, Christopher Douville, Randal Burns & Mauro Maggioni. Nature Communications 12,...

  2. Sep 5, 2017 · Supervised Dimensionality Reduction for Big Data. To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences.

  3. There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach, XOX, to extending principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection (a minimal sketch of this idea appears after this list). The simplest ver…

  4. Jul 1, 2021 · For supervised or unsupervised dimensionality reduction, combining spectral graph analysis and LRRA can impose a global constraint on the subspace. For semi-supervised dimensionality reduction, the proposed method incorporating LRRA can exploit the unlabeled samples more effectively. The experimental results show the effectiveness of ... (a hedged sketch of the spectral-graph ingredient appears after this list).

  5. May 17, 2021 · There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach to extending principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection.

  6. Jan 12, 2022 · SLISEMAP: Supervised dimensionality reduction through local explanations. Anton Björklund, Jarmo Mäkelä, Kai Puolamäki. Existing methods for explaining black box learning models often focus on building local explanations of model behaviour for a particular data item.

  7. Dec 1, 2023 · A novel cooperative framework for supervised dimensionality reduction is proposed. • Minimal parameter tuning is required, while enabling direct error minimization. • Boosts classifier performance by utilizing its optimized latent representations. • Explainability, image generation, and classification boundaries are studied.
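
Results 3 and 5 describe the XOX idea only at a high level: extend PCA with class-conditional moment estimates. The sketch below is one plausible reading of that sentence, augmenting the top principal directions with class-mean differences before projecting. The function name, parameters, and the exact way the moments are combined are my own assumptions; the snippets do not give the paper's actual estimator.

```python
import numpy as np

def moment_augmented_pca(X, y, n_components=5):
    """Sketch: combine class-conditional first moments (mean
    differences) with PCA directions, then orthonormalize.
    Illustrative only; not the XOX estimator from the paper."""
    classes = np.unique(y)
    # Class-conditional means, one row per class.
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    # Directions separating each class mean from the first class.
    deltas = means[1:] - means[0]
    # PCA directions of the within-class (mean-centered) data.
    Xc = X - means[np.searchsorted(classes, y)]
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    n_pca = max(n_components - deltas.shape[0], 0)
    # Put mean-difference directions ahead of the PCA directions.
    A = np.vstack([deltas, Vt[:n_pca]])
    # Orthonormalize the combined basis.
    Q, _ = np.linalg.qr(A.T)
    return X @ Q[:, :n_components]

# Toy usage: two separated Gaussian classes in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(2, 1, (100, 50))])
y = np.array([0] * 100 + [1] * 100)
Z = moment_augmented_pca(X, y, n_components=3)  # shape (200, 3)
```

Because the class-mean directions are placed first in the basis, the projection is supervised: label information enters through the moments, while the remaining components fall back to ordinary PCA.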
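Result 4 mentions combining spectral graph analysis with LRRA but gives no formulas, so LRRA itself cannot be reconstructed from the snippet. The sketch below shows only the spectral-graph ingredient, a standard Laplacian-eigenmaps-style embedding, which is the usual way a neighborhood graph imposes a global constraint on the learned subspace. All names and parameters here are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def laplacian_embedding(X, n_components=2, k=10, sigma=1.0):
    """Sketch of a spectral-graph embedding (Laplacian eigenmaps).
    This is NOT the LRRA method from the result above, only the
    graph-constraint ingredient it mentions."""
    n = X.shape[0]
    # Gaussian affinities on squared Euclidean distances.
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / (2.0 * sigma ** 2))
    # Sparsify: keep each point's k nearest neighbors, then symmetrize.
    far = np.argsort(D2, axis=1)[:, k + 1:]
    for i in range(n):
        W[i, far[i]] = 0.0
    W = np.maximum(W, W.T)
    np.fill_diagonal(W, 0.0)
    # Graph Laplacian L = D - W with degree matrix D.
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem L v = lambda D v; the smallest
    # eigenvector is the trivial constant vector, so skip it.
    _, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]

# Toy usage: embed points from two clusters into 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (100, 10)), rng.normal(3, 0.5, (100, 10))])
Z = laplacian_embedding(X, n_components=2, k=10)  # shape (200, 2)
```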