topo.layouts

Submodules

Package Contents

Classes

Projector

A scikit-learn compatible class that handles all projection methods.

Functions

optimize_layout_euclidean(head_embedding, ...[, ...])

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high- and low-dimensional fuzzy simplicial sets.

optimize_layout_generic(head_embedding, ...[, gamma, ...])

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high- and low-dimensional fuzzy simplicial sets.

optimize_layout_inverse(head_embedding, ...[, gamma, ...])

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high- and low-dimensional fuzzy simplicial sets.

spectral_layout(graph, dim, random_state[, ...])

Given a graph, compute the spectral embedding of the graph (the eigenvectors of the graph Laplacian).

fuzzy_simplicial_set(X[, n_neighbors, metric, ...])

Given a set of data X, a neighborhood size, and a measure of distance, compute the fuzzy simplicial set associated to the data.

ts()

fast_knn_indices(X, n_neighbors)

A fast computation of knn indices.

make_epochs_per_sample(weights, n_epochs)

Given a set of weights and a number of epochs, generate the number of epochs per sample for each weight.

simplicial_set_embedding(graph, n_components, ...[, ...])

Perform a fuzzy simplicial set embedding, using a specified initialisation method.

find_ab_params(spread, min_dist)

Fit a, b params for the differentiable curve used in lower-dimensional fuzzy simplicial complex construction.

fuzzy_embedding(graph[, n_components, initial_alpha, ...])

Perform a fuzzy simplicial set embedding, using a specified initialisation method.

Isomap(X[, n_components, n_neighbors, metric, ...])

Isomap embedding of a graph. This is a highly efficient implementation that can also operate with landmarks.

Attributes

INT32_MIN

INT32_MAX

topo.layouts.optimize_layout_euclidean(head_embedding, tail_embedding, head, tail, n_epochs, n_vertices, epochs_per_sample, a, b, rng_state, gamma=1.0, initial_alpha=1.0, negative_sample_rate=5.0, parallel=False, verbose=False, densmap=False, densmap_kwds={})

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high dimensional and low dimensional fuzzy simplicial sets. In practice this is done by sampling edges based on their membership strength (with the (1-p) terms coming from negative sampling similar to word2vec).

Parameters:
  • head_embedding (array of shape (n_samples, n_components)) – The initial embedding to be improved by SGD.

  • tail_embedding – The reference embedding of embedded points. If not embedding new previously unseen points with respect to an existing embedding this is simply the head_embedding (again); otherwise it provides the existing embedding to embed with respect to.

  • head (array of shape (n_1_simplices)) – The indices of the heads of 1-simplices with non-zero membership.

  • tail (array of shape (n_1_simplices)) – The indices of the tails of 1-simplices with non-zero membership.

  • n_epochs (int) – The number of training epochs to use in optimization.

  • n_vertices (int) – The number of vertices (0-simplices) in the dataset.

  • epochs_per_sample (array of shape (n_1_simplices)) – A float value of the number of epochs per 1-simplex. 1-simplices with weaker membership strength will have more epochs between being sampled.

  • a (float) – Parameter of differentiable approximation of right adjoint functor

  • b (float) – Parameter of differentiable approximation of right adjoint functor

  • rng_state (array of int64, shape (3,)) – The internal state of the rng

  • gamma (float (optional, default 1.0)) – Weight to apply to negative samples.

  • initial_alpha (float (optional, default 1.0)) – Initial learning rate for the SGD.

  • negative_sample_rate (int (optional, default 5)) – Number of negative samples to use per positive sample.

  • parallel (bool (optional, default False)) – Whether to run the computation using numba parallel. Running in parallel is non-deterministic, and is not used if a random seed has been set, to ensure reproducibility.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

  • densmap (bool (optional, default False)) – Whether to use the density-augmented densMAP objective

  • densmap_kwds (dict (optional, default {})) – Auxiliary data for densMAP

Returns:

embedding (array of shape (n_samples, n_components)) – The optimized embedding.
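
An illustrative call, assuming the signature above together with the find_ab_params and make_epochs_per_sample helpers documented later in this module; the toy graph and every parameter value below are made up for the example. optimize_layout_generic and optimize_layout_inverse follow the same calling pattern.

    import numpy as np
    from topo.layouts import (optimize_layout_euclidean, find_ab_params,
                              make_epochs_per_sample)

    # Toy fuzzy graph: 4 vertices and 4 directed 1-simplices with membership weights.
    n_vertices = 4
    head = np.array([0, 1, 2, 3], dtype=np.int32)
    tail = np.array([1, 2, 3, 0], dtype=np.int32)
    weights = np.array([1.0, 0.8, 0.6, 0.9])

    n_epochs = 50
    epochs_per_sample = make_epochs_per_sample(weights, n_epochs)
    a, b = find_ab_params(spread=1.0, min_dist=0.3)

    # Random initial positions; head and tail embeddings are the same array
    # when optimizing a single embedding (see tail_embedding above).
    embedding = np.random.RandomState(42).normal(size=(n_vertices, 2)).astype(np.float32)
    rng_state = np.array([1, 2, 3], dtype=np.int64)

    embedding = optimize_layout_euclidean(
        embedding, embedding, head, tail, n_epochs, n_vertices,
        epochs_per_sample, a, b, rng_state)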

topo.layouts.optimize_layout_generic(head_embedding, tail_embedding, head, tail, n_epochs, n_vertices, epochs_per_sample, a, b, rng_state, gamma=1.0, initial_alpha=1.0, negative_sample_rate=5.0, output_metric=dist.euclidean, output_metric_kwds=(), verbose=False)

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high dimensional and low dimensional fuzzy simplicial sets. In practice this is done by sampling edges based on their membership strength (with the (1-p) terms coming from negative sampling similar to word2vec).

Parameters:
  • head_embedding (array of shape (n_samples, n_components)) – The initial embedding to be improved by SGD.

  • tail_embedding – The reference embedding of embedded points. If not embedding new previously unseen points with respect to an existing embedding this is simply the head_embedding (again); otherwise it provides the existing embedding to embed with respect to.

  • head (array of shape (n_1_simplices)) – The indices of the heads of 1-simplices with non-zero membership.

  • tail (array of shape (n_1_simplices)) – The indices of the tails of 1-simplices with non-zero membership.

  • weight (array of shape (n_1_simplices)) – The membership weights of the 1-simplices.

  • n_epochs (int) – The number of training epochs to use in optimization.

  • n_vertices (int) – The number of vertices (0-simplices) in the dataset.

  • epochs_per_sample (array of shape (n_1_simplices)) – A float value of the number of epochs per 1-simplex. 1-simplices with weaker membership strength will have more epochs between being sampled.

  • a (float) – Parameter of differentiable approximation of right adjoint functor

  • b (float) – Parameter of differentiable approximation of right adjoint functor

  • rng_state (array of int64, shape (3,)) – The internal state of the rng

  • gamma (float (optional, default 1.0)) – Weight to apply to negative samples.

  • initial_alpha (float (optional, default 1.0)) – Initial learning rate for the SGD.

  • negative_sample_rate (int (optional, default 5)) – Number of negative samples to use per positive sample.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

Returns:

embedding (array of shape (n_samples, n_components)) – The optimized embedding.

topo.layouts.optimize_layout_inverse(head_embedding, tail_embedding, head, tail, weight, sigmas, rhos, n_epochs, n_vertices, epochs_per_sample, a, b, rng_state, gamma=1.0, initial_alpha=1.0, negative_sample_rate=5.0, output_metric=dist.euclidean, output_metric_kwds=(), verbose=False)

Improve an embedding using stochastic gradient descent to minimize the fuzzy set cross entropy between the 1-skeletons of the high dimensional and low dimensional fuzzy simplicial sets. In practice this is done by sampling edges based on their membership strength (with the (1-p) terms coming from negative sampling similar to word2vec).

Parameters:
  • head_embedding (array of shape (n_samples, n_components)) – The initial embedding to be improved by SGD.

  • tail_embedding – The reference embedding of embedded points. If not embedding new previously unseen points with respect to an existing embedding this is simply the head_embedding (again); otherwise it provides the existing embedding to embed with respect to.

  • head (array of shape (n_1_simplices)) – The indices of the heads of 1-simplices with non-zero membership.

  • tail (array of shape (n_1_simplices)) – The indices of the tails of 1-simplices with non-zero membership.

  • weight (array of shape (n_1_simplices)) – The membership weights of the 1-simplices.

  • n_epochs (int) – The number of training epochs to use in optimization.

  • n_vertices (int) – The number of vertices (0-simplices) in the dataset.

  • epochs_per_sample (array of shape (n_1_simplices)) – A float value of the number of epochs per 1-simplex. 1-simplices with weaker membership strength will have more epochs between being sampled.

  • a (float) – Parameter of differentiable approximation of right adjoint functor

  • b (float) – Parameter of differentiable approximation of right adjoint functor

  • rng_state (array of int64, shape (3,)) – The internal state of the rng

  • gamma (float (optional, default 1.0)) – Weight to apply to negative samples.

  • initial_alpha (float (optional, default 1.0)) – Initial learning rate for the SGD.

  • negative_sample_rate (int (optional, default 5)) – Number of negative samples to use per positive sample.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

Returns:

embedding (array of shape (n_samples, n_components)) – The optimized embedding.

topo.layouts.spectral_layout(graph, dim, random_state, laplacian_type='normalized', eigen_tol=0.001, return_evals=False)

Given a graph compute the spectral embedding of the graph. This is simply the eigenvectors of the laplacian of the graph. Here we use the normalized laplacian.

Parameters:
  • graph (sparse matrix) – The (weighted) adjacency matrix of the graph as a sparse matrix.

  • dim (int) – The dimension of the space into which to embed.

  • random_state (numpy RandomState or equivalent) – A state capable of being used as a numpy random state.

Returns:

embedding (array of shape (n_vertices, dim)) – The spectral embedding of the graph.
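
A minimal sketch, assuming the signature above; the 5-vertex ring graph is made up for the example.

    import numpy as np
    from scipy.sparse import csr_matrix
    from topo.layouts import spectral_layout

    # Symmetric sparse adjacency matrix of a 5-vertex ring graph.
    rows = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 0])
    cols = np.array([1, 0, 2, 1, 3, 2, 4, 3, 0, 4])
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))

    # Spectral coordinates, often used to initialize the SGD layout optimizers above.
    init = spectral_layout(graph, dim=2, random_state=np.random.RandomState(0))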

topo.layouts.fuzzy_simplicial_set(X, n_neighbors=15, metric='cosine', backend='nmslib', n_jobs=1, set_op_mix_ratio=1.0, local_connectivity=1.0, apply_set_operations=True, return_dists=False, verbose=False, **kwargs)

Given a set of data X, a neighborhood size, and a measure of distance compute the fuzzy simplicial set (here represented as a fuzzy graph in the form of a sparse matrix) associated to the data. This is done by locally approximating geodesic distance at each point, creating a fuzzy simplicial set for each such point, and then combining all the local fuzzy simplicial sets into a global one via a fuzzy union.

Originally implemented by Leland McInnes at https://github.com/lmcinnes/umap under the BSD 3-Clause License.

Parameters:
  • X (array of shape (n_samples, n_features).) – The data to be modelled as a fuzzy simplicial set.

  • n_neighbors (int.) – The number of neighbors to use to approximate geodesic distance. Larger numbers induce more global estimates of the manifold that can miss finer detail, while smaller values will focus on fine manifold structure to the detriment of the larger picture.

  • backend (str (optional, default 'nmslib').) – Which backend to use for neighborhood search. Options are ‘nmslib’, ‘hnswlib’, ‘pynndescent’, ‘annoy’, ‘faiss’ and ‘sklearn’.

  • metric (str (optional, default 'cosine').) – Accepted metrics. Defaults to ‘cosine’. Accepted metrics include: ‘sqeuclidean’, ‘euclidean’, ‘l1’, ‘lp’ (requires setting the parameter p; equivalent to minkowski distance), ‘cosine’, ‘angular’, ‘negdotprod’, ‘levenshtein’, ‘hamming’, ‘jaccard’ and ‘jansen-shan’.

  • n_jobs (int (optional, default 1).) – Number of threads to be used in computation of nearest neighbors. Set to -1 to use all available CPUs.

  • knn_indices (array of shape (n_samples, n_neighbors) (optional).) – If the k-nearest neighbors of each point has already been calculated you can pass them in here to save computation time. This should be an array with the indices of the k-nearest neighbors as a row for each data point. Ignored if metric is ‘precomputed’.

  • knn_dists (array of shape (n_samples, n_neighbors) (optional).) – If the k-nearest neighbors of each point has already been calculated you can pass them in here to save computation time. This should be an array with the distances of the k-nearest neighbors as a row for each data point. Ignored if metric is ‘precomputed’.

  • set_op_mix_ratio (float (optional, default 1.0).) – Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial set. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.

  • local_connectivity (int (optional, default 1)) – The local connectivity required – i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value the more connected the manifold becomes locally. In practice this should be not more than the local intrinsic dimension of the manifold.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

  • return_dists (bool or None (optional, default None)) – Whether to return the pairwise distance associated with each edge.

  • **kwargs (dict (optional, default {}).) – Additional parameters to be passed to the backend approximate nearest-neighbors library. Use only parameters known to the desired backend library.

Returns:

  • fuzzy_ss (coo_matrix) – A fuzzy simplicial set represented as a sparse matrix. The (i, j) entry of the matrix represents the membership strength of the 1-simplex between the ith and jth sample points.

  • sigmas (array of shape (n_samples,)) – The normalization factor derived from the metric tensor approximation.

  • rhos (array of shape (n_samples,)) – The distance to the 1st nearest neighbor for each point.
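
A minimal sketch of building the fuzzy graph from raw data; the random data, the ‘sklearn’ backend choice and the unpacking into the three documented return values are assumptions for illustration.

    import numpy as np
    from topo.layouts import fuzzy_simplicial_set

    X = np.random.RandomState(0).normal(size=(200, 10))

    # Fuzzy 1-skeleton of X as a sparse matrix of membership strengths.
    fuzzy_ss, sigmas, rhos = fuzzy_simplicial_set(
        X, n_neighbors=15, metric='euclidean', backend='sklearn', n_jobs=1)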

topo.layouts.ts()
topo.layouts.fast_knn_indices(X, n_neighbors)

A fast computation of knn indices.

Parameters:
  • X (array of shape (n_samples, n_features)) – The input data to compute the k-neighbor indices of.

  • n_neighbors (int) – The number of nearest neighbors to compute for each sample in X.

Returns:

knn_indices (array of shape (n_samples, n_neighbors)) – The indices of the n_neighbors closest points in the dataset.
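
For example (illustrative data):

    import numpy as np
    from topo.layouts import fast_knn_indices

    X = np.random.RandomState(0).normal(size=(100, 5))
    knn_indices = fast_knn_indices(X, 10)  # shape (100, 10)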

topo.layouts.INT32_MIN
topo.layouts.INT32_MAX
topo.layouts.make_epochs_per_sample(weights, n_epochs)

Given a set of weights and number of epochs generate the number of epochs per sample for each weight.

Parameters:
  • weights (array of shape (n_1_simplices)) – The weights of how much we wish to sample each 1-simplex.

  • n_epochs (int) – The total number of epochs we want to train for.

Returns:

An array of number of epochs per sample, one for each 1-simplex.
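
For intuition, a quick sketch; the expected values assume the UMAP convention, in which the strongest 1-simplex is sampled every epoch.

    import numpy as np
    from topo.layouts import make_epochs_per_sample

    weights = np.array([1.0, 0.5, 0.1])
    # Weaker 1-simplices wait longer between samples; under the UMAP
    # convention this yields roughly [1., 2., 10.] epochs per sample.
    print(make_epochs_per_sample(weights, 100))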

topo.layouts.simplicial_set_embedding(graph, n_components, initial_alpha, a, b, gamma, negative_sample_rate, n_epochs, init, random_state, metric, metric_kwds, densmap, densmap_kwds, output_dens, output_metric=dist.named_distances_with_gradients['euclidean'], output_metric_kwds={}, euclidean_output=True, parallel=True, verbose=False)

Perform a fuzzy simplicial set embedding, using a specified initialisation method and then minimizing the fuzzy set cross entropy between the 1-skeletons of the high and low dimensional fuzzy simplicial sets.

Parameters:
  • graph (sparse matrix) – The 1-skeleton of the high dimensional fuzzy simplicial set as represented by a graph for which we require a sparse matrix for the (weighted) adjacency matrix.

  • n_components (int) – The dimensionality of the euclidean space into which to embed the data.

  • initial_alpha (float) – Initial learning rate for the SGD.

  • a (float) – Parameter of differentiable approximation of right adjoint functor

  • b (float) – Parameter of differentiable approximation of right adjoint functor

  • gamma (float) – Weight to apply to negative samples.

  • negative_sample_rate (int (optional, default 5)) – The number of negative samples to select per positive sample in the optimization process. Increasing this value will result in greater repulsive force being applied, greater optimization cost, but slightly more accuracy.

  • n_epochs (int (optional, default 0)) – The number of training epochs to be used in optimizing the low dimensional embedding. Larger values result in more accurate embeddings. If 0 is specified a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).

  • init (string) –

    How to initialize the low dimensional embedding. Options are:
    • ’spectral’: use a spectral embedding of the fuzzy 1-skeleton

    • ’random’: assign initial embedding positions at random.

    • A numpy array of initial embedding positions.

  • random_state (numpy RandomState or equivalent) – A state capable of being used as a numpy random state.

  • metric (string or callable) – The metric used to measure distance in high dimensional space; used if multiple connected components need to be laid out.

  • metric_kwds (dict) – Key word arguments to be passed to the metric function; used if multiple connected components need to be laid out.

  • densmap (bool) – Whether to use the density-augmented objective function to optimize the embedding according to the densMAP algorithm.

  • densmap_kwds (dict) – Key word arguments to be used by the densMAP optimization.

  • output_dens (bool) – Whether to output local radii in the original data and the embedding.

  • output_metric (function) – Function returning the distance between two points in embedding space and the gradient of the distance wrt the first argument.

  • output_metric_kwds (dict) – Key word arguments to be passed to the output_metric function.

  • euclidean_output (bool) – Whether to use the faster code specialised for euclidean output metrics

  • parallel (bool (optional, default False)) – Whether to run the computation using numba parallel. Running in parallel is non-deterministic, and is not used if a random seed has been set, to ensure reproducibility.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

Returns:

  • embedding (array of shape (n_samples, n_components)) – The optimized embedding of graph into an n_components-dimensional euclidean space.

  • aux_data (dict) – Auxiliary output returned with the embedding. When densMAP extension is turned on, this dictionary includes local radii in the original data (rad_orig) and in the embedding (rad_emb).
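
An illustrative positional call matching the documented argument order; the graph comes from fuzzy_simplicial_set above, every concrete value is an assumption, and the two-value unpacking follows the documented returns.

    import numpy as np
    from topo.layouts import (fuzzy_simplicial_set, find_ab_params,
                              simplicial_set_embedding)

    X = np.random.RandomState(0).normal(size=(300, 20))
    graph, sigmas, rhos = fuzzy_simplicial_set(X, n_neighbors=15,
                                               metric='euclidean',
                                               backend='sklearn')
    a, b = find_ab_params(spread=1.0, min_dist=0.3)

    # Positional arguments: graph, n_components, initial_alpha, a, b, gamma,
    # negative_sample_rate, n_epochs, init, random_state, metric, metric_kwds,
    # densmap, densmap_kwds, output_dens
    embedding, aux_data = simplicial_set_embedding(
        graph, 2, 1.0, a, b, 1.0, 5, 200, 'spectral',
        np.random.RandomState(42), 'euclidean', {}, False, {}, False)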

topo.layouts.find_ab_params(spread, min_dist)

Fit a, b params for the differentiable curve used in lower dimensional fuzzy simplicial complex construction. We want the smooth curve (from a pre-defined family with simple gradient) that best matches an offset exponential decay.
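
A short sketch; the spread and min_dist values shown are the defaults used by fuzzy_embedding below.

    from topo.layouts import find_ab_params

    # a and b parameterize the low-dimensional membership curve fit to an
    # offset exponential decay with the given spread and min_dist.
    a, b = find_ab_params(spread=1.0, min_dist=0.3)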

topo.layouts.fuzzy_embedding(graph, n_components=2, initial_alpha=1, min_dist=0.3, spread=1, n_epochs=600, metric='cosine', metric_kwds={}, output_metric='euclidean', output_metric_kwds={}, gamma=1.0, negative_sample_rate=5, init='spectral', random_state=None, euclidean_output=True, parallel=True, verbose=False, a=None, b=None, densmap=False, densmap_kwds={}, output_dens=False)

Perform a fuzzy simplicial set embedding, using a specified initialisation method and then minimizing the fuzzy set cross entropy between the 1-skeletons of the high and low dimensional fuzzy simplicial sets. The fuzzy simplicial set embedding was proposed and implemented by Leland McInnes in UMAP (see umap-learn at https://github.com/lmcinnes/umap). Here we’re using it only for the projection (layout optimization).

Parameters:
  • graph (sparse matrix) – The 1-skeleton of the high dimensional fuzzy simplicial set as represented by a graph for which we require a sparse matrix for the (weighted) adjacency matrix.

  • n_components (int) – The dimensionality of the euclidean space into which to embed the data.

  • initial_alpha (float) – Initial learning rate for the SGD.

  • min_dist (float (optional, default 0.3)) – The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result on a more even dispersal of points. The value should be set relative to the spread value, which determines the scale at which embedded points will be spread out.

  • spread (float (optional, default 1.0)) – The effective scale of embedded points. In combination with min_dist this determines how clustered/clumped the embedded points are.

  • gamma (float) – Weight to apply to negative samples.

  • negative_sample_rate (int (optional, default 5)) – The number of negative samples to select per positive sample in the optimization process. Increasing this value will result in greater repulsive force being applied, greater optimization cost, but slightly more accuracy.

  • n_epochs (int (optional, default 0)) – The number of training epochs to be used in optimizing the low dimensional embedding. Larger values result in more accurate embeddings. If 0 is specified a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).

  • init (string) –

    How to initialize the low dimensional embedding. Options are:
    • ’spectral’: use a spectral embedding of the fuzzy 1-skeleton

    • ’random’: assign initial embedding positions at random.

    • A numpy array of initial embedding positions.

  • random_state (numpy RandomState or equivalent) – A state capable of being used as a numpy random state.

  • metric (string or callable) – The metric used to measure distance in high dimensional space; used if multiple connected components need to be laid out.

  • metric_kwds (dict) – Key word arguments to be passed to the metric function; used if multiple connected components need to be laid out.

  • densmap (bool) – Whether to use the density-augmented objective function to optimize the embedding according to the densMAP algorithm.

  • densmap_kwds (dict) – Key word arguments to be used by the densMAP optimization.

  • output_dens (bool) – Whether to output local radii in the original data and the embedding.

  • output_metric (function) – Function returning the distance between two points in embedding space and the gradient of the distance wrt the first argument.

  • output_metric_kwds (dict) – Key word arguments to be passed to the output_metric function.

  • euclidean_output (bool) – Whether to use the faster code specialised for euclidean output metrics

  • parallel (bool (optional, default False)) – Whether to run the computation using numba parallel. Running in parallel is non-deterministic, and is not used if a random seed has been set, to ensure reproducibility.

  • verbose (bool (optional, default False)) – Whether to report information on the current progress of the algorithm.

  • a (float) – Parameter of differentiable approximation of right adjoint functor

  • b (float) – Parameter of differentiable approximation of right adjoint functor

Returns:

  • embedding (array of shape (n_samples, n_components)) – The optimized embedding of graph into an n_components-dimensional euclidean space.

  • aux_data (dict) – Auxiliary output returned with the embedding. When densMAP extension is turned on, this dictionary includes local radii in the original data (rad_orig) and in the embedding (rad_emb).

  • Y_init (array of shape (n_samples, n_components)) – The spectral initialization of graph into an n_components dimensional euclidean space.
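
A minimal end-to-end sketch, assuming the documented signatures; the random data and the ‘sklearn’ backend choice are illustrative, and the single results variable sidesteps the exact return arity.

    import numpy as np
    from topo.layouts import fuzzy_simplicial_set, fuzzy_embedding

    X = np.random.RandomState(0).normal(size=(300, 20))

    graph, sigmas, rhos = fuzzy_simplicial_set(X, n_neighbors=15,
                                               metric='euclidean',
                                               backend='sklearn')

    results = fuzzy_embedding(graph, n_components=2, n_epochs=200,
                              init='spectral',
                              random_state=np.random.RandomState(42))
    # Documented outputs: the optimized embedding, the aux_data dict
    # (populated when densMAP is on) and the spectral initialization Y_init.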

topo.layouts.Isomap(X, n_components=2, n_neighbors=50, metric='cosine', landmarks=None, landmark_method='kmeans', eig_tol=0, n_jobs=1, **kwargs)

Isomap embedding of a graph. This is a highly efficient implementation that can also operate with landmarks.

Parameters:
  • X (array-like or sparse) – The input data.

  • n_components (int (optional, default 2).) – The number of components to embed into.

  • n_neighbors (int (optional, default 50).) – The number of neighbors to use for the geodesic distance matrix.

  • metric (str (optional, default 'cosine').) – The metric to use for the geodesic distance matrix. Can be ‘precomputed’.

  • landmarks (int or array of shape (n_samples,) (optional, default None).) – If passed as int, will obtain the number of landmarks. If passed as np.ndarray, will use the specified indexes in the array. Any value other than None will result in only the specified landmarks being used in the layout optimization, and will populate the Projector.landmarks_ slot.

  • landmark_method (str (optional, default 'kmeans').) – The method to use for selecting landmarks. If landmarks is passed as an int, this will be used to select the landmarks. Can be either ‘kmeans’ or ‘random’.

  • eig_tol (float (optional, default 0).) – Stopping criterion for eigendecomposition of the pairwise geodesics matrix.

  • n_jobs (int (optional, default 1).) – The number of jobs to use for the computation.

  • **kwargs (dict (optional, default {}).) – Additional keyword arguments to pass to the kNN function.

Returns:

Y (array of shape (n_samples, n_components)) – The embedding vectors.
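
For example (the data and parameter choices are illustrative):

    import numpy as np
    from topo.layouts import Isomap

    X = np.random.RandomState(0).normal(size=(500, 30))

    Y = Isomap(X, n_components=2, n_neighbors=30, metric='euclidean')
    # Passing e.g. landmarks=100 with landmark_method='kmeans' would restrict
    # the eigendecomposition to 100 k-means-selected landmarks.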

class topo.layouts.Projector(n_components=2, projection_method='MAP', metric='euclidean', n_neighbors=10, n_jobs=1, landmarks=None, landmark_method='kmeans', num_iters=800, init='spectral', nbrs_backend='nmslib', keep_estimator=False, random_state=None, verbose=False)

Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin

A scikit-learn compatible class that handles all projection methods. Ideally, it takes in either a orthonormal eigenbasis or a graph kernel learned from such an eigenbasis. It is included in TopOMetry to allow custom TopOGraph-like pipelines (projection is the final step).

Parameters:
  • n_components (int (optional, default 2).) – Number of dimensions to optimize the layout to. Usually 2 or 3 if you’re into visualizing data.

  • projection_method (str (optional, default 'MAP').) – Which projection method to use. Only ‘Isomap’, ‘t-SNE’ and ‘MAP’ are implemented out of the box; ‘MAP’ relies on code that is adapted from UMAP. These are frankly quite direct to add, so feel free to make a feature request if your favorite method is not listed here.

  • metric (str (optional, default 'euclidean').) – The metric to use when computing distances. Possible values are: ‘cosine’, ‘euclidean’ and others. Accepts precomputed distances (‘precomputed’).

  • n_neighbors (int (optional, default 10).) – The number of neighbors to use when computing the kernel matrix. Ignored if pairwise is set to True.

  • landmarks (int or np.ndarray (optional, default None).) – If passed as int, will obtain the number of landmarks. If passed as np.ndarray, will use the specified indexes in the array. Any value other than None will result in only the specified landmarks being used in the layout optimization, and will populate the Projector.landmarks_ slot.

  • landmark_method (str (optional, default 'kmeans').) – The method to use for selecting landmarks. If landmarks is passed as an int, this will be used to select the landmarks. Can be either ‘kmeans’ or ‘random’.

  • num_iters (int (optional, default 800).) – Most (if not all) methods optimize the layout up to a limited number of iterations. Use this parameter to set that limit.

  • keep_estimator (bool (optional, default False).) – Whether to keep the used estimator as Projector.estimator_ after fitting. Useful if you want to use it later (e.g. UMAP allows inverse transforms and out-of-sample mapping).

__repr__()

Return repr(self).

_parse_backend()
fit(X, **kwargs)

Calls the desired projection method on the specified data.

Parameters:
  • X (array-like, shape (n_samples, n_features) or topo.Kernel() class.) – The set of points to compute the kernel matrix for. Accepts np.ndarrays and scipy.sparse matrices or a topo.Kernel() object. If precomputed, assumed to be a square symmetric semidefinite matrix.

  • kwargs (dict (optional, default {}).) – Additional keyword arguments for the desired projection method.

Returns:

Projector() class with populated Projector.Y_ attribute.

transform(X=None)

Calls the transform method of the desired method. If the desired method does not have a transform method, returns the results from the fit method.

Returns:

Y (np.ndarray (n_samples, n_components).) – Projection results

fit_transform(X, **kwargs)

Calls the fit_transform method of the desired method. If the desired method does not have a fit_transform method, returns the results from the fit method.

Returns:

Y (np.ndarray (n_samples, n_components).) – Projection results
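
A typical scikit-learn-style round trip with Projector; the data and hyperparameter choices below are illustrative.

    import numpy as np
    from topo.layouts import Projector

    X = np.random.RandomState(0).normal(size=(400, 25))

    proj = Projector(n_components=2, projection_method='MAP',
                     metric='euclidean', n_neighbors=10)
    Y = proj.fit_transform(X)   # np.ndarray of shape (400, 2)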