sklearn_lvq.GlvqModel

class sklearn_lvq.GlvqModel(prototypes_per_class=1, initial_prototypes=None, max_iter=2500, gtol=1e-05, beta=2, C=None, display=False, random_state=None)

Generalized Learning Vector Quantization (GLVQ) classifier.

Parameters:
prototypes_per_class : int or list of int, optional (default=1)

Number of prototypes per class. Use list to specify different numbers per class.

initial_prototypes : array-like, shape = [n_prototypes, n_features + 1], optional

Prototypes to start with. If not given, prototypes are initialized near the class means. The class label must be placed as the last entry of each prototype.

max_iter : int, optional (default=2500)

The maximum number of iterations.

gtol : float, optional (default=1e-5)

Gradient norm must be less than gtol for l-bfgs-b to terminate successfully.

beta : int, optional (default=2)

Scaling parameter of the squashing function phi: phi(x) = 1 / (1 + exp(-beta * x)).

C : array-like, shape = [2, 3], optional

Weights for misclassification, given as rows of the form (y_real, y_pred, weight). By default all weights are one, so only the weights that differ from one need to be specified.

display : boolean, optional (default=False)

Print information about the l-bfgs-b steps.

random_state : int, RandomState instance or None, optional

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
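A sketch of the layouts these parameters expect (the array values below are made up): initial_prototypes carries the class label in its last column, and C lists one (y_real, y_pred, weight) triple per row.

```python
import numpy as np

# Two starting prototypes for a 2-feature problem; the last column is the class label.
initial_prototypes = np.array([
    [0.0, 0.0, 0],   # prototype for class 0 at (0, 0)
    [1.0, 1.0, 1],   # prototype for class 1 at (1, 1)
])
assert initial_prototypes.shape == (2, 2 + 1)  # [n_prototypes, n_features + 1]

# Penalize predicting class 1 when the true class is 0 twice as heavily;
# any (y_real, y_pred) pair not listed keeps the default weight of 1.
C = np.array([
    [0, 1, 2.0],
    [1, 0, 1.0],
])
```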

Attributes:
w_ : array-like, shape = [n_prototypes, n_features]

Prototype vectors, where n_prototypes is the number of prototypes and n_features is the number of features.

c_w_ : array-like, shape = [n_prototypes]

Prototype classes

classes_ : array-like, shape = [n_classes]

Array containing the distinct class labels.
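For orientation, the quantity GLVQ optimizes can be sketched in plain NumPy: each sample's relative distance difference mu = (d_plus - d_minus) / (d_plus + d_minus) between the closest correct-class and closest wrong-class prototype is squashed through phi. This is a sketch of the standard GLVQ cost, not this package's internal code:

```python
import numpy as np

def glvq_cost(x, y, prototypes, proto_labels, beta=2):
    """Standard GLVQ cost for a single sample (illustrative sketch)."""
    d = np.sum((prototypes - x) ** 2, axis=1)     # squared distances to all prototypes
    same = proto_labels == y
    d_plus = d[same].min()                        # closest prototype with the correct label
    d_minus = d[~same].min()                      # closest prototype with a wrong label
    mu = (d_plus - d_minus) / (d_plus + d_minus)  # in [-1, 1]; negative means correct side
    return 1.0 / (1.0 + np.exp(-beta * mu))       # phi squashes mu into (0, 1)

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
proto_labels = np.array([0, 1])
# A sample near the class-0 prototype yields a cost below 0.5 (correctly classified side).
print(glvq_cost(np.array([0.1, 0.0]), 0, prototypes, proto_labels))
```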

Methods

decision_function(x) Predict confidence scores for samples.
fit(x, y) Fit the LVQ model to the given training data and parameters using l-bfgs-b.
get_params([deep]) Get parameters for this estimator.
phi(x) Apply the squashing function 1 / (1 + exp(-beta * x)).
phi_prime(x) Evaluate the derivative of the squashing function.
predict(x) Predict class membership index for each input sample.
project(x, dims[, print_variance_covered]) Project the input data x to dims dimensions using the relevance matrix of the trained model.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
__init__(prototypes_per_class=1, initial_prototypes=None, max_iter=2500, gtol=1e-05, beta=2, C=None, display=False, random_state=None)

x.__init__(…) initializes x; see help(type(x)) for signature

decision_function(x)

Predict confidence scores for samples.

Parameters:
x : array-like, shape = [n_samples, n_features]
Returns:
T : array-like, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
fit(x, y)

Fit the LVQ model to the given training data and parameters using l-bfgs-b.

Parameters:
x : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array, shape = [n_samples]

Target values (class labels as integers).

Returns:
self
get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

phi(x)

The squashing function 1 / (1 + exp(-beta * x)).

Parameters:
x : input value
phi_prime(x)

The derivative of the squashing function.

Parameters:
x : input value
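Based on the formula given for beta above, these two methods can be sketched as follows (an illustration, not the library's own implementation):

```python
import numpy as np

def phi(x, beta=2):
    # Sigmoidal squashing function: 1 / (1 + exp(-beta * x)).
    return 1.0 / (1.0 + np.exp(-beta * x))

def phi_prime(x, beta=2):
    # For a sigmoid, the derivative is beta * phi(x) * (1 - phi(x)).
    p = phi(x, beta)
    return beta * p * (1.0 - p)

print(phi(0.0))        # 0.5 at the decision boundary (mu = 0)
print(phi_prime(0.0))  # steepest slope at the boundary: beta / 4 = 0.5
```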
predict(x)

Predict class membership index for each input sample.

This function performs classification on an array of test vectors x.

Parameters:
x : array-like, shape = [n_samples, n_features]
Returns:
C : array, shape = (n_samples,)

Returns predicted values.

project(x, dims, print_variance_covered=False)

Project the input data x to dims dimensions using the relevance matrix of the trained model.

Parameters:
x : array-like, shape = [n, n_features]

Input data to project.

dims : int

Dimension to project to.

print_variance_covered : boolean

Flag to print the variance covered by the projection.

Returns:
C : array, shape = [n, dims]

Returns the projected data.
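One plausible reading of this method, sketched with a made-up stand-in relevance matrix (a trained model would supply its own): project onto the leading eigenvectors of the relevance matrix and report the fraction of its spectrum they cover.

```python
import numpy as np

def project_sketch(x, relevance, dims, print_variance_covered=False):
    # Eigendecompose the symmetric relevance matrix and keep the leading eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(relevance)
    order = np.argsort(eigvals)[::-1]      # largest eigenvalues first
    eigvals = eigvals[order]
    eigvecs = eigvecs[:, order]
    if print_variance_covered:
        print(eigvals[:dims].sum() / eigvals.sum())
    return x @ eigvecs[:, :dims]           # shape [n, dims]

x = np.random.default_rng(0).normal(size=(5, 3))
relevance = np.diag([3.0, 1.0, 0.1])       # stand-in relevance matrix
projected = project_sketch(x, relevance, dims=2)
assert projected.shape == (5, 2)
```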

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) with respect to y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self