Learning Vector Quantization
Learning Vector Quantization (LVQ) [1] attempts to construct a highly
sparse model of the data by representing the data classes by prototypes.
Prototypes are vectors in the data space which are placed such that
they achieve a good nearest-neighbor classification accuracy. More
formally, for a dataset {(x_1, y_1), ..., (x_m, y_m)} with data points
x_i in R^n and labels y_i in {1, ..., C}, LVQ attempts to place K
prototypes w_1, ..., w_K with labels c(w_1), ..., c(w_K) in {1, ..., C}
in the data space, such that as many data points as possible are
correctly classified by assigning the label of the closest prototype.
The number of prototypes K is a hyper-parameter to be specified by the
user. By default, one prototype per class is used.
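To make the placement idea concrete, the following is a minimal sketch of the classic LVQ1 training rule in plain NumPy (not the optimization used by the models in this package, which minimize a cost function): the closest prototype is pulled toward a sample when their labels agree and pushed away otherwise. All function and parameter names here are illustrative, not part of the library API.

```python
import numpy as np

def lvq1_fit(X, y, prototypes_per_class=1, lr=0.1, epochs=30, seed=0):
    """Train LVQ1 prototypes: attract the closest prototype when its
    label matches the sample's label, repel it otherwise."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    # Initialize each prototype near the mean of its class.
    for c in np.unique(y):
        Xc = X[y == c]
        for _ in range(prototypes_per_class):
            protos.append(Xc.mean(axis=0)
                          + 0.01 * rng.standard_normal(X.shape[1]))
            labels.append(c)
    W, wy = np.array(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # Index of the prototype closest to sample i.
            k = np.argmin(((W - X[i]) ** 2).sum(axis=1))
            step = lr * (X[i] - W[k])
            W[k] += step if wy[k] == y[i] else -step
    return W, wy

def lvq_predict(W, wy, X):
    """Assign each sample the label of its nearest prototype."""
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return wy[np.argmin(d, axis=1)]
```

On well-separated data this already yields an accurate nearest-prototype classifier with just K = C stored vectors, which is the sparsity the paragraph above refers to.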
Dimensionality Reduction
The relevances learned by a GrlvqModel, GmlvqModel, LgmlvqModel,
MrslvqModel or LmrslvqModel can be applied for dimensionality
reduction by projecting the data onto the eigenvectors of the
relevance matrix which correspond to the largest eigenvalues.
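The projection described above can be sketched as follows, assuming a symmetric positive semi-definite relevance matrix is already available (how to extract it from a fitted model depends on the model class and is not shown here):

```python
import numpy as np

def relevance_projection(X, relevance, n_components=2):
    """Project X onto the leading eigenvectors of a symmetric
    relevance matrix, i.e. the directions LVQ found most
    discriminative."""
    eigvals, eigvecs = np.linalg.eigh(relevance)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]  # largest first
    return X @ eigvecs[:, order]
```

Because the eigenvalues of the relevance matrix weight each direction's contribution to the learned distance, discarding the small-eigenvalue directions preserves most of the class-discriminative structure.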
References:
[1] Kohonen, Teuvo: "Learning Vector Quantization." In: Self-Organizing Maps, pp. 175-189, Springer, 1995.