Whitening.jl Documentation
Types
Whitening.AbstractWhiteningTransform
— Type
Abstract type which represents a whitening transformation.
Whitening.Chol
— Type
Chol{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Cholesky whitening transform.
Given the Cholesky decomposition of the inverse covariance matrix, $Σ⁻¹ = LLᵀ$, we have the whitening matrix, $W = Lᵀ$.
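This relationship can be sketched with plain LinearAlgebra on an illustrative 2 × 2 covariance matrix (this is not the package's internal code):

```julia
using LinearAlgebra

Σ = [4.0 1.0; 1.0 3.0]                 # illustrative positive definite covariance
L = cholesky(Symmetric(inv(Σ))).L      # lower factor of Σ⁻¹ = LLᵀ
W = L'                                 # whitening matrix W = Lᵀ
W' * W ≈ inv(Σ)                        # true: WᵀW recovers Σ⁻¹
W * Σ * W' ≈ I                         # true: the whitened covariance is the identity
```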
Whitening.Chol
— Method
Chol(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a Cholesky transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
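An illustrative usage sketch, assuming the constructor and the whiten method documented on this page; the data are random, so q ≥ n holds and the variances are almost surely nonzero:

```julia
using Whitening, Statistics, LinearAlgebra

X = randn(100, 3)      # q = 100 samples of an n = 3 dimensional variable
K = Chol(X)
Z = whiten(K, X)       # rows of Z are the whitened samples
cov(Z) ≈ I(3)          # sample covariance of the whitened rows is ≈ identity
```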
Whitening.Chol
— Method
Chol(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a Cholesky transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.
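As a quick sanity check (a sketch, assuming the documented whiten method), the mean vector itself whitens to the zero vector, since z = W(μ - μ):

```julia
using Whitening

μ = [1.0, -2.0]
Σ = [2.0 0.5; 0.5 1.0]        # symmetric positive definite
K = Chol(μ, Σ)
whiten(K, μ) ≈ zeros(2)       # true: W(μ - μ) = 0
```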
Whitening.GeneralizedPCA
— Type
GeneralizedPCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Principal component analysis (PCA) whitening transform, generalized to support compression based on either
- a pre-determined number of components,
- a fraction of the total squared cross-covariance, or
- a relative tolerance on the number of eigenvalues greater than rtol*λ₁, where λ₁ is the largest eigenvalue of the covariance matrix.
Given the eigendecomposition of the $n × n$ covariance matrix, $Σ = UΛUᵀ$, with eigenvalues sorted in descending order, i.e. $λ₁ ≥ λ₂ ⋯ ≥ λₙ$, the first $m$ components are selected according to one or more of the criteria listed above.
If $m = n$, then we have the canonical PCA whitening matrix, $W = Λ^{-\frac{1}{2}}Uᵀ$. Otherwise, for $m < n$, a map from $ℝⁿ ↦ ℝᵐ$ is formed by removing the $n - m$ rows from $W$, i.e. the components with the $n - m$ smallest eigenvalues are removed. This is equivalent to selecting the $m × m$ matrix from the upper left of $Λ$ and the $m × n$ matrix from the top of $Uᵀ$. The inverse transform is then formed by selecting the $n × m$ matrix from the left of $U$ and the same $m × m$ matrix from the upper left of $Λ$.
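The truncation can be sketched with plain LinearAlgebra on an illustrative covariance matrix (this is not the package's internal code):

```julia
using LinearAlgebra

Σ = [3.0 1.0 0.0; 1.0 2.0 0.5; 0.0 0.5 1.0]
E = eigen(Symmetric(Σ))                 # eigenvalues in ascending order
λ = reverse(E.values)                   # descending: λ₁ ≥ λ₂ ≥ λ₃
U = reverse(E.vectors, dims = 2)        # matching eigenvector columns
m = 2
W = Diagonal(1 ./ sqrt.(λ[1:m])) * U[:, 1:m]'   # m × n map, ℝⁿ ↦ ℝᵐ
Winv = U[:, 1:m] * Diagonal(sqrt.(λ[1:m]))      # n × m inverse transform
```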
Whitening.GeneralizedPCA
— Method
GeneralizedPCA(X::AbstractMatrix{T};
               num_components::Union{Int, Nothing}=nothing,
               vmin::Union{T, Nothing}=nothing,
               rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}
Construct a generalized PCA transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
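A usage sketch, assuming the keyword arguments behave as documented for the method below:

```julia
using Whitening

X = randn(200, 5)
K = GeneralizedPCA(X; num_components = 2)
z = whiten(K, X[1, :])    # compressed, whitened vector
length(z) == 2            # output dimension matches num_components
```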
Whitening.GeneralizedPCA
— Method
GeneralizedPCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T};
               num_components::Union{Int, Nothing}=nothing,
               vmin::Union{T, Nothing}=nothing,
               rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}
Construct a generalized PCA transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive semi-definite.
The output dimension, m, of the transformer is determined from the optional arguments, where
- 0 ≤ num_components ≤ n is a pre-determined size,
- 0 ≤ vmin ≤ 1 is the fraction of the total squared cross-covariance, hence, m is the smallest value such that sum(λ[1:m]) ≥ vmin*sum(λ), where $λᵢ, i=1,…,n$ are the eigenvalues of Σ in descending order, and
- rtol is the relative tolerance on the number of eigenvalues greater than rtol*λ₁, where λ₁ is the largest eigenvalue of Σ.
If none of the 3 options are provided, the default is rtol = n*eps(T). If 2 or more options are provided, the minimum of the resultant sizes will be chosen.
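A sketch of how these criteria interact, using plain Julia on an illustrative descending eigenvalue vector (the names m_vmin and m_rtol are hypothetical, not part of the API):

```julia
λ = [5.0, 3.0, 1.5, 0.4, 0.1]    # eigenvalues of Σ in descending order
n = length(λ)

vmin = 0.9
m_vmin = findfirst(m -> sum(λ[1:m]) >= vmin * sum(λ), 1:n)   # smallest such m

rtol = 0.05
m_rtol = count(>=(rtol * λ[1]), λ)   # eigenvalues exceeding rtol*λ₁

m = min(m_vmin, m_rtol)   # when several options are given, the minimum wins
```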
Whitening.GeneralizedPCAcor
— Type
GeneralizedPCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Scale-invariant principal component analysis (PCAcor) whitening transform, generalized to support compression based on either
- a pre-determined number of components,
- a fraction of the total squared cross-correlation, or
- a relative tolerance on the number of eigenvalues greater than rtol*θ₁, where θ₁ is the largest eigenvalue of the correlation matrix.
Given the eigendecomposition of the $n × n$ correlation matrix, $P = GΘGᵀ$, with eigenvalues sorted in descending order, i.e. $θ₁ ≥ θ₂ ⋯ ≥ θₙ$, the first $m$ components are selected according to one or more of the criteria listed above.
If $m = n$, then we have the canonical PCA-cor whitening matrix, $W = Θ^{-\frac{1}{2}}GᵀV^{-\frac{1}{2}}$. Otherwise, for $m < n$, a map from $ℝⁿ ↦ ℝᵐ$ is formed by removing the $n - m$ rows from $W$, i.e. the components with the $n - m$ smallest eigenvalues are removed. This is equivalent to selecting the $m × m$ matrix from the upper left of $Θ$ and the $m × n$ matrix from the top of $Gᵀ$. The inverse transform is then formed by selecting the $n × m$ matrix from the left of $G$ and the same $m × m$ matrix from the upper left of $Θ$.
Whitening.GeneralizedPCAcor
— Method
GeneralizedPCAcor(X::AbstractMatrix{T};
                  num_components::Union{Int, Nothing}=nothing,
                  vmin::Union{T, Nothing}=nothing,
                  rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}
Construct a generalized PCAcor transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
Whitening.GeneralizedPCAcor
— Method
GeneralizedPCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T};
                  num_components::Union{Int, Nothing}=nothing,
                  vmin::Union{T, Nothing}=nothing,
                  rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}
Construct a generalized PCAcor transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive semi-definite.
The decomposition, $Σ = V^{\frac{1}{2}} P V^{\frac{1}{2}}$, where $V$ is the diagonal matrix of variances and $P$ is a correlation matrix, must be well-formed in order to obtain a meaningful result. That is, if the diagonal of Σ contains one or more zero elements, then it is not possible to compute $P = V^{-\frac{1}{2}} Σ V^{-\frac{1}{2}}$.
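A sketch of this decomposition with LinearAlgebra, assuming all variances on the diagonal of Σ are nonzero:

```julia
using LinearAlgebra

Σ = [4.0 1.2; 1.2 9.0]
Vinvsqrt = Diagonal(1 ./ sqrt.(diag(Σ)))   # V^(-1/2); requires nonzero variances
P = Vinvsqrt * Σ * Vinvsqrt                # correlation matrix
diag(P) ≈ ones(2)                          # unit diagonal, as required
```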
The output dimension, m, of the transformer is determined from the optional arguments, where
- 0 ≤ num_components ≤ n is a pre-determined size,
- 0 ≤ vmin ≤ 1 is the fraction of the total squared cross-correlation, hence, m is the smallest value such that sum(θ[1:m]) ≥ vmin*sum(θ), where $θᵢ, i=1,…,n$ are the eigenvalues of $P$ in descending order, and
- rtol is the relative tolerance on the number of eigenvalues greater than rtol*θ₁, where θ₁ is the largest eigenvalue of $P$.
If none of the 3 options are provided, the default is rtol = n*eps(T). If 2 or more options are provided, the minimum of the resultant sizes will be chosen.
Whitening.PCA
— Type
PCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Principal component analysis (PCA) whitening transform.
Given the eigendecomposition of the covariance matrix, $Σ = UΛUᵀ$, we have the whitening matrix, $W = Λ^{-\frac{1}{2}}Uᵀ$.
Whitening.PCA
— Method
PCA(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a PCA transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
Whitening.PCA
— Method
PCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a PCA transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.
Whitening.PCAcor
— Type
PCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Scale-invariant principal component analysis (PCA-cor) whitening transform.
Given the eigendecomposition of the correlation matrix, $P = GΘGᵀ$, and the diagonal variance matrix, $V$, we have the whitening matrix, $W = Θ^{-\frac{1}{2}}GᵀV^{-\frac{1}{2}}$.
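The construction can be sketched end to end with LinearAlgebra on an illustrative covariance matrix (this is not the package's internal code):

```julia
using LinearAlgebra

Σ = [4.0 1.2; 1.2 9.0]
Vinvsqrt = Diagonal(1 ./ sqrt.(diag(Σ)))   # V^(-1/2)
P = Vinvsqrt * Σ * Vinvsqrt                # correlation matrix
Θ, G = eigen(Symmetric(P))                 # P = GΘGᵀ
W = Diagonal(1 ./ sqrt.(Θ)) * G' * Vinvsqrt
W * Σ * W' ≈ I                             # whitened covariance is the identity
```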
Whitening.PCAcor
— Method
PCAcor(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a PCAcor transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
Whitening.PCAcor
— Method
PCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a PCAcor transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.
Whitening.ZCA
— Type
ZCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Zero-phase component analysis (ZCA) whitening transform.
Given the covariance matrix, $Σ$, we have the whitening matrix, $W = Σ^{-\frac{1}{2}}$.
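A sketch of $Σ^{-\frac{1}{2}}$ via the eigendecomposition, $Σ^{-\frac{1}{2}} = UΛ^{-\frac{1}{2}}Uᵀ$, using LinearAlgebra on an illustrative matrix:

```julia
using LinearAlgebra

Σ = Symmetric([2.0 0.3; 0.3 1.0])
Λ, U = eigen(Σ)
W = U * Diagonal(1 ./ sqrt.(Λ)) * U'   # Σ^(-1/2)
W * Σ * W' ≈ I                         # whitened covariance is the identity
W ≈ W'                                 # unlike PCA, the ZCA matrix is symmetric
```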
Whitening.ZCA
— Method
ZCA(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a ZCA transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
Whitening.ZCA
— Method
ZCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a ZCA transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.
Whitening.ZCAcor
— Type
ZCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}
Scale-invariant zero-phase component analysis (ZCA-cor) whitening transform.
Given the correlation matrix, $P$, and the diagonal variance matrix, $V$, we have the whitening matrix, $W = P^{-\frac{1}{2}}V^{-\frac{1}{2}}$.
Whitening.ZCAcor
— Method
ZCAcor(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a ZCAcor transformer from the q × n matrix, each row of which is a sample of an n-dimensional random variable.
In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
Whitening.ZCAcor
— Method
ZCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Construct a ZCAcor transformer from the mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.
Functions
Whitening.mahalanobis
— Method
mahalanobis(K::AbstractWhiteningTransform{T}, X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Return the Mahalanobis distance, √((x - μ)' * Σ⁻¹ * (x - μ)), computed for each row in X, using the transformation kernel, K.
Whitening.mahalanobis
— Method
mahalanobis(K::AbstractWhiteningTransform{T}, x::AbstractVector{T}) where {T<:Base.IEEEFloat}
Return the Mahalanobis distance, √((x - μ)' * Σ⁻¹ * (x - μ)), computed using the transformation kernel, K.
Whitening.unwhiten
— Method
unwhiten(K::AbstractWhiteningTransform{T}, Z::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Transform the rows of Z to unwhitened vectors, i.e. X = Z * (W⁻¹)ᵀ .+ μᵀ, using the provided kernel. That is, Z is an m × p matrix and K is a transformation kernel whose output dimension is p.
If K compresses n ↦ p, i.e. z = Wx : ℝⁿ ↦ ℝᵖ, then X is an m × n matrix.
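A round-trip sketch, assuming the documented PCA constructor; for a non-compressing kernel, unwhiten exactly inverts whiten, while for a compressing kernel (p < n) the reconstruction is presumably only approximate:

```julia
using Whitening

X = randn(50, 4)
K = PCA(X)               # non-compressing kernel: p = n = 4
Z = whiten(K, X)
unwhiten(K, Z) ≈ X       # true: the round trip recovers X
```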
Whitening.unwhiten
— Method
unwhiten(K::AbstractWhiteningTransform{T}, z::AbstractVector{T}) where {T<:Base.IEEEFloat}
Transform z to the original coordinate system of a non-whitened vector belonging to the kernel, K, i.e. x = μ + W⁻¹ * z. This is the inverse of whiten(K, x).
If K compresses n ↦ p, then x ∈ ℝⁿ.
Whitening.whiten
— Method
whiten(K::AbstractWhiteningTransform{T}, X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}
Transform the rows of X to whitened vectors, i.e. Z = (X .- μᵀ) * Wᵀ, using the provided kernel. That is, X is an m × n matrix and K is a transformation kernel whose input dimension is n.
If K compresses n ↦ p, i.e. z = Wx : ℝⁿ ↦ ℝᵖ, then Z is an m × p matrix.
Whitening.whiten
— Method
whiten(K::AbstractWhiteningTransform{T}, x::AbstractVector{T}) where {T<:Base.IEEEFloat}
Transform x to a whitened vector, i.e. z = W * (x - μ), using the transformation kernel, K.
If K compresses n ↦ p, then z ∈ ℝᵖ.
Index
Whitening.mahalanobis
Whitening.mahalanobis
Whitening.unwhiten
Whitening.unwhiten
Whitening.whiten
Whitening.whiten
Whitening.AbstractWhiteningTransform
Whitening.Chol
Whitening.Chol
Whitening.Chol
Whitening.GeneralizedPCA
Whitening.GeneralizedPCA
Whitening.GeneralizedPCA
Whitening.GeneralizedPCAcor
Whitening.GeneralizedPCAcor
Whitening.GeneralizedPCAcor
Whitening.PCA
Whitening.PCA
Whitening.PCA
Whitening.PCAcor
Whitening.PCAcor
Whitening.PCAcor
Whitening.ZCA
Whitening.ZCA
Whitening.ZCA
Whitening.ZCAcor
Whitening.ZCAcor
Whitening.ZCAcor