Whitening.jl Documentation

Types

Whitening.Chol (Type)
Chol{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Cholesky whitening transform.

Given the Cholesky decomposition of the inverse covariance matrix, $Σ⁻¹ = LLᵀ$, we have the whitening matrix, $W = Lᵀ$.
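
As a quick check of the algebra, here is a minimal sketch using plain LinearAlgebra and a made-up covariance matrix (it illustrates the identity above, not the package internals):

using LinearAlgebra

Σ = [4.0 1.0; 1.0 3.0]              # hypothetical covariance matrix
L = cholesky(Symmetric(inv(Σ))).L   # Σ⁻¹ = L*Lᵀ
W = Matrix(L')                      # whitening matrix, W = Lᵀ
@assert W * Σ * W' ≈ I              # W indeed whitens Σ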

Whitening.Chol (Method)
Chol(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a Cholesky transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.
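
For example, assuming the package is loaded (the data here is arbitrary):

using Whitening

X = randn(100, 3)   # q = 100 samples of an n = 3 dimensional variable, q ≥ n
K = Chol(X)
Z = whiten(K, X)    # rows of Z have ≈ zero mean and ≈ identity covariance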

Whitening.Chol (Method)
Chol(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a Cholesky transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.

Whitening.GeneralizedPCA (Type)
GeneralizedPCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Principal component analysis (PCA) whitening transform, generalized to support compression based on any of

  1. a pre-determined number of components,
  2. a fraction of the total squared cross-covariance, or
  3. a relative tolerance, retaining only the eigenvalues greater than rtol*λ₁, where λ₁ is the largest eigenvalue of the covariance matrix.

Given the eigendecomposition of the $n × n$ covariance matrix, $Σ = UΛUᵀ$, with eigenvalues sorted in descending order, i.e. $λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ$, the first $m$ components are selected according to one or more of the criteria listed above.

If $m = n$, then we have the canonical PCA whitening matrix, $W = Λ^{-\frac{1}{2}}Uᵀ$. Otherwise, for $m < n$, a map from $ℝⁿ ↦ ℝᵐ$ is formed by removing the last $n - m$ rows of $W$, i.e. the components with the $n - m$ smallest eigenvalues are removed. This is equivalent to selecting the $m × m$ matrix from the upper left of $Λ$ and the $m × n$ matrix from the top of $Uᵀ$. The inverse transform is then formed by selecting the $n × m$ matrix from the left of $U$ and the same $m × m$ block of $Λ$.
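
A minimal sketch of this truncation with plain LinearAlgebra, a made-up Σ, and m = 2 (it mirrors the construction above, not the package internals):

using LinearAlgebra

Σ = [4.0 1.0 0.5; 1.0 3.0 0.2; 0.5 0.2 2.0]
λ, U = eigen(Symmetric(Σ))
p = sortperm(λ, rev = true)                     # descending eigenvalues
λ, U = λ[p], U[:, p]
m = 2
W = Diagonal(1 ./ sqrt.(λ[1:m])) * U[:, 1:m]'   # m × n compressed whitening map
Winv = U[:, 1:m] * Diagonal(sqrt.(λ[1:m]))      # n × m inverse transform
@assert W * Σ * W' ≈ I                          # retained components are white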

Whitening.GeneralizedPCA (Method)
GeneralizedPCA(X::AbstractMatrix{T};
               num_components::Union{Int, Nothing}=nothing,
               vmin::Union{T, Nothing}=nothing,
               rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}

Construct a generalized PCA transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

Whitening.GeneralizedPCA (Method)
GeneralizedPCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T};
               num_components::Union{Int, Nothing}=nothing,
               vmin::Union{T, Nothing}=nothing,
               rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}

Construct a generalized PCA transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive semi-definite.

The output dimension, m, of the transformer is determined from the optional arguments, where

  1. 0 ≤ num_components ≤ n is a pre-determined size,
  2. 0 ≤ vmin ≤ 1 is the fraction of the total squared cross-covariance; m is then the smallest value such that sum(λ[1:m]) ≥ vmin*sum(λ), where $λᵢ, i=1,…,n$ are the eigenvalues of Σ in descending order, and
  3. rtol is a relative tolerance; m is then the number of eigenvalues greater than rtol*λ₁, where λ₁ is the largest eigenvalue of Σ.

If none of the three options is provided, the default is rtol = n*eps(T). If two or more options are provided, the minimum of the resulting sizes is chosen, as sketched below.
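
The selection logic can be sketched as follows; output_dim is a hypothetical helper, written against eigenvalues λ already sorted in descending order, and illustrates the rules above rather than the package's actual implementation:

function output_dim(λ::AbstractVector; num_components = nothing,
                    vmin = nothing, rtol = nothing)
    n = length(λ)
    ms = Int[]
    num_components === nothing || push!(ms, num_components)
    vmin === nothing ||
        push!(ms, something(findfirst(≥(vmin * sum(λ)), cumsum(λ)), n))
    if num_components === nothing && vmin === nothing && rtol === nothing
        rtol = n * eps(eltype(λ))               # documented default
    end
    rtol === nothing || push!(ms, count(>(rtol * first(λ)), λ))
    minimum(ms)                                 # smallest resulting size wins
end

output_dim([5.0, 3.0, 1.5, 0.5], vmin = 0.95)   # 3 components reach 95%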

Whitening.GeneralizedPCAcor (Type)
GeneralizedPCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Scale-invariant principal component analysis (PCA-cor) whitening transform, generalized to support compression based on any of

  1. a pre-determined number of components,
  2. a fraction of the total squared cross-correlation, or
  3. a relative tolerance, retaining only the eigenvalues greater than rtol*θ₁, where θ₁ is the largest eigenvalue of the correlation matrix.

Given the eigendecomposition of the $n × n$ correlation matrix, $P = GΘGᵀ$, with eigenvalues sorted in descending order, i.e. $θ₁ ≥ θ₂ ≥ ⋯ ≥ θₙ$, the first $m$ components are selected according to one or more of the criteria listed above.

If $m = n$, then we have the canonical PCA-cor whitening matrix, $W = Θ^{-\frac{1}{2}}GᵀV^{-\frac{1}{2}}$. Otherwise, for $m < n$, a map from $ℝⁿ ↦ ℝᵐ$ is formed by removing the last $n - m$ rows of $W$, i.e. the components with the $n - m$ smallest eigenvalues are removed. This is equivalent to selecting the $m × m$ matrix from the upper left of $Θ$ and the $m × n$ matrix from the top of $Gᵀ$. The inverse transform is then formed by selecting the $n × m$ matrix from the left of $G$ and the same $m × m$ block of $Θ$.

Whitening.GeneralizedPCAcor (Method)
GeneralizedPCAcor(X::AbstractMatrix{T};
                  num_components::Union{Int, Nothing}=nothing,
                  vmin::Union{T, Nothing}=nothing,
                  rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}

Construct a generalized PCAcor transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

Whitening.GeneralizedPCAcor (Method)
GeneralizedPCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T};
                  num_components::Union{Int, Nothing}=nothing,
                  vmin::Union{T, Nothing}=nothing,
                  rtol::Union{T, Nothing}=nothing) where {T<:Base.IEEEFloat}

Construct a generalized PCAcor transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive semi-definite.

The decomposition, $Σ = V^{\frac{1}{2}}PV^{\frac{1}{2}}$, where $V$ is the diagonal matrix of variances and $P$ is a correlation matrix, must be well-formed in order to obtain a meaningful result. That is, if the diagonal of Σ contains one or more zero elements, then it is not possible to compute $P = V^{-\frac{1}{2}}ΣV^{-\frac{1}{2}}$.
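
For instance, with a made-up Σ, the factorization can be checked with plain LinearAlgebra:

using LinearAlgebra

Σ = [4.0 1.2; 1.2 9.0]
v = diag(Σ)                                    # variances; must all be nonzero
P = Diagonal(1 ./ sqrt.(v)) * Σ * Diagonal(1 ./ sqrt.(v))   # P = V^{-1/2}ΣV^{-1/2}
@assert Σ ≈ Diagonal(sqrt.(v)) * P * Diagonal(sqrt.(v))     # Σ = V^{1/2}PV^{1/2}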

The output dimension, m, of the transformer is determined from the optional arguments, where

  1. 0 ≤ num_components ≤ n is a pre-determined size,
  2. 0 ≤ vmin ≤ 1 is the fraction of the total squared cross-correlation; m is then the smallest value such that sum(θ[1:m]) ≥ vmin*sum(θ), where $θᵢ, i=1,…,n$ are the eigenvalues of $P$ in descending order, and
  3. rtol is a relative tolerance; m is then the number of eigenvalues greater than rtol*θ₁, where θ₁ is the largest eigenvalue of $P$.

If none of the three options is provided, the default is rtol = n*eps(T). If two or more options are provided, the minimum of the resulting sizes is chosen.

Whitening.PCA (Type)
PCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Principal component analysis (PCA) whitening transform.

Given the eigendecomposition of the covariance matrix, $Σ = UΛUᵀ$, we have the whitening matrix, $W = Λ^{-\frac{1}{2}}Uᵀ$.
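
A minimal sketch of this construction, using plain LinearAlgebra and a made-up Σ (not the package internals):

using LinearAlgebra

Σ = [4.0 1.0; 1.0 3.0]
λ, U = eigen(Symmetric(Σ))
W = Diagonal(1 ./ sqrt.(λ)) * U'   # W = Λ^{-1/2}Uᵀ
@assert W * Σ * W' ≈ I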

Whitening.PCA (Method)
PCA(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a PCA transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.

Whitening.PCA (Method)
PCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a PCA transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.

Whitening.PCAcor (Type)
PCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Scale-invariant principal component analysis (PCA-cor) whitening transform.

Given the eigendecomposition of the correlation matrix, $P = GΘGᵀ$, and the diagonal variance matrix, $V$, we have the whitening matrix, $W = Θ^{-\frac{1}{2}}GᵀV^{-\frac{1}{2}}$.
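
Sketching the same construction with plain LinearAlgebra and a made-up Σ (not the package internals):

using LinearAlgebra

Σ = [4.0 1.2; 1.2 9.0]
Vinvhalf = Diagonal(1 ./ sqrt.(diag(Σ)))          # V^{-1/2}
θ, G = eigen(Symmetric(Vinvhalf * Σ * Vinvhalf))  # eigendecomposition of P
W = Diagonal(1 ./ sqrt.(θ)) * G' * Vinvhalf       # W = Θ^{-1/2}GᵀV^{-1/2}
@assert W * Σ * W' ≈ I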

Whitening.PCAcor (Method)
PCAcor(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a PCAcor transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.

Whitening.PCAcor (Method)
PCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a PCAcor transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.

Whitening.ZCA (Type)
ZCA{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Zero-phase component analysis (ZCA) whitening transform.

Given the covariance matrix, $Σ$, we have the whitening matrix, $W = Σ^{-\frac{1}{2}}$.
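
In plain LinearAlgebra terms, with a made-up Σ (not the package internals):

using LinearAlgebra

Σ = [4.0 1.0; 1.0 3.0]
W = inv(sqrt(Symmetric(Σ)))   # W = Σ^{-1/2}, the symmetric inverse square root
@assert W * Σ * W' ≈ I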

Whitening.ZCA (Method)
ZCA(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a ZCA transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.

Whitening.ZCA (Method)
ZCA(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a ZCA transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.

Whitening.ZCAcor (Type)
ZCAcor{T<:Base.IEEEFloat} <: AbstractWhiteningTransform{T}

Scale-invariant zero-phase component analysis (ZCA-cor) whitening transform.

Given the correlation matrix, $P$, and the diagonal variance matrix, $V$, we have the whitening matrix, $W = P^{-\frac{1}{2}}V^{-\frac{1}{2}}$.
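
The same identity, sketched with plain LinearAlgebra and a made-up Σ (not the package internals):

using LinearAlgebra

Σ = [4.0 1.2; 1.2 9.0]
Vinvhalf = Diagonal(1 ./ sqrt.(diag(Σ)))   # V^{-1/2}
P = Symmetric(Vinvhalf * Σ * Vinvhalf)     # correlation matrix
W = inv(sqrt(P)) * Vinvhalf                # W = P^{-1/2}V^{-1/2}
@assert W * Σ * W' ≈ I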

Whitening.ZCAcor (Method)
ZCAcor(X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a ZCAcor transformer from a q × n matrix, each row of which is a sample of an n-dimensional random variable.

In order for the resultant covariance matrix to be positive definite, q must be ≥ n and none of the variances may be zero.

Whitening.ZCAcor (Method)
ZCAcor(μ::AbstractVector{T}, Σ::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Construct a ZCAcor transformer from a mean vector, μ ∈ ℝⁿ, and a covariance matrix, Σ ∈ ℝⁿˣⁿ; Σ must be symmetric and positive definite.


Functions

Whitening.mahalanobis (Method)
mahalanobis(K::AbstractWhiteningTransform{T}, X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Return the Mahalanobis distance, √((x - μ)' * Σ⁻¹ * (x - μ)), computed for each row in X, using the transformation kernel, K.

Whitening.mahalanobis (Method)
mahalanobis(K::AbstractWhiteningTransform{T}, x::AbstractVector{T}) where {T<:Base.IEEEFloat}

Return the Mahalanobis distance, √((x - μ)' * Σ⁻¹ * (x - μ)), computed using the transformation kernel, K.
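
Since WᵀW = Σ⁻¹ for a non-compressing kernel, this equals the Euclidean norm of the whitened vector. For example, with arbitrary data and the package loaded:

using Whitening, LinearAlgebra

X = randn(200, 3)
K = ZCA(X)
x = randn(3)
@assert mahalanobis(K, x) ≈ norm(whiten(K, x))   # ‖W(x - μ)‖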

Whitening.unwhiten (Method)
unwhiten(K::AbstractWhiteningTransform{T}, Z::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Transform the rows of Z to unwhitened vectors, i.e. X = Z * (W⁻¹)ᵀ .+ μᵀ, using the provided kernel. That is, Z is an m × p matrix and K is a transformation kernel whose output dimension is p.

If K compresses n ↦ p, i.e. z = Wx : ℝⁿ ↦ ℝᵖ, then X is an m × n matrix.

Whitening.unwhiten (Method)
unwhiten(K::AbstractWhiteningTransform{T}, z::AbstractVector{T}) where {T<:Base.IEEEFloat}

Transform z back to the original coordinate system of the kernel, K, producing the non-whitened vector x = μ + W⁻¹ * z. This is the inverse of whiten(K, x).

If K compresses n ↦ p, then x ∈ ℝⁿ.
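
For example, round-tripping an arbitrary vector through a full-rank kernel (assuming the package is loaded):

using Whitening

X = randn(200, 3)
K = PCA(X)
x = randn(3)
@assert unwhiten(K, whiten(K, x)) ≈ x   # whiten, then map back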

Whitening.whiten (Method)
whiten(K::AbstractWhiteningTransform{T}, X::AbstractMatrix{T}) where {T<:Base.IEEEFloat}

Transform the rows of X to whitened vectors, i.e. Z = (X .- μᵀ) * Wᵀ, using the provided kernel. That is, X is an m × n matrix and K is a transformation kernel whose input dimension is n.

If K compresses n ↦ p, i.e. z = Wx : ℝⁿ ↦ ℝᵖ, then Z is an m × p matrix.
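
For example, whitening a sample matrix row-wise; Statistics.cov is used only to check the result (arbitrary data, package loaded):

using Whitening, Statistics, LinearAlgebra

X = randn(1000, 4)
Z = whiten(ZCAcor(X), X)
@assert isapprox(cov(Z), I, atol = 0.05)   # rows of Z are ≈ white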

Whitening.whiten (Method)
whiten(K::AbstractWhiteningTransform{T}, x::AbstractVector{T}) where {T<:Base.IEEEFloat}

Transform x to a whitened vector, i.e. z = W * (x - μ), using the transformation kernel, K.

If K compresses n ↦ p, then z ∈ ℝᵖ.

