Submitted by antodima t3_10oxy9j in MachineLearning
Hi all!
Given a state matrix X ∈ ℝ^(Nx×T), a target matrix Y ∈ ℝ^(Ny×T) (T samples stored column-wise), and a regularizer β ∈ ℝ^(+), the ridge regression solution is
W = YX^(T)(XX^(T)+βI)^(-1) (computed in practice via the Moore–Penrose pseudoinverse),
where A = YX^(T) and B = XX^(T)+βI, so W = AB^(-1).
Now suppose we keep only a subset of the indices/units (fewer than Nx): we retain the corresponding columns of A and the corresponding rows and columns (crosses) of B, and set the rest of A and B to zero.
Does sparsifying A and B in this way break the ridge regression solution W = AB^(-1)? If so, is there a way to avoid it?
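To make the setup concrete, here is a small NumPy sketch of what I mean (the shapes Nx, Ny, T, the value of β, and the index set S are arbitrary choices for illustration). Note that after zeroing crosses, B has all-zero rows and columns and is singular, so the plain inverse no longer applies and the pseudoinverse is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny, T, beta = 8, 3, 50, 0.1

X = rng.standard_normal((Nx, T))   # states, one column per sample
Y = rng.standard_normal((Ny, T))   # targets

A = Y @ X.T                        # Ny x Nx
B = X @ X.T + beta * np.eye(Nx)    # Nx x Nx
W = A @ np.linalg.inv(B)           # full ridge solution

# keep only the units in S; zero the other columns of A
# and the other rows/columns (crosses) of B
S = [0, 2, 5]
A_s = np.zeros_like(A)
A_s[:, S] = A[:, S]
B_s = np.zeros_like(B)
B_s[np.ix_(S, S)] = B[np.ix_(S, S)]

# B_s is singular, so use the Moore-Penrose pseudoinverse
W_s = A_s @ np.linalg.pinv(B_s)

# for comparison: ridge regression fitted on the selected rows of X only
W_sub = Y @ X[S].T @ np.linalg.inv(X[S] @ X[S].T + beta * np.eye(len(S)))
print(np.allclose(W_s[:, S], W_sub))
```

The comparison at the end checks the sparsified solution against a ridge regression that uses only the selected rows of X, which is what I would expect the zeroed-out problem to reduce to.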
Many thanks!