Use QR decomposition in DD sharp #90
Conversation
Error can be introduced by realizing the inverse computed via the QR decomposition and storing it inside the sharp matrix. We could explore an option that caches the QR decomposition for each triangle, while using a matrix with the appropriate sparsity pattern. However, the interpolating musical operation complicates this. We could resolve this by supporting all options.
The matrix `X`:
```julia
julia> X
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
  0.5   1.33333    0.0
 -1.0  -0.666667   0.0
 -0.5   0.666667  -0.0
```

Observe that:

```julia
julia> QRX.R
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 -1.22474  -0.816497  0.0
  0.0      -1.41421   0.0
  0.0       0.0       0.0

julia> inv(QRX.R)
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 NaN  NaN  NaN
 NaN  NaN  NaN
 NaN  NaN  NaN
```

The result via `pinv`:

```julia
julia> pinv(X'*(X))*(X')
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 -8.63507e-17  -0.666667     -0.666667
  0.5          -5.55112e-17   0.5
  0.0           0.0           0.0
```
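As an aside, the relationship between the normal-equations expression above and the pseudoinverse can be checked directly. A minimal sketch in base Julia, using a hypothetical rank-deficient matrix of the same shape as the `X` above (not the package's actual code):

```julia
using LinearAlgebra

# Hypothetical rank-deficient 3×3 matrix (zero third column), similar
# in spirit to the X shown above; illustrative only.
X = [ 0.5   4/3  0.0;
     -1.0  -2/3  0.0;
     -0.5   2/3  0.0]

# The identity pinv(X) == pinv(X'X) * X' holds for any matrix, even
# when X is rank deficient and inv(X'X) does not exist.
lhs = pinv(X)
rhs = pinv(X' * X) * X'
lhs ≈ rhs            # agrees up to rounding

# pinv also satisfies the Moore–Penrose property X * pinv(X) * X == X,
# which a plain inverse of a singular factor cannot.
X * lhs * X ≈ X
```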
@jpfairbanks Please advise: which approach do you want to use?
I was expecting that `qr` produced the reduced Q and R factors. Would that avoid this possibility? I guess `X` could be rank deficient instead of overconstrained.
The `qr` function docs say that you have to enable pivoting to get the thin QR decomposition. I think that using that will give you an R that is always full rank.
Ok. I've gone ahead and reduced the error tolerances in appropriate tests.
This does not appear to work. I'm not certain what the formula is to compute a realization of a pseudoinverse matrix using the pivoted QR decomposition that can be spread out into our sharp matrix.

```julia
julia> X
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
  0.5   1.33333    0.0
 -1.0  -0.666667   0.0
 -0.5   0.666667  -0.0

julia> QRX = qr(X, ColumnNorm())
StaticArrays.QR{MMatrix{3, 3, Float64, 9}, MMatrix{3, 3, Float64, 9}, MVector{3, Int64}}([-0.816496580927726 -9.337025903963966e-18 -0.5773502691896258; 0.4082482904638631 -0.7071067811865475 -0.5773502691896257; -0.4082482904638631 -0.7071067811865475 0.5773502691896257], [-1.632993161855452 -0.6123724356957945 0.0; 0.0 1.0606601717798214 0.0; 0.0 0.0 -0.0], [2, 1, 3])

julia> QRX.R
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 -1.63299  -0.612372   0.0
  0.0       1.06066    0.0
  0.0       0.0       -0.0

julia> inv(QRX.R)
3×3 MMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 NaN  NaN  NaN
 NaN  NaN  NaN
 NaN  NaN  NaN
```
These authors provide a formula on page 5:
Commit a6dabb4 realizes the pseudoinverse matrix according to that provided formula. I performed a quick spot-check:

```julia
julia> pinv(X'*(X))*(X')
2×3 MMatrix{2, 3, Float64, 6} with indices SOneTo(2)×SOneTo(3):
 0.21  -1.90099e-17   0.21
 0.1   -0.2          -0.1

julia> P' * pinv(QRX.R) * QRX.Q'
2×3 MMatrix{2, 3, Float64, 6} with indices SOneTo(2)×SOneTo(3):
 0.21   3.29719e-17   0.21
 0.1   -0.2          -0.1
```

I had to roll my own permutation matrix `P`.
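For reference, the pivoted-QR route to the pseudoinverse can be sketched with base `LinearAlgebra`. The matrix here is a hypothetical rank-deficient example, and the permutation handling follows Julia's `qr(..., ColumnNorm())` convention that `X[:, F.p] == Q*R`:

```julia
using LinearAlgebra

# Hypothetical rank-deficient matrix (zero third column); illustrative only.
X = [ 0.5   4/3  0.0;
     -1.0  -2/3  0.0;
     -0.5   2/3  0.0]

F = qr(X, ColumnNorm())               # column-pivoted QR: X[:, F.p] == Q*R
P = Matrix{Float64}(I, 3, 3)[:, F.p]  # permutation matrix with X*P == Q*R

# Since X == Q * R * P' and both Q and P are orthogonal,
# pinv(X) == P * pinv(R) * Q'.
Xp = P * pinv(F.R) * F.Q'
Xp ≈ pinv(X)
```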
@jpfairbanks This PR is ready to review.
Instead of creating a permutation matrix and multiplying, you can just index into the rows of the matrix:

```julia
julia> I(4)
4×4 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅  ⋅
 ⋅  1  ⋅  ⋅
 ⋅  ⋅  1  ⋅
 ⋅  ⋅  ⋅  1

julia> I(4)[[2,1,3,4], :]
4×4 Matrix{Bool}:
 0  1  0  0
 1  0  0  0
 0  0  1  0
 0  0  0  1
```
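The equivalence between the two approaches is easy to verify; a small sketch with hypothetical data (not package code):

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
p = [2, 1, 3]                      # a permutation of the rows

# Row-permutation matrix built by indexing into the identity, as above.
P = Matrix{Float64}(I, 3, 3)[p, :]

# Multiplying by P reorders rows exactly as direct indexing does,
# but indexing avoids materializing P at all.
P * A == A[p, :]
```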
No problem. I've performed the operation in place now. This PR is ready for review.
Perhaps it's just too late in the day, but I've gone ahead and replaced the calculation of the pseudoinverse.
Well, at least we aren't forming the normal equations anymore. Taking the SVD of `X` and using the orthogonal factors to compute the pinv is probably the most accurate thing you can do for general problems. We should still revisit our design choice to use a matrix representation instead of a LinearOperator callable struct.
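An SVD-based pseudoinverse of that kind can be sketched as follows. The example matrix is hypothetical, and `tol` is one common choice of truncation threshold, not necessarily what `pinv` uses internally:

```julia
using LinearAlgebra

# Hypothetical rank-deficient matrix; illustrative only.
X = [ 0.5   4/3  0.0;
     -1.0  -2/3  0.0;
     -0.5   2/3  0.0]

U, S, V = svd(X)

# Invert only the singular values above a tolerance and zero the rest,
# so rank deficiency never turns into NaN/Inf entries.
tol = maximum(size(X)) * eps(maximum(S))
Sinv = [s > tol ? 1/s : 0.0 for s in S]

Xp = V * Diagonal(Sinv) * U'
Xp ≈ pinv(X)
```

Because only the orthogonal factors `U` and `V` and the truncated singular values enter the product, this avoids the conditioning penalty of forming `X'X`.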
Close #89

I only did a quick spot check that the `LLS` matrix computed via the `pinv` method matched the `LLS` matrix for a single triangle for a single dual 1-form. Those matrices were equivalent (save for a `~1e-17` entry in the `pinv` `LLS` being replaced by a `0.0` entry in the `qr` `LLS`, which I took as a good sign.)

Tests run, but certain tolerance checks no longer pass, including those operators which make use of the `LLSDDSharp` internally. I need to dive into the test outputs and verify whether it is a tolerance issue, or whether it is due to vector fields no longer satisfying the tangent condition.