Matrix inverse problem # Unix
F*r
1
Does anybody have an idea how to solve

    A * x = b

where A is a matrix with m rows and n columns and b is an m-vector? We want to solve for x.
The matrix A is large and sparse, with hundreds of thousands of rows and thousands of columns.
It is a sparse, damped linear least-squares problem.
Does anybody know of a parallel library for this?
I only found a non-parallel routine in the NAG library; F04QAF works,
but I still haven't found a parallel one.
Any ideas?
Thanks a lot.
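
To make "damped linear least-squares" concrete: the problem is to find the x that minimizes ||A*x - b||^2 + damp^2 * ||x||^2, where damp is the damping (regularization) factor. A minimal serial sketch, assuming Python with SciPy (which nobody in the thread uses, and which is not the parallel library being asked for); the random matrix, sizes, and density are made up purely to fix the shapes:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    # "hundreds of thousands of rows, thousands of columns" (illustrative values)
    m, n = 200_000, 5_000
    rng = np.random.default_rng(0)

    # Hypothetical sparse system; a real tomography matrix comes from the ray geometry.
    A = sp.random(m, n, density=1e-3, format="csr", random_state=0)
    b = rng.standard_normal(m)

    # Solve min ||A x - b||^2 + damp^2 ||x||^2 with (serial) LSQR.
    result = lsqr(A, b, damp=1e-3, iter_lim=500)
    x, istop, itn = result[0], result[1], result[2]
    print(f"istop={istop}, iterations={itn}")
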
z*c
2
Is A N by M? Please check your problem and make sure whether A is square or not.
Note that only a square matrix has an inverse. And obviously, you cannot
solve M unknowns from N equations.

【Quoting F****r】

b*i
3
If N <> M, then x is given as the least-squares solution. For a sparse matrix, you
can use the sparse mode in Matlab. But I have no idea about a parallel
version of it.

【Quoting z****c】

F*r
4
Yes, the matrix is not square;
we just want to get the least-squares solution.
We can already get a result.
The problem is just time.
Right now we use a coarse grid, and one iteration takes about 1 day.
Usually after 4 or 5 iterations we can see something in the result.
This is just the test version.
We need to try finer grids, which means a larger matrix to invert,
and one iteration may then take a few days to finish.
That is why we need to find a parallel library,
so we can use multiple processors to do this.

【Quoting b*****i】

s*t
5
How large are your m and n? In my problem,
m = n = 6,000,000, and it is a large sparse matrix.
For one iteration, I can get it down to 0.5
minutes on my own 2 GHz PC. I cannot imagine
your matrix being larger. So the problem is not
to find a parallel library (even in parallel on, say,
10 computers, it could still take 2 hours per
iteration, while you may need hundreds of
iterations to converge), but to find a better algorithm.
For example, you should use BLAS to do the matrix multiplication
and LAPACK to do the least squares.
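
One iteration of an LSQR-type solver costs essentially one product with A and one with A^T, which is why the post above argues the issue is the algorithm and its inner kernels rather than parallelism. A rough timing sketch, again assuming Python/SciPy and a made-up random matrix at roughly the size stated in the first post (it only measures the two products, nothing else):

    import time
    import numpy as np
    import scipy.sparse as sp

    m, n = 200_000, 5_000
    rng = np.random.default_rng(0)
    A = sp.random(m, n, density=1e-3, format="csr", random_state=0)  # ~1e6 nonzeros
    x = rng.standard_normal(n)
    y = rng.standard_normal(m)

    t0 = time.perf_counter()
    _ = A @ x      # forward product A*x
    _ = A.T @ y    # transpose product A^T*y
    t1 = time.perf_counter()
    print(f"nnz = {A.nnz}, two matvecs took {t1 - t0:.4f} s")

With a proper sparse matrix-vector kernel this is the part that has to be fast; if one pass over the data takes a day, the time is likely going somewhere other than the multiplication itself.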

【Quoting F****r】

F*r
6
Well, in my problem we use the least-squares solution.
The "iteration" I mentioned is not just one matrix inversion;
it is the whole program finishing and producing a result.
As you said, our program needs about a thousand iterations
to converge.
We need to use the new output as the next input
and run it several times.
There should be something designed to solve this problem;
it is very common in tomography.

【Quoting s*****t】

s*s
7
For a sparse matrix, LSQR is the best choice; ready-made source code is available.

【Quoting F****r】

F*r
8
I can't see Chinese.

【Quoting s**s】
s*s
9
You can use the LSQR method; there is source code in C and Fortran.
1000 iterations is too many; you can increase the damping factor
to reduce that number.
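
In most LSQR implementations the damping factor is an explicit input, so its effect on the iteration count is easy to check. A small sketch of that experiment, assuming SciPy's lsqr wrapper rather than the C/Fortran codes mentioned above (the damp values and the random test matrix are arbitrary; whether a larger damp is acceptable is a modeling question, as the following replies point out):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    A = sp.random(200_000, 5_000, density=1e-3, format="csr", random_state=0)
    b = rng.standard_normal(200_000)

    # Larger damping tends to improve conditioning and stop LSQR earlier,
    # at the price of biasing the solution toward zero.
    for damp in (0.0, 1e-2):
        x, istop, itn = lsqr(A, b, damp=damp, atol=1e-8, btol=1e-8, iter_lim=2000)[:3]
        print(f"damp={damp}: istop={istop}, iterations={itn}")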

【Quoting F****r】
F*r
10
Yes, we are using the LSQR method now.
Increasing the damping factor does not give good results.

【Quoting s**s】

s*t
11
OK, what I use is exactly LSQR, and the problem
I solve is exactly tomography.
By "one iteration" I mean one
A*x followed by one A^T * y, where ^T means transpose
and y is the result from the previous iteration.
If you want to speed up your code, you have to
speed up the matrix multiplication A*x, and likewise
A^T * y. The LSQR iteration itself is not very time-consuming.
Even if you did not use LSQR, or some other inversion code,
I believe your problem would remain the same: speeding up the
matrix multiplication.
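
This also points at where parallelism would actually pay off: in the two products A*x and A^T*y, not inside LSQR itself. As a sketch of that separation, SciPy's lsqr accepts a LinearOperator whose matvec/rmatvec callbacks can be backed by any custom kernel (multithreaded, MPI, GPU, ...); the placeholder functions below just call the serial sparse products, and the classic Fortran LSQR offers the same hook through its user-supplied aprod routine:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, lsqr

    rng = np.random.default_rng(0)
    A = sp.random(200_000, 5_000, density=1e-3, format="csr", random_state=0)
    b = rng.standard_normal(200_000)

    def forward(x):
        # A * x -- swap in a parallel sparse matvec here
        return A @ x

    def adjoint(y):
        # A^T * y -- swap in the matching parallel transpose product here
        return A.T @ y

    op = LinearOperator(A.shape, matvec=forward, rmatvec=adjoint, dtype=np.float64)
    x = lsqr(op, b, damp=1e-3, iter_lim=500)[0]

Because LSQR only ever touches A through these two callbacks, parallelizing them is usually enough; the solver itself does not need to change.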

【Quoting s**s】

s*t
12
The damping factor cannot replace the number of
iterations, though sometimes it helps a little.

【Quoting F****r】

F*r
13
Yes; as long as you don't change the order of magnitude of the damping parameter,
it won't matter too much.

【Quoting s*****t】
