PPCL Beowulf Cluster Benchmark Report # Computation - Scientific Computing
d*w
1
We currently have two Beowulf clusters for molecular simulation. One is an 8-CPU cluster of AMD Athlon MP 2000+ processors (1666 MHz clock), built for teaching, connected with 100 Mbit Ethernet, code-named ding, and assembled in October 2002. The other, assembled last month for simulating multiphase, multicomponent polymer systems, is code-named "m" and has 46 Intel Xeon 2.4 GHz CPUs connected with Gigabit (1000 Mbit) Ethernet.
To make it easier to compare performance with other Beowulf clusters, we recently benchmarked both systems with the standard Linpack package. The full test procedure and results are reported here:
http://pplas.icas.ac.cn/~qiliao/cluster.htm
http://pplas.icas.ac.cn/~qiliao/beowulf/Linpack_benchmark.htm
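
(For a quick sanity check before running the full Linpack/HPL benchmark, a short C timing program like the one below can estimate the raw flop rate of a single CPU. This is only an illustrative sketch, not part of the report above; the matrix size N is a placeholder, and 2*N^3 is the usual operation count for dense matrix multiply. A tuned BLAS, as used by HPL, will run several times faster than this naive triple loop.)

/* Rough single-CPU flop-rate check before running the full HPL benchmark.
 * Times a naive N x N double-precision matrix multiply and reports the
 * rate as 2*N^3 / elapsed time. Illustrative only. */
#include <stdio.h>
#include <time.h>

#define N 512  /* placeholder size, small enough to finish quickly */

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    int i, j, k;

    /* Fill the inputs with something non-trivial. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = (double)(i + j) / N;
            b[i][j] = (double)(i - j) / N;
            c[i][j] = 0.0;
        }

    clock_t start = clock();
    for (i = 0; i < N; i++)
        for (k = 0; k < N; k++)
            for (j = 0; j < N; j++)
                c[i][j] += a[i][k] * b[k][j];
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

    double flops = 2.0 * N * N * N;  /* one multiply + one add per inner step */
    printf("N = %d, time = %.3f s, rate = %.3f Gflop/s\n",
           N, seconds, flops / seconds / 1e9);
    return 0;
}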
a*s
2
The conclusions are interesting and honest :)
Also, could you write up in detail how you built the Beowulf cluster?

[In reply to d*****w's post above]

g*e
3
Our group just built a 48-CPU cluster of Intel Xeon 2.8 GHz processors with Gigabit Ethernet networking. We are planning to double it at the end of this year. Basically, we ordered parts from a local vendor and assembled them ourselves. We didn't go with a rackmount cabinet since it is expensive and we have plenty of room in the lab.

[In reply to a******s's post above]

a*s
4
The hardware is easy as long as you have the money; the key is how to get the full performance out of all that equipment.

[In reply to g*********e's post above]

g*e
6
Most of the software is freely available for Linux, and it is fairly easy to install. LAM-MPI seems like a popular choice here for parallel computing; either rsh or ssh will do the job for access between the nodes if you set them up properly. Hardware is relatively easy as long as you select the right parts, but the chassis/case is very important considering the heat generated, especially when you have dual-processor machines. When your system goes up to 100 CPUs, you have to make sure that the air conditioning is strong enough.
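
(As a concrete illustration of that software setup: a minimal MPI test program like the one below is commonly used to verify that the MPI installation, e.g. LAM-MPI, and the rsh/ssh access between nodes are working. This is a generic sketch, not code from any of the clusters discussed here.)

/* Minimal MPI check: each process reports its rank and host name.
 * Useful as a first test that the MPI installation (e.g. LAM-MPI) and
 * the rsh/ssh access between nodes are set up correctly. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char hostname[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(hostname, &namelen);

    printf("Process %d of %d running on %s\n", rank, size, hostname);

    MPI_Finalize();
    return 0;
}

With LAM-MPI this would typically be compiled with mpicc and, after lamboot, launched with something like "mpirun C ./mpi_hello"; the exact commands depend on the installed version.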

[In reply to a******s's post above]
a*s
8
Didn't expect heat transfer to come up here; that heat is just waste, heh.
Do you know of any alternative to NFS? If you don't use NFS, what do you use instead?

[In reply to g*********e's post above]

g*e
9
Heat is always a problem for large-scale clusters, e.g. the Earth Simulator. If you are just playing with a few machines, it is no big deal. For a Beowulf cluster, you don't have to use NFS at all if you are only using it for computing. If you look at my post carefully, you will find the answer to your last two questions.
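
(A minimal sketch of what "no NFS" can look like in practice, assuming a plain MPI job: only rank 0 touches the input, the parameters are broadcast to the other ranks, and results are reduced back to rank 0, so the compute nodes never need a shared filesystem. The parameter names and values below are placeholders for illustration.)

/* Sketch of running without NFS: only rank 0 reads the input; everything
 * the workers need travels over MPI, and the results come back the same
 * way. Parameter values are placeholders. */
#include <stdio.h>
#include <mpi.h>

#define NPARAMS 4

int main(int argc, char *argv[])
{
    int rank, size;
    double params[NPARAMS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* In a real run these would come from an input file that only
         * the master node holds; hard-coded to keep the sketch
         * self-contained. */
        params[0] = 300.0;  /* e.g. temperature */
        params[1] = 1.0;    /* e.g. density */
        params[2] = 1000.0; /* e.g. number of steps */
        params[3] = 0.005;  /* e.g. time step */
    }

    /* Every process gets the same parameters; no shared filesystem needed. */
    MPI_Bcast(params, NPARAMS, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each rank does its share of the work (placeholder computation). */
    double local = params[0] / (rank + 1);
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Collected result from %d processes: %g\n", size, total);

    MPI_Finalize();
    return 0;
}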


[In reply to a******s's post above]
