Storing 1 million records in a Java Map
l*0
1
If physical memory is sufficient, say 8 GB, will a map with this many entries have performance problems? The keys are Strings. Or is it better to split them across several smaller maps? In the program, storing everything in one map is more convenient. What is a reasonable maximum size for a Map?
t*e
2
Partition the keys and use two levels of Maps.
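For illustration, a minimal sketch of what "two levels of Maps" could look like: each key is routed to one of a fixed number of inner HashMaps by its hash. The class name and the shard count of 16 are assumptions for the sketch, not something from this thread.

import java.util.HashMap;
import java.util.Map;

// Two-level map: an outer array of inner HashMaps, indexed by a shard derived from the key's hash.
public class ShardedMap<V> {
    private static final int SHARDS = 16; // arbitrary shard count for illustration

    private final Map<String, V>[] shards;

    @SuppressWarnings("unchecked")
    public ShardedMap() {
        shards = (Map<String, V>[]) new Map[SHARDS];
        for (int i = 0; i < SHARDS; i++) {
            shards[i] = new HashMap<>();
        }
    }

    private Map<String, V> shardFor(String key) {
        // Mask off the sign bit, then pick a shard by modulo.
        return shards[(key.hashCode() & 0x7fffffff) % SHARDS];
    }

    public V put(String key, V value) { return shardFor(key).put(key, value); }

    public V get(String key) { return shardFor(key).get(key); }
}

Whether this actually beats a single HashMap for 1M entries is exactly the kind of thing a benchmark would have to show.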
c*e
3
Store them in a Hashtable? Can it be sorted?

【Quoting l******0's post】
: If physical memory is sufficient, say 8 GB, will a map with this many entries have performance problems? The keys are Strings.
: Or is it better to split them across several smaller maps? In the program, storing everything in one map is more convenient.
: What is a reasonable maximum size for a Map?

w*z
5
1M doesn't seem to be large. Do a benchmark and tune it if it becomes a problem.

【Quoting l******0's post】
: If physical memory is sufficient, say 8 GB, will a map with this many entries have performance problems? The keys are Strings.
: Or is it better to split them across several smaller maps? In the program, storing everything in one map is more convenient.
: What is a reasonable maximum size for a Map?

s*e
6
If possible, using a distributed cache will give you better QoS in terms of availability, scalability, and performance.
If you have to store them in a single JVM, you'd better write your own hash function by extending HashMap. The default implementation of HashMap is not designed for big data like that.
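One caveat: HashMap's internal hash() method is not designed to be overridden by subclasses, so a common workaround is to wrap the String key in a small class that supplies its own hashCode. A hypothetical sketch, with FNV-1a chosen arbitrarily as the alternative hash:

import java.util.Objects;

// Wrapper key that replaces String.hashCode() with a 32-bit FNV-1a hash.
final class CustomKey {
    private final String value;
    private final int hash;

    CustomKey(String value) {
        this.value = value;
        int h = 0x811c9dc5;                 // FNV offset basis
        for (int i = 0; i < value.length(); i++) {
            h ^= value.charAt(i);
            h *= 0x01000193;                // FNV prime
        }
        this.hash = h;
    }

    @Override public int hashCode() { return hash; }

    @Override public boolean equals(Object o) {
        return o instanceof CustomKey && Objects.equals(value, ((CustomKey) o).value);
    }
}

Usage would then be a plain HashMap keyed by the wrapper, e.g. Map<CustomKey, String> map = new HashMap<>(); map.put(new CustomKey("apple"), "fruit");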
g*g
7
1M is nothing, just do it, don't even bother with a benchmark.
l*0
9
Do you mean splitting it into two maps?

【Quoting t*******e's post】
: Partition the keys and use two levels of Maps.
l*0
10
"The default implementation of hashmap is not designed for the big data like
that."
Any documentation or experiments on this point?

【Quoting s******e's post】
: if possible, using distributed cache will provide you better QoS of
: availability, scalability and performance.
: If you have to store them in one single jvm, you'd better write your own
: hash function by extending hashmap. The default implementation of hashmap is
: not designed for the big data like that.

l*0
11
I remember one time when the number of entries in a map exceeded 200,000 and it got slower, but I'm not sure whether something else caused the slowdown.
My understanding is that it's like a house: the bigger the house, the more time it takes to find any one person inside? With more hash buckets, does a search take longer? I don't know whether 1 million will become a bottleneck.

【Quoting g*****g's post】
: 1M is nothing, just do it, don't even bother with benchmark.
t*e
12
When the key space grows big, the probability of hash collisions increases. Java handles hash collisions in a less efficient manner, which could lead to slowness.



【Quoting l******0's post】
: I remember one time when the number of entries in a map exceeded 200,000 and it got slower, but I'm not sure whether something else caused the slowdown.
: My understanding is that it's like a house: the bigger the house, the more time it takes to find any one person inside?
: With more hash buckets, does a search take longer? I don't know whether 1 million will become a bottleneck.

t*e
13
Figure out an algorithm to evenly distribute all the keys into 100 slots (Maps), so each slot holds 10K records. Consistent hashing in BigTable might give you some inspiration.
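If consistent hashing is the inspiration, a toy single-JVM version could look like the sketch below: slot names are hashed onto a ring (a TreeMap), and each key goes to the first slot at or after its own hash. The class name and slot naming are made up, and real consistent hashing adds many virtual nodes per slot to even out the distribution; this only shows the routing idea.

import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash router over in-memory maps ("slots").
public class SlotRouter {
    private final TreeMap<Integer, Map<String, String>> ring = new TreeMap<>();

    public SlotRouter(int slots) {
        for (int i = 0; i < slots; i++) {
            ring.put(("slot-" + i).hashCode(), new HashMap<>());
        }
    }

    private Map<String, String> slotFor(String key) {
        // First slot clockwise from the key's hash; wrap around to the first slot if none.
        SortedMap<Integer, Map<String, String>> tail = ring.tailMap(key.hashCode());
        Integer slotHash = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slotHash);
    }

    public void put(String key, String value) { slotFor(key).put(key, value); }

    public String get(String key) { return slotFor(key).get(key); }
}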

【Quoting l******0's post】
: Do you mean splitting it into two maps?
w*z
14
I don't think you are smarter than the people who implemented Java's String hash function. Don't over-engineer things. If you think you have a performance issue, run some benchmarks, see where the bottleneck is, and go from there. As I said, 1M is nothing nowadays.
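A rough, non-JMH benchmark along these lines is enough to see whether 1M String keys is actually a problem on a given machine; the synthetic keys and Integer values are stand-ins for whatever the real data is:

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Ballpark timing of 1M puts and 1M gets on a plain HashMap.
// For numbers you can trust, use a proper harness such as JMH.
public class MapBench {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        String[] keys = new String[1_000_000];
        for (int i = 0; i < keys.length; i++) {
            keys[i] = "word-" + rnd.nextInt(Integer.MAX_VALUE) + "-" + i;
        }

        long t0 = System.nanoTime();
        Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            map.put(keys[i], i);
        }
        long t1 = System.nanoTime();

        long sum = 0;
        for (String key : keys) {
            sum += map.get(key);
        }
        long t2 = System.nanoTime();

        System.out.printf("put: %d ms, get: %d ms, checksum: %d%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
    }
}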

【Quoting t*******e's post】
: When key space grows big,the probability of hash collision enlarges。Java
: handles hash collision in a less-efficient manner, which could lead to
: slowness.

t*e
15
http://openjdk.java.net/jeps/180
Obviously they are smarter than you.

【Quoting w**z's post】
: I don't think you are smarter than the people who implemented Java string
: hash function. don't over engineer things. if you think you have performance
: issue, run some benchmark and see where is the bottle neck and go from
: there. as I said, 1m is nothing nowadays .

w*z
16
I am talking about the String hash function, not the handling of hash map collisions. By the way, if you run OpenJDK in production, good luck. Of course, they are smarter than me.

【Quoting t*******e's post】
: http://openjdk.java.net/jeps/180
: Obviously they are smarter than you.

t*e
17
Same issue with the Oracle JDK. BTW, you sound pretty naive and emotional. Please be professional, not personal, when discussing technical subjects.

【Quoting w**z's post】
: I am talking about String hash function, not handling of hash map collision
: . by the way, if you run openjdk in production, then good luck. Of course
: , they are smarter than me.

w*z
18
which part is emotional and not professional?

【Quoting t*******e's post】
: Same issue with Oracle JDK. BTW, you sound pretty naive and emotional.
: Please be professional not personal when discussing technical subjects.

s*e
19
Rehashing and collisions are the two major issues.

【Quoting l******0's post】
: "The default implementation of hashmap is not designed for the big data like
: that."
: Any documentation or experiments on this point?

c*e
20
Collisions in a hashtable are common; switch to a different algorithm.

【Quoting s******e's post】
: rehash and collision are two major issues.

s*e
21
Read the context please. Do not jump into the topic too fast without knowing the situation... we were talking about a large number of String keys.

【Quoting c*********e's post】
: Collisions in a hashtable are common; switch to a different algorithm.
t*e
22
Judge yourself before you judge others.

【Quoting w**z's post】
: which part is emotional and not professional?
w*z
23
Why make it sound so mysterious? The guy just wants a 1M-entry HashMap, and you bring up BigTable and consistent hashing. Are you a PhD doing research?

【Quoting t*******e's post】
: Judge yourself before you judge others.
g*g
24
A 1M HashMap is no big deal. These past few days I've been moving a few Oracle tables over to MySQL, with some merge and split operations.
Several tables with 1-2M records each were loaded straight into memory for id->object lookup, with no pressure at all. This was on my 8 GB laptop, not some big server.

【Quoting w**z's post】
: Why make it sound so mysterious? The guy just wants a 1M-entry HashMap, and you bring up BigTable and consistent hashing.
: Are you a PhD doing research?

s*e
25
Well, it seems that people have different assumptions.
If the initial loading time is not counted and your String key is a DB-id type of string, then HashMap might be fine, since neither rehashing nor collisions will be an issue any more.
But according to the original post, it seems that we are talking about building the HashMap over time, on the fly, with random and big strings as the keys. If so, rehashing and collisions will be issues.
g*g
26
1M is such a small dataset that it doesn't matter how it's built.
A 32-bit integer gives you 4 billion buckets, and 1M is just 1/4000 of them. You'll see a few collisions here and there, but not enough to cause a performance issue.

【Quoting s******e's post】
: well. seems that people have the different assumptions.
: If the initial loading time is not counted and your string key is db id type
: of string, then hashmap might be fine since both rehashing and collision
: will not be issue any more.
: But according to the original post, it seems that we are talking about
: building the hashmap over the time on the fly with random and big strings as
: the key. if so, rehashing and collision will be issues.

s*e
27
You misunderstand the relationship between the number of buckets and collisions. Having enough buckets does not necessarily translate into fewer collisions. As I said, a DB-id type string (especially a surrogate id), due to the constraints applied to it, is more likely to be uniformly distributed than random big string keys.
Rehashing is another issue, because of the way the default implementation of HashMap works. If you build the map on the fly, I remember somebody reported that after 200 thousand String-key entries, rehashing caused some users to suffer quite a bit. Even if not all users suffer, it should be avoided in any robust system.

【Quoting g*****g's post】
: 1M is such a small dataset it doesn't matter how it's built.
: A 32 bits integer gives your 4 billion buckets, 1M is just 1/4000 of them,
: you'll see a few collisions here and there, but not enough to cause
: performance issue.

g*g
28
No, you misunderstand. The number of available buckets is constant: the less data, the less chance of collision, simple as that.
Is it possible that 1M entries cause heavy collisions? Possible, but unlikely for most use cases. And when that's the case, it's easy to tune the hashing function.
The OP didn't mention any random big strings to begin with; try the simple approach and don't over-engineer before you have to.

【Quoting s******e's post】
: you misunderstand the relationship between #buckets and collision. Having
: enough buckets does not necessarily transfer into less collisions. As said,
: db id type string (especially surrogate id), due to the constraint applied,
: is more likely to be uniformly distributed than random big string keys.
: rehashing is another issue because of the way how the default implementation
: of hashmap works. if you build the map on the fly, I remember that somebody
: reported after 200 thousand string key entries, rehashing could cause some
: users to suffer quite a bit. Even not all the users will suffer, but it
: should be avoided for any robust systems.

w*z
29
Java's HashMap implementation does a supplemental hash on the hashCode the caller provides.
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/
It's not likely you will have a lot of hash collisions for random String keys.
Another way to avoid rehashing is to set a bigger initial size if you are sure you will have 1M records.
I agree with goodbug: at least do some benchmarking before trying to tune anything.
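For example, a minimal sketch of the presizing idea: pick an initial capacity so that the expected 1M entries stay below capacity times the load factor, which means the table never resizes (and never rehashes) while loading. The 0.75f below is HashMap's default load factor; the rest is an assumption about how the map gets created.

import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Capacity chosen so that expectedSize <= capacity * loadFactor always holds.
    public static Map<String, Object> create(int expectedSize) {
        float loadFactor = 0.75f; // HashMap's default
        int initialCapacity = (int) Math.ceil(expectedSize / loadFactor);
        return new HashMap<>(initialCapacity, loadFactor);
    }

    public static void main(String[] args) {
        Map<String, Object> map = create(1_000_000);
        map.put("example", Boolean.TRUE);
        System.out.println(map.size());
    }
}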

【Quoting g*****g's post】
: No, you misunderstand. The number of available buckets is constant, the less
: data, the less chance for collision, simple as that.
: It is possible 1M data causes heavy collision? Possible, but unlikely for
: most use cases. And when it's case, it's easy to tune the hashing function.
: OP didn't mention any random big string to begin with, try a simple approach
: and don't overengineer before you have to.

l*m
30
HashMap will increase its memory allocation when the current allocation is not large enough.

【Quoting w**z's post】
: Java Hashmap implementation does a supplement hash on the hasCode caller
: provides.
: http://grepcode.com/file/repository.grepcode.com/java/root/jdk/
: It's not likely you will have a lot of hash collisions for random String Key
: .
: Another way to avoid rehashing is to set a bigger initial size if you are
: sure you will have 1M records.
: I agree with goodbug, at least do some benchmarking before trying to tune
: anything.

w*z
31
At that time, rehashing happens... If you know your size in advance, just set it to a bigger number to avoid rehashing.

【Quoting l*******m's post】
: hashmap will increase the memory allocation when the current allocation is
: not large enough

s*e
32
First, the number of buckets is never constant for the default implementation of HashMap. That is why you have rehashing.
Second, collisions depend on all three factors: the input, the hash function, and the number of buckets. Even if you could use all 64 bits of memory for your buckets, you might still have heavy collisions for a biased input.
That is why I said a DB-id type string key might work fine with the default HashMap: I believe in such a case the default hash function will distribute the keys uniformly.

【Quoting g*****g's post】
: No, you misunderstand. The number of available buckets is constant, the less
: data, the less chance for collision, simple as that.
: It is possible 1M data causes heavy collision? Possible, but unlikely for
: most use cases. And when it's case, it's easy to tune the hashing function.
: OP didn't mention any random big string to begin with, try a simple approach
: and don't overengineer before you have to.

s*e
33
When you build your map dynamically, that is a big if. If you can preload, I
think even several million records should be fine if the input is not that
biased.

【Quoting w**z's post】
: At that time, rehashing happens... If you know your size in advance, just
: set to a bigger number to avoid rehashing.

w*z
34
That is true, it's the tradeoff between time and space. Only the OP knows his use case.

【Quoting s******e's post】
: When you build your map dynamically, that is a big if. If you can preload, I
: think even several million records should be fine if the input is not that
: biased.

l*0
35
In my case,
1) The "keys" are English words or phrases, with no repetitions.
2) I have to build this map in a static initialization function.
BTW, what do you mean by 'preload'? Is it an initialization?
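A minimal sketch of what that static initialization could look like, combining the presizing advice above with a load-once map of words; the file name "words.txt" and the Integer value type are placeholders, not from the original post:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Lexicon {
    // Presized so ~1M entries never trigger a resize during the static load.
    private static final Map<String, Integer> WORDS = new HashMap<>(1_400_000);

    static {
        try {
            // One word or phrase per line; the value is just its line number here.
            List<String> lines = Files.readAllLines(Paths.get("words.txt"), StandardCharsets.UTF_8);
            for (int i = 0; i < lines.size(); i++) {
                WORDS.put(lines.get(i), i);
            }
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static Integer lookup(String wordOrPhrase) {
        return WORDS.get(wordOrPhrase);
    }
}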

【Quoting s******e's post】
: When you build your map dynamically, that is a big if. If you can preload, I
: think even several million records should be fine if the input is not that
: biased.

l*0
36
Sounds like a good experiment. I will try using one map.

【Quoting g*****g's post】
: A 1M HashMap is no big deal. These past few days I've been moving a few Oracle tables over to MySQL, with some merge and split operations.
: Several tables with 1-2M records each were loaded straight into memory for id->object lookup, with no pressure at all.
: This was on my 8 GB laptop, not some big server.
