Weird! Can't log into my Google account # Internet
s*e
1
[ The following is forwarded from the EB23 board ]
From: sanduolee (san), Board: EB23
Subject: G325A question
Posted at: BBS 未名空间站 (Tue Nov 15 14:42:28 2011, US Eastern)
Does anyone know how to fill in "Applicant's last address outside the United
States of more than one year." on the G325A?
According to PJiang's blog: "Fill in the last address where you lived for more than
a year before leaving the country. It does not have to be your hukou (household
registration) address or your work-unit address, but it must be the address where
you actually lived at the time. Even if the place has since been torn down and no
longer exists, still put it down."
The problem is that I never had a real place to stay before I came to the US.
Can I use my parents' address? Does this really matter? Thanks a lot.
s*1
2
Could an expert please advise:
In MySQL, does the speed of a select by primary/unique key depend on the number of
rows in the table, N? Is the relationship O(log(N))?
Thanks.
s*v
3
Starting last night, I suddenly couldn't log into my Google account because the
password was reported as incorrect, even though I had always stayed signed in by
default and never changed the password.
I have reset the password twice and tried to log in immediately with the new
password, but weirdly even that doesn't get me in. Has anyone run into this, or
does anyone know how to fix it? I feel like I'm under a curse; right now I can't
get into my Google account at all.
s*t
4
No problem. That item is not important.
v*r
5
Assuming the index is maintained properly, it is O(log(N)).

【Quoted from s********1's post】
: Could an expert please advise:
: In MySQL, does the speed of a select by primary/unique key depend on the number of
: rows in the table, N? Is the relationship O(log(N))?
: Thanks.

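A quick way to see this in practice is to time primary-key lookups on tables of very
different sizes. The sketch below is only an illustration: it uses Python's built-in
sqlite3 module instead of MySQL (so it runs self-contained), and the table name,
sizes and lookup counts are made up; but since both engines resolve a primary-key
lookup through a B-tree, the time per lookup should grow only very slowly with N:

# Rough timing sketch: primary-key lookups vs. table size.
# Uses the standard-library sqlite3 module (not MySQL) purely for
# illustration; both engines resolve PK lookups through a B-tree.
import sqlite3
import time

def pk_lookup_time(n_rows, n_lookups=1000):
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    cur.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, "x") for i in range(n_rows)))
    conn.commit()
    start = time.perf_counter()
    for i in range(0, n_rows, max(1, n_rows // n_lookups)):
        cur.execute("SELECT val FROM t WHERE id = ?", (i,))
        cur.fetchone()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

for n in (10_000, 100_000, 1_000_000):
    print(n, "rows:", round(pk_lookup_time(n), 4), "seconds for ~1000 lookups")

Because the number of lookups stays roughly constant, any growth in the reported
time reflects the slowly growing B-tree height rather than the table size itself.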
s*e
6
Thanks a lot.
g*l
7
Wow, very professional.
s*o
11
Does the same relationship hold for MSSQL and Oracle?
y*9
12
For a binary search over a sorted list of length N, it is O(log2(N)). For accessing
a key through an index, it should not be related to N in that way. Intuitively, it
is related to the index depth; the index depth depends on N and on how many entries
a block can hold. It is probably O(log_b(N)), where b is the number of entries per
block.
See: http://en.wikipedia.org/wiki/B-tree#Search

【Quoted from s********1's post】
: Could an expert please advise:
: In MySQL, does the speed of a select by primary/unique key depend on the number of
: rows in the table, N? Is the relationship O(log(N))?
: Thanks.

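To get a feel for how shallow such an index stays, here is a minimal sketch that
computes ceil(log_b(N)) for a few table sizes; the entries-per-block values b are
assumed figures for illustration, not measured MySQL or Oracle statistics:

# Minimal sketch: estimated index height = ceil(log_b(N)), computed with
# integer arithmetic, where b is an assumed number of entries per block.

def index_height(n_rows, entries_per_block):
    """Smallest h such that entries_per_block ** h >= n_rows."""
    height, capacity = 1, entries_per_block
    while capacity < n_rows:
        capacity *= entries_per_block
        height += 1
    return height

for b in (100, 500, 1000):                 # assumed per-block fan-outs
    for n in (10**6, 10**8, 2 * 10**9):    # table row counts
        print(f"N={n:>13,}  b={b:>5}  height={index_height(n, b)}")

Even at two billion rows the height stays at 4-5 levels for these assumed fan-outs,
which is why the lookup cost barely changes as the table grows.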
v*r
13
Don't get confused.
log2(N) is the best-case height of an N-node self-balancing binary tree.
log_b(N) is the best-case height of an N-node B-tree in which each node holds b
elements.
For the search/insert/delete efficiency of any B-tree in big-O notation, it is
O(log N). For an algorithm with logarithmic efficiency, the base of the logarithm
doesn't matter.

【Quoted from y****9's post】
: For a binary search over a sorted list of length N, it is O(log2(N)). For accessing
: a key through an index, it should not be related to N in that way. Intuitively, it
: is related to the index depth; the index depth depends on N and on how many entries
: a block can hold. It is probably O(log_b(N)), where b is the number of entries per
: block.
: See: http://en.wikipedia.org/wiki/B-tree#Search

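The base-independence claim is just the change-of-base identity. For any fixed
fan-out b > 1,

\log_b N = \frac{\log_2 N}{\log_2 b} = \frac{1}{\log_2 b}\,\log_2 N = \Theta(\log N),

so a B-tree of fan-out b and a balanced binary tree differ in height only by the
constant factor 1/\log_2 b, which O-notation discards.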
y*9
14

You are probably right. This may be the first time since school that I've needed to
think in big-O notation, and even in school I wasn't good at algorithm analysis.
Putting theoretical complexity analysis aside, from a practical point of view, is it
safe to say that the speed of a select based on a PK index is almost the same
regardless of table length? I just checked the biggest table in one of my Oracle
databases; it has 2 billion rows, and the consistent gets is 4.
So that should be the index root block, a branch block, a leaf block, and the data
block.
As long as a typical index lookup does 4 gets, the speed should be the same.
I know one can argue about the differences among 1, 2, 3, or 4 gets, but practically,
does it really matter when we talk about the speed of an index-based select with
respect to table length?

【Quoted from v*****r's post】
: Don't get confused.
: log2(N) is the best-case height of an N-node self-balancing binary tree.
: log_b(N) is the best-case height of an N-node B-tree in which each node holds b
: elements.
: For the search/insert/delete efficiency of any B-tree in big-O notation, it is
: O(log N). For an algorithm with logarithmic efficiency, the base of the logarithm
: doesn't matter.

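The 4 consistent gets line up with a three-level index plus one table block.
Assuming, purely for illustration, that each index block holds on the order of
1,300 entries (an invented figure, not a statistic from the database above), the
index needs

\lceil \log_{1300}(2 \times 10^{9}) \rceil = \lceil 2.99 \rceil = 3

block gets (root, branch, leaf); one more get fetches the table block via the rowid,
for the observed total of 4.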
v*r
15
It is not right to assume the speed is the same just because the number of consistent
gets, which is determined by the height of the index, is the same. You are ignoring
CPU time, even though it is only a fraction of the disk I/O. (That is why we rarely
need to worry about big O in the database world; we have more practical things to be
concerned about.)
Also, in a real-world case there is usually more than one algorithm involved. In your
case, the first three consistent gets are for the index lookup, which is O(log n) in
theory. The last consistent get fetches the table row using the rowid obtained from
the index; that is a separate algorithm, which is O(1) if we think in big-O terms
again.

【Quoted from y****9's post】
: You are probably right. This may be the first time since school that I've needed to
: think in big-O notation, and even in school I wasn't good at algorithm analysis.
: Putting theoretical complexity analysis aside, from a practical point of view, is it
: safe to say that the speed of a select based on a PK index is almost the same
: regardless of table length? I just checked the biggest table in one of my Oracle
: databases; it has 2 billion rows, and the consistent gets is 4.
: So that should be the index root block, a branch block, a leaf block, and the data
: block.

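The two algorithms can be mimicked in a few lines of Python: a binary search over
sorted keys stands in for the index lookup (O(log n)), and a direct fetch by position
stands in for the rowid access (O(1)). This is only a toy model of the block-level
behaviour, with made-up keys and rowids:

# Toy model of the two steps in an indexed point query:
#   1) index lookup -> binary search over sorted keys, O(log n)
#   2) rowid fetch  -> direct access by position, O(1)
import bisect

def index_lookup(sorted_keys, rowids, key):
    """Return the rowid for `key`, or None if absent (O(log n))."""
    pos = bisect.bisect_left(sorted_keys, key)
    if pos < len(sorted_keys) and sorted_keys[pos] == key:
        return rowids[pos]
    return None

def fetch_by_rowid(table_rows, rowid):
    """Fetch the row directly by its position (O(1))."""
    return table_rows[rowid]

# Hypothetical data: rows stored out of key order in the "table".
table_rows = [("k3", "c"), ("k1", "a"), ("k2", "b")]
sorted_keys = ["k1", "k2", "k3"]
rowids = [1, 2, 0]          # position of each key's row in table_rows

rid = index_lookup(sorted_keys, rowids, "k2")
print(fetch_by_rowid(table_rows, rid))   # -> ('k2', 'b')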
y*9
16
Original question:

In MySQL, does the speed of a select by primary/unique key depend on the number of
rows in the table, N? Is the relationship O(log(N))?

If we understand the original question as asking about the relationship between the
cost of an index access and the table length, my opinion is that they have no direct
relationship. Big-O notation is used to analyze an algorithm's efficiency and
complexity; it does not seem appropriate for describing such a relationship.
Based on Jonathan Lewis's CBO book, the cost of a simple B-tree access is
cost =
blevel +
ceiling(leaf_blocks * effective index selectivity) +
ceiling(clustering_factor * effective table selectivity)
Imagine we have many books with different numbers of pages but the same page size,
and each book has exactly one page that refers to the word "DBA". We want to know how
fast we can find that page in each book. Here the speed really means the cost, and
the cost can be expressed as the total number of pages you turn:
1. Assume everyone's first turn lands on exactly the index page that contains "D" -
that is one turn.
2. Assume we need to turn to the next page to find the "DBA" entry under the "D"
section.
3. Go to the referenced book page - the final turn.
The cost above is 3. How fast your eye can locate "D" or "DBA" within a page should
not be relevant. No matter how thick or thin the book is, as long as the number of
pages turned is the same, we should say we have the same speed (cost) for finding the
page that contains "DBA".
Returning to the real case: we do one consistent get per block, so the CPU time spent
examining a block should not be part of the index access cost equation.
My argument is just trying to clarify the concept for everyone's benefit (first of
all my own, of course), not necessarily limited to the original question. I do agree
that the algorithmic efficiency of an index access is O(log N) - it is a good chance
for me to revisit big-O notation.

【Quoted from v*****r's post】
: It is not right to assume the speed is the same just because the number of
: consistent gets, which is determined by the height of the index, is the same. You
: are ignoring CPU time, even though it is only a fraction of the disk I/O. (That is
: why we rarely need to worry about big O in the database world; we have more
: practical things to be concerned about.)
: Also, in a real-world case there is usually more than one algorithm involved. In
: your case, the first three consistent gets are for the index lookup, which is
: O(log n) in theory. The last consistent get fetches the table row using the rowid
: obtained from the index; that is a separate algorithm, which is O(1) if we think in
: big-O terms again.

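The Lewis formula is easy to plug numbers into. The sketch below evaluates it for a
unique-key lookup using invented statistics (the blevel, leaf_blocks and
clustering_factor values are made up, not taken from any real database):

# Plugging hypothetical statistics into the simple B-tree access cost
# formula from Jonathan Lewis's CBO book:
#   cost = blevel
#        + ceil(leaf_blocks * effective index selectivity)
#        + ceil(clustering_factor * effective table selectivity)
import math

def btree_access_cost(blevel, leaf_blocks, clustering_factor,
                      index_selectivity, table_selectivity):
    return (blevel
            + math.ceil(leaf_blocks * index_selectivity)
            + math.ceil(clustering_factor * table_selectivity))

# Made-up stats for a 2-billion-row table with a unique key:
num_rows = 2_000_000_000
selectivity = 1 / num_rows          # unique-key lookup touches one row
cost = btree_access_cost(blevel=2,
                         leaf_blocks=4_000_000,
                         clustering_factor=50_000_000,
                         index_selectivity=selectivity,
                         table_selectivity=selectivity)
print("estimated cost:", cost)       # 2 + 1 + 1 = 4

With a unique key the selectivities are 1/num_rows, so the two ceiling terms each
round up to one block and the estimate reduces to blevel + 2, essentially independent
of the row count.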
v*r
17
Using your example: we need to locate the word "DBA" in a book, where "DBA" is
indexed and occurs once in the book.
1. Assume everyone's first turn lands on exactly the index page that contains "D" -
that is one turn.
Comment: in big O, this step's efficiency is O(log n).
2. Assume we need to turn to the next page to find the "DBA" entry under the "D"
section.
Comment: in big O, this step's efficiency is O(n), unless you have a fancier
structure implemented within each index section (for "D").
3. Go to the referenced book page - the final turn.
Comment: in big O, this final step's efficiency is O(1).
As you can see, three different algorithms are involved in your example. What is the
point of discussing big O here other than reviewing the basics of algorithms?
Practically speaking, there is no point. Why? One reason is the speed limitation of
the physical devices, such as logical/physical I/Os. Big-O analysis ignores those
factors, but in practice, how can you ignore speed factors that account for 99% of
the time? That is why the optimizer calculates cost using index height, clustering
factor, selectivity, and so on, and DBAs look at the real factors that impact
performance.
Even in the programming world, outside of algorithm analysis, I don't think big O
has much practical use for performance work. But programmers do need to worry about
big O, simply because interviews ask big-O questions.

【Quoted from y****9's post】
: Original question:
: In MySQL, does the speed of a select by primary/unique key depend on the number of
: rows in the table, N? Is the relationship O(log(N))?
: If we understand the original question as asking about the relationship between the
: cost of an index access and the table length, my opinion is that they have no direct
: relationship. Big-O notation is used to analyze an algorithm's efficiency and
: complexity; it does not seem appropriate for describing such a relationship.
