c*d
1
The position I interviewed for is backend.
I got a design question: design the FB Like button.
We discussed storage, real-time updates, and so on. How would you all answer this best?
r*n
2
For real-time updates, just use long polling.
For storage, the main concern is a celebrity making a post and a flood of likes becoming a bottleneck.
So keep it in memory, add a write-ahead log, and update the DB asynchronously. How about that?
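The in-memory counter plus write-ahead log idea above can be sketched roughly like this. This is a minimal illustration, not any real system's design; `LikeStore`, the JSON log format, and the dict standing in for the DB are all made up for the example:

```python
import json
import threading
from collections import defaultdict

class LikeStore:
    """In-memory like counts; every write is logged to a WAL first,
    and the DB is updated asynchronously by a background flush."""

    def __init__(self, wal_path):
        self.counts = defaultdict(int)     # post_id -> like count
        self.wal = open(wal_path, "a")     # append-only write-ahead log
        self.lock = threading.Lock()

    def like(self, user_id, post_id):
        with self.lock:
            # 1) durably log the intent before mutating memory,
            #    so a crash can be replayed from the WAL
            self.wal.write(json.dumps({"u": user_id, "p": post_id}) + "\n")
            self.wal.flush()
            # 2) apply in memory; readers see the new count immediately
            self.counts[post_id] += 1

    def flush_to_db(self, db):
        # run periodically by a background worker, not per request
        with self.lock:
            snapshot = dict(self.counts)
        for post_id, n in snapshot.items():
            db[post_id] = n  # stand-in for an UPSERT against the real DB
```

The point of the WAL is that the hot path only touches memory plus one sequential append, while durability is preserved across crashes.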

【Quoting c********d】
: The position I interviewed for is backend.
: I got a design question: design the FB Like button.
: We discussed storage, real-time updates, and so on. How would you all answer this best?

r*s
3
#1 QPS:
Probably 10k~100k concurrent accesses (my guess).
Meaning we need to cache the data heavily. We may want a write-through
cache in place, well sharded.
Resource-wise, I'm not worried at all. For the cache, Redis can be super
fast (see https://redis.io/topics/benchmarks). For persistence, with NoSQL
we can achieve almost infinite scalability.
Our system should provide the following APIs:
- Like(user, post)
- Unlike(user, post)
- Liked(user, post)
- CountLikes(post)
Extra:
- RecentLikes(user)
- FriendsLiked(user, post)
If we only consider the first 4 APIs, it is obvious: post ID should be the
hash/partition key and user ID should be the range/sort key.
The extras can tolerate longer latencies, served by an offline process or a
secondary global index.
Alternatively, we can store recent likes in the user's metadata, or publish
each like as an implicit timeline entry.
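A toy sketch of the partitioning scheme above: post ID as the partition key means all likes of one post land on the same shard, so `CountLikes` is a single-shard lookup. The shard count and hash function here are purely illustrative:

```python
import hashlib

NUM_SHARDS = 4  # illustrative; a real cluster sizes this by load

def shard_of(post_id):
    # post_id is the partition key: all likes of a post live on one shard
    h = hashlib.md5(post_id.encode()).hexdigest()
    return int(h, 16) % NUM_SHARDS

# shard -> {post_id: set of user_ids}; stand-in for per-node storage
shards = [dict() for _ in range(NUM_SHARDS)]

def like(user, post):
    shards[shard_of(post)].setdefault(post, set()).add(user)

def unlike(user, post):
    shards[shard_of(post)].get(post, set()).discard(user)

def liked(user, post):
    return user in shards[shard_of(post)].get(post, set())

def count_likes(post):
    # single-shard operation because post_id is the partition key
    return len(shards[shard_of(post)].get(post, set()))
```

Note that `RecentLikes(user)` is the awkward one under this layout: a user's likes are scattered across shards, which is exactly why it needs a secondary index or an offline process.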
#2 access pattern:
Read-dominant; reads can be 10x-50x the writes.
#3 consistency:
Eventual consistency is OK, but a writer client should read back its own write.
We may need sticky sessions (which can be bad for load balancing, but
luckily we don't need to worry much about servers going down).
Another approach we can consider, or combine with the above, is a local
(client-side) cache. This might be hard in a browser (a user could use
several browsers on the same computer!), but in a mobile app it is easy. As
far as I know, FB uses the local-cache trick in several products to offload
their services.
#4 availability:
Very important. If people cannot like posts, some attention whore may DIE
from the unavailability.
Combining #3 and #4, we decide it should be an AP system. During a network
partition, we keep accepting Like writes in each partition, and after
recovery we merge them as best we can.
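One way to make that post-partition merge well-defined is CRDT-style state. Below is a minimal two-phase-set sketch (my illustration, not the poster's design): each side records likes and unlikes as grow-only sets, and merge is plain set union, so it is commutative and idempotent no matter which order partitions reconcile. The limitation is that an unliked pair can never be re-liked; a real system would use a richer CRDT such as an OR-set.

```python
class LikePartition:
    """Like state held by one side of a network partition.
    Two-phase-set semantics: add-set and remove-set both only grow."""

    def __init__(self):
        self.adds = set()     # (user, post) pairs ever liked on this side
        self.removes = set()  # (user, post) pairs ever unliked on this side

    def like(self, user, post):
        self.adds.add((user, post))

    def unlike(self, user, post):
        self.removes.add((user, post))

    def merge(self, other):
        # set union: commutative, associative, idempotent,
        # so partitions can reconcile in any order
        self.adds |= other.adds
        self.removes |= other.removes

    def count_likes(self, post):
        live = self.adds - self.removes  # remove wins over add
        return sum(1 for (_, p) in live if p == post)
```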
#5 how to shard?
I always prefer quorum replication in loosely consistent systems; it has
better fault tolerance than master-slave or leader-based designs, and
availability is important in our use case. That doesn't mean we need to
implement it ourselves, but when choosing tools we should pick Cassandra /
DynamoDB rather than MongoDB / MySQL.
#6 improve latency?
Latency is important too, but we still need to confirm the write: an
immediate notice that the like didn't take effect beats discovering it an
hour later, or never.
As mentioned above, use a write-through cache. Also tune the W/R values of
the quorum so we get better write performance with acceptable fault
tolerance.
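The W/R tuning mentioned above rests on the quorum overlap rule R + W > N: any read quorum then intersects any write quorum, so a read sees the latest acknowledged write. A tiny illustration (the N/W/R values are just examples):

```python
def quorum_overlaps(n, w, r):
    """True if every read quorum of size r intersects every
    write quorum of size w among n replicas (R + W > N)."""
    return w + r > n

N = 3
# Write-latency-friendly: ack after 1 replica, but reads must hit all 3.
assert quorum_overlaps(N, w=1, r=3)
# Common balanced setting: both quorums are majorities.
assert quorum_overlaps(N, w=2, r=2)
# W=1, R=1 gives the fastest path but quorums may miss each other,
# i.e. only eventual consistency.
assert not quorum_overlaps(N, w=1, r=1)
```

For a like button we'd lean toward a small W (fast, highly available writes) and accept either a larger R or plain eventual consistency on reads, patching the writer's own view with the sticky-session or client-cache tricks from #3.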
#7 how to balance load in the app layer?
A load balancer...
I'm just a dishwasher at a restaurant with limited design skills... this is all I can think of for now; gurus, please don't laugh...
r*s
4
I'd still lean toward write-through... it won't be too slow...
Write-back carries some risk, though it's open to debate...
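The trade-off being argued here: write-through persists synchronously on every write (slower writes, no loss window), while write-back acknowledges from memory and risks losing unflushed data on a crash. A minimal write-through sketch, with a dict standing in for the DB and the class name made up:

```python
class WriteThroughCache:
    """Write-through: every put hits the DB synchronously before
    returning, so the cache never holds data the DB could lose
    (the risk a write-back cache accepts in exchange for speed)."""

    def __init__(self, db):
        self.db = db      # stand-in for the persistent store
        self.cache = {}

    def put(self, key, value):
        self.db[key] = value      # synchronous write to the DB first
        self.cache[key] = value   # then populate the cache

    def get(self, key):
        if key not in self.cache:          # cache miss:
            self.cache[key] = self.db[key]  # fall back to the DB
        return self.cache[key]
```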


【Quoting r*********n】
: For real-time updates, just use long polling.
: For storage, the main concern is a celebrity making a post and a flood of likes becoming a bottleneck.
: So keep it in memory, add a write-ahead log, and update the DB asynchronously. How about that?
