I don't get this part: MySQL is a DB, Lucene is a search engine. How is one replaceable by the other?
【Quoting r*****l】 : "changing our back-end from MySQL to a real-time version of Lucene" : This may contribute quite a lot to the performance gain.
g*g
#4
Reading the blog, it seems they get this by changing the architecture from synchronous mode to asynchronous mode; that's where most of the gain is coming from. They also imply Ruby on Rails is becoming unmaintainable for this kind of change, or lacks NIO libraries. I am surprised they didn't do it in Scala, though.
【Quoting g*****g】 : I don't get this part: MySQL is a DB, Lucene is a search engine. : How is one replaceable by the other?
i*e
#6
NIO is for processing web requests asynchronously. Is there any web server that can asynchronously fetch data from the back end and then return to the original socket connection to serve the page?
【Quoting g*****g】 : Reading the blog, it seems they get this by changing the architecture : from synchronous mode to asynchronous mode; that's where most : of the gain is coming from. They also imply Ruby on Rails is becoming unmaintainable : for this kind of change, or lacks NIO libraries. I am surprised they : didn't do it in Scala, though.
F*n
#7
If you think of a DBMS as nothing but indexing, Lucene has its own index management and access mechanism, which is much faster than other DBs for Lucene's own specific tasks.
【Quoting g*****g】 : I don't get this part: MySQL is a DB, Lucene is a search engine. : How is one replaceable by the other?
r*l
#8
Yes. My feeling is that the index engine and the new architecture help directly. The title implies Java is the main reason, though.
【Quoting g*****g】 : Reading the blog, it seems they get this by changing the architecture : from synchronous mode to asynchronous mode; that's where most : of the gain is coming from. They also imply Ruby on Rails is becoming unmaintainable : for this kind of change, or lacks NIO libraries. I am surprised they : didn't do it in Scala, though.
In Java terms, they create a Future in the servlet and block on the Future to return. In the Future, they do all kinds of async processing. On a loaded system, fewer CPU cycles are wasted blocking on I/O, so they can achieve better throughput. Though they don't really use servlets; that part is in RoR.
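A minimal sketch of that pattern (class and method names are invented for illustration, not Twitter's actual code): the request thread fires off the back-end calls as Futures and blocks once, on the combined result, rather than waiting on each I/O call in turn.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: submit back-end work to a pool, block only on get().
public class AsyncAggregator {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Each back-end call becomes a Future; the request thread blocks
    // once on the combined result, so the two fetches run concurrently
    // and the total wait is roughly the max, not the sum.
    public String handleRequest(String userId) throws Exception {
        Future<String> timeline = pool.submit(() -> fetchTimeline(userId));
        Future<String> profile  = pool.submit(() -> fetchProfile(userId));
        return timeline.get() + "|" + profile.get();
    }

    // Stand-ins for real back-end I/O calls.
    private String fetchTimeline(String userId) { return "timeline:" + userId; }
    private String fetchProfile(String userId)  { return "profile:" + userId; }

    public void shutdown() { pool.shutdown(); }
}
```

The request thread still blocks, but only once per request instead of once per back-end call.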
Hmm, it should be as you said. I'd like to know which (open source) web framework has implemented this.
【Quoting g*****g】 : In Java terms, they create a Future in the servlet and block : on the Future to return. In the Future, they do all kinds of : async processing. On a loaded system, fewer CPU cycles are : wasted blocking on I/O, so they can achieve better throughput. : Though they don't really use servlets; that part is in RoR.
What you describe still seems to be blocking, just blocked on a Future? How is that different from blocking in memory/in a thread?
【Quoting g*****g】 : In Java terms, they create a Future in the servlet and block : on the Future to return. In the Future, they do all kinds of : async processing. On a loaded system, fewer CPU cycles are : wasted blocking on I/O, so they can achieve better throughput. : Though they don't really use servlets; that part is in RoR.
c*n
#15
This is essentially the thread-vs-message-processing argument. Cassandra does exactly what you said: every request creates a handler, and Cassandra shoves it into a huge map under that request ID. When the reply message comes back, the ID is used to look up the request handler. So overall there are very few "processor" threads, but there can be many, many more requests on the queue.
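The map-of-pending-handlers pattern described above can be sketched roughly like this (a simplified illustration with invented names, not Cassandra's actual code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Illustrative sketch: park a callback per request ID, then look it up
// when the reply message arrives. No thread blocks per request.
public class ReplyDispatcher {
    private final ConcurrentHashMap<Long, Consumer<String>> pending =
            new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Register a handler and return the ID to stamp on the outgoing message.
    public long send(Consumer<String> onReply) {
        long id = nextId.incrementAndGet();
        pending.put(id, onReply);
        return id;
    }

    // Called by the (few) processor threads when a reply comes back:
    // the ID on the reply looks up and removes the parked handler.
    public void onReply(long id, String payload) {
        Consumer<String> handler = pending.remove(id);
        if (handler != null) handler.accept(payload);
    }

    public int pendingCount() { return pending.size(); }
}
```

A handful of processor threads can drain replies for thousands of outstanding requests, since each request costs a map entry rather than a blocked thread.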
You can use plain servlets to hook up Netty or Mina. They use Netty here.
【Quoting i**e】 : Hmm, it should be as you said. I'd like to know which (open source) : web framework has implemented this.
i*e
#17
Hmm. I must be missing something. In the case we are discussing, there are two web servers involved: a front-end server serving web requests, which in turn calls a back-end server for mashing up data. I thought Netty and Mina used async network handling. But for the servlet running on the front-end server, the requests going to the back-end servers are still blocking?
【Quoting g*****g】 : You can use plain servlets to hook up Netty or Mina. They use Netty : here.
g*g
#18
HTTP is a request/response protocol; unless you are using long polling (a Comet-like framework) in the web layer, it has to be blocking at the front end. You can, however, do the heavy lifting in another component.
【Quoting i**e】 : Hmm. I must be missing something. In the case we are : discussing, there are two web servers involved: a : front-end server serving web requests, which in turn : calls a back-end server for mashing up data. : I thought Netty and Mina used async network handling. : But for the servlet running on the front-end server, : the requests going to the back-end servers are still : blocking?
i*e
#19
Isn't this what they did at Twitter? I think they made the front end async. When a request is received by the front end, it sends a request to the back-end service and continues on. When the back-end response comes back, someone picks up the response, mashes it up, and sends it to the original front-end client. "Creating a fully asynchronous aggregation service. No thread waits on network I/O to complete."
【Quoting g*****g】 : HTTP is a request/response protocol; unless you are using long : polling (a Comet-like framework) in the web layer, it has to be blocking : at the front end. You can, however, do the heavy lifting in another : component.
g*g
#20
They made the heavy-lifting part async, that's all. HTTP is a synchronous protocol and you can't change that. It's not like there's a connection open and the server can push data to the client whenever it wants.
【Quoting i**e】 : Isn't this what they did at Twitter? I think they : made the front end async. When a request is received : by the front end, it sends a request to the back-end service : and continues on. When the back-end response comes back, : someone picks up the response, mashes it up, and : sends it to the original front-end client. : "Creating a fully asynchronous aggregation service. : No thread waits on network I/O to complete."
i*e
#21
The request handler code can be async, though. Traditionally the request-handling thread is blocked (as in servlets) waiting for back-end I/O (file system or network). It sounds like Twitter has made this non-blocking, which means the thread is freed to do other things. When the back-end I/O is done, the back-end response thread sends the data back to the front-end client.
【Quoting g*****g】 : They made the heavy-lifting part async, that's all. : HTTP is a synchronous protocol and you can't : change that. It's not like there's a connection open : and the server can push data to the client whenever it wants.
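The non-blocking handler style described above can be sketched with a completion callback (a simplified illustration with invented names, not Twitter's actual code): the request thread registers a continuation and returns immediately, and a back-end thread completes the response later.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Illustrative sketch: the request-handling thread is freed right away;
// the back-end thread runs the continuation when its I/O finishes, so
// no thread sits blocked waiting on the back-end call.
public class NonBlockingHandler {
    private final ExecutorService backend = Executors.newSingleThreadExecutor();

    // Returns immediately; `respond` runs later, on the back-end thread,
    // once the (simulated) back-end call has produced its data.
    public CompletableFuture<Void> handle(String request, Consumer<String> respond) {
        return CompletableFuture
                .supplyAsync(() -> "data-for-" + request, backend) // simulated back-end I/O
                .thenAccept(respond);                              // mash up + write response
    }

    public void shutdown() { backend.shutdown(); }
}
```

The front-end/client connection still follows HTTP's request/response cycle, as noted above; what changes is that no front-end thread is parked waiting for the back end.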