No. You can't compact on every write; doing that would kill your performance. Again, I am not so sure about HBase. In Cassandra, there are different compaction strategies. If compactions don't happen often, read performance will suffer, since a read will have to go through multiple SSTables.
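As a concrete illustration (my own sketch, not from the posts above): with the DataStax Python driver you can pick the compaction strategy per table; the keyspace and table names (demo_ks, events) here are made up.

# Hypothetical sketch: setting a table's compaction strategy in Cassandra.
# Assumes the DataStax Python driver (pip install cassandra-driver) and a
# local node; keyspace/table names are invented for illustration.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('demo_ks')

# Size-tiered (the default): compacts SSTables of similar size together.
session.execute("""
    ALTER TABLE events
    WITH compaction = {'class': 'SizeTieredCompactionStrategy'}
""")

# Leveled: keeps most of a row in one SSTable per level, so reads touch
# fewer SSTables at the cost of more compaction I/O.
session.execute("""
    ALTER TABLE events
    WITH compaction = {'class': 'LeveledCompactionStrategy'}
""")

cluster.shutdown()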
That much I do know. My question is whether a different schema changes which files compaction has to touch. For example, if a store has only one big store file after compaction, and you then add a new column to every row, does that file need to be rewritten? And after the rewrite, does region splitting have to happen again? Conversely, if the rows already written never need new columns added, then once a store has been compacted down to a single store file, could the file holding that data be left untouched by later compactions?
【Quoting w**z's post】 : Compaction doesn't happen for every write. Read my comments above carefully. Please spend : some time understanding it; you need to get the basics.
C* is a column-family store; adding a new column doesn't mean a schema change. Compaction happens because new data is written to the database and flushed to disk as immutable files, so one row can live in multiple SSTables; to make reads more efficient, compaction has to be performed. It doesn't seem like you understand the fundamentals of column-family-based NoSQL databases.
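To make the "immutable SSTable" point concrete, here is a toy Python model (my own sketch, not real Cassandra or HBase code; ToyStore and the row/column names are invented): adding a new column to an existing row is just another write into the memtable, which is later flushed to a brand-new SSTable, and the SSTables already on disk are never edited in place.

import time

class ToyStore:
    """Toy model of a column-family store: a memtable plus immutable SSTables."""

    def __init__(self):
        self.memtable = {}   # row_key -> {column: (value, timestamp)}
        self.sstables = []   # flushed, never-modified dicts, oldest first

    def write(self, row_key, column, value):
        # Adding a new column to an existing row is just another write into
        # the memtable; no schema change, no rewrite of any file on disk.
        self.memtable.setdefault(row_key, {})[column] = (value, time.time())

    def flush(self):
        # A flush produces a brand-new immutable SSTable; the SSTables that
        # already exist are left untouched.
        if self.memtable:
            self.sstables.append(self.memtable)
            self.memtable = {}

store = ToyStore()
store.write('user:1', 'name', 'alice')
store.flush()                                    # SSTable #1
store.write('user:1', 'email', 'a@example.com')  # new column, same row
store.flush()                                    # SSTable #2; #1 is not rewritten
print(len(store.sstables))                       # 2: the row now spans two SSTables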
w*z
Post #28
The reason to have compaction is to reduce the number of disk seeks during reads. SSTables are immutable, so one row can reside in multiple SSTables depending on when it was written. A read has to merge the columns from every SSTable where that row key is found. Compaction merges the row from all the compacted SSTables into one SSTable, but which SSTables get compacted depends on the compaction strategy. It's fine even if there is no compaction at all, but the reads will become slower and slower.
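A rough sketch of that read path and of what compaction does (again, a toy model of my own, not real C* code; the row/column data is invented): a read merges the columns for a row key from every SSTable that contains it, keeping the newest timestamp per column, and compaction applies the same merge once to produce a single SSTable so later reads only look in one place.

import time

# Toy data: two immutable SSTables that both contain row 'user:1', e.g.
# because the row was written, flushed, then updated and flushed again.
now = time.time()
sstables = [
    {'user:1': {'name': ('alice', now)}},
    {'user:1': {'name': ('alicia', now + 1), 'email': ('a@example.com', now + 1)}},
]

def read_row(sstables, row_key):
    # Merge the columns for row_key from every SSTable that contains it,
    # keeping the newest timestamp per column (last-write-wins).
    merged = {}
    for sstable in sstables:
        for column, (value, ts) in sstable.get(row_key, {}).items():
            if column not in merged or ts > merged[column][1]:
                merged[column] = (value, ts)
    return merged

def compact(sstables):
    # Rewrite all input SSTables into a single one by applying the same
    # merge once, so later reads only have to consult one SSTable.
    row_keys = {rk for sstable in sstables for rk in sstable}
    return [{rk: read_row(sstables, rk) for rk in row_keys}]

print(read_row(sstables, 'user:1'))   # read has to touch both SSTables
sstables = compact(sstables)          # now only one SSTable remains
print(read_row(sstables, 'user:1'))   # same result, single place to look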