Look at Kafka's source code. How is this Scala?
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/api/ApiUtils.scala

def readShortString(buffer: ByteBuffer): String = {
  val size: Int = buffer.getShort()
  if(size < 0)
    return null
  val bytes = new Array[Byte](size)
  buffer.get(bytes)
  new String(bytes, ProtocolEncoding)
}

def writeShortString(buffer: ByteBuffer, string: String) {
  if(string == null) {
    buffer.putShort(-1)
  } else {
    val encodedString = string.getBytes(ProtocolEncoding)
    if(encodedString.length > Short.MaxValue) {
      throw new KafkaException("String exceeds the maximum size of " + Short.MaxValue + ".")
    } else {
      buffer.putShort(encodedString.length.asInstanceOf[Short])
      buffer.put(encodedString)
    }
  }
}
c*o
#18
Honestly, if you're going to write Scala like this, you might as well just go back to Java.
N*m
#19
Laozhao, how about you refactor it?
【Quoting c******o's post】 : Look at Kafka's source code. How is this Scala? : https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/api/ : ApiUtils.scala : def readShortString(buffer: ByteBuffer): String = { : val size: Int = buffer.getShort() : if(size < 0) : return null : val bytes = new Array[Byte](size) : buffer.get(bytes) : new String(bytes, ProtocolEncoding)
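For what it's worth, here is a minimal sketch of how that pair of functions might look in more idiomatic Scala: model the "missing string" case with Option instead of null, and use require instead of an explicit throw. The object name ApiUtilsIdiomatic is made up for illustration; this is not the actual Kafka code.

```scala
import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

object ApiUtilsIdiomatic {
  // Hypothetical stand-in for Kafka's ProtocolEncoding constant.
  private val ProtocolEncoding = StandardCharsets.UTF_8

  // A negative length prefix means "no string"; expose that as None
  // rather than returning null.
  def readShortString(buffer: ByteBuffer): Option[String] = {
    val size = buffer.getShort().toInt
    if (size < 0) None
    else {
      val bytes = new Array[Byte](size)
      buffer.get(bytes)
      Some(new String(bytes, ProtocolEncoding))
    }
  }

  def writeShortString(buffer: ByteBuffer, string: Option[String]): Unit =
    string match {
      case None => buffer.putShort(-1)
      case Some(s) =>
        val encoded = s.getBytes(ProtocolEncoding)
        require(encoded.length <= Short.MaxValue,
          s"String exceeds the maximum size of ${Short.MaxValue}.")
        buffer.putShort(encoded.length.toShort)
        buffer.put(encoded)
    }
}
```

The wire format stays the same; only the Scala-facing API changes, so callers can no longer forget the null check.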
p*2
#20
When it comes to performance, Scala gets smashed to pieces.
c*o
#21
My impression from this Scala Days is that it's all big data analysis. Well over half the talks were Spark/Akka, so Scala still has some traction there. The newest Apache top-level project (why does there need to be yet another thing that does the same job as Spark?), https://flink.apache.org/, also shipped its Java/Scala APIs first. Flink is basically Java, with very little Scala.
g*e
#22
Kafka on GitHub is almost Java-like. It reads like an essay written by a primary school student.
g*9
#23
I've read Kafka's source code, and it was painful. The indentation is only two spaces; it really doesn't look like it was written by professionals. The code quality is mediocre. It's fine for big data work, where losing a bit of data doesn't matter, but its reliability is nowhere near TIBCO EMS. Within big data, Hadoop's core code is probably the highest quality.
【Quoting g*********e's post】 : kafka in github is almost java like : They read like articles written by primary school students.