I think Lior is extraordinary mostly in the sense that he is not afraid of speaking out. There is so much nonsense in this field, and most people seem okay with it. He is boycotting RECOMB after becoming convinced that the meeting is more about politics than science. As Lior said, Kellis is not one of those scientists, yet he plays a leading role in modENCODE and ENCODE and serves on committees at NHGRI, among others.

Lior has spent a great deal of time on this. If you read back through his last two posts on the "network nonsense" papers, all the comments, and the back-and-forth between him, Kellis, and people from Kellis's group, it is clear that Kellis is fraudulent and has been for at least eight years. I do believe these two posts from Lior have gained enough attention. Again, we will see what MIT does about this.
M*P
Post #8
Manolis has no background in networks. It is likely that he is placing too much trust in his postdoc on this matter.
You can count the transcripts they validated in that paper on one hand, and the assembly method is total bullshit. I trust GO more, because it is at least hard to cherry-pick genes to manufacture a GO-term enrichment.
Wet-experiment validation of a handful of genes is the standard in biology research; just pick up any Nature or Science paper. I would definitely prefer "wet-experiment validation of a single gene" over "GO validation"; the former can easily boost your paper's impact factor by 3. I happen to have developed a tool for GO analysis, and it is not as robust as it seems. The work in the 2010 Cufflinks paper is enough for Nature Biotechnology; it is not a pure computational paper anyway.
【Quoting M*P's post above】
M*P
Post #15
I suspect that the robustness problem in GO analysis comes from the test everyone is using. Can you share the paper with me? My point is that validating one or two genes out of thousands of predicted genes is no different from picking one or two GO terms from a list of enriched terms.
【Quoting l****m's post above】
M*P
Post #16
How about the second Cufflinks paper? Do you believe the Figure 1 BS there? How many validations did they do in that second paper? Lior is no different from Barabasi; laughing at Barabasi's linear algebra does not prove that Lior saw the big-picture problem first.
" I suspect that the problem of robustness of GO analysis comes from the test everyone is using. can you share the paper with me? " Just pick up 1000 genes by random, and do a GO analysis, you will understand what I meant. GO always returns lots of result, leaving the authors plenty of space to pick whatever he wants. " my point is that validating one or two genes from thousands of genes predicted is no different from pick one or two go terms from a list of enriched term. " There is actually a huge difference in cost. The cost in wet-experiment prohibit the number of experiments you can do, and to some degree prevent selective results.
【Quoting M*P's post above】
A*n
Post #18
The problem is that the two functional terms Manolis Kellis ultimately used to tell his story don't even make the top 10 or top 20 of the GO analysis, which is rather problematic. "In fact, retinal cone cell development and nerve growth factor were 33 and 34 out of the 55 listed GO categories when sorted by the DAF p-value (42 and 54 when sorted by heterozygosity p-value)"
" how about the second cufflinks paper? you believe the figure 1 BS there? " Which one you are talking about? " lior is no different from Barabasi" Your previous posts show only inadequate experience in bioinformatics, which is totally fine. Nobody is expert in all field. However, this single one suggests who you are. You don't really care what is right. Barabasi apparently published too many "big picture problem" in the form of "original research article". If his paper is "perspective/editorial/ communication", then nobody (including lior) will question him.