Clojure projects basically all start with lein. Download the lein script from https://leiningen.org/, run `lein new myproject`, and you're set.
After that it depends on what kind of project you want to build. Clojure is mainly used for the Web, and that's where most of the resources are. If you have Web experience, it's a good place to start. For a Web project, use the Luminus micro-framework and just follow along: http://www.luminusweb.net/
$ lein new luminus my-app
$ cd my-app
$ lein run
Started server on port 3000
The author also wrote a book; if you like learning from books, see https://pragprog.com/book/dswdcloj2/web-development-with-clojure-second-edition
If you haven't done Web work before, start from a domain you already know. You can usually find relevant resources.
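For orientation, the skeleton that `lein new` generates is driven by a `project.clj` file at the project root. A minimal one looks roughly like this (the version numbers and `:main` namespace here are illustrative, not what any particular lein version emits):

```clojure
;; project.clj -- Leiningen project definition (illustrative sketch)
(defproject myproject "0.1.0-SNAPSHOT"
  :description "A sample Clojure project"
  :dependencies [[org.clojure/clojure "1.10.0"]]  ; version is an example
  :main myproject.core)
```

Adding a library is just another entry in `:dependencies`; lein fetches it from Clojars or Maven Central on the next run.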
Thanks.
"The big surprise with Flare has been that performance is relatively strong compared to PyTorch, about 2x-3x faster for the models I've built in Flare. While I've optimized the obvious things in Flare, there is still a lot more low hanging fruit. I suspect some of the performance wins relative to PyTorch are coming from graph construction itself. While PyTorch and Flare both fall back to Intel's native MKL on CPU, graph construction happens in the host language (Python or Clojure), and this is where PyTorch and Flare can differ performance-wise; this makes a large difference for dynamic neural nets where graph construction happens for each input."
If this is true, the raw performance of the host language still matters quite a bit for data processing.
【Quoting n***p's post】
: Clojure has less than a tenth of the rules you need to memorize compared to Python, plus a built-in feature (STM) for concurrent programming. Its single-threaded speed is also much faster than Python's. Not many people use it, though, so there's nobody to complain to.
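The STM mentioned above is part of the core language: refs are mutable only inside a transaction, which retries automatically on conflict. A minimal sketch (the names `acct-a`, `acct-b`, and `transfer!` are made up for illustration):

```clojure
;; Two refs updated atomically with Clojure's built-in STM.
(def acct-a (ref 100))
(def acct-b (ref 0))

(defn transfer! [amount]
  (dosync                       ; transaction: all-or-nothing, retried on conflict
    (alter acct-a - amount)     ; (- 100 amount)
    (alter acct-b + amount)))   ; (+ 0 amount)

(transfer! 30)
;; @acct-a => 70, @acct-b => 30
```

No locks are taken by user code; concurrent `transfer!` calls either commit consistently or retry.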
s*M
#47
Bless! I never thought 民女 was an FELV kid!!! I thought she was just a lucky, healthy baby! Bless both you and 民女!
n*p
#48
1 + 1 + 1 + 1 + 1
vs.
(+ 1 1 1 1 1)
Again, which one is better depends on personal preference.
Another example: 1 + 2 * 3. You need to know operator precedence; C++ has 17 levels of precedence rules. In Lisp you don't need to know such things:
(+ 1 (* 2 3))
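The two points above can be shown side by side in Clojure, where operators are variadic and grouping is always explicit:

```clojure
;; Variadic prefix form: one operator, any number of arguments.
(+ 1 1 1 1 1)    ; => 5, no repeated "+" between operands

;; Infix 1 + 2 * 3 relies on a precedence table; the prefix form
;; spells out the grouping directly, so no such table is needed.
(+ 1 (* 2 3))    ; => 7
```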
How do you prove that the human brain needs "stack" processing for (+ 1 1), while 1 + 1 somehow magically isn't "stack" processing?
Or, put another way, why wouldn't human processing of (+ 1 1) be parallel, low-speed processing as well?
Like I said many times before, you keep regurgitating things you have no idea about. I have no tolerance for fools like you.
h*i
#72
Nobody, nobody knows a fucking thing about how the brain works. Anyone who talks as if they know is either a fool or has something to sell you. People who actually know something about the brain won't talk like this fool xiaoju. Here are some videos of how they talk about brains: https://nancysbraintalks.mit.edu/course/9-11-the-human-brain
h*i
#74
Don't change the subject. Just show how you know that human processing of (+ 1 1) is "stack processing". Show us.