Disclaimer: I don't know Linux well; I mostly work with Windows.

Usually it is based on interpolating some hardware-based time source with the CPU's built-in RDTSC instruction. The hardware time is stored in kernel memory and is updated by the hardware periodically (through memory-mapped I/O or DMA). That hardware source may have very poor resolution (below 1000 Hz, for example), though other hardware timing sources with higher frequency may exist. When you call the software API, it reads three things: the most recent hardware-updated time, the RDTSC value captured when that update happened, and the current RDTSC value. It then interpolates between them to give the caller decent timing resolution. The last time I benchmarked it, RDTSC itself had a latency of about 24 clock cycles on Nehalem.

Some web search results:
http://stackoverflow.com/questions/5165108/high-precision-timin
http://stackoverflow.com/questions/88/is-gettimeofday-guarantee
【Quoting f******y's post】 : To put it concretely, how many double-type additions is that equivalent to?
h*c
Floor 7
Open more threads than the system has cores in total; have each thread call the given system call one million times; then average the ticks.