I am benchmarking the performance of the SDIO UART Linux/Android driver: I call current_kernel_time() at the start and end of the read and write function implementations under analysis, then print the time difference.
Most of the time the difference comes out as 0 (zero) nanoseconds, regardless of the size of the data read or written (16-2048 bytes), which logically cannot be right. Only very rarely do I get non-zero values, and I hope those are correct.
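To illustrate, this is the bare pattern I'm using, stripped out of the driver. It's only a sketch: my_dbg_timing_demo is a made-up name, and the loop is a dummy workload standing in for the real SDIO read/write path:

#include <linux/time.h>
#include <linux/kernel.h>

/* Hypothetical helper, not part of the driver: time a dummy workload
 * with two current_kernel_time() samples and print the delta in ns. */
static void my_dbg_timing_demo(void)
{
	struct timespec t1, t2;
	volatile int sink = 0;
	long ns;
	int i;

	t1 = current_kernel_time();
	for (i = 0; i < 2048; i++)	/* stands in for the 16-2048 byte transfer */
		sink += i;
	t2 = current_kernel_time();

	ns = (t2.tv_sec - t1.tv_sec) * 1000000000L
		+ (t2.tv_nsec - t1.tv_nsec);
	printk(KERN_INFO "MY_DBG: took %ld ns\n", ns);
}

Even this stand-alone version prints 0 ns almost every run.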
How reliable is current_kernel_time()?
Why do I get 0 ns most of the time?
I plan to profile at the kernel level to get more detail. Before that, can anyone shed some light on this behavior... has anyone observed anything like this before...
Also, any help/suggestions to correct my benchmarking method are welcome!
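In case the 0 ns readings are a granularity issue, one variation I may try is getnstimeofday(), which as I understand it interpolates between ticks using the clocksource. A sketch, with a made-up helper name and the code under test elided:

#include <linux/time.h>
#include <linux/kernel.h>

/* my_dbg_time_alt is hypothetical; only the timing pattern is shown. */
static void my_dbg_time_alt(void)
{
	struct timespec t1, t2, diff;

	getnstimeofday(&t1);
	/* ... code under test would go here ... */
	getnstimeofday(&t2);

	diff = timespec_sub(t2, t1);	/* helper from <linux/time.h> */
	printk(KERN_INFO "MY_DBG: took %ld.%09ld s\n",
	       diff.tv_sec, diff.tv_nsec);
}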
Thanks.
EDIT:
This is the read code from Linux kernel version 2.6.32.9. I added current_kernel_time() under #ifdef/#endif as shown below:
static void sdio_uart_receive_chars(struct sdio_uart_port *port,
				    unsigned int *status)
{
#ifdef Sdio_UART_DEBUG
	struct timespec time_spec1, time_spec2;
#endif
	struct tty_struct *tty = port->tty;
	unsigned int ch, flag;
	int max_count = 256;

#ifdef Sdio_UART_DEBUG
	time_spec1 = current_kernel_time();
#endif
	do {
		ch = sdio_in(port, UART_RX);
		flag = TTY_NORMAL;
		port->icount.rx++;

		if (unlikely(*status & (UART_LSR_BI | UART_LSR_PE |
					UART_LSR_FE | UART_LSR_OE))) {
			/*
			 * For statistics only
			 */
			if (*status & UART_LSR_BI) {
				*status &= ~(UART_LSR_FE | UART_LSR_PE);
				port->icount.brk++;
			} else if (*status & UART_LSR_PE)
				port->icount.parity++;
			else if (*status & UART_LSR_FE)
				port->icount.frame++;
			if (*status & UART_LSR_OE)
				port->icount.overrun++;

			/*
			 * Mask off conditions which should be ignored.
			 */
			*status &= port->read_status_mask;

			if (*status & UART_LSR_BI) {
				flag = TTY_BREAK;
			} else if (*status & UART_LSR_PE)
				flag = TTY_PARITY;
			else if (*status & UART_LSR_FE)
				flag = TTY_FRAME;
		}

		if ((*status & port->ignore_status_mask & ~UART_LSR_OE) == 0)
			tty_insert_flip_char(tty, ch, flag);

		/*
		 * Overrun is special.  Since it's reported immediately,
		 * it doesn't affect the current character.
		 */
		if (*status & ~port->ignore_status_mask & UART_LSR_OE)
			tty_insert_flip_char(tty, 0, TTY_OVERRUN);

		*status = sdio_in(port, UART_LSR);
	} while ((*status & UART_LSR_DR) && (max_count-- > 0));

	tty_flip_buffer_push(tty);

#ifdef Sdio_UART_DEBUG
	time_spec2 = current_kernel_time();
	printk(KERN_INFO "MY_DBG : read took: %ld nanoseconds\n",
	       (time_spec2.tv_sec - time_spec1.tv_sec) * 1000000000L +
	       (time_spec2.tv_nsec - time_spec1.tv_nsec));
#endif
}
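If current_kernel_time() itself turns out to be the problem, I could rewrite the same instrumentation with ktime_get(), which is monotonic and clocksource-based. A sketch of just the changed lines, keeping the receive loop as-is:

/* additional include needed at the top of the file */
#include <linux/ktime.h>

	/* at the top of sdio_uart_receive_chars(), replacing the timespec pair */
#ifdef Sdio_UART_DEBUG
	ktime_t k1, k2;
#endif
	...
#ifdef Sdio_UART_DEBUG
	k1 = ktime_get();
#endif

	/* ... the existing receive loop, unchanged ... */

#ifdef Sdio_UART_DEBUG
	k2 = ktime_get();
	printk(KERN_INFO "MY_DBG : read took: %lld nanoseconds\n",
	       ktime_to_ns(ktime_sub(k2, k1)));
#endif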