
I am trying to understand the trade-off between read and seek. For small "jumps", reading unneeded data is faster than skipping over it with a seek. Why, then, is reading 1 byte from a file roughly 20 times slower than reading 2, 3, 4, ... bytes?

While timing reads of different chunk sizes to find the tipping point between reading and seeking, I came across an odd phenomenon: read(1) is about 20 times slower than read(2), read(3), and so on. The effect is the same for the different read methods, e.g. read() and readinto().
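
In stripped-down form, the measurement amounts to timing two consecutive buffered reads for each chunk size. The following is only a minimal sketch of that idea (it assumes a pre-existing test file named test.bin); the full benchmark is further below:

from timeit import timeit

# Time two consecutive buffered readinto() calls for a few small chunk sizes.
with open('test.bin', 'rb') as f:
    for chunk_size in (1, 2, 3, 4):
        buf = bytearray(chunk_size)
        t = timeit('f.readinto(buf); f.readinto(buf)',
                   globals={'f': f, 'buf': buf}, number=1000)
        print('{} byte readinto x2: {:.2f} µs per pair'.format(chunk_size, t / 1000 * 1e6))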

Why is this? The anomalous line in the timing results is:

2 x buffered 1 byte readinto bytearray 

Environment:

Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:45:57) [MSC v.1900 32 bit (Intel)] 

Timing results:

Non-cachable binary data ingestion (file object blk_size = 8192): 
- 2 x buffered 0 byte readinto bytearray: 
     robust mean: 6.01 µs +/- 377 ns 
     min: 3.59 µs 
- Buffered 0 byte seek followed by 0 byte readinto: 
     robust mean: 9.31 µs +/- 506 ns 
     min: 6.16 µs 
- 2 x buffered 4 byte readinto bytearray: 
     robust mean: 14.4 µs +/- 6.82 µs 
     min: 2.57 µs 
- 2 x buffered 7 byte readinto bytearray: 
     robust mean: 14.5 µs +/- 6.76 µs 
     min: 3.08 µs 
- 2 x buffered 2 byte readinto bytearray: 
     robust mean: 14.5 µs +/- 6.77 µs 
     min: 3.08 µs 
- 2 x buffered 5 byte readinto bytearray: 
     robust mean: 14.5 µs +/- 6.76 µs 
     min: 3.08 µs 
- 2 x buffered 3 byte readinto bytearray: 
     robust mean: 14.5 µs +/- 6.73 µs 
     min: 2.57 µs 
- 2 x buffered 49 byte readinto bytearray: 
     robust mean: 14.5 µs +/- 6.72 µs 
     min: 2.57 µs 
- 2 x buffered 6 byte readinto bytearray: 
     robust mean: 14.6 µs +/- 6.76 µs 
     min: 3.08 µs 
- 2 x buffered 343 byte readinto bytearray: 
     robust mean: 15.3 µs +/- 6.43 µs 
     min: 3.08 µs 
- 2 x buffered 2401 byte readinto bytearray: 
     robust mean: 138 µs +/- 247 µs 
     min: 4.11 µs 
- Buffered 7 byte seek followed by 7 byte readinto: 
     robust mean: 278 µs +/- 333 µs 
     min: 15.4 µs 
- Buffered 3 byte seek followed by 3 byte readinto: 
     robust mean: 279 µs +/- 333 µs 
     min: 14.9 µs 
- Buffered 1 byte seek followed by 1 byte readinto: 
     robust mean: 279 µs +/- 334 µs 
     min: 15.4 µs 
- Buffered 2 byte seek followed by 2 byte readinto: 
     robust mean: 279 µs +/- 334 µs 
     min: 15.4 µs 
- Buffered 4 byte seek followed by 4 byte readinto: 
     robust mean: 279 µs +/- 334 µs 
     min: 15.4 µs 
- Buffered 49 byte seek followed by 49 byte readinto: 
     robust mean: 281 µs +/- 336 µs 
     min: 14.9 µs 
- Buffered 6 byte seek followed by 6 byte readinto: 
     robust mean: 281 µs +/- 337 µs 
     min: 15.4 µs 
- 2 x buffered 1 byte readinto bytearray: 
     robust mean: 282 µs +/- 334 µs 
     min: 17.5 µs 
- Buffered 5 byte seek followed by 5 byte readinto: 
     robust mean: 282 µs +/- 338 µs 
     min: 15.4 µs 
- Buffered 343 byte seek followed by 343 byte readinto: 
     robust mean: 283 µs +/- 340 µs 
     min: 15.4 µs 
- Buffered 2401 byte seek followed by 2401 byte readinto: 
     robust mean: 309 µs +/- 373 µs 
     min: 15.4 µs 
- Buffered 16807 byte seek followed by 16807 byte readinto: 
     robust mean: 325 µs +/- 423 µs 
     min: 15.4 µs 
- 2 x buffered 16807 byte readinto bytearray: 
     robust mean: 457 µs +/- 558 µs 
     min: 16.9 µs 
- Buffered 117649 byte seek followed by 117649 byte readinto: 
     robust mean: 851 µs +/- 1.08 ms 
     min: 15.9 µs 
- 2 x buffered 117649 byte readinto bytearray: 
     robust mean: 1.29 ms +/- 1.63 ms 
     min: 18 µs 

The anomalous line is the one about two thirds of the way down the timing results above.

Benchmarking code:

from _utils import BenchmarkResults 

from timeit import timeit, repeat 
import gc 
import os 
from contextlib import suppress 
from math import floor 
from random import randint 

### Configuration 

FILE_NAME = 'test.bin' 
r = 5000 
n = 100 

reps = 1 

chunk_sizes = list(range(7)) + [7**x for x in range(1,7)] 

results = BenchmarkResults(description = 'Non-cachable binary data ingestion') 


### Setup 

FILE_SIZE = int(100e6) 

# remove left over test file 
with suppress(FileNotFoundError): 
    os.unlink(FILE_NAME) 

# determine how large a file needs to be to not fit in memory 
gc.collect() 
try: 
    while True: 
        data = bytearray(FILE_SIZE)
        del data
        FILE_SIZE *= 2
        gc.collect()
except MemoryError: 
    FILE_SIZE *= 2 
    print('Using file with {} GB'.format(FILE_SIZE/1024**3)) 

# check enough data in file 
required_size = sum(chunk_sizes)*2*2*reps*r 
print('Data required: {} GB'.format(required_size/1024**3))
assert required_size <= FILE_SIZE 


# create test file 
with open(FILE_NAME, 'wb') as file: 
    buffer_size = int(10e6) 
    data = bytearray(buffer_size) 
    for i in range(int(FILE_SIZE/buffer_size)): 
        file.write(data)

# read file once to try to force it into system cache as much as possible 
from io import DEFAULT_BUFFER_SIZE 
buffer_size = 10*DEFAULT_BUFFER_SIZE 
buffer = bytearray(buffer_size) 
with open(FILE_NAME, 'rb') as file: 
    bytes_read = True 
    while bytes_read: 
        bytes_read = file.readinto(buffer)
    blk_size = file.raw._blksize 

results.description += ' (file object blk_size = {})'.format(blk_size) 

file = open(FILE_NAME, 'rb') 

### Benchmarks 

setup = \ 
""" 
# random seek to avoid advantageous starting position biasing results 
file.seek(randint(0, file.raw._blksize), 1) 
""" 

# two consecutive buffered reads of chunk_size bytes into the pre-allocated buffer
read_read = \
"""
file.readinto(buffer)
file.readinto(buffer)
"""

# two consecutive relative seeks of chunk_size bytes
seek_seek = \
"""
file.seek(chunk_size, 1)
file.seek(chunk_size, 1)
"""

# skip chunk_size bytes, then read chunk_size bytes into the buffer
seek_read = \
"""
file.seek(chunk_size, 1)
file.readinto(buffer)
"""

read_read_timings = {} 
seek_seek_timings = {} 
seek_read_timings = {} 
for chunk_size in chunk_sizes: 
    read_read_timings[chunk_size] = [] 
    seek_seek_timings[chunk_size] = [] 
    seek_read_timings[chunk_size] = [] 

for j in range(r): 
    #file.seek(0) 
    for chunk_size in chunk_sizes: 
        buffer = bytearray(chunk_size)
        read_read_timings[chunk_size].append(timeit(read_read, setup, number=reps, globals=globals()))
        #seek_seek_timings[chunk_size].append(timeit(seek_seek, setup, number=reps, globals=globals()))
        seek_read_timings[chunk_size].append(timeit(seek_read, setup, number=reps, globals=globals()))

for chunk_size in chunk_sizes: 
    results['2 x buffered {} byte readinto bytearray'.format(chunk_size)] = read_read_timings[chunk_size] 
    #results['2 x buffered {} byte seek'.format(chunk_size)] = seek_seek_timings[chunk_size] 
    results['Buffered {} byte seek followed by {} byte readinto'.format(chunk_size, chunk_size)] = seek_read_timings[chunk_size] 


### Cleanup 
file.close() 
os.unlink(FILE_NAME) 

results.show() 
results.save() 

Answer

It is because you are incurring the full overhead of a function call for every single call. If computers were still 8-bit, this phenomenon would be more interesting.

The answer is simple: with a larger value you are processing more bytes per iteration, just as handling all of your errands in one trip across town is cheaper than making a separate trip for each one. The larger the value passed to read(), the more work gets done at once, and the more efficient it should (potentially) be.
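
As a rough, purely illustrative sketch of that point (it does not, by itself, explain the 20x gap discussed in the question, and it assumes a test.bin of at least 64 KiB): reading the same amount of data through many small read() calls pays the per-call overhead once per call, whereas a single large read() pays it once.

from timeit import timeit

setup = "f = open('test.bin', 'rb')"

# 65536 one-byte reads: the per-call overhead is paid 65536 times
many_small = "f.seek(0)\nfor _ in range(65536): f.read(1)"
# a single 65536-byte read: the per-call overhead is paid once
one_large = "f.seek(0)\nf.read(65536)"

print('65536 x read(1): ', timeit(many_small, setup, number=10))
print('1 x read(65536): ', timeit(one_large, setup, number=10))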


I am afraid this is not quite right. If that were the reason, reading 1 byte should be at most about twice as slow as reading 2 bytes at a time; in the case above it was 20 times slower! But thank you for the effort. I had forgotten that this question was still open. – ARF
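
The objection in this comment can be phrased as a simple cost model (purely illustrative, with made-up constants): if each call cost a fixed overhead plus a per-byte cost, the gap between a 1-byte and a 2-byte read would be bounded far below the observed factor of 20.

# Hypothetical cost model: fixed per-call overhead plus a per-byte cost.
def call_cost(nbytes, overhead=5.0, per_byte=0.5):
    return overhead + per_byte * nbytes

# Whatever the two constants are, a single read(1) is never slower than a
# single read(2), and even two read(1) calls cost at most 2x one read(2).
print(call_cost(1) / call_cost(2))        # ratio of single calls: below 1
print(2 * call_cost(1) / call_cost(2))    # two read(1) vs one read(2): below 2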
