As @murant suggests, it would be ideal to use the sorted_index= keyword of the read_hdf function.

More generally, you can use the set_index function to set the index on any dataframe, however it was created. This function has new keywords that make it efficient when the new index column is already sorted and when the division values between partitions are already known. Here is the current docstring; the last example may be of interest to you.
"""Set the DataFrame index (row labels) using an existing column
This realigns the dataset to be sorted by a new column. This can have a
significant impact on performance, because joins, groupbys, lookups, etc.
are all much faster on that column. However, this performance increase
comes with a cost, sorting a parallel dataset requires expensive shuffles.
Often we ``set_index`` once directly after data ingest and filtering and
then perform many cheap computations off of the sorted dataset.
This function operates exactly like ``pandas.set_index`` except with
different performance costs (it is much more expensive). Under normal
operation this function does an initial pass over the index column to
compute approximate quantiles to serve as future divisions. It then passes
over the data a second time, splitting up each input partition into several
pieces and sharing those pieces to all of the output partitions now in
sorted order.
In some cases we can alleviate those costs, for example if your dataset is
sorted already then we can avoid making many small pieces or if you know
good values to split the new index column then we can avoid the initial
pass over the data. For example if your new index is a datetime index and
your data is already sorted by day then this entire operation can be done
for free. You can control these options with the following parameters.
Parameters
----------
df: Dask DataFrame
index: string or Dask Series
npartitions: int, None, or 'auto'
The ideal number of output partitions. If None use the same as
the input. If 'auto' then decide by memory use.
shuffle: string, optional
Either ``'disk'`` for single-node operation or ``'tasks'`` for
distributed operation. Will be inferred by your current scheduler.
sorted: bool, optional
If the index column is already sorted in increasing order.
Defaults to False
divisions: list, optional
Known values on which to separate index values of the partitions.
See http://dask.pydata.org/en/latest/dataframe-design.html#partitions
Defaults to computing this with a single pass over the data. Note
that if ``sorted=True``, specified divisions are assumed to match
the existing partitions in the data. If this is untrue, you should
leave divisions empty and call ``repartition`` after ``set_index``.
compute: bool
Whether or not to trigger an immediate computation. Defaults to False.
Examples
--------
>>> df2 = df.set_index('x') # doctest: +SKIP
>>> df2 = df.set_index(df.x)  # doctest: +SKIP
>>> df2 = df.set_index(df.timestamp, sorted=True)  # doctest: +SKIP
A common case is when we have a datetime column that we know to be
sorted and is cleanly divided by day. We can set this index for free
by specifying both that the column is pre-sorted and the particular
divisions along which it is separated
>>> import pandas as pd
>>> divisions = pd.date_range('2000', '2010', freq='1D')
>>> df2 = df.set_index('timestamp', sorted=True, divisions=divisions) # doctest: +SKIP
"""
Would you prefer Apache Parquet? That is, HDF5 is limited in that it has to be stored as a single file, whereas Parquet files can be distributed. Could what is happening be that Dask has to re-read the entire HDF5 file to convert it into a DataFrame, and it is that, not the `.loc`, that takes a long time? Have you separated the two steps to observe whether that might be the case? – kuanb