Pandas: resample timeseries with groupby

AshB · Aug 14, 2015 · Viewed 39.4k times

Given the below pandas DataFrame:

In [115]: times = pd.to_datetime(pd.Series(['2014-08-25 21:00:00','2014-08-25 21:04:00',
                                            '2014-08-25 22:07:00','2014-08-25 22:09:00']))
          locations = ['HK', 'LDN', 'LDN', 'LDN']
          event = ['foo', 'bar', 'baz', 'qux']
          df = pd.DataFrame({'Location': locations,
                             'Event': event}, index=times)
          df
Out[115]:
                               Event Location
          2014-08-25 21:00:00  foo   HK
          2014-08-25 21:04:00  bar   LDN
          2014-08-25 22:07:00  baz   LDN
          2014-08-25 22:09:00  qux   LDN

I would like to resample the data, aggregating it hourly by count while grouping by location, to produce a DataFrame that looks like this:

Out[115]:
                               HK    LDN
          2014-08-25 21:00:00  1     1
          2014-08-25 22:00:00  0     2

I've tried various combinations of resample() and groupby() but with no luck. How would I go about this?

Answer

unutbu · Aug 14, 2015

In my original post, I suggested using pd.TimeGrouper. Nowadays, use pd.Grouper instead; the syntax is largely the same, but pd.TimeGrouper is deprecated.

Moreover, while pd.TimeGrouper could only group by a DatetimeIndex, pd.Grouper can also group by datetime columns, which you specify through its key parameter.
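
For example, if the timestamps lived in an ordinary column rather than in the index (the column name 'time' below is hypothetical, just for illustration), a sketch of the same grouping could look like this:

# Move the DatetimeIndex into a regular column named 'time' (hypothetical name)
df2 = df.reset_index().rename(columns={'index': 'time'})
# Group by hour via the key parameter instead of the index
df2.groupby([pd.Grouper(key='time', freq='1H'), 'Location'])['Event'].count()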


You could use a pd.Grouper to group the DatetimeIndex'ed DataFrame by hour:

grouper = df.groupby([pd.Grouper(freq='1H'), 'Location'])

use count to count the number of events in each group:

grouper['Event'].count()
#                      Location
# 2014-08-25 21:00:00  HK          1
#                      LDN         1
# 2014-08-25 22:00:00  LDN         2
# Name: Event, dtype: int64

use unstack to move the Location index level to a column level:

grouper['Event'].count().unstack()
# Out[49]: 
# Location             HK  LDN
# 2014-08-25 21:00:00   1    1
# 2014-08-25 22:00:00 NaN    2

and then use fillna to change the NaNs into zeros.
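
Applied to the unstacked counts above, that looks like the sketch below; note that, depending on your pandas version, the column that contained NaN may come back as float after fillna, in which case an extra astype(int) restores integer counts:

counts = grouper['Event'].count().unstack('Location')
counts = counts.fillna(0)      # the NaN for the missing (22:00, HK) combination becomes 0
counts = counts.astype(int)    # optional: cast back to int if fillna left floats behind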


Putting it all together,

grouper = df.groupby([pd.Grouper(freq='1H'), 'Location'])
result = grouper['Event'].count().unstack('Location').fillna(0)

yields

Location             HK  LDN
2014-08-25 21:00:00   1    1
2014-08-25 22:00:00   0    2
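
If your pandas version supports it, you can also pass fill_value directly to unstack, which folds the fillna step in and keeps the counts as integers; a minimal sketch:

result = (df.groupby([pd.Grouper(freq='1H'), 'Location'])['Event']
            .count()
            .unstack('Location', fill_value=0))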