R/zoo: index entries in ‘order.by’ are not unique

James A · Jul 3, 2013 · Viewed 10.5k times

I have a .csv file containing 4 columns of data against a column of dates/times at one-minute intervals. Some timestamps are missing, so I'm trying to generate the missing dates/times and assign them NA values in the Y columns. I have previously done this with other .csv files with exactly the same formatting, with no issues. The code is:

library(zoo)

# read the csv file
har10 <- read.csv(fpath, header = TRUE)

# parse the date column
har10$HAR.TS <- as.POSIXct(har10$HAR.TS, format = "%y/%m/%d %H:%M")

# convert to zoo, using the date column as the index
df1.zoo <- zoo(har10[, -1], har10[, 1])

# merge against the full one-minute sequence to generate NAs
df2 <- merge(df1.zoo, zoo(, seq(start(df1.zoo), end(df1.zoo), by = "min")), all = TRUE)

# write the zoo object to a .csv file in the home directory
write.zoo(df2, file = "har10fixed.csv", sep = ",")
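For reference, here is a minimal self-contained sketch of the same gap-filling step on made-up data (the timestamps and values below are invented; one minute is deliberately missing):

```r
library(zoo)

# three readings with the 00:02 minute missing (synthetic data)
times <- as.POSIXct(c("2010-01-01 00:00", "2010-01-01 00:01",
                      "2010-01-01 00:03"), format = "%Y-%m-%d %H:%M")
z <- zoo(c(1.5, 2.5, 4.5), times)

# merge against an empty zoo built on the full minute sequence;
# minutes absent from z come back as NA
grid <- zoo(, seq(start(z), end(z), by = "min"))
z2 <- merge(z, grid, all = TRUE)

length(z2)      # 4: the 00:02 slot now exists
sum(is.na(z2))  # 1: and holds NA
```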

My data looks like this (for an entire year, more or less) after conversion to POSIXct, which seems to go fine:

                    HAR.TS        C1       C2         C3        C4
1      2010-01-01 00:00:00 -4390.659 5042.423 -2241.6344 -2368.762
2      2010-01-01 00:01:00 -4391.711 5042.056 -2241.1796 -2366.725
3      2010-01-01 00:02:00 -4390.354 5043.003 -2242.5493 -2368.786
4      2010-01-01 00:03:00 -4390.337 5038.570 -2242.7653 -2371.289

When I run the "convert to zoo" step I get the following warning:

 Warning message:
 In zoo(har10[, -1], har10[, 1]) :
   some methods for “zoo” objects do not work if the index entries in ‘order.by’ are not unique

I have checked for duplicated entries but get no results:

> anyDuplicated(har10)
[1] 0

Any ideas? I have no idea why I'm getting this error on this file, but it has worked for previous ones. Thanks!


EDIT: Reproducible form:

EDIT 2: Have to remove the data/code, sorry!

Answer

Joshua Ulrich · Jul 3, 2013

anyDuplicated(har10) tells you if any complete rows are duplicated. zoo is warning about the index, so you should run anyDuplicated(har10$HAR.TS). sum(duplicated(har10$HAR.TS)) will show there are almost 9,000 duplicate datetimes. The first duplicate is around row 311811, where 10/08/19 13:10 appears twice.
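A self-contained sketch of the check, plus two common fixes. The data frame below is a tiny stand-in for the question's `har10` (the values are made up; only the column name `HAR.TS` comes from the question):

```r
library(zoo)

# two rows share the 13:10 timestamp but differ in C1, so the full
# rows are unique while the index is not (synthetic data)
har10 <- data.frame(
  HAR.TS = as.POSIXct(c("2010-08-19 13:09", "2010-08-19 13:10",
                        "2010-08-19 13:10"), format = "%Y-%m-%d %H:%M"),
  C1 = c(1, 2, 4)
)

anyDuplicated(har10)          # 0: no complete row is duplicated
anyDuplicated(har10$HAR.TS)   # 3: but the index has a duplicate

# Fix 1: keep only the first reading for each timestamp
har10u <- har10[!duplicated(har10$HAR.TS), ]

# Fix 2: average the readings that share a timestamp
z <- read.zoo(har10, aggregate = mean)
coredata(z)                   # 1 3: the two 13:10 readings were averaged
```

`read.zoo` uses the first column as the index by default; its `aggregate` argument collapses duplicate index entries with the supplied function before the zoo object is built.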