I was following a previous post on this that says:
For LinkedList
- get is O(n)
- add is O(1)
- remove is O(n)
- Iterator.remove is O(1)
For ArrayList
- get is O(1)
- add is O(1) amortized, but O(n) worst-case since the array must be resized and copied
- remove is O(n)
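(To double-check my reading of the quoted complexities, I wrote the small illustration below myself; the class name and the 100000 count are mine. An indexed loop that calls get(i) on a LinkedList has to walk the chain from an end on every call, so the whole loop degrades to O(n^2), which is why I iterate with for-each instead.)

import java.util.LinkedList;
import java.util.List;

public class IndexedGetCost {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < 100000; ++i) {
            list.add(i);
        }
        long sum = 0;
        for (int i = 0; i < list.size(); ++i) {
            sum += list.get(i); // each get(i) walks node by node: O(n) per call
        }
        System.out.println(sum); // use the result so the loop isn't optimized away
    }
}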
So by looking at this, I concluded that if I have to do just sequential inserts into my collection, for say 5,000,000 elements, LinkedList will outclass ArrayList.

And if I just have to fetch the elements from the collection by iterating, i.e. not grabbing an element in the middle, LinkedList should still outclass ArrayList.

Now, to verify these two statements, I wrote the sample program below. But I'm surprised that my statements were proven wrong: ArrayList outclassed LinkedList in both cases. It took less time than LinkedList for adding the elements as well as for fetching them from the collection. Is there anything I'm doing wrong, or do the initial statements about LinkedList and ArrayList not hold true for collections of size 5,000,000?

I mention size because if I reduce the number of elements to 50,000, LinkedList performs better and the initial statements hold true.
long nano1 = System.nanoTime();
List<Integer> arr = new ArrayList<>();
for (int i = 0; i < 5000000; ++i) {
    arr.add(i);
}
System.out.println(System.nanoTime() - nano1); // time to add
for (int j : arr) {
    ;
}
System.out.println(System.nanoTime() - nano1); // time to add + iterate

long nano2 = System.nanoTime();
List<Integer> arrL = new LinkedList<>();
for (int i = 0; i < 5000000; ++i) {
    arrL.add(i);
}
System.out.println(System.nanoTime() - nano2); // time to add
for (int j : arrL) {
    ;
}
System.out.println(System.nanoTime() - nano2); // time to add + iterate
Remember that big-O complexity describes asymptotic behaviour, not actual implementation speed: it tells you how the cost of an operation grows with the size of the list, not how fast a single operation is. For example, the following implementation of add is O(1) but is anything but fast:
import java.util.LinkedList;
public class MyList<E> extends LinkedList<E> {
    @Override
    public boolean add(E o) {
        try {
            Thread.sleep(10000); // constant delay: still O(1), just very slow
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return super.add(o);
    }
}
I suspect in your case ArrayList is performing well because it grows its internal buffer fairly aggressively (in the stock JDK the backing array grows by roughly 1.5x each time it fills up), so there will not be a large number of reallocations. When the buffer does not need to be resized, ArrayList has the faster add.
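As an aside, if you know the element count up front, you can take resizing out of the picture entirely by passing an initial capacity to the constructor. A minimal sketch (the class name and the 5,000,000 constant are just illustrative):

import java.util.ArrayList;
import java.util.List;

public class PresizedAdd {
    public static void main(String[] args) {
        // Pre-sizing means the backing array never has to grow, so the
        // resize-and-copy cost disappears from the add loop entirely.
        List<Integer> arr = new ArrayList<>(5000000);
        for (int i = 0; i < 5000000; ++i) {
            arr.add(i);
        }
        System.out.println(arr.size()); // use the list so the loop isn't dead code
    }
}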
You also need to be very careful when you do this kind of profiling. I'd suggest you change your profiling code to include a warm-up phase (so the JIT has the opportunity to do some optimization without affecting your results) and to average the results over a number of runs.
private static final int WARMUP = 1000;
private static final int TEST = 1000;
private static final int SIZE = 500000;

public void perfTest() {
    // Warm-up: give the JIT a chance to compile the hot paths first.
    for (int i = 0; i < WARMUP; ++i) {
        buildArrayList();
    }
    // Test: average over many runs to smooth out per-run noise.
    long sum = 0;
    for (int i = 0; i < TEST; ++i) {
        sum += buildArrayList();
    }
    System.out.println("Average time to build array list: " + (sum / TEST));
}

public long buildArrayList() {
    long start = System.nanoTime();
    List<Integer> a = new ArrayList<>();
    for (int i = 0; i < SIZE; ++i) {
        a.add(i);
    }
    long end = System.nanoTime();
    return end - start;
}
... same for buildLinkedList
(Note that sum could overflow, and you might be better off using System.currentTimeMillis().)
It's also possible that the compiler is optimizing away your empty fetch loops. Make sure the loop actually does something with the elements, to ensure that the right code is really getting called.
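For example, replacing the empty body with a running sum that you print afterwards (reusing the arr list from the code in the question) keeps the iteration live:

// Consume the elements so the JIT can't eliminate the loop as dead code.
long sum = 0;
for (int j : arr) {
    sum += j;
}
System.out.println(sum); // printing the result forces the loop to be kept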