I've got a large JSON array of objects that I need to filter down based on multiple user select inputs. I'm chaining filter functions together, but I suspect this isn't the most performant way to do it.
Currently I'm doing this:
var filtered = data.filter(function(data) {
    return Conditional1;
})
.filter(function(data) {
    return Conditional2;
})
.filter(function(data) {
    return Conditional3;
}); // etc.
Although (I think) the array passed to each successive filter could be smaller, I'm wondering whether a better practice would be to do something like this:
var condition1 = Conditional1;
var condition2 = Conditional2;
var condition3 = Conditional3;
// etc.
var filtered = data.filter(function(data) {
    return condition1 && condition2 && condition3; // && etc.
});
I've looked into chaining higher-order functions, specifically filter, but I haven't found anything on best (or bad) practice, nor have I timed and compared the two approaches above.
For a use case with a large data set and many conditionals, which would be preferred (I reckon both are fairly readable)?
Or is there a more performant way that I'm missing (while still using higher-order functions)?
Store your filter functions in an array and have array.reduce() run each filter over the data in turn. The cost is that every filter still runs, even once an earlier filter has left nothing to narrow down.
const data = [...]
const filters = [f1, f2, f3, ...]
const filteredData = filters.reduce((d, f) => d.filter(f) , data)
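For example, with some illustrative filter functions and sample data (the names and fields here are assumptions, not from the question):

```javascript
// Sample data and illustrative predicates (names are assumptions)
const data = [
  { price: 5, inStock: true, category: "books" },
  { price: 50, inStock: false, category: "books" },
  { price: 15, inStock: true, category: "games" },
];

const isAffordable = (item) => item.price < 30;
const isInStock = (item) => item.inStock;
const isBook = (item) => item.category === "books";

const filters = [isAffordable, isInStock, isBook];

// Each reduce step narrows the array with the next filter,
// exactly like chaining .filter() calls by hand
const filteredData = filters.reduce((d, f) => d.filter(f), data);
console.log(filteredData); // [{ price: 5, inStock: true, category: "books" }]
```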
Another way to do it is to use array.every(). This takes the inverse approach: run through the data once, and check that every filter applies to each item. array.every() returns false as soon as one predicate returns false, so the remaining predicates are skipped for that item.
const data = [...]
const filters = [f1, f2, f3, ...]
const filteredData = data.filter(v => filters.every(f => f(v)))
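A concrete sketch of the short-circuit behaviour, again with made-up data and predicate names:

```javascript
// Sample data and illustrative predicates (names are assumptions)
const data = [
  { price: 5, inStock: true },
  { price: 50, inStock: true },
  { price: 15, inStock: false },
];

const filters = [
  (item) => item.price < 30, // checked first
  (item) => item.inStock,    // never runs for items the first predicate rejects
];

// every() stops at the first predicate that returns false,
// so rejected items don't pay for the remaining checks
const filteredData = data.filter((v) => filters.every((f) => f(v)));
console.log(filteredData); // [{ price: 5, inStock: true }]
```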
Both are similar to your first and second samples, respectively. The only difference is that the filters and conditions aren't hardcoded.