// Build a test array of 1000 random numbers (duplicates are possible).
var array = [];
for (var i = 0; i < 1000; i++) {
array.push(Math.random() * 1000);
}

// Set: spread a Set back into an array to drop duplicates.
[...new Set(array)];

// Filter: keep only the first occurrence of each value.
array.filter((item, index) => array.indexOf(item) === index);

// Reduce: accumulate unique values into a new array.
array.reduce((unique, item) => unique.includes(item) ? unique : [...unique, item], []);

// Lodash: use the library's _.uniq helper.
_.uniq(array);
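As a quick sanity check, the first three approaches can be compared on a small input with known duplicates (Lodash is omitted here to keep the snippet dependency-free). Note the spread operators, which are easy to drop: without `...`, `[new Set(array)]` yields a one-element array containing the Set itself, and `[unique, item]` builds nested pairs instead of a flat array.

```javascript
// Small input with known duplicates, so the expected output is obvious.
const input = [1, 2, 2, 3, 3, 3];

const viaSet = [...new Set(input)];
const viaFilter = input.filter((item, index) => input.indexOf(item) === index);
const viaReduce = input.reduce(
  (unique, item) => unique.includes(item) ? unique : [...unique, item],
  []
);

console.log(viaSet);    // [1, 2, 3]
console.log(viaFilter); // [1, 2, 3]
console.log(viaReduce); // [1, 2, 3]
```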
The benchmark was run in a browser launched with the --enable-precise-memory-info flag.
Test name | Executions per second
---|---
Set | 22496.8 Ops/sec
Filter | 2426.5 Ops/sec
Reduce | 548.8 Ops/sec
Lodash | 18631.1 Ops/sec
Let's break down what's being tested in this benchmark.
What is being tested?
The benchmark tests the performance of four approaches to removing duplicates from an array:

- Set: creates a new Set object from the input array, which automatically removes duplicates, then spreads it back into an array.
- Filter: uses the filter() method to create a new array containing only the first occurrence of each element.
- Reduce: uses the reduce() method to accumulate an array of unique elements.
- _.uniq (Lodash): uses the _.uniq function from the Lodash library, which produces the same result as the Set and Filter approaches.

Options compared
The benchmark compares these four approaches:

- Set: creates a new Set object from the input array.
- Filter: uses the filter() method with a callback that checks whether each element's first index matches its current index.
- Reduce: uses the reduce() method with an accumulator that keeps track of unique elements.
- _.uniq (Lodash): uses the _.uniq function from the Lodash library.

Pros and Cons
Here's a brief summary of the pros and cons of each approach:

- Set: concise and, per the results above, the fastest option; requires ES6 support.
- Filter: readable and works in older environments, but the indexOf() call inside the callback makes it O(n²).
- Reduce: flexible, but includes() plus rebuilding the accumulator array on each addition makes it the slowest option here.
- _.uniq (Lodash): well-tested and nearly as fast as Set, but adds a library dependency.

Library and special JS features
The benchmark uses the Lodash library for its _.uniq function. Lodash is a utility library that provides a collection of helper functions, including uniq.

Beyond that, the snippets rely on standard ES6 features: Set, arrow functions, and the spread operator.
Alternatives
Some alternatives to these approaches include:

- A plain Array with forEach(), checking for duplicates manually.
- A Map (or plain object) keyed by value, to store unique elements.

These alternatives are more verbose and easier to get wrong, and offer no clear advantage over the tested approaches.
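For illustration, a Map-based version (a hypothetical sketch, not one of the benchmarked snippets) might look like this; like Set it is O(n), but it takes noticeably more code:

```javascript
// Hypothetical Map-based dedupe (not part of the original benchmark).
function uniqueViaMap(items) {
  const seen = new Map();
  items.forEach((item) => {
    // Map keys use SameValueZero equality, the same rule Set applies.
    if (!seen.has(item)) {
      seen.set(item, true);
    }
  });
  return [...seen.keys()];
}

console.log(uniqueViaMap([1, 2, 2, 3, 3, 3])); // [1, 2, 3]
```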