```js
var array = Array.from({ length: 40000 }, () => Math.floor(Math.random() * 140)); // setup: 40,000 random integers in [0, 140)
const f = [...new Set(array)];                                     // test case "Set spread"
const s = new Set(array);                                          // test case "Array from set"
const l = Array.from(s);
const b = array.filter((i, index) => array.indexOf(i) === index);  // test case "Filter"
```
For more precise memory measurements, run the benchmark in Chrome with the `--enable-precise-memory-info` flag.
| Test name | Executions per second |
| --- | --- |
| Set spread | 1269.8 Ops/sec |
| Array from set | 1316.2 Ops/sec |
| Filter | 68.7 Ops/sec |
Let's dive into the explanation of the provided benchmark.
Benchmark Definition

The benchmark is designed to compare three different approaches for removing duplicates from an array:

1. Using the `Set` data structure with the spread operator (`[...new Set(array)]`)
2. Creating a `Set` and then converting it back to an array using `Array.from()` (`const s = new Set(array); const l = Array.from(s)`)
3. Using the `filter()` method with an `indexOf()` check (`const b = array.filter((i, index) => array.indexOf(i) === index)`)
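As an illustrative sketch (using a small hand-picked sample array rather than the benchmark's 40,000-element input), all three approaches yield the same deduplicated result:

```js
// Small sample to show the three approaches agree on the output.
const sample = [3, 1, 3, 7, 1, 9, 7];

const viaSpread = [...new Set(sample)];                              // Set + spread
const viaFrom   = Array.from(new Set(sample));                       // Set + Array.from()
const viaFilter = sample.filter((v, i) => sample.indexOf(v) === i);  // filter + indexOf

console.log(viaSpread); // [3, 1, 7, 9]
console.log(viaFrom);   // [3, 1, 7, 9]
console.log(viaFilter); // [3, 1, 7, 9]
```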
Options Compared
The benchmark compares three different approaches for removing duplicates from an array: spreading a `Set`, building a `Set` and then converting it back to an array using `Array.from()`, and filtering the array with `indexOf()` checks.
Pros and Cons
The two `Set`-based approaches deduplicate in a single pass over the input; the second one requires an extra step (`Array.from()`) to convert back to an array, which can add overhead. The `filter()` approach avoids creating a `Set`, but it calls `indexOf()` for every element, rescanning the array repeatedly, which is why it scales poorly on large inputs such as the 40,000-element array used here.
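For a rough local comparison, you can time the approaches with `console.time`; this is only a sketch, not the benchmark's own harness, and the numbers will vary by engine and machine:

```js
// Rough timing sketch; results depend on the JS engine and hardware.
const data = Array.from({ length: 40000 }, () => Math.floor(Math.random() * 140));

console.time('Set spread');
const r1 = [...new Set(data)];
console.timeEnd('Set spread');

console.time('Array from set');
const r2 = Array.from(new Set(data));
console.timeEnd('Array from set');

console.time('Filter + indexOf');
const r3 = data.filter((v, i) => data.indexOf(v) === i);
console.timeEnd('Filter + indexOf');
```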
Library and Purpose

In the benchmark definition, `Array.from()` is used to convert a set back to an array. It is a built-in JavaScript method that returns a new array from an iterable (or array-like object), so no external library is involved.
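A quick illustration of `Array.from()`; its optional second argument is a map function applied to each element:

```js
const unique = new Set([1, 2, 2, 3]);
console.log(Array.from(unique));             // [1, 2, 3]
console.log(Array.from(unique, n => n * 2)); // [2, 4, 6]
```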
Special JS Feature or Syntax

None of the approaches in this benchmark use any special JavaScript features beyond standard ES6 syntax (the `Set` constructor, the spread operator, and arrow functions).
Other Alternatives

If you're looking for alternative ways to remove duplicates from an array, here are a few options (a combined, runnable sketch follows the list):

- `filter()` with `includes()`: `array.filter((item, index) => !array.slice(0, index).includes(item))`
- `reduce()`: `array.reduce((acc, curr) => acc.includes(curr) ? acc : [...acc, curr], [])`
- `sort()` followed by removal of adjacent duplicates: `array.slice().sort((a, b) => a - b).filter((v, i, arr) => i === 0 || v !== arr[i - 1])` (note that this changes element order)
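Here is a minimal sketch of these alternatives side by side, assuming an array of primitive values; the sort-based version returns the values in sorted order rather than first-seen order:

```js
const input = [5, 2, 5, 9, 2, 1];

// includes() inside filter(): keep a value only if it hasn't appeared earlier
const withIncludes = input.filter((item, index) => !input.slice(0, index).includes(item));

// reduce(): accumulate values not yet present in the accumulator
const withReduce = input.reduce((acc, curr) => acc.includes(curr) ? acc : [...acc, curr], []);

// sort() then drop adjacent duplicates (changes element order)
const withSort = input.slice().sort((a, b) => a - b)
  .filter((v, i, arr) => i === 0 || v !== arr[i - 1]);

console.log(withIncludes); // [5, 2, 9, 1]
console.log(withReduce);   // [5, 2, 9, 1]
console.log(withSort);     // [1, 2, 5, 9]
```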