Preparation code:

```js
let groups = [];
for (let i = 0, lengthI = 10000; i < lengthI; ++i) {
  const group = [];
  for (let j = 0, lengthJ = 10000; j < lengthJ; ++j) {
    // Alternate 'b' (even indexes) and 'a' (odd indexes)
    group.push(j % 2 ? 'a' : 'b');
  }
  groups.push(group);
}
```
Test case: reduce

```js
Array.from(groups.reduce((set, group) => {
  group.forEach(value => set.add(value));
  return set;
}, new Set()));
```
Test case: unify

```js
const mySet = new Set();
groups.forEach(group => group.forEach(value => mySet.add(value)));
return Array.from(mySet);
```
| Test name | Executions per second |
|---|---|
| reduce | 1.7 Ops/sec |
| unify | 1.7 Ops/sec |
The benchmark tests two approaches for building an array of unique values from a collection of arrays (called `groups`) whose elements alternate between the characters 'a' and 'b'. The primary metric in this comparison is the execution speed of the two JavaScript implementations.

The preparation code creates an array named `groups` containing 10,000 subarrays of 10,000 elements each. Within each subarray, the elements alternate between 'a' and 'b' based on the index, so every subarray holds roughly equal numbers of 'a' and 'b' values.
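As a quick sanity check (a minimal sketch, assuming the preparation code above has already run in the same scope), note that the alternation starts with 'b', since `j % 2` is falsy at index 0:

```js
console.log(groups.length);          // 10000 outer arrays
console.log(groups[0].length);       // 10000 elements each
console.log(groups[0].slice(0, 4));  // ['b', 'a', 'b', 'a']
```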
Test Case: reduce

Benchmark Definition:

```js
Array.from(groups.reduce((set, group) => {
  group.forEach(value => set.add(value));
  return set;
}, new Set()));
```
Explanation: This approach utilizes the `reduce` method on the `groups` array. It initializes a `Set`, a built-in JavaScript object that stores only unique values. As `reduce` iterates over each group, it uses the `forEach` method to add each value to the `Set`. Finally, `Array.from` converts the `Set` back into an array.
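The mechanics are easier to see on a toy input (the three-element groups below are illustrative, not part of the benchmark):

```js
const tinyGroups = [['b', 'a', 'b'], ['a', 'b', 'a']];
const unique = Array.from(tinyGroups.reduce((set, group) => {
  group.forEach(value => set.add(value));
  return set;
}, new Set()));
console.log(unique); // ['b', 'a'] (a Set preserves first-insertion order)
```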
Pros:

- Concise, functional style built from chained array methods (`reduce` and `forEach`).
- Accumulates into a `Set`, which automatically takes care of duplicate entries.

Cons:

- The chained callback pattern incurs a function call per element, which can add overhead on inputs of this size.
Test Case: unify

Benchmark Definition:

```js
const mySet = new Set();
groups.forEach(group => group.forEach(value => mySet.add(value)));
return Array.from(mySet);
```
Explanation: This alternative method also uses a `Set` but adopts a more imperative approach. It initializes a `Set` named `mySet` and explicitly iterates over each group of elements using `forEach`. Each value is subsequently added to `mySet`. After all iterations, it converts the `Set` to an array with `Array.from`.
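Note the bare `return`: the snippet presumably executes inside a function supplied by the benchmark harness. A standalone equivalent (a sketch, with the wrapper function name chosen here for illustration) would be:

```js
function unify(groups) {
  const mySet = new Set();
  groups.forEach(group => group.forEach(value => mySet.add(value)));
  return Array.from(mySet);
}

console.log(unify([['b', 'a'], ['a', 'b']])); // ['b', 'a']
```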
Pros:

- The explicit, imperative flow is straightforward to read and debug.

Cons:

- As with `reduce`, the overhead of per-element function calls may impact performance slightly, though less than in the first approach owing to the lack of a chaining pattern.

In the benchmark results, both approaches yield nearly identical performance on the tested setup, with the `reduce` method providing slightly better execution speed (approximately 1.746 executions per second vs. 1.733 for `unify`). These differences are minimal and could vary with factors such as JavaScript engine optimizations or environmental conditions during execution.
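To spot-check numbers like these locally, a rough sketch using the standard `performance.now()` API (not the harness the benchmark site itself uses) could look like:

```js
function timeIt(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms, ${result.length} unique values`);
}

timeIt('reduce', () => Array.from(groups.reduce((set, group) => {
  group.forEach(value => set.add(value));
  return set;
}, new Set())));

timeIt('unify', () => {
  const mySet = new Set();
  groups.forEach(group => group.forEach(value => mySet.add(value)));
  return Array.from(mySet);
});
```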
- Performance: Both methods show similar performance here, but larger datasets or different browser implementations may yield different results, so testing across environments is important.
- Alternative Libraries: Deduplication helpers such as `_.uniq` or similar functions from Lodash could replace the hand-rolled `Set` logic (see the sketch after this list).
- Other native methods: One could also use ES6+ features such as the spread operator, or additional built-in utility functions, although they might not be as direct as the `Set` methodology demonstrated (also sketched below).
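A sketch of those alternatives, assuming Lodash is loaded as the global `_` and that the nested `groups` array from the preparation code is in scope (note that both one-liners first materialize a flattened array of 100 million elements, which is memory-hungry at this scale):

```js
// Lodash: flatten one level, then deduplicate.
const viaLodash = _.uniq(_.flatten(groups));

// Native ES2019+: Array.prototype.flat plus a Set, spread back into an array.
const viaSpread = [...new Set(groups.flat())];
```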
This benchmark illustrates the trade-offs between different JavaScript paradigms (functional vs. imperative) while showcasing modern JavaScript's capabilities to manipulate data efficiently. Developers can choose based on readability, performance needs, and their team's familiarity with certain paradigms.