<script src='https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.21/lodash.min.js'></script>
// Setup: build arrays of random integers in [0, size), so each list contains duplicates.
function createlist(size) {
  return Array.from(Array(size)).map(() => Math.floor(Math.random() * size));
}
var list100 = createlist(100);
var list1000 = createlist(1000);
var list10000 = createlist(10000);
var identity = item => item;

// Test cases: each approach is applied to the 100-, 1,000-, and 10,000-element lists.
var uniq1 = _.uniq(list100);               // uniq 100
var uniq2 = _.uniqBy(list100, identity);   // uniqBy 100
var uniq3 = Array.from(new Set(list100));  // Set 100
var uniq1 = _.uniq(list1000);              // uniq 1000
var uniq2 = _.uniqBy(list1000, identity);  // uniqBy 1000
var uniq3 = Array.from(new Set(list1000)); // Set 1000
var uniq1 = _.uniq(list10000);             // uniq 10000
var uniq2 = _.uniqBy(list10000, identity); // uniqBy 10000
var uniq3 = Array.from(new Set(list10000));// Set 10000
Note: precise memory measurements require running Chrome with the --enable-precise-memory-info flag.
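If you want to reproduce the comparison outside the benchmark page, a minimal Node.js sketch (assuming Lodash is installed locally, e.g. via `npm install lodash`; the 1,000-iteration loop count is arbitrary) might look like this:

```js
const _ = require('lodash');

function createlist(size) {
  return Array.from(Array(size)).map(() => Math.floor(Math.random() * size));
}

const list = createlist(10000);
const identity = item => item;

// Time each approach over a fixed number of iterations.
for (const [name, fn] of [
  ['uniq',   () => _.uniq(list)],
  ['uniqBy', () => _.uniqBy(list, identity)],
  ['Set',    () => Array.from(new Set(list))],
]) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < 1000; i++) fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${name}: ${elapsedMs.toFixed(1)} ms for 1000 runs`);
}
```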
| Test name | Executions per second |
|---|---|
| uniq 100 | 241749.3 Ops/sec |
| uniqBy 100 | 236622.5 Ops/sec |
| Set 100 | 420157.2 Ops/sec |
| uniq 1000 | 27393.0 Ops/sec |
| uniqBy 1000 | 24571.0 Ops/sec |
| Set 1000 | 32117.9 Ops/sec |
| uniq 10000 | 2713.6 Ops/sec |
| uniqBy 10000 | 1613.7 Ops/sec |
| Set 10000 | 3117.2 Ops/sec |
Let's dive into the world of JavaScript microbenchmarks!
Benchmark Overview
This benchmark suite compares three approaches for removing duplicates from an array: Lodash's `_.uniq`, Lodash's `_.uniqBy`, and the native `Set`. The setup code generates random arrays of 100, 1,000, and 10,000 elements, and each test case applies one of the approaches to one of those arrays.
Approaches Compared
- `_.uniq(list)`: uses Lodash's `_.uniq` function, which removes duplicate elements from an array while preserving order.
- `_.uniqBy(list, identity)`: uses Lodash's `_.uniqBy` function, which removes duplicates based on the value returned by an iteratee (here `identity`, so elements are compared by their own value), as sketched below.
- `Array.from(new Set(list))`: uses the native `Set` data structure to remove duplicates, then converts the result back to an array.
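To make the flexibility of `_.uniqBy` concrete, here is a small, hypothetical example (the `users` array and its `id` field are made up for illustration) that deduplicates objects by a computed key, something a plain `Set` of the objects themselves cannot do, because distinct object references are never equal:

```js
// Hypothetical data: two records share the same id.
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
  { id: 1, name: 'Ada (duplicate)' },
];

// _.uniqBy keeps the first element for each distinct iteratee result.
const byId = _.uniqBy(users, user => user.id);
// => [{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }]

// A Set of the objects themselves does NOT deduplicate them,
// because each object literal is a distinct reference.
const bySet = Array.from(new Set(users)); // still 3 elements

console.log(byId.length, bySet.length); // 2 3
```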
Pros and Cons of Each Approach
- `_.uniq()`: straightforward and preserves order, but requires Lodash.
- `_.uniqBy()`: more flexible, since uniqueness can be defined by an iteratee, but slower than `_.uniq()` due to the additional computation required by the `identity` function on every element.
- `Array.from(new Set(list))`: built on the native `Set` data structure, so it needs no library code, and it also preserves insertion order.
Other Considerations
The `Set`-based approach relies only on the built-in `Set` data structure, while the Lodash approaches (`_.uniq`, `_.uniqBy`) introduce a dependency on an external library. A native alternative to `_.uniqBy` is sketched below.
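If avoiding the Lodash dependency matters, the `_.uniqBy` pattern can be approximated with built-ins alone. A minimal sketch, assuming a hypothetical `uniqByKey` helper name and a key-extracting callback:

```js
// Deduplicate by a computed key using only built-ins: a Map keeps the
// first element seen for each key, and Map preserves insertion order.
function uniqByKey(items, getKey) {
  const seen = new Map();
  for (const item of items) {
    const key = getKey(item);
    if (!seen.has(key)) {
      seen.set(key, item);
    }
  }
  return Array.from(seen.values());
}

// Usage: equivalent to _.uniqBy(list, identity) for primitive values.
console.log(uniqByKey([3, 1, 3, 2, 1], x => x)); // [3, 1, 2]
```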
Results and Analysis
The execution results show that `Array.from(new Set(list))` is the fastest approach at every size tested, followed by `_.uniq()`. `_.uniqBy()` is the slowest because of the extra per-element call to the `identity` iteratee, and its gap behind `_.uniq()` widens with array size: at 100 elements the two are nearly tied, while at 10,000 elements `_.uniqBy()` reaches only about 60% of `_.uniq()`'s throughput.
Based on these results, `Array.from(new Set(list))` is the simplest and fastest choice for removing duplicate primitive values, and like the Lodash functions it preserves the order of first occurrence. `_.uniq()` is a reasonable alternative if you are already using Lodash, and `_.uniqBy()` is worth its modest cost when uniqueness must be defined by a computed key rather than by the values themselves.
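As a quick sanity check on the order-preservation claim (a hypothetical snippet, not part of the benchmark), all three approaches return the same result for the same input:

```js
const input = [5, 3, 5, 1, 3, 5];

const a = _.uniq(input);
const b = _.uniqBy(input, x => x);
const c = Array.from(new Set(input));

// All three keep the first occurrence of each value, in order.
console.log(a, b, c); // [5, 3, 1] three times
```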
Please note that this analysis assumes that the benchmark is representative of typical use cases and does not account for edge cases or specific requirements that might affect performance.