Requirements: We need to define a read-only collection type: it will never add, delete, or modify elements, only query them, and the faster the query, the better.
Comparing common collections
When describing the complexity of an algorithm, O(1), O(n), O(log n), and O(n log n) are commonly used. This big-O notation expresses an algorithm's asymptotic complexity; it describes not only time complexity but also space complexity.
The function in parentheses after the O describes how the time or space consumed by an algorithm grows with the size of the input, where n denotes the amount of input data.
For example, a time complexity of O(n) means that when the amount of data grows by some factor, the time consumed grows by the same factor; common traversal algorithms behave this way. A time complexity of O(n^2) means that when the data volume grows n times, the time consumed grows n² times, which is worse than linear. Bubble sort is a typical O(n^2) algorithm: sorting n numbers requires roughly n × n scans, as the sketch below shows.
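A minimal bubble sort in C# (a sketch; the method and variable names are illustrative). The nested loops over the array are exactly where the quadratic cost comes from:

```csharp
// Bubble sort: the nested loops scan the array roughly n * n times,
// which is why the algorithm is O(n^2).
static void BubbleSort(int[] a)
{
    for (int i = 0; i < a.Length - 1; i++)
    {
        for (int j = 0; j < a.Length - 1 - i; j++)
        {
            if (a[j] > a[j + 1])
            {
                (a[j], a[j + 1]) = (a[j + 1], a[j]); // swap the adjacent pair
            }
        }
    }
}
```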
Another example is O(log n): when the data grows n times, the time consumed grows only log n times (the log here is base 2; for example, when the data grows 256 times, the time needed grows only 8 times), which is better than linear. Binary search is an O(log n) algorithm: each comparison eliminates half of the remaining possibilities, so finding a target among 256 items takes at most 8 steps.
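A minimal binary search sketch over a sorted array (the method name and parameters are illustrative) shows where the halving comes from; each iteration discards half of the remaining range, so at most log₂(n) steps are needed:

```csharp
// Binary search: returns the index of target in a sorted array, or -1.
// Each comparison halves the remaining range, giving O(log n) lookups
// (at most 8 comparisons for 256 elements).
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // midpoint without overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // target is in the upper half
        else hi = mid - 1;                      // target is in the lower half
    }
    return -1; // not present
}
```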
O(n log n) is n multiplied by log n: when the data grows 256 times, the time consumed grows roughly 256 × 8 = 2048 times. This complexity is above linear but below quadratic. Merge sort has O(n log n) time complexity.
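For completeness, a compact merge sort sketch (assuming C# 8+ for the range operators; names are illustrative). The array is halved log₂(n) times, and each level does O(n) merging work, which is where the n log n total comes from:

```csharp
// Merge sort: split in half recursively, then merge the sorted halves.
static int[] MergeSort(int[] a)
{
    if (a.Length <= 1) return a;
    int mid = a.Length / 2;
    int[] left = MergeSort(a[..mid]);   // sort the lower half
    int[] right = MergeSort(a[mid..]);  // sort the upper half

    // Merge the two sorted halves in a single O(n) pass.
    var merged = new int[a.Length];
    int i = 0, j = 0, k = 0;
    while (i < left.Length && j < right.Length)
        merged[k++] = left[i] <= right[j] ? left[i++] : right[j++];
    while (i < left.Length) merged[k++] = left[i++];
    while (j < right.Length) merged[k++] = right[j++];
    return merged;
}
```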
O(1) is the lowest time/space complexity: the time or space consumed is independent of the input size, so no matter how much the input grows, the cost stays the same. Hash lookup is a typical O(1) operation: regardless of how large the data is, the target can be found after a single hash computation (ignoring collisions).
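As a tiny illustration, a HashSet<T> lookup in .NET behaves this way:

```csharp
using System.Collections.Generic;

// Constant-time lookup: Contains hashes the key once and jumps directly
// to the matching bucket, so the cost does not grow with the set's size.
var set = new HashSet<int> { 1, 2, 3 };
bool found = set.Contains(2); // true
```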
This article uses BenchmarkDotNet to benchmark queries against List, HashSet, SortedSet, and Dictionary; see below.
The code is as follows:
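The original listing is not reproduced here, so what follows is a minimal sketch of such a benchmark. The class name CollectionLookup, the element count N, and the lookup target are assumptions for illustration, not the article's original code:

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class CollectionLookup
{
    private const int N = 10_000;
    private const int Target = N - 1; // worst case for the linear scan

    private List<int> _list;
    private HashSet<int> _hashSet;
    private SortedSet<int> _sortedSet;
    private Dictionary<int, int> _dictionary;

    [GlobalSetup]
    public void Setup()
    {
        // Fill each collection with the same data before measuring.
        var data = Enumerable.Range(0, N).ToArray();
        _list = new List<int>(data);
        _hashSet = new HashSet<int>(data);
        _sortedSet = new SortedSet<int>(data);
        _dictionary = data.ToDictionary(x => x, x => x);
    }

    [Benchmark] public bool ListContains() => _list.Contains(Target);           // O(n) scan
    [Benchmark] public bool HashSetContains() => _hashSet.Contains(Target);     // O(1) hash lookup
    [Benchmark] public bool SortedSetContains() => _sortedSet.Contains(Target); // O(log n) tree search
    [Benchmark] public bool DictionaryContainsKey() => _dictionary.ContainsKey(Target); // O(1) hash lookup
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<CollectionLookup>();
}
```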
Test result: finding a key in a Dictionary or HashSet is much faster than searching a List or SortedSet. Both Dictionary and HashSet lookups are O(1); since the Dictionary's Value is of no use to us and we want to save memory, I chose HashSet for storage.
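To tie this back to the requirement at the top, one possible shape for the read-only type is a thin wrapper that builds the HashSet once and exposes only a query method (a sketch; the class name ReadOnlyLookup and the string element type are assumptions):

```csharp
using System.Collections.Generic;

public sealed class ReadOnlyLookup
{
    private readonly HashSet<string> _keys;

    // Build the set once up front; nothing can mutate it afterwards.
    public ReadOnlyLookup(IEnumerable<string> keys)
        => _keys = new HashSet<string>(keys);

    // O(1) membership test; no Add/Remove members are exposed.
    public bool Contains(string key) => _keys.Contains(key);
}
```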