15 December, 14:40, «04 Hall. Ashot Yerkat»
The task of reducing the intensity of an event flow often arises in practice, typically in the form of limiting internet traffic to reduce the load on a particular service.
In my talk I will cover a slightly more complicated problem: reducing the intensity of the flow by removing only the most frequent elements (or, equivalently, by removing as few unique elements as possible).
It turns out that this alternative approach to rate limiting is quite reasonable in some cases. The talk will cover a number of application examples, including several from our experience with traffic filtering at Qrator Labs. We will discuss the pros and cons of this approach and compare it with established rate-limiting tools such as NGINX.
I will present an algorithm developed by our team at Qrator Labs that solves this problem. Based on the famous Leaky Bucket algorithm, it is remarkably simple and requires no prior knowledge from the audience. Yet we find it quite interesting and elegant: it achieves the goal while spending O(1) time on each element, despite the apparent complexity of the task, which in a way combines rate limiting with the search for the most frequent elements (so-called heavy hitters) and at first glance would seem to require some kind of sorting.
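To give a feel for the general idea, here is a minimal sketch of a per-key Leaky Bucket: each element gets its own bucket, so only elements arriving faster than a given rate are dropped, while rare elements pass untouched, with O(1) work per event. This is an illustrative sketch under my own assumptions, not the exact algorithm from the talk; all names and parameters (`rate`, `capacity`) are hypothetical.

```python
import time


class PerKeyLeakyBucket:
    """Sketch: one leaky bucket per element, so only elements that
    arrive faster than `rate` per second get dropped. Illustrative
    only; the algorithm presented in the talk may differ in details.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # leak rate, events per second
        self.capacity = capacity  # burst tolerance per element
        self.buckets = {}         # key -> (level, last_update_time)

    def allow(self, key, now=None) -> bool:
        now = time.monotonic() if now is None else now
        level, last = self.buckets.get(key, (0.0, now))
        # O(1) per event: leak what has drained since the last update,
        # then try to add one drop for the current event.
        level = max(0.0, level - (now - last) * self.rate)
        if level + 1.0 > self.capacity:
            self.buckets[key] = (level, now)
            return False  # this element is currently too frequent: drop it
        self.buckets[key] = (level + 1.0, now)
        return True
```

For example, with `rate=1.0` and `capacity=2.0`, a key hammered repeatedly at the same instant is rejected after two events, while a different, rare key is still accepted. Note that this naive sketch keeps a bucket per unique key, which is exactly where memory-frugal counters become relevant.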
In the second part of my talk, I will give a brief overview of Morris counters, which allow counting a large number of events in a small amount of memory by updating the counter probabilistically on each event.
These counters can find application in systems where memory is a critical resource (in particular, as a part of the algorithm mentioned above). In addition to describing the working principle and properties of classical Morris counters, the talk will also present some novel ideas obtained in the course of our research.
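The working principle of the classical Morris counter can be sketched in a few lines: instead of the true count n, it stores only a small exponent c, incremented with probability 2^-c, so that 2^c - 1 is an unbiased estimate of n. The sketch below shows just this classical scheme (the novel ideas from the talk are not reproduced here); the injectable `rng` parameter is my own addition for testability.

```python
import random


class MorrisCounter:
    """Classical Morris approximate counter: stores only a small
    exponent c and increments it with probability 2**-c. The value
    2**c - 1 is an unbiased estimate of the number of events seen.
    """

    def __init__(self, rng=random.random):
        self.c = 0      # only this small exponent is stored
        self.rng = rng  # injectable randomness source, for testing

    def hit(self):
        # Increment the exponent with probability 2^-c; as c grows,
        # updates become exponentially rarer.
        if self.rng() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self) -> float:
        return 2.0 ** self.c - 1.0
```

The memory saving comes from storing c rather than n: counting up to n needs only about log log n bits, at the price of a noisy (though unbiased) estimate; in practice the variance is reduced by averaging several counters or using a base smaller than 2.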
The talk has been accepted into the conference program.