Summation formulas have played a very important role in analysis and number theory, dating back to the Poisson summation formula. The modern formulation of Poisson summation asserts the equality
$$\sum_{n \in \mathbb{Z}} f(n) = \sum_{n \in \mathbb{Z}} \widehat{f}(n), \qquad \widehat{f}(t) = \int_{\mathbb{R}} f(x)\, e^{-2\pi i x t}\, dx, \tag{1.1}$$
valid (at least) for all Schwartz functions f. Let us take a brief historical detour to the beginning of the 20th century, before the notion of a Schwartz function had been introduced. The custom then was to state (1.1) for more general functions f, such as functions of bounded variation supported on a finite interval, and usually in terms of the cosine:
$$\sideset{}{'}\sum_{a \leqslant n \leqslant b} f(n) = \int_a^b f(x)\, dx + 2 \sum_{n=1}^{\infty} \int_a^b f(x) \cos(2\pi n x)\, dx; \tag{1.2}$$
the notation $\sum'$ signifies that at points n where f has a discontinuity, including the endpoints a and b, the term f(n) is to be interpreted as the average of the left and right limits of f(x). Indeed, the general case of (1.2) can be reduced to the special case a = 0, b = 1, which amounts to the statement that the Fourier series of a periodic function of bounded variation converges pointwise to the average of its left- and right-hand limits.
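As a concrete illustration (not part of the original discussion), (1.1) can be checked numerically for the Gaussian $f(x) = e^{-a x^2}$ with $a > 0$, whose Fourier transform under the convention of (1.1) is the standard closed form $\widehat{f}(t) = \sqrt{\pi/a}\, e^{-\pi^2 t^2 / a}$. Both series converge extremely rapidly, so a short truncation already exhibits the equality; the function name and truncation parameter below are of course our own choices:

```python
import math

def poisson_sides(a, N=50):
    """Return truncations of both sides of (1.1) for f(x) = exp(-a x^2).

    lhs = sum over |n| <= N of f(n)
    rhs = sum over |n| <= N of fhat(n), with
          fhat(t) = sqrt(pi/a) * exp(-pi^2 t^2 / a).
    The terms decay like exp(-c n^2), so N = 50 is far more than enough.
    """
    lhs = sum(math.exp(-a * n * n) for n in range(-N, N + 1))
    rhs = sum(math.sqrt(math.pi / a) * math.exp(-math.pi**2 * n * n / a)
              for n in range(-N, N + 1))
    return lhs, rhs

lhs, rhs = poisson_sides(1.0)
print(lhs, rhs)  # the two truncated sums agree to within roundoff
```

Note that for $a = \pi$ the Gaussian is its own Fourier transform, so the two sides coincide term by term; the check above with a generic $a$ is the non-trivial case.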