To treat high-dimensional problems, one has to find data-sparse representations. Starting with a six-dimensional problem, we first introduce the low-rank approximation of matrices. One purpose is the reduction of memory requirements; another advantage is that vector operations can be applied instead of matrix operations. In the considered problem, the vectors correspond to grid functions defined on a three-dimensional grid. This leads to the next separation: these grid functions are tensors in $\mathbb{R}^n \otimes \mathbb{R}^n \otimes \mathbb{R}^n$ and can be represented in the hierarchical tensor format. Typical operations such as the Hadamard product and the convolution are thereby reduced to operations between vectors in $\mathbb{R}^n$. Standard algorithms for operations with vectors from $\mathbb{R}^n$ cost $\mathcal{O}(n)$ or more. The tensorisation method is a representation technique that introduces additional data sparsity. In many cases, it reduces the data size from $\mathcal{O}(n)$ to $\mathcal{O}(\log n)$. More importantly, operations such as the convolution can be performed at a cost corresponding to these reduced data sizes.
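As a small illustration of the tensorisation idea (a sketch only, not the method developed in the text): a vector of length $n = 2^d$ can be reindexed by the binary digits of its index as a $2 \times 2 \times \cdots \times 2$ tensor with $d$ factors. For a geometric sequence this tensor has rank one, so $2d = \mathcal{O}(\log n)$ numbers suffice instead of $\mathcal{O}(n)$. All names below are illustrative:

```python
import numpy as np

d = 10
n = 2 ** d          # vector length n = 1024
q = 0.9

# A geometric sequence v[i] = q**i, stored as a full vector in R^n: O(n) data.
v = q ** np.arange(n)

# Tensorisation: reinterpret v as a 2 x 2 x ... x 2 tensor (d directions),
# indexed by the binary digits i_k of i = sum_k i_k * 2**k. Here the tensor
# is rank one, since v[i] = prod_k q**(i_k * 2**k), so each direction needs
# only the pair (1, q**(2**k)): 2*d numbers in total, i.e. O(log n) data.
factors = [np.array([1.0, q ** (2 ** k)]) for k in range(d)]

# Reassemble the full vector from the d rank-one factors as a check.
t = factors[0]
for f in factors[1:]:
    t = np.multiply.outer(f, t).reshape(-1)  # outer product, then flatten

assert np.allclose(t, v)
storage = sum(f.size for f in factors)       # 2*d = 20 numbers vs n = 1024
```

General vectors are not exactly rank one, but for many smooth or structured grid functions the tensorised representation has small ranks, which is what makes the $\mathcal{O}(\log n)$ data size and the correspondingly cheap operations possible.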