Sparse matrix-vector and matrix-transpose-vector multiplication ($\mathrm {{Sp}MM^TV}$<alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq1-2453970.gif"/></alternatives> ) repeatedly performed as $z\leftarrow {A^T}x$ <alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq2-2453970.gif"/></alternatives> and $y\leftarrow A\ z$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq3-2453970.gif"/></alternatives> (or $y\leftarrow A\ w$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq4-2453970.gif"/></alternatives>) for the same sparse matrix $A$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq5-2453970.gif"/></alternatives> is a kernel operation widely used in various iterative solvers. One important optimization for serial $\mathrm {{Sp}MM^TV}$<alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq6-2453970.gif"/></alternatives> is reusing $A$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq7-2453970.gif"/></alternatives>-matrix nonzeros, which halves the memory bandwidth requirement. However, thread-level parallelization of $\mathrm {{Sp}MM^TV}$<alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq8-2453970.gif"/></alternatives> that reuses $A$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq9-2453970.gif"/></alternatives>-matrix nonzeros necessitates concurrent writes to the same output-vector entries. These concurrent writes can be handled in two ways: atomic updates, or thread-local temporary output vectors that later undergo a reduction; neither approach is efficient or scalable on processors with many cores and complicated cache-coherency protocols.
In this work, we identify five quality criteria for efficient and scalable thread-level parallelization of $\mathrm {{Sp}MM^TV}$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq10-2453970.gif"/></alternatives> that utilizes one-dimensional (1D) matrix partitioning. We also propose two locality-aware 1D partitioning methods, which achieve reuse of $A$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq11-2453970.gif"/></alternatives>-matrix nonzeros and intermediate $z$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq12-2453970.gif"/></alternatives>-vector entries; exploit locality in accessing $x$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq13-2453970.gif"/></alternatives>-, $y$<alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq14-2453970.gif"/> </alternatives>-, and $z$-vector entries; and reduce the number of concurrent writes to the same output-vector entries. These two methods utilize rowwise and columnwise singly bordered block-diagonal (SB) forms of $A$<alternatives> <inline-graphic xlink:type="simple" xlink:href="aykanat-ieq15-2453970.gif"/></alternatives>. We evaluate our methods on a wide range of sparse matrices. Experiments on the 60-core cache-coherent Intel Xeon Phi processor confirm the validity of the identified quality criteria and the effectiveness of the proposed methods in practice. The results also show that the performance improvement from reusing $A$ <alternatives><inline-graphic xlink:type="simple" xlink:href="aykanat-ieq16-2453970.gif"/></alternatives>-matrix nonzeros compensates for the overhead of concurrent writes through the proposed SB-based methods.