Abstract: Data preprocessing is a commonly used means of improving the efficiency of neural network training. In this paper, we propose an approach to organizing parallel computations that makes it possible to preprocess data concurrently with neural network training. We assume that data preprocessing is performed on the CPU using multiprocessing, whereas training runs on graphics processors. The proposed algorithms differ in how they organize parallelism and interprocess communication. The methods are implemented in Python and C++ and packaged as a software library. We compare the efficiency of these methods with the parallel preprocessing implementation of the PyTorch framework on various test problems, and we give recommendations on choosing a method depending on the dataset and the batch preprocessing algorithm.
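The core idea described above, CPU workers preprocessing batches while the main process consumes them for training, can be sketched with the Python standard library. This is a minimal illustration, not the paper's library: the `preprocess` and `train_step` functions are hypothetical placeholders, and a real pipeline would hand the preprocessed batch to a GPU framework instead.

```python
import multiprocessing as mp

def preprocess(batch):
    # CPU-side preprocessing; here a placeholder normalization to [0, 1]
    return [x / 255.0 for x in batch]

def train_step(batch):
    # Stand-in for a GPU training step; returns the batch mean as a "loss"
    return sum(batch) / len(batch)

def pipelined_training(batches, num_workers=2):
    # Workers preprocess batches in parallel while the main process trains;
    # imap preserves batch order and lets workers run ahead of the consumer.
    with mp.Pool(num_workers) as pool:
        return [train_step(b) for b in pool.imap(preprocess, batches)]

if __name__ == "__main__":
    data = [[i, i + 1, i + 2] for i in range(0, 30, 3)]
    print(pipelined_training(data))
```

PyTorch's `DataLoader` with `num_workers > 0` follows the same producer-consumer pattern, which is presumably the baseline the paper compares against.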