Doklady BGUIR

Training Sample Formation for Convolution Neural Networks to Person Re-Identification from Video

https://doi.org/10.35596/1729-7648-2023-21-3-87-95

Abstract

To improve the accuracy of person re-identification systems, an integrated approach to forming a training sample for convolutional neural networks is proposed. It combines a new image dataset, an enlarged number of training examples obtained by merging existing datasets, and a set of transformations that increase their diversity. The created PolReID1077 dataset contains images of people captured in all seasons, which should improve the operation of re-identification systems when the seasons change. Another advantage of PolReID1077 is that it is built from video recorded by both outdoor and indoor surveillance cameras at a large number of different filming locations, so the person images in the set vary widely in background, brightness and color characteristics. Merging the created dataset with the existing CUHK02, CUHK03, Market-1501, DukeMTMC-ReID and MSMT17 sets yielded 109 772 training images. The diversity of the generated examples is further increased by applying a cyclic shift, removing color, and replacing an image fragment with a reduced copy of another image. Results are presented on the re-identification accuracy of the ResNet-50 and DenseNet-121 convolutional neural networks trained with the proposed approach to training sample formation.
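The three transformations named in the abstract (cyclic shift, color removal, and replacement of a fragment with a reduced copy of another image, in the spirit of Cut-Thumbnail [16]) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, shift offsets and the 0.25 thumbnail scale are assumptions made for the example.

```python
# Minimal, self-contained sketch of the three augmentations described in the
# abstract: cyclic shift, colour removal, and replacing an image fragment with
# a reduced copy of another image. All parameter values are illustrative.
import numpy as np


def cyclic_shift(img: np.ndarray, dx: int = 16, dy: int = 8) -> np.ndarray:
    """Shift the image by (dy, dx) pixels with wrap-around at the borders."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)


def remove_color(img: np.ndarray) -> np.ndarray:
    """Replace the RGB channels with luminance, eliminating colour information."""
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return np.repeat(gray[..., None], 3, axis=-1).astype(img.dtype)


def paste_thumbnail(img: np.ndarray, other: np.ndarray, scale: float = 0.25,
                    rng=None) -> np.ndarray:
    """Overwrite a random fragment of `img` with a reduced copy of `other`."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    th, tw = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour downscale of `other`, kept dependency-free on purpose.
    rows = np.arange(th) * other.shape[0] // th
    cols = np.arange(tw) * other.shape[1] // tw
    thumb = other[rows][:, cols]
    y0 = int(rng.integers(0, h - th + 1))
    x0 = int(rng.integers(0, w - tw + 1))
    out = img.copy()
    out[y0:y0 + th, x0:x0 + tw] = thumb
    return out


if __name__ == "__main__":
    # Two random "person crops" of a typical re-identification size (256x128 RGB).
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(256, 128, 3), dtype=np.uint8)
    b = rng.integers(0, 256, size=(256, 128, 3), dtype=np.uint8)
    augmented = paste_thumbnail(remove_color(cyclic_shift(a)), b, rng=rng)
    print(augmented.shape, augmented.dtype)  # (256, 128, 3) uint8
```

In practice such transformations would typically be applied on the fly during training (for example, inside a data-loading pipeline) rather than by materializing augmented copies on disk.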

About the Authors

S. A. Ihnatsyeva
Euphrosyne Polotskaya State University of Polotsk
Belarus

Ihnatsyeva Sviatlana Aleksandrovna, Postgraduate at the Department of Computing Systems and Networks

211440, Novopolotsk, Blokhina St., 29

Tel.: +375 214 42-30-31



R. P. Bohush
Euphrosyne Polotskaya State University of Polotsk
Belarus

Dr. of Sci. (Tech.), Associate Professor, Head of the Department of Computing Systems and Networks

Polotsk



References

1. Shorten C., Khoshgoftaar T. M. (2019) A Survey on Image Data Augmentation for Deep Learning. Journal of Big Data. (6), 1–48. DOI: 10.1186/s40537-019-0197-0.

2. Chen H., Ihnatsyeva S., Bohush R., Ablameyko S. (2022) Choice of Activation Function in Convolution Neural Network in Video Surveillance Systems. Programming and Computer Software. (5), 312–321. DOI: 10.1134/S0361768822050036.

3. Li W., Wang X. (2013) Locally Aligned Feature Transforms Across Views. 2013 IEEE Conference on Computer Vision and Pattern Recognition. 3594–3601. DOI: 10.1109/CVPR.2013.461.

4. Li W., Zhao R., Xiao T., Wang X. (2014) DeepReID: Deep Filter Pairing Neural Network for Person Re-Identification. 2014 IEEE Conference on Computer Vision and Pattern Recognition. 152–159. DOI: 10.1109/CVPR.2014.27.

5. Zheng L., Shen L., Tian L., Wang S., Wang J., Tian Q. (2015) Scalable Person Re-Identification: a Benchmark. 2015 IEEE International Conference on Computer Vision. 1116–1124. DOI: 10.1109/ICCV.2015.133.

6. Ristani E., Solera F., Zou R. S., Cucchiara R., Tomasi C. (2016) Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. ECCV Workshops. DOI: 10.1007/978-3-319-48881-3_2.

7. Wei L., Zhang S., Gao W., Tian Q. (2018) Person Transfer GAN to Bridge Domain Gap for Person Re-Identification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 79–88. DOI: 10.1109/CVPR.2018.00016.

8. Felzenszwalb P., Girshick R., McAllester D., Ramanan D. (2010) Object Detection with Discriminatively Trained Part-Based Models. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2009.167.

9. Gong Y., Zeng Z. (2021) An Effective Data Augmentation for Person Re-Identification. ArXiv, abs/2101.08533. DOI: 10.48550/arXiv.2101.08533.

10. Fu D., Chen D., Bao J., Yang H., Yuan L., Zhang L., Li H., Chen D. (2021) Unsupervised Pre-Training for Person Re-Identification. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14745–14754. DOI: 10.1109/CVPR46437.2021.01451.

11. Cubuk E. D., Zoph B., Shlens J., Le Q. V. (2020) Randaugment: Practical Automated Data Augmentation with a Reduced Search Space. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 3008–3017. DOI: 10.1109/CVPRW50498.2020.00359.

12. Hendrycks D., Mu N., Cubuk E. D., Zoph B., Gilmer J., Lakshminarayanan B. (2019) AugMix: a Simple Data Processing Method to Improve Robustness and Uncertainty. ArXiv, abs/1912.02781. DOI: 10.48550/arXiv.1912.02781.

13. Zhang H., Cissé M., Dauphin Y., Lopez-Paz D. (2017) Mixup: Beyond Empirical Risk Minimization. ArXiv, abs/1710.09412. DOI: 10.48550/arXiv.1710.09412.

14. Zhong Z., Zheng L., Kang G., Li S., Yang Y. (2020) Random Erasing Data Augmentation. AAAI Conference on Artificial Intelligence. DOI: 10.1609/AAAI.V34I07.7000.

15. Yun S., Han D., Oh S., Chun S., Choe J., Yoo Y. J. (2019) CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. 2019 IEEE/CVF International Conference on Computer Vision. 6022–6031. DOI: 10.1109/ICCV.2019.00612.

16. Xie T., Cheng X., Wang X., Liu M., Deng J., Zhou T., Liu M. (2021) Cut-Thumbnail: a Novel Data Augmentation for Convolutional Neural Network. Proceedings of the 29th ACM International Conference on Multimedia. DOI: 10.1145/3474085.3475302.


For citations:


Ihnatsyeva S.A., Bohush R.P. Training Sample Formation for Convolution Neural Networks to Person Re-Identification from Video. Doklady BGUIR. 2023;21(3):87-95. (In Russ.) https://doi.org/10.35596/1729-7648-2023-21-3-87-95

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1729-7648 (Print)
ISSN 2708-0382 (Online)