Human Recognition in a Collaborative Robot-Manipulator Working Area Based on MobileNetV2 Deep Neural Network in Real Time

Keywords

Industry 5.0
Collaborative Robot
Work Area
Computer Vision

How to Cite

Human Recognition in a Collaborative Robot-Manipulator Working Area Based on MobileNetV2 Deep Neural Network in Real Time. (2024). Journal of Universal Science Research, 2(9), 86-105. https://universalpublishings.com/~niverta1/index.php/jusr/article/view/7044

Abstract

The article deals with the development of a human recognition system for the working area of a collaborative robot-manipulator, based on the MobileNetV2 deep neural network. The purpose of the research is to implement an accurate and fast real-time recognition algorithm that improves safety and work efficiency. The MobileNetV2 model makes it possible to achieve high accuracy with minimal resource consumption. Experimental results demonstrate the high reliability of the system under changing lighting and moving obstacles, opening up new opportunities for integrating human recognition into industrial collaborative robots.
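The abstract summarizes the approach without implementation details. A minimal sketch of what real-time person detection with a MobileNetV2 backbone can look like is given below, using an SSD detection head through OpenCV's DNN module; the model file names, camera index, and confidence threshold are illustrative assumptions, not details taken from the article.

```python
import cv2

# Pre-trained SSD detector with a MobileNetV2 backbone (COCO classes).
# The file names below are placeholders: the article does not specify
# which weights or export were used.
MODEL_PB = "frozen_inference_graph.pb"
CONFIG_PBTXT = "ssd_mobilenet_v2_coco.pbtxt"
PERSON_CLASS_ID = 1    # "person" in the COCO label map
CONF_THRESHOLD = 0.5   # assumed minimum detection confidence

net = cv2.dnn.readNetFromTensorflow(MODEL_PB, CONFIG_PBTXT)
cap = cv2.VideoCapture(0)  # camera watching the robot's working area

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # SSD-MobileNetV2 exports typically expect a 300x300 RGB input.
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7)

    for det in detections[0, 0]:
        class_id, score = int(det[1]), float(det[2])
        if class_id == PERSON_CLASS_ID and score >= CONF_THRESHOLD:
            # Box coordinates are normalized to [0, 1]; scale to pixels.
            x1, y1 = int(det[3] * w), int(det[4] * h)
            x2, y2 = int(det[5] * w), int(det[6] * h)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            # A production system would raise a safety signal to the
            # robot controller here (slow down or stop the manipulator).

    cv2.imshow("Working area", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

MobileNetV2's depthwise-separable convolutions keep per-frame inference cheap, which is what makes the "high accuracy with minimal resource consumption" claim plausible for CPU-only, real-time operation next to a robot cell.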


This work is licensed under a Creative Commons Attribution 4.0 International License.