A Novel 3D Sensing Framework for Safety Monitoring in Human-Robot Collaboration Work Cells
Abstract
The demand for work safety protection in Human-Robot Interaction (HRI) work cells is rapidly increasing, driven by the projected 34.3% Compound Annual Growth Rate (CAGR) of the global Collaborative Robot (Cobot) market from 2020 to 2030 [1]. According to the IFR World Robotics 2023 report, nearly 4 million industrial robots are in operation worldwide, approximately 10% of which are cobots [2]. A NIOSH report highlighted 61 robot-related fatalities between 1992 and 2015 and anticipated further increases due to the growing use of industrial robots and cobots in the US work environment [3]. A recent study [4] analyzed 355 robot accidents documented by KOSHA between 2009 and 2019, finding that 95% occurred in manufacturing businesses. Pinch and crush incidents accounted for 52% of the accidents, impacts and collisions for 36%, and the remaining 12% involved falls, flying objects, trips/slips, cuts, burns, and similar hazards. These findings align with US data reported in [5].
The rising adoption of cobots among major manufacturers underscores the critical need to enhance cobot safety in manufacturing. Owing to safety considerations and regulatory requirements, existing cobots frequently operate at significantly reduced speeds and are restricted from undertaking complex interaction tasks in shared workspaces. This limitation has curtailed the productivity and full utilization of cobots in manufacturing. This paper introduces a novel 3D sensing framework designed to address these limitations by enabling safety assurance in workspaces requiring close human-robot interaction. The framework generates 3D human pose information and relays it to the robot for real-time safety monitoring. Our methodology begins with data collection from a single RGB-D camera capturing human-robot interactions in a manufacturing environment. Human shape and pose are first predicted by deep neural networks; the predictions are then combined with depth information and 3D geometric transformations to recover size, shape, and translation, yielding a reconstructed 3D avatar with pose, size, and location. Following 3D human pose estimation, this data is integrated into a virtual environment together with a real robot for real-time monitoring. Results demonstrate successful reconstruction of 3D human geometry within human-robot collaboration settings. By integrating both the reconstructed mesh and the real-time robot state into a unified virtual environment, we achieved continuous monitoring of the critical human-robot distance throughout operation, both in real time and offline. These distance measurements provide crucial data for developing collision detection, prediction, and avoidance capabilities when incorporated into the robot control feedback loop.
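To make the geometric steps concrete, the sketch below (a minimal Python/NumPy illustration, not the authors' implementation) shows the two operations the abstract describes: back-projecting 2D keypoints into 3D camera coordinates using per-keypoint depth and pinhole intrinsics, and measuring the minimum human-robot distance after transforming the points into the robot's base frame. The intrinsics, the hand-eye calibration matrix T_base_cam, the sampled robot link points, and the 30 cm threshold are all illustrative assumptions.

```python
import numpy as np

def backproject_keypoints(keypoints_px, depth_m, fx, fy, cx, cy):
    """Lift 2D pixel keypoints (u, v) into 3D camera coordinates via the
    pinhole model, using the per-keypoint depth reading in meters."""
    u, v = keypoints_px[:, 0], keypoints_px[:, 1]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1)           # (N, 3), camera frame

def min_human_robot_distance(human_pts_cam, robot_pts_base, T_base_cam):
    """Transform human points from the camera frame into the robot base
    frame and return the minimum pairwise distance to robot link points."""
    human_h = np.hstack([human_pts_cam, np.ones((len(human_pts_cam), 1))])
    human_base = (T_base_cam @ human_h.T).T[:, :3]      # (N, 3), base frame
    diff = human_base[:, None, :] - robot_pts_base[None, :, :]
    return float(np.linalg.norm(diff, axis=-1).min())

if __name__ == "__main__":
    # Illustrative values only: two detected keypoints, identity extrinsics,
    # and three points sampled along the robot's links (e.g., obtained from
    # the controller's forward kinematics).
    kpts = np.array([[320.0, 240.0], [400.0, 260.0]])   # pixel coordinates
    depths = np.array([1.2, 1.1])                       # meters
    human_cam = backproject_keypoints(kpts, depths, fx=615.0, fy=615.0,
                                      cx=320.0, cy=240.0)
    T_base_cam = np.eye(4)                              # assumed hand-eye calibration
    robot_pts = np.array([[0.0, 0.0, 0.5], [0.3, 0.0, 0.8], [0.5, 0.1, 1.0]])
    d_min = min_human_robot_distance(human_cam, robot_pts, T_base_cam)
    print(f"minimum human-robot distance: {d_min:.3f} m")
    if d_min < 0.30:                                    # example 30 cm threshold
        print("below safety threshold -> slow or stop the robot")
```

In a closed-loop deployment, the same distance value would feed the robot control feedback loop described above, for example by scaling the robot's speed as the distance shrinks rather than triggering a hard stop.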
Keywords
Human-Robot Collaboration, Machine Vision, Visuospatial Processing, Pose Estimation, Artificial Intelligence
References
[2] International Federation of Robotics (IFR). (2023). World Robotics 2023 report: Asia ahead of Europe and the Americas. https://ifr.org/ifr-press-releases/news/world-robotics-2023-report-asia-ahead-of-europe-and-the-americas
[3] National Institute for Occupational Safety and Health (NIOSH). (n.d.). Robotics. Centers for Disease Control and Prevention. https://www.cdc.gov/niosh/topics/robotics/aboutthecenter.html
[4] Lee, K., Shin, J., & Lim, J.-Y. (2021). Critical hazard factors in the risk assessments of industrial robots: Causal analysis and case studies. Safety and Health at Work, 12(4), 496–504. https://doi.org/10.1016/j.shaw.2021.06.006
[5] Jiang, B. C., & Gainer, C. A., Jr. (1987). A cause-and-effect analysis of robot accidents. Journal of Occupational Accidents, 9(1), 27–45. https://doi.org/10.1016/0376-6349(87)90018-6

This work is licensed under a Creative Commons Attribution 3.0 Unported License.