Why
Many local businesses are still shut down due to COVID-19. Social distancing is crucial to allowing them to re-open safely. Current social distancing technology mostly relies on fixed cameras, requires manual calibration, and is not privacy-safe.
To overcome these limitations, we are open-sourcing our AI algorithms to flag risky situations and help revive our economy. Our technology (i) goes beyond the concept of distance by understanding social interactions, (ii) is privacy-safe, and (iii) works with any camera phone.
We envision potential users such as grocery stores, street markets, florists, gyms, factories, and offices. With our technology, small businesses would only need a smartphone camera to monitor social distancing while preserving privacy.
What
Our algorithm works with a single RGB image. It identifies:
- all humans in the image
- their 2D body poses
- their 3D distance from the camera (absolute distance)
- their orientation with respect to the camera
Using the above features, the algorithm accurately flags cases where social distancing is not respected without producing too many false positives (“alarm fatigue”). This is crucial, as two people facing each other (or talking to each other) are at greater risk of contagion than two people facing away from each other.
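As a purely illustrative sketch of this logic (not the released implementation), the snippet below flags a pair as risky only when the two people are both within a distance threshold and mutually oriented towards each other. The yaw convention, thresholds, and sample values are assumptions made for the example.

```python
import numpy as np

def is_facing(pos_a, yaw_a, pos_b, max_angle_deg=60.0):
    """Is person A (3D position pos_a, body yaw yaw_a in radians, measured in the
    camera's x-z ground plane) oriented towards person B?"""
    to_b = (pos_b - pos_a)[[0, 2]]                      # ground-plane (x, z) offset
    to_b = to_b / (np.linalg.norm(to_b) + 1e-9)
    heading = np.array([np.sin(yaw_a), np.cos(yaw_a)])  # A's facing direction in (x, z)
    angle = np.degrees(np.arccos(np.clip(heading @ to_b, -1.0, 1.0)))
    return angle < max_angle_deg

def is_risky_pair(pos_a, yaw_a, pos_b, yaw_b, max_dist=2.0):
    """Flag a pair when they are close AND mutually oriented towards each other."""
    close = np.linalg.norm(pos_a - pos_b) < max_dist
    return close and is_facing(pos_a, yaw_a, pos_b) and is_facing(pos_b, yaw_b, pos_a)

# Hypothetical detections: (3D position in metres, body yaw in radians)
people = [(np.array([0.0, 0.0, 3.0]), np.pi / 2),    # person 0, facing towards +x
          (np.array([1.5, 0.0, 3.0]), -np.pi / 2)]   # person 1, facing towards -x
for i in range(len(people)):
    for j in range(i + 1, len(people)):
        (pa, ya), (pb, yb) = people[i], people[j]
        print(i, j, 'risky' if is_risky_pair(pa, ya, pb, yb) else 'ok')
```

Requiring mutual orientation in addition to proximity is what keeps two people standing back-to-back in a queue from triggering an alarm.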
Some details of the algorithm:
- Original images are neither stored nor shown
- No calibration needed for relative distances
- Works in real time on any fixed or moving camera (such as a mobile phone)
- Analyzes not only the distance between people but also social interactions, such as their body pose and orientation
- More accurate than location-based technology since it can detect whether or not two people are facing each other
- Runs in real time even on a laptop CPU
How
- We designed and open-sourced a deep learning algorithm for 2D body poses that thrives in crowded and occluded scenes: OpenPifPaf. Try it live in your browser with the OpenPifPaf Web Demo. More details are in the original paper (CVPR’19). A minimal usage sketch is shown after this list.
- We leveraged body poses to estimate accurate 3D localization and orientation of humans from a single RGB image. Our technology is called MonoLoco and the code is available on GitHub; see the geometry sketch after this list.
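Below is a minimal sketch of obtaining 2D poses with OpenPifPaf’s documented Python Predictor interface. Exact names can vary across versions, and the checkpoint and image filename are placeholders, so treat this as a sketch rather than the definitive pipeline.

```python
# pip install openpifpaf
import PIL.Image
import openpifpaf

predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16')  # pretrained checkpoint
image = PIL.Image.open('shop_entrance.jpg').convert('RGB')      # placeholder filename

predictions, _, _ = predictor.pil_image(image)
for ann in predictions:
    # ann.data is a (17, 3) array of COCO keypoints: (x, y, confidence) per joint
    print(ann.data)
```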
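MonoLoco’s command-line tools and training code are documented in its repository. As a purely illustrative sketch of the underlying idea, not MonoLoco’s released code, the snippet below shows how a pixel location plus a depth estimate (regressed from the 2D keypoints) can be back-projected to a 3D position with the pinhole camera model, from which inter-personal distances follow. The intrinsics and sample values are made up.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole-camera back-projection: pixel (u, v) plus depth (metres)
    -> 3D point in camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical camera intrinsics and two detected people
fx = fy = 1000.0
cx, cy = 960.0, 540.0
person_a = backproject(900.0, 500.0, 3.2, fx, fy, cx, cy)   # depth estimated from pose
person_b = backproject(1150.0, 510.0, 4.0, fx, fy, cx, cy)

print(f"distance between A and B: {np.linalg.norm(person_a - person_b):.2f} m")
```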