Proceedings of IEEE International Conference on Image Processing 2020
In the blooming era of smart edge devices, surveillance cameras have been deployed in many locations. Surveillance cameras are most useful when they are spaced out to maximize coverage of an area. However, deciding where to place cameras is an NP-hard problem and researchers have proposed heuristic solutions. Existing work does not consider a significant restriction of computer vision: in order to track a moving object, the object must occupy enough pixels. The number of pixels depends on many factors (how far away is the object? What is the camera resolution? What is the focal length?). In this study we propose a camera placement method that not only identifies effective camera placement in arbitrary spaces, but can account for different camera types as well. Our strategy represents spaces as polygons, then uses a greedy algorithm to partition the polygons and determine the cameras' locations to provide desired coverage. The solution also makes it possible to perform object tracking via overlapping camera placement. Our method is evaluated against complex shapes and real-world museum floor plans, achieving up to 82% coverage and 28% overlap.
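The abstract's greedy placement idea can be illustrated with a simplified sketch. This is not the paper's polygon-partitioning algorithm; it is a hypothetical grid-based stand-in in which the space is discretized into cells, a fixed visibility radius approximates the pixels-on-target constraint, and cameras are chosen greedily by marginal coverage gain. All names (`visible_cells`, `greedy_placement`, `max_range`) are assumptions for illustration only.

```python
import math

def visible_cells(cam, cells, max_range):
    """Cells within max_range of a camera position.

    A crude stand-in for the computer-vision restriction the abstract
    describes: beyond some distance, an object occupies too few pixels
    to be tracked, so those cells do not count as covered.
    """
    cx, cy = cam
    return {c for c in cells if math.hypot(c[0] - cx, c[1] - cy) <= max_range}

def greedy_placement(cells, candidates, max_range, n_cams):
    """Greedily pick up to n_cams candidate positions, each time taking
    the candidate that covers the most still-uncovered cells."""
    uncovered = set(cells)
    chosen = []
    for _ in range(n_cams):
        best = max(candidates,
                   key=lambda c: len(visible_cells(c, uncovered, max_range)))
        gain = visible_cells(best, uncovered, max_range)
        if not gain:          # no candidate adds coverage; stop early
            break
        chosen.append(best)
        uncovered -= gain
    coverage = 1.0 - len(uncovered) / len(cells)
    return chosen, coverage

# Example: a 10x10 room discretized into unit cells, with every cell
# also serving as a candidate camera position.
room = [(x, y) for x in range(10) for y in range(10)]
cams, cov = greedy_placement(room, room, max_range=4.0, n_cams=4)
```

A real implementation would replace the radius check with a model of resolution, focal length, and occlusion by the polygon's walls, and would place candidates on the polygon partition rather than a uniform grid.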
Sara Aghajanzadeh, Roopasree Naidu, Shuo-Han Chen, Caleb Tung, Abhinav Goel, Yung-Hsiang Lu, George K. Thiruvathukal, Camera Placement Meeting Restrictions of Computer Vision, Proceedings of IEEE International Conference on Image Processing 2020.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.