TY - GEN
T1 - Motion Segmentation Based on Structure-Texture Decomposition and Improved Three Frame Differencing
AU - Sengar, Sandeep Singh
N1 - Publisher Copyright:
© 2019, IFIP International Federation for Information Processing.
PY - 2019/5/12
Y1 - 2019/5/12
N2 - Motion segmentation from video datasets has several important applications, such as traffic monitoring, action recognition, visual object tracking, and video surveillance. The proposed technique combines structure-texture decomposition with improved three-frame differencing for motion segmentation. First, the Osher and Vese approach is employed to decompose each video frame into two components, viz., structure and texture/noise. To eliminate noise, only the structure component is used in the subsequent steps. Next, the differences between (i) the current frame and the previous frame and (ii) the current frame and the next frame are estimated, and the two difference frames are combined using a pixel-wise maximum operation. Each combined difference frame is then partitioned into non-overlapping blocks, and the intensity sum as well as the intensity median of each block is computed. Target objects are then detected with the help of a threshold and the intensity median. Finally, post-processing in the form of morphological operations and connected component analysis is carried out to accurately extract the foreground. The technique has been formulated, implemented, and tested on publicly available standard benchmark datasets, and performance analysis shows that it yields more efficient outcomes than existing approaches.
AB - Motion segmentation from video datasets has several important applications, such as traffic monitoring, action recognition, visual object tracking, and video surveillance. The proposed technique combines structure-texture decomposition with improved three-frame differencing for motion segmentation. First, the Osher and Vese approach is employed to decompose each video frame into two components, viz., structure and texture/noise. To eliminate noise, only the structure component is used in the subsequent steps. Next, the differences between (i) the current frame and the previous frame and (ii) the current frame and the next frame are estimated, and the two difference frames are combined using a pixel-wise maximum operation. Each combined difference frame is then partitioned into non-overlapping blocks, and the intensity sum as well as the intensity median of each block is computed. Target objects are then detected with the help of a threshold and the intensity median. Finally, post-processing in the form of morphological operations and connected component analysis is carried out to accurately extract the foreground. The technique has been formulated, implemented, and tested on publicly available standard benchmark datasets, and performance analysis shows that it yields more efficient outcomes than existing approaches.
KW - Block
KW - Frame difference
KW - Morphology
KW - Motion segmentation
KW - Structure-texture decomposition
UR - http://www.scopus.com/inward/record.url?scp=85065917497&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-19823-7_51
DO - 10.1007/978-3-030-19823-7_51
M3 - Conference contribution
AN - SCOPUS:85065917497
SN - 9783030198220
T3 - IFIP Advances in Information and Communication Technology
SP - 609
EP - 622
BT - Artificial Intelligence Applications and Innovations - 15th IFIP WG 12.5 International Conference, AIAI 2019, Proceedings
A2 - Pimenidis, Elias
A2 - Maglogiannis, Ilias
A2 - Iliadis, Lazaros
A2 - MacIntyre, John
PB - Springer New York LLC
T2 - 15th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2019
Y2 - 24 May 2019 through 26 May 2019
ER -