Abstract
This paper presents SenseNet, a convolutional neural network (CNN) model for image segmentation. The SenseNet architecture comprises encoders with corresponding decoders and bottleneck skip connections. The final layer is a classification layer that labels each pixel of an image for segmentation. SenseNet addresses a limitation of conventional semantic segmentation models: their skip connections do not carry sufficient information for recovery in the decoder path. This paper proposes a novel network structure that combines a modified dense block with dense skip connections for efficient information recovery in the decoder path. Furthermore, it proposes a dense long skip connection that transfers the feature maps of each encoder layer to a layer of the decoder, helping the network recover information efficiently during decoding. SenseNet achieves state-of-the-art accuracy with fewer parameters while retaining high-level features in the decoder path. This study evaluated SenseNet on the urban scene benchmark dataset CamVid, measuring performance in terms of intersection over union (IoU) and global accuracy. SenseNet outperformed the baseline model, improving IoU by 8.7%. SenseNet can be downloaded from https://github.com/sensenetskip/sensenet .
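The dense long skip connection described in the abstract can be pictured as channel-wise concatenation of feature maps from every encoder stage at a decoder layer, after resizing each map to the decoder's resolution. The following NumPy toy is an illustrative sketch only, not the authors' implementation: the function names (`dense_skip_concat`, `upsample_nearest`) and the use of nearest-neighbor upsampling are assumptions for demonstration.

```python
import numpy as np

def upsample_nearest(x, factor):
    # Nearest-neighbor upsampling of a (C, H, W) feature map
    # by repeating rows and columns `factor` times.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def dense_skip_concat(encoder_feats, target_hw):
    # Dense skip idea (sketch): gather feature maps from *every*
    # encoder stage, upsample each to the decoder resolution,
    # and concatenate them along the channel axis.
    th, tw = target_hw
    gathered = []
    for f in encoder_feats:
        factor = th // f.shape[1]  # assumes power-of-two scales
        gathered.append(upsample_nearest(f, factor))
    return np.concatenate(gathered, axis=0)

# Toy encoder outputs at three scales, shaped (channels, H, W).
feats = [np.ones((16, 32, 32)),
         np.ones((32, 16, 16)),
         np.ones((64, 8, 8))]
fused = dense_skip_concat(feats, (32, 32))
print(fused.shape)  # (112, 32, 32): channels 16 + 32 + 64 stacked
```

A decoder layer would then convolve over the fused 112-channel tensor, so information from all encoder depths is available for recovery rather than only the matching-scale stage.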
Original language | English
---|---
Pages (from-to) | 328-336
Number of pages | 9
Journal | IEIE Transactions on Smart Processing and Computing
Volume | 13
Issue number | 4
DOIs |
Publication status | Published - 2024
Keywords
- 6G
- Artificial intelligence (AI)
- Autonomous cars
- Convolutional networks
- Deep neural network
- Image segmentation
- Semantic segmentation
- Surveillance cameras
- Virtual reality