SenseNet: Densely Connected, Fully Convolutional Network with Bottleneck Skip Connection for Image Segmentation

Bilal Ahmed Lodhi, Rehmat Ullah, Sajida Imran, Muhammad Imran, Byung Seo Kim*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents SenseNet, a convolutional neural network (CNN) model for image segmentation. The SenseNet architecture consists of encoders with corresponding decoders and bottleneck skip connections; the final layer is a classification layer that labels each pixel of an image for segmentation. SenseNet addresses a limitation of conventional semantic segmentation models, in which the skip connections do not carry sufficient information for recovery in the decoder path. This paper proposes a novel network structure that combines a modified dense block with a dense skip connection for efficient information recovery in the decoder path. Furthermore, it proposes a dense, long skip connection that transfers the feature maps of every encoder layer to a layer of the decoder, helping the network recover information efficiently in the decoder path. SenseNet achieves state-of-the-art accuracy with fewer parameters while retaining high-level features in the decoder path. This study evaluated SenseNet on the urban scene benchmark dataset CamVid and measured performance in terms of intersection over union (IoU) and global accuracy. SenseNet outperformed the baseline model with an 8.7% increase in IoU. SenseNet can be downloaded from https://github.com/sensenetskip/sensenet.
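
The following PyTorch-style sketch illustrates the general idea described in the abstract: an encoder-decoder segmentation network in which dense blocks produce the encoder features, long skip connections concatenate each encoder stage's feature maps into the corresponding decoder stage, and a final 1x1 convolution classifies every pixel. It is not the authors' implementation (see the GitHub link above); the module names, block depths, growth rates, and the 11-class CamVid setting are illustrative assumptions.

# Minimal sketch, not the SenseNet reference code; all sizes are assumptions.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Small dense block: each conv layer sees all previous feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class SenseNetSketch(nn.Module):
    """Encoder-decoder with dense blocks and dense long skip connections."""
    def __init__(self, in_ch=3, n_classes=11):  # 11 classes, as in CamVid
        super().__init__()
        self.stem = nn.Conv2d(in_ch, 32, kernel_size=3, padding=1)
        self.enc1 = DenseBlock(32)
        self.enc2 = DenseBlock(self.enc1.out_channels)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DenseBlock(self.enc2.out_channels)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        # Each decoder stage receives the upsampled features concatenated with
        # the skip-connected encoder features (the long skip connection).
        self.dec2 = DenseBlock(self.bottleneck.out_channels + self.enc2.out_channels)
        self.dec1 = DenseBlock(self.dec2.out_channels + self.enc1.out_channels)
        # Final per-pixel classification layer.
        self.classifier = nn.Conv2d(self.dec1.out_channels, n_classes, kernel_size=1)

    def forward(self, x):
        x = self.stem(x)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return self.classifier(d1)  # logits: (N, n_classes, H, W)


if __name__ == "__main__":
    logits = SenseNetSketch()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 11, 64, 64])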

Original language: English
Pages (from-to): 328-336
Number of pages: 9
Journal: IEIE Transactions on Smart Processing and Computing
Volume: 13
Issue number: 4
DOIs:
Publication status: Published - 2024

Keywords

  • 6G
  • Artificial intelligence (AI)
  • Autonomous cars
  • Convolutional networks
  • Deep neural network
  • Image segmentation
  • Semantic segmentation
  • Surveillance cameras
  • Virtual reality
