TY - JOUR
T1 - Deep learning model with low-dimensional random projection for large-scale image search
AU - Alzu'bi, Ahmad
AU - Abuarqoub, Abdelrahman
N1 - Publisher Copyright:
© 2019 Karabuk University
PY - 2020/8/6
Y1 - 2020/8/6
N2 - Developing deep learning models that scale to large image repositories is attracting increasing effort in the domain of image search. Current deep neural networks rely on the computational power of accelerators (e.g. GPUs) to tackle the processing limitations associated with feature extraction and model training. This paper introduces and investigates a deep Convolutional Neural Network (CNN) model to efficiently extract, index, and retrieve images in the context of large-scale Content-Based Image Retrieval (CBIR). Random Maclaurin projection is used to generate low-dimensional image descriptors, and their discriminating efficiency is evaluated on standard image datasets. The scalability of deep architectures is also evaluated on a one-million-image dataset over a High-Performance Computing (HPC) platform, assessed in terms of retrieval accuracy, feature-extraction speed, and memory costs. Additionally, the controlling GPU kernels of the proposed model are examined under several optimization factors to evaluate their impact on processing and retrieval performance. The experimental results show the effectiveness of the proposed model in retrieval accuracy, GPU utilisation, feature-extraction speed, and image-index storage.
AB - Developing deep learning models that scale to large image repositories is attracting increasing effort in the domain of image search. Current deep neural networks rely on the computational power of accelerators (e.g. GPUs) to tackle the processing limitations associated with feature extraction and model training. This paper introduces and investigates a deep Convolutional Neural Network (CNN) model to efficiently extract, index, and retrieve images in the context of large-scale Content-Based Image Retrieval (CBIR). Random Maclaurin projection is used to generate low-dimensional image descriptors, and their discriminating efficiency is evaluated on standard image datasets. The scalability of deep architectures is also evaluated on a one-million-image dataset over a High-Performance Computing (HPC) platform, assessed in terms of retrieval accuracy, feature-extraction speed, and memory costs. Additionally, the controlling GPU kernels of the proposed model are examined under several optimization factors to evaluate their impact on processing and retrieval performance. The experimental results show the effectiveness of the proposed model in retrieval accuracy, GPU utilisation, feature-extraction speed, and image-index storage.
KW - Convolutional Neural Networks
KW - Deep learning
KW - GPU analysis
KW - Large-scale image retrieval
UR - http://www.scopus.com/inward/record.url?scp=85077680296&partnerID=8YFLogxK
U2 - 10.1016/j.jestch.2019.12.004
DO - 10.1016/j.jestch.2019.12.004
M3 - Article
AN - SCOPUS:85077680296
SN - 2215-0986
VL - 23
SP - 911
EP - 920
JO - Engineering Science and Technology, an International Journal
JF - Engineering Science and Technology, an International Journal
IS - 4
ER -