Fusion of visible and infrared images of ground targets by unmanned aerial vehicles based on knowledge distillation adaptive DenseNet
CLC Number: TP391.4; TH701

Abstract:

Visible and infrared image fusion aims to exploit the complementary information captured by two different sensors, enhancing the image through complementary image features. However, current deep learning-based fusion methods tend to prioritize evaluation metrics, and the resulting models suffer from high complexity, large numbers of weight parameters, low inference performance, and poor generalization, which makes them difficult to deploy on UAV edge computing platforms. To address these challenges, this paper proposes a novel approach to visible and infrared image fusion: an adaptive DenseNet trained by knowledge distillation from a pre-existing fusion model, which balances fusion effectiveness and model lightweighting through hyperparameters such as network width and depth. The proposed method is evaluated on a typical ground-target dataset. The experimental results show that the model weights occupy only 77 KB and the inference time is 0.95 ms, demonstrating an ultra-lightweight network structure, excellent image fusion quality, and strong generalization ability in complex scenes.
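To illustrate the idea described in the abstract, the following is a minimal sketch (PyTorch) of a student DenseNet whose depth (number of dense layers) and width (growth rate) are tunable hyperparameters, trained to mimic a pre-existing teacher fusion model through a distillation loss. This is not the authors' released code; the names AdaptiveDenseFuse, distill_loss, and the teacher placeholder are hypothetical, and the loss weighting is only an assumed example.

```python
# Hypothetical sketch of knowledge-distillation training for an adaptive DenseNet
# fusion student; depth/growth control model size, a teacher's fused output is the soft target.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Module):
    def __init__(self, in_ch, growth):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, growth, kernel_size=3, padding=1)

    def forward(self, x):
        # Dense connectivity: concatenate the new feature map onto all previous ones.
        return torch.cat([x, F.relu(self.conv(x))], dim=1)

class AdaptiveDenseFuse(nn.Module):
    """Student fusion network; `depth` and `growth` set its capacity (hypothetical)."""
    def __init__(self, depth=3, growth=8):
        super().__init__()
        ch = 2  # stacked single-channel visible + infrared inputs
        layers = []
        for _ in range(depth):
            layers.append(DenseLayer(ch, growth))
            ch += growth
        self.dense = nn.Sequential(*layers)
        self.out = nn.Conv2d(ch, 1, kernel_size=1)  # reconstruct one fused image

    def forward(self, vis, ir):
        x = torch.cat([vis, ir], dim=1)
        return torch.sigmoid(self.out(self.dense(x)))

def distill_loss(student_fused, teacher_fused, vis, ir, alpha=0.7):
    # Soft target: match the teacher's fused image; hard target: stay close to the inputs.
    soft = F.l1_loss(student_fused, teacher_fused)
    hard = F.l1_loss(student_fused, torch.max(vis, ir))
    return alpha * soft + (1.0 - alpha) * hard

if __name__ == "__main__":
    student = AdaptiveDenseFuse(depth=3, growth=8)
    vis = torch.rand(1, 1, 128, 128)
    ir = torch.rand(1, 1, 128, 128)
    teacher_fused = torch.rand(1, 1, 128, 128)  # placeholder for a pre-trained teacher's output
    fused = student(vis, ir)
    loss = distill_loss(fused, teacher_fused, vis, ir)
    print(fused.shape, float(loss))
    print("parameters:", sum(p.numel() for p in student.parameters()))
```

In this sketch, shrinking depth and growth directly shrinks the parameter count, which is how a width/depth search could trade fusion quality against the kilobyte-scale model size reported in the abstract.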

History
  • Online: September 14, 2024