Abstract: Mini/Micro-LED represents the next generation of display technology. As the physical size of Mini/Micro-LED chips becomes smaller, fabrication yields have decreased while the degree of integration has increased significantly. Consequently, fast and accurate inspection of Mini/Micro-LED chips is crucial for industrial production. However, inspecting Mini/Micro-LED chips remains challenging because of their small size and dense distribution; the limited feature information available from individual targets and the need for fast, easily deployable inspection algorithms add to these challenges. To address these issues, we designed a compressed attention detail-semantic complementary convolutional neural network (CADSC-CNN). By incorporating an encoder structure based on a self-attention mechanism into the feature fusion network, the model more easily acquires global information that complements the features of small targets. In addition, the compression operation in the self-attention reduces the model's parameter count, thereby increasing detection speed. We validated the effectiveness of this method on a Mini/Micro-LED dataset collected with an industrial camera. Experiments demonstrate that the method achieves a mean average precision (mAP) of 95.6% at a speed of 100.6 frames per second.
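The abstract does not specify the exact layer design of the compressed self-attention encoder; the following is a minimal PyTorch sketch of one plausible realization, in which keys and values are spatially compressed by a strided convolution before attention so that global context can be added to a feature-fusion map at reduced cost. The class name, channel width, number of heads, and reduction ratio `sr_ratio` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a compressed self-attention encoder block that adds
# global context to a feature-fusion map. All names and hyperparameters are
# assumptions for illustration only.
import torch
import torch.nn as nn


class CompressedSelfAttentionEncoder(nn.Module):
    """Transformer-style encoder whose keys/values are spatially compressed
    (strided conv) before attention, reducing parameters and computation."""

    def __init__(self, channels: int = 256, num_heads: int = 8, sr_ratio: int = 4):
        super().__init__()
        # Strided convolution compresses the key/value token grid by sr_ratio.
        self.compress = nn.Conv2d(channels, channels, kernel_size=sr_ratio, stride=sr_ratio)
        self.norm_q = nn.LayerNorm(channels)
        self.norm_kv = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(channels),
            nn.Linear(channels, channels * 2),
            nn.GELU(),
            nn.Linear(channels * 2, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)                   # (B, H*W, C): every pixel is a query
        kv = self.compress(x).flatten(2).transpose(1, 2)   # (B, H*W/sr^2, C): compressed keys/values
        attn_out, _ = self.attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
        q = q + attn_out                                   # residual: global context added to local features
        q = q + self.ffn(q)
        return q.transpose(1, 2).reshape(b, c, h, w)       # back to a CNN feature map for fusion


# Usage: apply the block to a fused feature level before the detection head.
feat = torch.randn(1, 256, 40, 40)          # e.g. one pyramid-level feature map (assumed size)
encoder = CompressedSelfAttentionEncoder()
out = encoder(feat)                          # same shape, enriched with global context
print(out.shape)                             # torch.Size([1, 256, 40, 40])
```

Because attention is computed against a grid shrunk by `sr_ratio` in each spatial dimension, the attention cost drops roughly by a factor of `sr_ratio**2`, which is consistent with the abstract's claim that compression improves detection speed while still supplying global information to small, feature-poor targets.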