Abstract: AR-HUD (augmented-reality head-up display) systems are widely used in automobiles. Their environment perception module must perform object detection, lane segmentation, and other tasks, but running multiple deep neural networks simultaneously consumes excessive computing resources. To address this problem, this paper proposes DYPNet, a lightweight multi-task convolutional neural network for AR-HUD environment perception. DYPNet is built on the YOLOv3-tiny framework and fuses a pyramid pooling module, the DenseNet dense connection structure, and the CSPNet network model, which greatly reduces computing resource consumption without reducing accuracy. To address the difficulty of training such a multi-task network, a linear weighted-sum loss function with dynamic loss weights is proposed, which makes the losses of the sub-networks decline and converge synchronously. After training and testing on the public BDD100K dataset, the network achieves a detection mAP of 30% and a segmentation mIoU of 77.14%; after acceleration with TensorRT, it reaches about 15 FPS on a Jetson TX2, which meets the application requirements of AR-HUD. The network has been successfully deployed in a vehicle AR-HUD system.
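The abstract describes a linear weighted-sum loss whose weights are adjusted dynamically so the sub-network losses decline together, but does not give the exact update rule. The sketch below illustrates one plausible scheme, assuming weights are derived from each sub-loss's share of the total so that the task lagging behind receives a larger weight; the function names and the specific rule are illustrative assumptions, not DYPNet's published formula.

```python
# Illustrative sketch of a dynamically weighted multi-task loss.
# Assumption: each task's weight is its share of the current total loss,
# so a task whose loss is still large is pushed harder, encouraging the
# sub-losses to decline and converge at comparable rates.

def dynamic_weights(losses, eps=1e-8):
    """Return one weight per task, proportional to each loss's
    share of the total (eps avoids division by zero)."""
    total = sum(losses) + eps
    return [l / total for l in losses]

def weighted_sum_loss(losses):
    """Linear weighted sum of the per-task losses with dynamic weights."""
    weights = dynamic_weights(losses)
    return sum(w * l for w, l in zip(weights, losses))

# Example: detection loss 2.0, segmentation loss 0.5 ->
# detection gets weight 0.8, segmentation 0.2
print(round(weighted_sum_loss([2.0, 0.5]), 4))
```

In a training loop, the weights would typically be recomputed each step (or each epoch) from detached loss values, so the weighting itself contributes no gradient.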