Multi-Scale Feature Fusion and Structure-Preserving Network for Face Super-Resolution

Deep convolutional neural networks have demonstrated significant performance improvements in face super-resolution tasks. However, many deep learning-based approaches tend to overlook the inherent structural information of face images and the feature correlations across different scales, making accurate recovery of facial structure from low-resolution inputs challenging. To address this, this paper proposes a method that fuses multi-scale features while preserving the facial structure. It introduces a novel multi-scale residual block (MSRB) to reconstruct key facial parts and structures along the spatial and channel dimensions, and utilizes pyramid attention (PA) to exploit non-local self-similarity, improving the details of the reconstructed face.
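To make the MSRB idea concrete, here is a minimal PyTorch sketch of a multi-scale residual block that combines parallel receptive fields and then reweights the fused features along the channel and spatial dimensions. The abstract does not specify the exact layer layout, so the kernel sizes, channel counts, and attention design below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical multi-scale residual block: parallel 3x3/5x5 branches,
# channel attention, spatial attention, and a residual connection.
import torch
import torch.nn as nn


class MultiScaleResidualBlock(nn.Module):
    """Fuses two convolutional branches with different receptive fields,
    then reweights the result per channel and per spatial location."""

    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one weight map computed from pooled descriptors.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, 7, padding=3),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat(
            [self.act(self.branch3(x)), self.act(self.branch5(x))], dim=1
        )
        feat = self.fuse(multi)
        feat = feat * self.channel_att(feat)
        pooled = torch.cat(
            [feat.mean(dim=1, keepdim=True), feat.amax(dim=1, keepdim=True)], dim=1
        )
        feat = feat * self.spatial_att(pooled)
        return x + feat


if __name__ == "__main__":
    block = MultiScaleResidualBlock(64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

The residual connection keeps the block easy to stack, while the two attention steps correspond to the "spatial and channel dimensions" mentioned in the abstract.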

Feature Enhancement Modules (FEM) are employed in the upscaling stage to refine and enhance the current features using multi-scale features from previous stages (a sketch of such a fusion step follows this paragraph). Experimental results on the CelebA, Helen, and LFW datasets show that our method achieves superior quantitative metrics compared to the baseline: the Peak Signal-to-Noise Ratio (PSNR) outperforms the baseline by 0.282 dB, 0.343 dB, and 0.336 dB, respectively. Furthermore, our method demonstrates improved visual quality on two additional no-reference datasets, Widerface and Webface.
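Below is a minimal sketch of what a feature enhancement module in the upscaling stage could look like. The abstract does not describe how earlier-stage features are resized or merged, so the bilinear resizing, 1x1 fusion, and residual refinement here are assumptions for illustration only.

```python
# Hypothetical feature enhancement module (FEM): fuses the current features
# with multi-scale features carried over from previous stages.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureEnhancementModule(nn.Module):
    """Refines current features using earlier multi-scale feature maps."""

    def __init__(self, channels: int = 64, num_prev: int = 2):
        super().__init__()
        self.fuse = nn.Conv2d((num_prev + 1) * channels, channels, 1)
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, current: torch.Tensor, prev_feats: List[torch.Tensor]) -> torch.Tensor:
        # Bring every earlier feature map up to the current spatial resolution.
        resized = [
            F.interpolate(f, size=current.shape[-2:], mode="bilinear", align_corners=False)
            for f in prev_feats
        ]
        fused = self.fuse(torch.cat([current] + resized, dim=1))
        # Residual refinement keeps the current features dominant.
        return current + self.refine(fused)


if __name__ == "__main__":
    fem = FeatureEnhancementModule(64, num_prev=2)
    cur = torch.randn(1, 64, 64, 64)
    prev = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16)]
    print(fem(cur, prev).shape)  # torch.Size([1, 64, 64, 64])
```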
