My AI NEWS STREAM
FPGAs vs. GPUs: Accelerating Image-Related AI on Edge Devices

FPGA-based accelerators offer a power-efficient way to run generative models on edge devices, which are typically resource-constrained. The proposed system outperforms a GPU in throughput-to-power ratio and in run-to-run consistency.

  • Spatio-temporally parallelized FPGA design.
  • Optimized for Deconvolutional Neural Network (DCNN) inference.
  • Tested on MNIST and CelebA datasets using Wasserstein GAN framework.
  • Exhibits a higher throughput-to-power ratio and lower performance variation than the NVIDIA Jetson TX1 GPU.
  • Utilizes Xilinx PYNQ-Z2 FPGA for inference acceleration.
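The DCNN inference the paper accelerates is built from transposed (deconvolutional) convolutions, which upsample a small feature map into a larger image. A minimal single-channel NumPy sketch of that core operation (the function name and the single-channel simplification are illustrative, not from the paper):

```python
import numpy as np

def conv_transpose2d(x, w, stride=2):
    """Naive single-channel 2D transposed convolution.

    Each input pixel scatters a scaled copy of the kernel into the
    output, spaced `stride` apart -- the upsampling step a DCNN
    generator layer performs.
    """
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros(((H - 1) * stride + k, (W - 1) * stride + k))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

# A 2x2 feature map upsampled to 4x4 with a 2x2 kernel and stride 2.
y = conv_transpose2d(np.ones((2, 2)), np.ones((2, 2)), stride=2)
print(y.shape)  # (4, 4)
```

The spatial independence of these scatter operations is what makes the layer amenable to the spatio-temporal parallelization described above: separate output tiles can be computed by separate FPGA processing elements.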

This study is a notable step toward AI-driven edge computing. Its efficient use of an FPGA to accelerate generative models for image tasks such as denoising and super-resolution shows clear potential for mobile and edge deployment. Future work may focus on improving the architecture's scalability and supporting additional deep learning frameworks. Read more about the FPGA-based system's design and benchmarks.

Personalized AI news from scientific papers.