Technische Hochschule Augsburg Neural Network Accelerator
Introduction
Welcome to the THANNA project! The THANNA Framework is being developed by the Efficient Embedded Systems Group (EESG) at the Technical University of Applied Sciences Augsburg. The main goal of this project is to create efficient neural networks with the THANNA Quantizer and deploy them on a Xilinx FPGA via the THANNA Processor.
The THANNA Framework is designed to enhance the efficiency and performance of neural networks through hardware-based quantization. As neural networks continue to grow in complexity and size, traditional processors often struggle to process their large volumes of data in real time. The THANNA Quantizer addresses this challenge by leveraging Field Programmable Gate Arrays (FPGAs) to create custom hardware accelerators tailored to the specific needs of neural networks.
The goal of the THANNA Quantizer is to develop and implement an N-bit quantization framework for creating quantized neural networks that target custom-built quantization hardware, thereby improving their efficiency and performance.
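To make the N-bit idea concrete, the sketch below shows symmetric uniform quantization, one common scheme for mapping floating-point weights onto a fixed grid of 2^N levels. It is a generic illustration only, not the THANNA Quantizer's actual implementation; the quantizers used here follow the QKeras schemes described below.

```python
import numpy as np

def quantize_uniform(x, n_bits):
    """Generic symmetric uniform N-bit quantization (illustration only,
    not the THANNA Quantizer's implementation): round values onto
    2**n_bits evenly spaced levels and scale back to floats."""
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)  # step size of one level
    levels = np.clip(np.round(x / scale),
                     -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return levels * scale  # dequantized values on the N-bit grid

weights = np.array([0.73, -0.12, 0.05, -0.98])
print(quantize_uniform(weights, n_bits=4))  # [ 0.7  -0.14  0.   -0.98]
```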
Content of this Documentation
This documentation covers the tested code of the quantizer. All code of the THANNA Quantizer is based on the QKeras codebase, with bugs fixed and compatibility with TensorFlow 2.15.1 ensured. The function and class signatures were not changed from QKeras; the code documentation was therefore adapted from the QKeras documentation.
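Because the signatures match QKeras, quantized models can be defined with the familiar QKeras-style API. The sketch below uses the standard QKeras names (QDense, QActivation, quantized_bits, quantized_relu); the import path shown is the QKeras one and is an assumption here, as the THANNA Quantizer package may be imported under a different name in your installation.

```python
# Sketch of QKeras-style usage; the THANNA Quantizer keeps these
# signatures, but the import path below is an assumption.
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

inputs = Input(shape=(28, 28))
x = Flatten()(inputs)
# 4-bit weights and biases via the quantized_bits quantizer
x = QDense(64,
           kernel_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
           bias_quantizer=quantized_bits(bits=4, integer=0, alpha=1))(x)
x = QActivation(quantized_relu(4))(x)  # 4-bit quantized ReLU activation
outputs = QDense(10,
                 kernel_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
                 bias_quantizer=quantized_bits(bits=4, integer=0, alpha=1))(x)
model = Model(inputs=inputs, outputs=outputs)
model.summary()
```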