Develop a hardware accelerator for the convolutional layers of neural networks using a weight-stationary design. Implement convolution filters such as Sobel edge detection for image processing. The framework includes Bluespec files and C++ scripts for testing and visualization. Tasks involve setting weights, convolving streamed data, and handling input/output. Guidelines for zero padding and data handling are provided for simplicity. The goal is an efficient convolution pipeline suitable for neural network workloads.
CS 295: Modern Systems
Lab 2: Convolution Accelerators
Sang-Woo Jun, 2019 Spring
Lab 2 Goals
• Implement a hardware accelerator for convolution
• Ostensibly useful for the convolution layers of a neural network
• Weight stationary design (sketched in code below)
• Template and testbench framework written in Bluespec

[Figure: weight-stationary convolution for one row. The partial sum from the previous activation row (if any) flows in; the resulting partial sum is stored for the next activation row, or emitted as the final sum.]
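The row-oriented weight-stationary dataflow is easiest to see in software first. Below is a minimal C++ sketch (not part of the provided framework; the function name and types are made up for illustration): the weights of one filter row stay fixed while one padded input row streams past, and the partial sums produced are either carried into the pass over the next activation row or emitted as the final convolution output.

```cpp
#include <vector>

// One weight-stationary pass: the weights of a single filter row stay fixed
// ("stationary") while one padded input row streams past. Each output column
// accumulates the dot product of the filter row with the corresponding input
// window, on top of the partial sums carried over from the previous pass.
// partialIn must be empty (first pass) or have outWidth entries.
std::vector<float> convolveRow(const std::vector<float>& inputRow,   // padded input row
                               const std::vector<float>& filterRow,  // stationary weights
                               const std::vector<float>& partialIn)  // partial sums so far
{
    size_t outWidth = inputRow.size() - filterRow.size() + 1;
    std::vector<float> partialOut(outWidth, 0.0f);
    for (size_t col = 0; col < outWidth; col++) {
        float acc = partialIn.empty() ? 0.0f : partialIn[col];
        for (size_t k = 0; k < filterRow.size(); k++)
            acc += filterRow[k] * inputRow[col + k];
        partialOut[col] = acc;  // stored for the next activation row, or the final sum
    }
    return partialOut;
}
```

Calling this once per filter row, with each pass consuming the previous pass's partial sums and the next activation row, produces one full output row; in the accelerator the same idea maps to a row of PEs with the partial sums buffered between activation rows.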
Intuitive Example of a Convolution: Sobel Edge Detection Filter
• Simple convolution filter for emphasizing edges in an image
• Two convolutions sweep over the input image
• The outputs of the two filters are combined after the convolutions are completed (see the code sketch below)
Images from Wikipedia
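As a concrete software reference, here is a plain C++ sketch of Sobel edge detection on a greyscale image (a golden model for checking results, not the Bluespec implementation; combining the two filter outputs via the gradient magnitude is one common choice):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Software golden model of Sobel edge detection on a w*h greyscale image.
// The two 3x3 kernels respond to horizontal and vertical gradients; their
// per-pixel outputs are combined using the gradient magnitude.
std::vector<uint8_t> sobel(const std::vector<uint8_t>& img, int w, int h)
{
    const int gx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    const int gy[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };
    std::vector<uint8_t> out(w * h, 0);           // border pixels stay zero
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sx = 0, sy = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int p = img[(y + dy) * w + (x + dx)];
                    sx += gx[dy + 1][dx + 1] * p;
                    sy += gy[dy + 1][dx + 1] * p;
                }
            int mag = (int)std::sqrt((double)(sx * sx + sy * sy));
            out[y * w + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
    return out;
}
```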
Framework Details
• https://github.uci.edu/swjun/cs295_19s
• Lab 2 files are in the sub-directory "convolution"
• Two Bluespec files
  • Top.bsv: Testbench framework. No need to edit
  • Filter.bsv: Convolution filter implementation. You need to edit this
• Three C++/h files
  • bdpi.cpp: Testbench framework that emulates data input/output
  • createbmp.cpp/h: Creates bmp files from the edge-detected output
• One data file
  • datain.bin: 64 32*32 images from the CIFAR-10 dataset
In Filter.bsv
• mkFilter exposes the parameterized FilterIfc interface
  • Parameters: input channels, filter count, filter width
• setWeight fills in a multidimensional vector "weights"
• setWeightDone is called after the testbench has entered all weights
• put is called with an input data vector
• get is called to retrieve convolved data
(A call-sequence sketch follows below)
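The ordering matters: all weights are loaded first, frozen with setWeightDone, and only then are activations streamed through put/get. Below is a rough C++ golden model of those semantics; the class name, argument types and order, and the flattened indexing are illustrative assumptions, not the actual signatures in Filter.bsv.

```cpp
#include <cassert>
#include <vector>

// Illustrative golden model of the filter interface semantics (assumed shape,
// not the real Bluespec signatures): weights are written one at a time, frozen
// by setWeightDone, and only afterwards are pixels streamed with put/get.
class FilterModel {
public:
    FilterModel(int inChannels, int numFilters, int filterWidth)
        : nChan(inChannels), nFilt(numFilters), fWidth(filterWidth),
          weights(numFilters * inChannels * filterWidth * filterWidth, 0.0f),
          weightsDone(false) {}

    void setWeight(int f, int c, int y, int x, float w) {
        assert(!weightsDone);  // weights may only be written before setWeightDone
        weights[((f * nChan + c) * fWidth + y) * fWidth + x] = w;
    }
    void setWeightDone() { weightsDone = true; }

    void put(const std::vector<float>& pixelChannels) {
        assert(weightsDone && (int)pixelChannels.size() == nChan);
        // ...row buffering and the actual convolution would happen here...
    }
    bool get(std::vector<float>& outPixel) {
        (void)outPixel;
        return false;          // placeholder: a real model would return a convolved pixel
    }

private:
    int nChan, nFilt, fWidth;
    std::vector<float> weights;  // flattened [filter][channel][y][x]
    bool weightsDone;
};
```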
Things To Keep in Mind
• put is called repeatedly with a vector of channels per pixel (R, G, B)
• There is no delineation between images; all images are 32*32 pixels
• Each pixel is first converted to greyscale by simply averaging its R, G, and B values (this is provided in the code)
• You need to add zero padding around the images so that the output has the same dimensions as the input
  • For simplicity, just add zero pixels around the smaller convolved output image (see the padding sketch below)
• For the Sobel filter: the input is 32*32*3 per image, and the output should be 32*32*2 per image
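The dimension bookkeeping is easy to get wrong: a 3*3 filter applied to a 32*32 image without input padding yields a 30*30 result, which then gets a one-pixel border of zeros added back to return to 32*32. A small C++ sketch of that post-convolution padding and of the greyscale averaging (an illustration, not framework code):

```cpp
#include <vector>

// Pads a (w-2)*(h-2) convolved image (from a 3x3 filter with no input padding)
// back up to w*h by surrounding it with a one-pixel border of zeros.
std::vector<float> zeroPad(const std::vector<float>& conv, int w, int h)
{
    std::vector<float> padded(w * h, 0.0f);      // border pixels stay zero
    for (int y = 0; y < h - 2; y++)
        for (int x = 0; x < w - 2; x++)
            padded[(y + 1) * w + (x + 1)] = conv[y * (w - 2) + x];
    return padded;
}

// Greyscale conversion as described in the lab: the three channels are simply
// averaged (the exact arithmetic in the provided code may differ slightly).
inline float toGrey(float r, float g, float b) { return (r + g + b) / 3.0f; }
```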
Tips
• Stream the pixels through N row buffers, where N is the width of the filter
  • mkBRAMFIFOs can be used for these
• For the Sobel filter, each PE box in Fig. 3 should ideally implement 6 PEs: one per filter column (3) for each of the two filters
(See the row-buffer sketch below)
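One way to picture the row buffers: each incoming pixel enters the newest row, and the buffers together always hold the last N image rows, so every new pixel yields a column of N vertically aligned pixels for the filter windows. A C++ sketch of that structure (illustrative only; in Bluespec the rows would sit in BRAM-backed FIFOs rather than a std::deque):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Software sketch of N row (line) buffers over a W-pixel-wide pixel stream.
// Pushing one pixel returns the N vertically aligned pixels ending at it
// (oldest row first) once enough of the image has been buffered.
class RowBuffers {
public:
    RowBuffers(int n, int width) : N(n), W(width) {}

    bool push(float pixel, std::vector<float>& column) {
        history.push_back(pixel);
        if ((int)history.size() > N * W) history.pop_front();   // drop the oldest pixel
        if ((int)history.size() < (N - 1) * W + 1) return false; // not N rows deep yet
        column.resize(N);
        for (int k = 0; k < N; k++)              // k rows above the newest pixel
            column[N - 1 - k] = history[history.size() - 1 - (size_t)k * W];
        return true;
    }

private:
    int N, W;
    std::deque<float> history;  // the most recent N*W streamed pixels
};
```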
Development Environment
• Server at "orthanc.ics.uci.edu"
  • Login details will be emailed to you
  • Bluespec and its licenses are already installed
• vim syntax files for Bluespec are included with this document
  • Copy them to the respective directories in ~/.vim
• Required environment variables are in ~/setupbsv.sh
  • Already sourced in .bashrc
Building And Running
• "make"
• ./bsim/obj/bsim
• Output: "outXX.bmp"
  • 32*32 pixel greyscale
  • Currently just the greyscale of the input image
  • Suggest saving these for comparison
Submission
• Same as Lab 1
• Submit Filter.bsv and a short write-up (one or more paragraphs is fine) describing how you did it
• Please email it to me with the title "[CS295] lab2 – ucinetid"