Scaled Neural Indirect Predictor
Daniel A. Jiménez
Department of Computer Science
The University of Texas at San Antonio
Basic Idea
• Predict selected bits of the target address
• Attempt to match these bits to known targets
• The target with the minimum Hamming distance is the prediction
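The matching step above can be sketched in a few lines. This is a minimal illustration, not the predictor's actual implementation: it assumes the predicted bits are packed into an integer, the candidate targets come from a BTB-like table, and a bit mask marks which positions were predicted.

```python
def hamming(a, b, mask):
    # Count differing bits, restricted to the predicted bit positions.
    return bin((a ^ b) & mask).count("1")

def predict_target(predicted_bits, candidates, mask):
    # Choose the known target whose selected bits are closest
    # (minimum Hamming distance) to the predicted bits.
    return min(candidates, key=lambda t: hamming(t, predicted_bits, mask))
```

For example, with predicted bits `0b1011` and candidates `[0b1000, 0b1010, 0b0011]` under a full mask, the closest candidate `0b1010` (one bit away) is chosen.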
Components
• Tagless set-associative memory like a BTB
  • Indexed by bits of the branch address
  • Filled with branch targets, with LRU replacement
• Predictors
  • Each predicts one bit of the target
  • The SNAP predictor provides good accuracy
  • Use conditional branch path/pattern history
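The first component, the tagless set-associative target table, can be sketched as follows. The set count and associativity here are illustrative assumptions, not values from the talk; `OrderedDict` insertion order serves as the LRU ordering.

```python
from collections import OrderedDict

class TargetTable:
    """Tagless set-associative table of branch targets with LRU
    replacement (a sketch; parameters are illustrative)."""
    def __init__(self, num_sets=256, ways=4):
        self.num_sets = num_sets
        self.ways = ways
        # One LRU-ordered set per index; keys are the stored targets.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def index(self, branch_pc):
        # Index by low-order bits of the branch address; tagless,
        # so no tag is stored or checked.
        return branch_pc % self.num_sets

    def insert(self, branch_pc, target):
        s = self.sets[self.index(branch_pc)]
        if target in s:
            s.move_to_end(target)      # refresh LRU position
        else:
            if len(s) >= self.ways:
                s.popitem(last=False)  # evict least recently used
            s[target] = True

    def candidates(self, branch_pc):
        # Known targets in this branch's set; these are the entries
        # matched against the predicted bits by Hamming distance.
        return list(self.sets[self.index(branch_pc)])
```

A usage example: inserting three targets into a 2-way set evicts the least recently used one, so only the two most recent remain as candidates.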
Tricks
• Use some of the same tricks used for OH-SNAP
  • Training coefficient vectors
  • Adaptively trained threshold
  • Separate bias weights from correlating weights
• All predictors use the same tables of weights
• Only predict certain lower-order bits
  • In our case, matching the following mask:
  • ...00000010110101111111111010
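The last trick, predicting only the masked lower-order bits, can be made concrete with a short sketch. The mask value is the one shown on the slide (the leading "..." indicates omitted higher-order zeros); the helper function is an illustration of how the mask selects bit positions, not code from the predictor.

```python
# Low-order target bits the per-bit predictors cover (mask from the slide).
MASK = 0b00000010110101111111111010

def predicted_positions(mask):
    # Bit positions a one-bit predictor is responsible for; all other
    # target bits are taken unchanged from the matched table entry.
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]
```

Each set bit in the mask costs one one-bit predictor, so the mask trades prediction coverage against hardware budget.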
Short Presentation
• The abstract idea of Hamming-distance-based target prediction is very simple
• The intelligence in this indirect predictor is in the one-bit predictors
  • OH-SNAP in our case
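To make the one-bit predictors concrete, here is a minimal perceptron-style sketch in the spirit of the tricks listed earlier (separate bias weight, threshold-gated training); it is an assumption-laden illustration, not OH-SNAP's actual design. History elements are encoded as +1/-1.

```python
class BitPredictor:
    """Perceptron-style predictor for one target bit.
    A sketch only; OH-SNAP's real structure differs."""
    def __init__(self, history_len=16, theta=20):
        self.bias = 0                   # bias weight, kept separate
        self.weights = [0] * history_len  # correlating weights
        self.theta = theta  # training threshold (adaptively tuned in practice)

    def output(self, history):
        # Dot product of correlating weights with path/pattern
        # history, plus the separate bias weight.
        return self.bias + sum(w * h for w, h in zip(self.weights, history))

    def predict(self, history):
        return 1 if self.output(history) >= 0 else 0

    def train(self, history, actual_bit):
        y = self.output(history)
        t = 1 if actual_bit else -1
        # Update on a misprediction or a low-confidence output.
        if (y >= 0) != (actual_bit == 1) or abs(y) <= self.theta:
            self.bias += t
            self.weights = [w + t * h
                            for w, h in zip(self.weights, history)]
```

One such predictor is needed per masked target bit; in the real design all of them share the same weight tables, which this per-object sketch does not model.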