Prediction of Rare Volatility Surface Patches using Artificial Neural Network
Regression-Based Model vs. Neural-Network-Based Model
• A regression-based model does not treat infrequently available patches as less reliable
• A neural-network-based model:
  • can account for the frequency of the data by varying the learning rate between observations
  • is easily implementable on today's fast computers
  • can model the underlying relationship between input and output data sets better than regression-based models
Outline of the Algorithm
1. Generate data with missing values.
2. Fill the missing values with a regression algorithm to construct the data set.
3. Train the neural network on this data set with a modified learning rate, using the incremental (online) approach.
4. Simulate the trained network on a testing data set and evaluate the results.
Generation of Data
Data format: V = (v1, v2, v3, …, v27)
1. v1..v20 represent frequently available surface patches
2. v21..v22 represent macro-economic factors
3. v23..v27 represent infrequently available surface patches
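As a small sketch, the layout of one observation vector can be expressed with index slices (the slice names are illustrative, not from the deck):

```python
import numpy as np

# Hypothetical index layout for one observation V = (v1, ..., v27);
# the slice names are illustrative.
FREQUENT = slice(0, 20)   # v1..v20: frequently available surface patches
MACRO    = slice(20, 22)  # v21..v22: macro-economic factors
RARE     = slice(22, 27)  # v23..v27: infrequently available surface patches

rng = np.random.default_rng(0)
V = rng.normal(size=27)   # one synthetic observation

print(V[FREQUENT].shape, V[MACRO].shape, V[RARE].shape)   # (20,) (2,) (5,)
```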
Generation of Data (contd.)
• Facts:
  • No real-life data is available
  • Patches and macro-economic factors correspond to a volatility surface
• Data generation using simulated surfaces:
  • Generate surfaces (with random parameters) to simulate volatility surfaces, and
  • Obtain data (sets of Vs) from these generated surfaces
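A minimal sketch of this generation step, assuming a quadratic-smile parametric surface (the functional form, parameter ranges, and 5×5 grid are assumptions, since the deck does not specify how the surfaces are parameterized):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_surface():
    """One simulated volatility surface on a 5x5 (maturity x strike) grid.
    The quadratic-smile form and parameter ranges are assumptions."""
    a = rng.uniform(0.10, 0.30)            # base volatility level
    b = rng.normal(0.0, 0.05)              # skew
    c = rng.uniform(0.5, 1.5)              # smile curvature
    k = np.linspace(-1.0, 1.0, 5)          # log-moneyness grid
    t = np.linspace(0.25, 2.0, 5)          # maturities (years)
    K, T = np.meshgrid(k, t)
    return a + b * K + c * K**2 / np.sqrt(T)

def make_observation():
    """Flatten a surface into V: 20 frequent patches, 2 macro factors,
    and 5 infrequent patches taken from the same surface."""
    patches = simulate_surface().ravel()   # 25 patch values
    macro = rng.normal(size=2)             # stand-in macro-economic factors
    return np.concatenate([patches[:20], macro, patches[20:]])

data = np.array([make_observation() for _ in range(200)])
print(data.shape)   # (200, 27)
```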
Generating Data with Missing Values
• Randomly remove values from v23..v27 at the following rates
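A sketch of the masking step. The per-column missing rates below are illustrative assumptions; the deck's actual rate table is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 27))

# Illustrative missing rates for v23..v27 (assumed values, not the deck's).
miss_rates = {22: 0.5, 23: 0.6, 24: 0.7, 25: 0.8, 26: 0.9}

masked = data.copy()
for col, rate in miss_rates.items():
    drop = rng.random(len(masked)) < rate   # Bernoulli mask per observation
    masked[drop, col] = np.nan

print(np.isnan(masked).mean(axis=0)[22:])   # empirical missing rates
```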
Regularized EM Algorithm
Steps:
1. Estimate the mean values and covariance matrices of the incomplete data sets.
2. Impute the missing values in the incomplete data sets.
• Based on iterated analyses of linear regressions of variables with missing values on variables with available values
• Regression coefficients are estimated by ridge regression
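A simplified sketch of the imputation loop: each column with missing entries is repeatedly regressed on the remaining columns with ridge-regularized coefficients, and its missing entries are re-imputed. The full regularized-EM algorithm also re-estimates the covariance matrix each iteration; that refinement is omitted here.

```python
import numpy as np

def ridge_em_impute(X, n_iter=20, alpha=1.0):
    """Iteratively impute NaNs: regress each incomplete column on all other
    columns (ridge regression), then refill its missing entries."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])       # initial mean fill
    for _ in range(n_iter):
        for j in np.where(miss.any(axis=0))[0]:
            ok = ~miss[:, j]                              # rows where v_j observed
            others = [k for k in range(X.shape[1]) if k != j]
            A = np.column_stack([np.ones(len(X)), X[:, others]])
            # ridge regression of column j on all other columns
            beta = np.linalg.solve(A[ok].T @ A[ok] + alpha * np.eye(A.shape[1]),
                                   A[ok].T @ X[ok, j])
            X[miss[:, j], j] = A[miss[:, j]] @ beta       # re-impute
    return X

rng = np.random.default_rng(2)
Z = rng.normal(size=(200, 27))
Z[:, 26] = Z[:, 0] + 0.01 * rng.normal(size=200)          # correlated rare patch
Z[rng.random(200) < 0.7, 26] = np.nan
filled = ridge_em_impute(Z)
print(np.isnan(filled).any())   # False
```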
Data after the missing values are filled by the Regularized EM Algorithm
Structure of the Neural Network
Weight, Gradient & Learning Rate (LR)
(Figure: error as a function of the weight of a single link)
Learning Rate of an Observation
• Learning rate of an observation = lr × normalized multiplier of the observation, where lr is a constant.
• Two ways to calculate the multiplier of an observation:
  1. Frequency norm
  2. Log frequency norm
Learning Rate of an Observation (contd.): Frequency Norm
• Multiplier of an observation, mfn = (frequency of the observation's output node) / (total number of observations)
• The frequency of an output node is the number of observations in which a value for that output node is available (i.e., not missing)
• mfn lies in [0, 1]
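A sketch of the frequency-norm multiplier. Two details are assumptions, since the deck only states the definition of node frequency and the [0, 1] range: normalizing each node's frequency by the total observation count, and averaging the per-node values over the outputs present in an observation.

```python
import numpy as np

def frequency_norm(targets):
    """Per-observation frequency-norm multiplier mfn.
    `targets` is an (n_obs, n_outputs) array with NaN where an output value
    is missing. Normalization and averaging choices are assumptions."""
    n_obs = targets.shape[0]
    present = ~np.isnan(targets)
    node_mult = present.sum(axis=0) / n_obs       # per-node frequency norm, in [0, 1]
    # combine: mean node multiplier over this observation's present outputs
    return (present * node_mult).sum(axis=1) / present.sum(axis=1)

targets = np.array([[1.0, np.nan],
                    [2.0, 3.0],
                    [4.0, np.nan]])
print(frequency_norm(targets))   # multipliers: 1, 2/3, 1
```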
Learning Rate of an Observation (contd.): Log Frequency Norm
• Map mfn into [lr/2, 2·lr] using logarithmic scaling
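One mapping consistent with the stated range is a geometric interpolation, linear in log-space, so that mfn = 0 gives lr/2 and mfn = 1 gives 2·lr. The deck does not show its exact formula; this form is an assumption.

```python
import math

def log_frequency_norm(mfn, lr=0.01):
    """Map mfn in [0, 1] onto [lr/2, 2*lr] on a logarithmic scale.
    The exact mapping is an assumption consistent with the stated range."""
    return lr * 2.0 ** (2.0 * mfn - 1.0)

print(log_frequency_norm(0.0), log_frequency_norm(0.5), log_frequency_norm(1.0))
# 0.005 0.01 0.02
```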
Incorporating mfn and mlfn into NN Training Using Incremental (Online) Mode
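A minimal sketch of the incremental-mode update with a per-observation learning rate. It assumes the network maps the 22 always-available inputs (v1..v22) to the 5 rare outputs (v23..v27); the layer sizes, tanh activation, and synthetic targets are illustrative, since the deck does not specify the architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 22 inputs (v1..v22), one hidden layer, 5 outputs
# (v23..v27). The deck does not specify the architecture.
n_in, n_hid, n_out = 22, 8, 5
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)

def train_step(x, y, lr_obs):
    """One incremental (online) update with this observation's own learning
    rate lr_obs = lr * multiplier. Missing targets (NaN) contribute no
    gradient."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)                    # hidden activations
    out = h @ W2 + b2                           # linear output layer
    err = np.where(np.isnan(y), 0.0, out - y)   # mask missing outputs
    dh = (err @ W2.T) * (1.0 - h**2)            # backprop through tanh
    W2 -= lr_obs * np.outer(h, err); b2 -= lr_obs * err
    W1 -= lr_obs * np.outer(x, dh);  b1 -= lr_obs * dh
    return float(np.sum(err**2))

# Synthetic data: outputs are a simple function of the inputs.
X = rng.normal(size=(50, n_in))
Y = 0.1 * X[:, :n_out]
mult = rng.uniform(0.2, 1.0, size=50)           # e.g. frequency-norm multipliers

epoch_loss = []
for epoch in range(100):
    total = sum(train_step(X[i], Y[i], 0.01 * mult[i]) for i in range(50))
    epoch_loss.append(total)
print(epoch_loss[0] > epoch_loss[-1])   # training reduces the error
```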
Unmodified (Constant) Learning Rate
Modified Learning Rate with Frequency Norm
Modified Learning Rate with Log Frequency Norm
Simulation Results
• Three different methods were tested:
  1. Modified learning rate with frequency norm
  2. Modified learning rate with log frequency norm
  3. ANN with a constant learning rate
• Each method was run 10 times for 100 epochs.
• The results show that the modified learning rate with frequency norm consistently outperforms the other two methods.