RC tutorial David Verstraeten
Reservoir Computing • Reservoir connections: random and fixed • Readout connections: trained
Steps in training and testing • Create random weight matrices • Input to reservoir • Reservoir to reservoir • (Output to reservoir, for feedback tasks) • Scale the weight matrices globally to the desired values • Feed all input samples to the reservoir • Collect the reservoir states • Train the readout weights using linear regression or ridge regression
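The steps above can be sketched end to end. This is a minimal Python/NumPy illustration of the recipe, not toolbox code; the dimensions, scaling values, and the delayed-input target are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1 input, 100 reservoir units, 1 output.
n_in, n_res = 1, 100

# Step 1: create random weight matrices (input->reservoir, reservoir->reservoir).
W_in = rng.uniform(-1, 1, (n_res, n_in))
W_res = rng.uniform(-1, 1, (n_res, n_res))

# Step 2: scale them globally (here: input scale 0.5, spectral radius 0.9).
W_in *= 0.5
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()

# Step 3: feed all input samples to the reservoir and collect the states.
def run_reservoir(u):
    """u: (T, n_in) input signal -> (T, n_res) state matrix."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W_res @ x)
        states[t] = x
    return states

u = rng.uniform(-1, 1, (200, n_in))   # dummy input sample
y = np.roll(u, 1, axis=0)             # dummy target: the input delayed by one step
A = run_reservoir(u)

# Step 4: train the readout weights with ridge regression.
lam = 1e-6
W_out = np.linalg.solve(A.T @ A + lam * np.eye(n_res), A.T @ y)
y_hat = A @ W_out                     # readout predictions
```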
Scaling the reservoir matrix: spectral radius • Spectral radius = largest absolute eigenvalue of the reservoir weight matrix • LTI system: stability requires the poles inside the unit circle • RC system: the echo state property is usually ensured by scaling the spectral radius below 1
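Because eigenvalues scale linearly with the matrix, setting a desired spectral radius is a single rescaling. A short NumPy sketch (matrix size and target radius are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((50, 50))

# Spectral radius = largest absolute eigenvalue of W.
rho = np.abs(np.linalg.eigvals(W)).max()

# Rescale W so its spectral radius equals the desired value.
desired = 0.95
W_scaled = W * (desired / rho)
```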
Training [Figure: S training samples of M-dimensional inputs drive the N-unit reservoir; the collected states form matrix A/B and are mapped to the P-dimensional targets T1 … TS]
Linear/ridge regression • Collect the reservoir states in A and the desired outputs in B; the readout weights W solve A W = B • Linear regression: W = A⁺ B (Moore–Penrose pseudoinverse) • Ridge regression: W = (AᵀA + λI)⁻¹ AᵀB
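Both solutions are one-liners in NumPy; for λ → 0 they coincide. The state and target matrices below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30))   # reservoir states (samples x units)
B = rng.standard_normal((200, 2))    # desired outputs  (samples x outputs)

# Linear regression: least-squares solution of A @ W = B via the pseudoinverse.
W_lin = np.linalg.pinv(A) @ B

# Ridge regression: W = (A'A + lambda*I)^-1 A'B; lambda penalizes large weights.
lam = 1.0
W_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)
```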
Pattern generation tasks: training • Fixed connections: input and reservoir • Trained connections: readout • The desired output is teacher-forced back into the reservoir
Pattern generation: generation/testing • Fixed connections: input and reservoir • Trained connections: readout • Warmup: the reservoir is driven with the teacher-forced output • Freerun: the readout's own output is fed back into the reservoir
Global optimization parameters • Input scaling • Reservoir scaling (spectral radius) • Leak rate • Feedback scaling • Regularization parameter (through cross-validation) • Not optimized: • connection fraction (matters only for spiking neurons) • reservoir size
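To see where these parameters enter, here is one common formulation of a leaky-integrator reservoir update (a Python sketch, not the toolbox's exact equations; the scaling parameters are assumed to be baked into the weight matrices):

```python
import numpy as np

def reservoir_step(x, u, y_prev, W_in, W_res, W_fb, leak_rate):
    """One leaky-integrator reservoir update.

    Input scaling, spectral radius, and feedback scaling are assumed
    to already be applied to W_in, W_res, and W_fb respectively;
    leak_rate in (0, 1] blends the old state with the new activation.
    """
    pre = W_in @ u + W_res @ x + W_fb @ y_prev
    return (1 - leak_rate) * x + leak_rate * np.tanh(pre)
```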
Outline • Rationale, usage and structure • Topology • Datasets • Cross-validation • Parameter sweeps and custom scripts
Rationale • Reservoir Computing toolbox: • Box of tools for • Quick-and-dirty experiments • Large scale parameter sweeps and optimizations • Modular setup: function hooks for customization • Reservoir types • Learning rules • Scoring and evaluation methods
Getting started • Download from http://snn.elis.ugent.be/rct • Unpack, go to the directory $RCT_ROOT and type: >> install • For a 'Hello world' experiment, type: >> rc_simulate
Usage • Default settings • A good starting point for many experiments • Contained in $RCT_ROOT/default_settings/default_*.m • Look at these scripts: they give an overview of the toolbox's functionality
Custom settings • Two ways of overriding the defaults: • Command line >> dataset_generation_function = @my_dataset_function; >> rc_simulate; • Custom configuration file >> custom_configuration = 'my_conf'; >> rc_simulate; • Example: analog_speech.m • Other examples: $RCTROOT/examples/configurations
Outline • Rationale, usage and structure • Topology • Datasets • Cross-validation • Parameter sweeps and custom scripts
Standard topology for classification/regression [Figure: input, bias, and reservoir layers feeding a readout; connections marked mandatory or optional]
Connectivity matrix between layers [Figure: matrix of possible connections between the input, bias, reservoir, and readout layers]
Topology datastructure in the toolbox • topology.layer(1,n) and topology.conn(n,n) • with n = number of layers • Default settings: $RCTROOT/default_settings/default_topology.m • Layer 1: input • Layer 2: bias • Layer 3: reservoir • Layer 4: readout
Topology datastructure in the toolbox • topology.layer(1:n). • size: integer • is_trained_offline: boolean • is_trained_online: boolean • node_type: string • training_function: function pointer • nonlin_functions: array of function pointers
Topology datastructure in the toolbox • topology.layer(1:n). • init_simulation: function pointer • finish_simulation: function pointer • dt: float • regul_param: float • scoring: function pointer
Topology datastructure in the toolbox • topology.conn(1:n,1:n). • is_active: boolean • creation_pipeline: array of function pointers • scaling_factor: float • conn_frac: float [0,1] • discrete_set: float array
Topology creation pipeline • Array of function pointers which pass weight matrices • Generation functions • @gen_rand, @gen_1D, @gen_load • Assignment functions • @assign_rand, @assign_discrete • Scaling functions • @scale_specrad, @scale_constant
Topology creation pipeline • Example (default for reservoir to reservoir connection): • topology.conn(3,3).creation_pipeline= {@gen_rand, @assign_randn, @scale_specrad} • Can be customized! Function hooks
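The pipeline is just function composition over a weight matrix: each stage receives the matrix produced by the previous one. An illustrative Python analogue (the function names mirror the toolbox's hooks, but the implementations here are hypothetical sketches):

```python
import numpy as np

rng = np.random.default_rng(3)

def gen_rand(shape, conn_frac=0.2):
    """Generation stage: create a sparse 0/1 connectivity mask."""
    return (rng.random(shape) < conn_frac).astype(float)

def assign_randn(W):
    """Assignment stage: give every existing connection a Gaussian weight."""
    return W * rng.standard_normal(W.shape)

def scale_specrad(W, rho=0.9):
    """Scaling stage: rescale to the desired spectral radius."""
    return W * (rho / np.abs(np.linalg.eigvals(W)).max())

def run_pipeline(shape, pipeline):
    """Thread a weight matrix through the stages, first one creates it."""
    W = pipeline[0](shape)
    for stage in pipeline[1:]:
        W = stage(W)
    return W

W = run_pipeline((100, 100), [gen_rand, assign_randn, scale_specrad])
```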
Outline • Rationale, usage and structure • Topology • Datasets • Cross-validation • Parameter sweeps and custom scripts
Dataset generation • Function hook • Default: dataset_generation_function = @dataset_narma_10(input_dt, input_maxtime, n_samples) • Signature: function [inputs outputs] = my_dataset_function(…) • inputs = cell(1, nr_samples); inputs{:} = matrix(M,:), with M = input dimensionality • outputs = cell(1, nr_samples); outputs{:} = matrix(P,:), with P = output dimensionality, or [] for signal generation tasks • Example: 10th order NARMA
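For the 10th-order NARMA example, a toolbox-independent Python generator following the commonly used formulation (the toolbox's exact constants may differ); the sample count and length are arbitrary:

```python
import numpy as np

def narma10(n_steps, rng):
    """One input/output pair for the 10th-order NARMA task:
    y(t+1) = 0.3 y(t) + 0.05 y(t) sum_{i=0..9} y(t-i)
             + 1.5 u(t-9) u(t) + 0.1, with u ~ U[0, 0.5]."""
    u = rng.uniform(0, 0.5, n_steps)
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

# Mimic the [inputs outputs] cell-array convention with Python lists.
rng = np.random.default_rng(7)
inputs, outputs = zip(*(narma10(500, rng) for _ in range(10)))
```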
Data struct • data.layer(1:n). • r: required / teacher-forced states • s: simulated states
Outline • Rationale, usage and structure • Topology • Datasets • Cross-validation • Parameter sweeps and custom scripts
Training and testing • cross_validate: training and testing using cross-validation • cross_validate_grid: training and testing using cross-validation, with optimization of the regularization parameter
Cross-validate • Example: three-fold cross-validation with three samples • Fold 1: train on samples 1 and 2, test on sample 3 → error of fold 1 • Fold 2: train on samples 1 and 3, test on sample 2 → error of fold 2 • Fold 3: train on samples 2 and 3, test on sample 1 → error of fold 3
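The fold logic sketched above fits in a few lines of toolbox-independent Python (the `train_fn`/`error_fn` hooks are hypothetical stand-ins for the toolbox's training and scoring functions):

```python
import numpy as np

def cross_validate(samples, targets, train_fn, error_fn, n_folds=3):
    """Leave-one-fold-out cross-validation over a list of samples.

    train_fn(samples, targets) -> model; error_fn(model, x, y) -> error.
    Returns the error averaged over all folds.
    """
    folds = np.array_split(np.arange(len(samples)), n_folds)
    errors = []
    for test_idx in folds:
        train_idx = [i for i in range(len(samples)) if i not in test_idx]
        model = train_fn([samples[i] for i in train_idx],
                         [targets[i] for i in train_idx])
        errors.append(np.mean([error_fn(model, samples[i], targets[i])
                               for i in test_idx]))
    return np.mean(errors)
```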
Cross-validation • Function hook: train_params.cross_val_set_function • Default: @random_cross_val • Other possibilities: • cross_val_only_training: no test set • no_cross_val: simple train and test set • random_freerun_cross_val: cross-validation for freerun tasks [Figure: fold layout for freerun cross-validation]
Cross-validate with gridsearch • For optimization of the regularization parameter λ • Within each fold, the training set is split again into subfolds; each candidate λ1 … λn is evaluated on the validation subfolds, and the best λopt is used to retrain before testing on the fold's test set [Figure: nested cross-validation with subfolds inside each fold]
Outline • Rationale, usage and structure • Topology • Datasets • Cross-validation • Parameter sweeps and custom scripts
Parameter sweeps • parameter_ranges = struct('name', {}, 'range', {}); • Examples: • 1D sweep: parameter_ranges = struct('name', {'topology.layer(3).size'}, 'range', {10:10:100}); • 2D sweep (all combinations!): parameter_ranges = struct('name', {'topology.layer(3).size', 'topology.layer(1).scale_factor'}, 'range', {10:10:100, .1:.1:1});
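"All combinations" means the sweep is a Cartesian product of the ranges, so the number of runs multiplies quickly. A toolbox-independent Python sketch of the same 2D grid (`run_experiment` is a hypothetical placeholder for one toolbox run):

```python
from itertools import product

# The same two ranges as the 2D sweep above.
sizes = range(10, 101, 10)                  # 10, 20, ..., 100
scales = [s / 10 for s in range(1, 11)]     # 0.1, 0.2, ..., 1.0

# Every combination of the two parameters: 10 x 10 = 100 runs.
grid = list(product(sizes, scales))

def run_experiment(size, scale):
    """Placeholder for one toolbox run with these parameter values."""
    return {"size": size, "scale": scale}

results = [run_experiment(n, s) for n, s in grid]
```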
Parameter sweeps • Results are saved in output_directory • Set save_results=true! • Plot results of sweep • plot_error(output_directory) • Get variables from simulation • get_saved_data(output_directory, variables)
Custom scripts • The toolbox is a box of tools you can use to build your own experiments • Good starting points: • $RCTROOT/examples/tutorial/tut1.m • $RCTROOT/examples/framework/rc_simulate_job.m