ARTIFICIAL NEURAL NETWORKS vs LINEAR REGRESSION IN A FLUID MECHANICS AND CHEMICAL MODELLING PROBLEM: ELIMINATION OF HYDROGEN SULPHIDE IN A LAB-SCALE BIOFILTER, by G. Ibarra-Berastegi, A. Elias, R. Arias, A. Barona, UNIVERSITY OF THE BASQUE COUNTRY (SPAIN).
Hydrogen sulfide (H2S) is produced in industrial activities, with emission concentrations ranging from 5 to 70 ppmv. MAIN SOURCES of H2S: 1. Wastewater treatment 2. Paper and pulp manufacturing 3. Food processing
H2S: MAIN EFFECTS 1. Foul odour 2. Corrosive 3. Toxic air pollutant 4. Little information available on long-term exposure to low concentrations
A biological reactor (also known as a biofilter) is intended to eliminate pollutants by the action of microorganisms. The degradation of pollutants involves very complex mechanisms that are known to be highly non-linear. • Control & management: A MODEL IS NEEDED • Black box (statistical modelling) • Step-by-step modelling of each and every mechanism (mechanistic modelling)
AIM OF THIS STUDY: To model the performance of a biofilter for eliminating hydrogen sulphide (H2S) using a statistical or black-box modelling approach. Two candidate mathematical tools: 1. Neural Network (MLP) 2. Multiple Linear Regression (MLR) PERFORMANCE: statistical indicators at the 95% confidence level
LAB-SCALE BIOFILTER TO ELIMINATE H2S (schematic). Components: compressor, active carbon chamber, humidifiers, particle filter, mass flow controller, H2S supply, mixing chamber, condenser, biofilter (3 modules), flow meter, PC data acquisition.
Experiment: 194 days, with stationary changes every 24 h. Inputs at time T: H2S concentration (ppm) and flow (l min-1); residence time 3 s. Output at time T+24 h: H2S removal efficiency RE (%).
A NEED FOR BIOFILTER MANAGEMENT & CONTROL IN H2S DEGRADATION. TOOL: Why neural networks? THEY CAN DESCRIBE NON-LINEAR EFFECTS (FLUID MECHANICS, BIO-CHEMISTRY)
BIOFILTER MODELLING: "Black Box" approach, Neural Networks vs Multiple Linear Regression. INPUTS at time T: unit flow QU [1.5–4.7] min-1 and concentration C [25–346] ppm. OUTPUT at time T+24 h: biofilter H2S removal efficiency %E [37–100] %.
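As a sketch of the MLR half of the comparison, the mapping from (QU, C) at time T to %E at T+24 h can be fitted by ordinary least squares. The data below are synthetic stand-ins (not the study's measurements), and the coefficients are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical MLR sketch: E = b0 + b1*Qu + b2*C fitted by least squares.
# Input ranges match the slide; the "true" coefficients are invented.
rng = np.random.default_rng(0)
Qu = rng.uniform(1.5, 4.7, 100)   # unit flow, min^-1
C = rng.uniform(25, 346, 100)     # inlet H2S concentration, ppm
E = 90 - 4.0 * Qu - 0.05 * C + rng.normal(0, 1, 100)  # removal efficiency, %

X = np.column_stack([np.ones_like(Qu), Qu, C])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, E, rcond=None)    # estimated [b0, b1, b2]
E_hat = X @ beta                                 # fitted removal efficiencies
print(beta)
```

A single linear form like this is exactly what the non-linear MLP is meant to outperform when the degradation mechanisms deviate from linearity.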
NEURAL NETWORKS: BUILDING A NETWORK MEANS: • Selecting the type: MLP, RBF or GRNN. Transfer function (MLP): logistic, an S-shaped (sigmoid) curve with output in the range (0,1): F(x) = (1 + e^-x)^-1 • ARCHITECTURE: How many layers? How many nodes? • Estimating the parameters (weights and thresholds)
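The logistic transfer function named above can be written directly; a minimal sketch:

```python
import numpy as np

def logistic(x):
    """Logistic (sigmoid) transfer function: F(x) = 1 / (1 + e^-x), output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# S-shaped curve: large negative inputs map near 0, large positive near 1.
print(logistic(0.0))  # 0.5
```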
How to identify the best NN? (I) DATA & INPUTS • 1. Database (194 cases) split into training (103), validation (18) and test (73) subsets. TOOL: GENETIC ALGORITHM • 2. Meaningful inputs. TOOL: GENETIC ALGORITHM and GRNN
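The study used a genetic algorithm to choose the split; as a simpler stand-in, the 103/18/73 partition of the 194 cases can be sketched with a plain random shuffle:

```python
import numpy as np

# Hypothetical sketch: shuffle the 194 case indices and partition them into
# the training (103), validation (18) and test (73) subsets of the study.
# (The actual split was selected by a genetic algorithm, not at random.)
rng = np.random.default_rng(42)
idx = rng.permutation(194)
train_idx = idx[:103]       # used to fit the network parameters
val_idx = idx[103:121]      # used for early stopping and model selection
test_idx = idx[121:]        # held out for the final performance comparison
print(len(train_idx), len(val_idx), len(test_idx))  # 103 18 73
```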
How to identify the best NN? (II) ARCHITECTURES • 5 different types of NN were tested: • Multilayer Perceptron (MLP) with 1 & 2 hidden layers • Radial Basis Function (RBF) • GRNN (Generalized Regression Neural Network) Each type of NN may have different architectures
How to identify the best NN? (III) AVOID OVERFITTING If the fitting process progresses beyond a certain point, the NN, instead of learning the basic features of the variables being modelled, starts to learn particular aspects associated with the random noise. Cross-validation: as fitting progresses, the network error is calculated on the validation data set; when it reaches a minimum, fitting stops.
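The stopping rule described above can be sketched as a generic early-stopping loop; this is a hypothetical illustration (function names and the patience parameter are assumptions, not the study's code):

```python
# Early-stopping sketch: after each training epoch, measure the error on the
# validation set and stop once it has not improved for `patience` epochs.
def train_with_early_stopping(step, val_error, max_epochs=1000, patience=10):
    """`step()` performs one training epoch; `val_error()` returns the current
    validation-set error. Returns (best_epoch, best_error)."""
    best_err = float("inf")
    best_epoch = 0
    for epoch in range(1, max_epochs + 1):
        step()
        err = val_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # validation error stopped improving: halt to avoid overfitting
    return best_epoch, best_err

# Toy run: validation error falls until epoch 20, then rises (overfitting).
errors = [abs(e - 20) + 1 for e in range(1, 200)]
state = {"epoch": 0}
ep, err = train_with_early_stopping(
    lambda: state.update(epoch=state["epoch"] + 1),   # stand-in training epoch
    lambda: errors[state["epoch"] - 1])               # stand-in validation error
print(ep, err)  # 20 1
```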
How to identify the best NN? (IV) MINIMUM ABSOLUTE ERROR The fitting process is guided by an algorithm starting from random values of the parameters to be estimated, and it may get stuck in local minima. Some random initial values may be closer than others to the absolute minimum error, so for each architecture more than one NN is fitted, starting from different sets of random values.
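The multi-start idea can be sketched on a synthetic one-dimensional "error surface" (an assumption for illustration; an NN error surface is high-dimensional): fit several times from different starting values and keep the run with the lowest error.

```python
# Synthetic stand-in error surface with a local minimum near w = -1 and the
# absolute minimum at w = 2.
def error(w):
    return (w + 1) ** 2 * (w - 2) ** 2 + 0.1 * (w - 2) ** 2

def fit(w0, lr=0.01, steps=2000):
    """Plain gradient descent from initial value w0 (numerical gradient)."""
    w = w0
    for _ in range(steps):
        grad = (error(w + 1e-5) - error(w - 1e-5)) / 2e-5
        w -= lr * grad
    return w

starts = [-3.0, -0.5, 0.5, 1.5, 3.5]   # different initial random values
fits = [fit(w0) for w0 in starts]      # some runs end in the local minimum
best = min(fits, key=error)            # keep the run with the lowest error
print(round(best, 2))                  # close to the absolute minimum at w = 2
```

Runs started near -3 or -0.5 settle in the local minimum; only the multi-start selection recovers the absolute one.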
Searching for the best NN Once data & inputs were identified, 10,000 candidate NNs were built, ranging over: • Five types of NN • Different numbers of nodes • Different initial random values
Choosing the best NN (I) 10,000 different NNs ranging over different architectures (types, nodes, initial random values) were tested. CRITERION: lowest error on the validation dataset. • Fitting algorithm: • Backpropagation (beginning) • Conjugate Gradient (last stages of convergence)
Choosing the best NN (II) 100 minutes of computing time. Best NN: MLP 2-2-1 (1 hidden layer). Minimum error on the validation dataset = 0.064158. The influence of Qu and C on E can be estimated by means of a sensitivity analysis: the relative impacts on E due to changes in the inputs are measured. Qu = 3.67; C = 3.04: in proportion to the ratio of their relative impacts, Qu is more relevant than C for explaining E. Backpropagation: 100 epochs / Conjugate Gradient: 42 epochs
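One common form of such a sensitivity analysis (a sketch under assumptions; the model, data and coefficients below are synthetic stand-ins, not the fitted MLP) replaces each input in turn by its mean and reports the ratio of the degraded error to the baseline error. A ratio above 1 means the input matters; the larger the ratio, the more relevant the input.

```python
import numpy as np

def model(X):
    # Stand-in fitted model: E depends more strongly on Qu (col 0) than C (col 1).
    return 90 - 8.0 * X[:, 0] - 0.02 * X[:, 1]

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1.5, 4.7, 200),    # Qu samples
                     rng.uniform(25, 346, 200)])    # C samples
E_obs = model(X) + rng.normal(0, 0.5, 200)          # synthetic observations

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

baseline = rmse(model(X), E_obs)
ratios = []
for j in range(X.shape[1]):
    Xm = X.copy()
    Xm[:, j] = X[:, j].mean()                        # withhold input j
    ratios.append(rmse(model(Xm), E_obs) / baseline)
print(ratios)  # ratios[0] (Qu) exceeds ratios[1] (C): Qu is more relevant
```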