[Figure: output Y(s) plotted against input s, with replicated outputs at each of the input points s1, s2, s3, …, sP]
n = Number of Input Points (=3)
T = Number of Outputs at each input, i.e., "Replications" (=5)
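For concreteness, a minimal sketch (illustrative names and values only) of how data with this structure could be stored: one row per input point, one column per replication.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 3, 5                       # as in the figure: 3 input points, 5 replications
s = np.array([0.2, 0.5, 0.9])     # illustrative input locations s_1, ..., s_n

# Y[i, t] = t-th replicated output observed at input s[i]
Y = rng.normal(loc=np.sin(2 * np.pi * s)[:, None], scale=0.1, size=(n, T))
print(Y.shape)                    # (3, 5)
```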
Special Cases I'd like to be able to deal with
• When T = 1 (i.e., 1 Replication) and assuming normality, it should reduce to a GP.
• When n = 1 (i.e., 1 input point), it should reduce to CDF estimation as with a univariate DP prior.
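One way to write down these two limiting cases, with notation that is assumed here for illustration rather than taken from the slides (f a latent response surface, F the unknown output distribution at the single input point):

```latex
\begin{align*}
  T = 1,\ \text{normal errors:}\quad
    & Y(s_i) \mid f \sim N\!\big(f(s_i), \sigma^2\big),
      \qquad f \sim \mathrm{GP}(m, k), \\
  n = 1:\quad
    & Y_1, \ldots, Y_T \mid F \overset{\text{iid}}{\sim} F,
      \qquad F \sim \mathrm{DP}(\alpha, F_0)
      \quad \text{(univariate CDF estimation)}.
\end{align*}
```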
Gelfand, Kottas, and MacEachern, JASA, Vol. 100, No. 471 (2005)
[Figure: output Z(x) plotted against input x at points x1, x2, x3; each color is a different replication]
How to break the association of outputs across inputs?
• Could add a (uniform?) prior on Replication membership.
• This could in principle be dealt with by inserting a Metropolis step within the Gibbs sampler (see the sketch below).
• I'm unsure whether this Gelfand et al. technique is helpful for estimating the CDF when there is only one input point.
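A rough sketch of what such a Metropolis-within-Gibbs step could look like under a uniform prior on replication membership. Everything here (the function name, the array layout, the log_lik callback) is an illustrative assumption, not the Gelfand et al. algorithm.

```python
import numpy as np

def metropolis_relabel_step(Y, log_lik, i, rng):
    """One Metropolis step that re-pairs outputs across inputs.

    Y       : (n, T) array; row i holds the T outputs observed at input s_i,
              and column t is currently treated as "replication t".
    log_lik : placeholder for the joint log-likelihood of a full (n, T)
              configuration given the current values of all other parameters.
    i       : index of the input point whose outputs we try to re-pair.
    """
    T = Y.shape[1]
    a, b = rng.choice(T, size=2, replace=False)

    # Propose swapping which replications two of the outputs at input i belong to.
    Y_prop = Y.copy()
    Y_prop[i, [a, b]] = Y_prop[i, [b, a]]

    # With a uniform prior on replication membership and a symmetric proposal,
    # the Metropolis acceptance probability reduces to the likelihood ratio.
    if np.log(rng.uniform()) < log_lik(Y_prop) - log_lik(Y):
        return Y_prop   # accept the re-pairing
    return Y            # keep the current pairing
```

Inside a Gibbs sampler this step would be applied to each input point i in turn, with log_lik evaluating the joint model given the current draws of all other parameters.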
Another Idea
• Couple the marginal CDFs F1, …, Fn at the n input points through a copula CH (a sketch of this construction follows), either:
• Elliptically contoured (e.g., Gaussian)
• Fairlie-Gumbel-Morgenstern
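One way to read this, assuming the intended construction is the usual Sklar-type representation (the notation below is assumed, not from the slide):

```latex
% Joint CDF of the outputs at the n input points, built from the per-input
% marginals F_1, ..., F_n through a copula C_H:
H(y_1, \ldots, y_n) \;=\; C_H\!\big(F_1(y_1),\, \ldots,\, F_n(y_n)\big)
```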
Copulae
• Gaussian
• F-G-M
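For reference, the standard bivariate forms of these two families (textbook definitions, not taken from the slide):

```latex
\begin{align*}
  \text{Gaussian:}\quad
    & C_\rho(u_1, u_2) = \Phi_\rho\!\big(\Phi^{-1}(u_1),\, \Phi^{-1}(u_2)\big), \\
  \text{Fairlie--Gumbel--Morgenstern:}\quad
    & C_\theta(u_1, u_2) = u_1 u_2\,\big(1 + \theta\,(1 - u_1)(1 - u_2)\big),
      \qquad |\theta| \le 1,
\end{align*}
% where \Phi_\rho is the bivariate standard normal CDF with correlation \rho
% and \Phi^{-1} is the standard normal quantile function.
```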