
Lecture 3



Presentation Transcript


  1. Lecture 3 • First, a bit more Python. • Then some noise statistics.

  2. Python ins and outs • We’re going to mostly read our data from FITS files, using a module called pyfits. • http://www.stsci.edu/resources/software_hardware/pyfits • We’ll crunch up the data using a module called numpy. • http://numpy.scipy.org/ • For graphical output we’ll use the module ppgplot, which is a (fairly crude) wrapper to a package called PGPLOT. • http://efault.net/npat/hacks/ppgplot/ • http://www.astro.caltech.edu/~tjp/pgplot/ • BREAKING NEWS! pylab is better for graphical output.
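In practice pyfits (or its modern successor, astropy.io.fits) does all the file handling for you, but a FITS file is simple enough to sketch by hand: fixed 80-character header cards packed into 2880-byte blocks, followed by big-endian binary data. The round-trip below is an illustrative toy (2-D float images only, no extensions, no BSCALE), not a substitute for the real module:

```python
import os
import tempfile
import numpy as np

def write_fits(fname, data):
    """Write a 2-D float array as a minimal single-HDU FITS file."""
    cards = [
        "SIMPLE  =                    T",
        "BITPIX  =                  -64",
        "NAXIS   =                    2",
        "NAXIS1  = %20d" % data.shape[1],
        "NAXIS2  = %20d" % data.shape[0],
        "END",
    ]
    # Header cards are 80 characters; header and data are padded
    # out to whole 2880-byte FITS blocks.
    header = "".join(c.ljust(80) for c in cards).ljust(2880)
    raw = data.astype(">f8").tobytes()     # FITS data are big-endian
    raw += b"\x00" * (-len(raw) % 2880)
    with open(fname, "wb") as fh:
        fh.write(header.encode("ascii") + raw)

def read_fits(fname):
    """Read back the image written by write_fits."""
    with open(fname, "rb") as fh:
        buf = fh.read()
    cards = [buf[i:i + 80].decode("ascii") for i in range(0, 2880, 80)]
    kv = {}
    for c in cards:
        if "=" in c:
            k, v = c.split("=", 1)
            kv[k.strip()] = v
    nx, ny = int(kv["NAXIS1"]), int(kv["NAXIS2"])
    data = np.frombuffer(buf[2880:2880 + 8 * nx * ny], dtype=">f8")
    return data.reshape(ny, nx)

# Round-trip a small image.
fname = os.path.join(tempfile.mkdtemp(), "demo.fits")
image = np.arange(12.0).reshape(3, 4)
write_fits(fname, image)
print(np.allclose(read_fits(fname), image))  # True
```

With pyfits the whole of the above collapses to one call each way; the point of the sketch is only to show what the module is doing with those 2880-byte blocks.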

  3. Manuals and example code • You won’t need to download manuals for pyfits, numpy, scipy or pylab. I’ll do so and make them available from my home page. It’ll be much quicker for you to copy (or simply access) them from there. • Some of these manuals are huge – I recommend you neither print nor read them in their entirety, but rather: • Read the bits you need using acroread. • Look at the example code I’ll provide.

  4. Random variables – probability density • p(x) has units of probability per unit x. • Average: μ = ∫ x p(x) dx • Variance: σ² = ∫ (x − μ)² p(x) dx

  5. Random variables – probability density • Estimate of μ: μ̂ = (1/N) Σᵢ xᵢ • Estimate of σ²: σ̂² = (1/(N−1)) Σᵢ (xᵢ − μ̂)²
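The two estimators on this slide map directly onto numpy: the sample mean, and the unbiased sample variance with the 1/(N−1) normalisation (numpy's `ddof=1`). A quick sketch with made-up parameters (μ = 5, σ = 2 are assumptions for the demo):

```python
import numpy as np

# Draw N samples from a known parent distribution, then estimate
# mu and sigma^2 from the samples alone.
rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=10000)

mu_hat = x.mean()          # (1/N) * sum(x_i)
var_hat = x.var(ddof=1)    # (1/(N-1)) * sum((x_i - mu_hat)**2), unbiased

print(mu_hat, var_hat)     # close to 5.0 and 4.0
```

Note the `ddof=1`: plain `x.var()` divides by N rather than N−1 and is biased low for small samples.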

  6. Random variables – probability density [Figure: an example density p(x) plotted against x]

  7. Random variables – probability density [Figure: p(x) against x, with a particular value x0 marked]

  8. Noise • The 2 most important distributions: • Gaussian • …but not necessarily ‘white’

  9. Noise • Poisson (for which the variance equals the mean, σ² = μ) • …but note the Central Limit Theorem.

  10. Gauss and Poisson
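The connection between the two distributions is worth seeing numerically: Poisson counts have variance equal to their mean, and by the Central Limit Theorem a Poisson histogram with a large mean looks very like a Gaussian with σ = √μ. A sketch (μ = 100 and the sample sizes are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson counts: the variance equals the mean.
counts = rng.poisson(lam=100.0, size=100000)
print(counts.mean(), counts.var())   # both near 100

# Central Limit Theorem: for large mean the counts are nearly
# Gaussian, so about the 1-sigma fraction lies within sqrt(mu).
frac_within_1sigma = np.mean(np.abs(counts - 100.0) < np.sqrt(100.0))
print(frac_within_1sigma)            # close to the Gaussian ~0.68
```

For small μ (a few counts per bin) the approximation fails badly, which is exactly the situation slide 21 worries about.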

  11. Combinations of random variables • Weighted average: x̄ = Σᵢ wᵢxᵢ / Σᵢ wᵢ • Uncertainty in this: σ_x̄² = Σᵢ wᵢ²σᵢ² / (Σᵢ wᵢ)² • Best SNR when wᵢ = 1/σᵢ² • Uncertainty propagation: if y = f(a,b), then σ_y² = (∂f/∂a)² σ_a² + (∂f/∂b)² σ_b² (for uncorrelated a and b).
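With the optimal inverse-variance weights wᵢ = 1/σᵢ², the uncertainty formula simplifies to σ_x̄² = 1/Σᵢwᵢ. A sketch with invented measurements (the three values and errors below are assumptions for the demo):

```python
import numpy as np

# Three measurements of the same quantity with different errors.
x = np.array([10.2, 9.8, 10.5])
sigma = np.array([0.5, 0.2, 1.0])

# Inverse-variance weighting gives the best SNR.
w = 1.0 / sigma**2
xbar = np.sum(w * x) / np.sum(w)        # weighted average
sigma_xbar = np.sqrt(1.0 / np.sum(w))   # its uncertainty

print(xbar, sigma_xbar)
```

As expected, the combined uncertainty comes out smaller than even the best single measurement's error, and the average is pulled toward the most precise point.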

  12. Filtering and correlation Uncorrelated ie ‘white’ noise (with a Gaussian probability distribution.)

  13. Filtering and correlation Fourier transformed – looks the same – same power at all f – hence ‘white’.

  14. Filtering and correlation • Power spectrum: P(ν) = |F(ν)|², where F is the Fourier transform of the signal f, e.g. F(ν) = ∫ f(t) e^(−2πiνt) dt • The autocorrelation function is: R(τ) = ∫ f(t) f(t + τ) dt • (Wiener–Khinchin: R is the inverse Fourier transform of P.)
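These two quantities are easy to compute with numpy's FFT, and doing so for white noise shows both properties at once: the power spectrum is flat on average, and the autocorrelation is a spike at zero lag. A sketch (signal length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=4096)        # white Gaussian noise, unit variance

F = np.fft.fft(f)
power = np.abs(F)**2             # power spectrum |F(nu)|^2

# Wiener-Khinchin: the (circular) autocorrelation is the inverse
# FFT of the power spectrum; divide by N to get a mean, not a sum.
acf = np.fft.ifft(power).real / len(f)

# White noise: acf[0] ~ variance (~1), acf at any nonzero lag ~ 0.
print(acf[0], acf[1])
```

For a filtered (correlated) signal the same code would show power concentrated at low frequencies and an ACF that stays high over several lags, as the next slides describe.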

  15. Filtering and correlation Filtering operation correlates the noise. It is still Gaussian but no longer white.

  16. Filtering and correlation FT is dominated in this case by low frequencies.

  17. Filtering and correlation Autocorrelation function is broadened.
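Slides 15 to 17 can be reproduced in a few lines: smooth white noise with a boxcar and the samples become correlated, so the autocorrelation at small lags jumps from roughly zero to near one. A sketch (the 8-sample boxcar and signal length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
white = rng.normal(size=8192)

# Boxcar smoothing: the noise stays Gaussian but neighbouring
# samples now share input values, so it is no longer white.
kernel = np.ones(8) / 8.0
filtered = np.convolve(white, kernel, mode="same")

def acf(x, lag):
    """Normalised autocorrelation at a given positive lag."""
    x = x - x.mean()
    return np.mean(x[:-lag] * x[lag:]) / np.mean(x * x)

print(acf(white, 1))     # ~0: white noise is uncorrelated
print(acf(filtered, 1))  # large and positive: the ACF has broadened
```

For an n-sample boxcar the lag-1 autocorrelation is about (n−1)/n, so the 8-sample kernel here gives roughly 0.875.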

  18. Signals and noise • The same chain applies in the general case and in the specific examples (XMM-Newton, interferometry): • Natural signal • + background • + uncorrelated noise • → instrumental filtering/correlation • + uncorrelated noise (added after the instrument).

  19. Signal detection [Figure: two images – one with an obvious source, one where detection is much harder.]

  20. Signal detection • Parent function; data; model. • Probability that the data result from a given model: χ² = Σᵢ (dᵢ − mᵢ)² / σᵢ² • Reduced χ²: divide by the number of degrees of freedom. • Recall that P(signal) = 1 − P(no signal). • There are many types of signal, but only one ‘no signal’. • Hence test the model with no source. • This is called the ‘null hypothesis’.
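A minimal null-hypothesis test in numpy: simulate source-free data (pure background plus Gaussian noise, with assumed B = 10 and σ = 1), fit nothing, and check that the reduced χ² of the no-source model comes out near 1:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0
# Source-free data: background B = 10 plus Gaussian noise.
data = rng.normal(loc=10.0, scale=sigma, size=100)

model = np.full_like(data, 10.0)          # the no-source model
chi2 = np.sum((data - model)**2 / sigma**2)
chi2_red = chi2 / len(data)               # no fitted parameters, so dof = N

print(chi2_red)   # ~1 when the null hypothesis is acceptable
```

A reduced χ² far above 1 would mean the no-source model is a bad fit, i.e. evidence for a signal; note that this needs exactly the ingredients slide 21 lists, a known background B and a good estimate of σ.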

  21. Signal detection • The null hypothesis requires: • Perfect knowledge of the background B. • A good estimate of σ. • Sparse sources among Gaussian noise: OK. • But… • what about Poisson bins with zero counts? • Answer: Maximum Likelihood. • what about crowded fields? • Answer: Bayes? • can we trust the distribution? • Answer: Monte Carlo.
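The Monte Carlo answer is worth a sketch: when the analytic distribution can't be trusted, simulate many source-free images and build the null distribution empirically. Here the statistic is the brightest pixel in a 1000-pixel noise image (image size, trial count, and the 1% false-alarm rate are all assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_pix = 2000, 1000

# Simulate many source-free "images" of pure unit Gaussian noise
# and record the brightest pixel in each one.
maxima = rng.normal(size=(n_trials, n_pix)).max(axis=1)

# Detection threshold with a 1% false-alarm probability:
# only 1% of pure-noise images have any pixel above this level.
threshold = np.percentile(maxima, 99)

print(threshold)   # well above 3 sigma, because of the 1000 trials per image
```

This is why a "3σ" pixel in a large image is not a detection: with 1000 pixels the maximum of pure noise is typically above 3σ, and the Monte Carlo threshold lands above 4σ.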
