
Research Update: Optimal 3TI MPRAGE T1 Mapping and Differentiation of Bloch Simulations



Presentation Transcript


  1. Research Update: Optimal 3TI MPRAGE T1 Mapping and Differentiation of Bloch Simulations. Aug 4, 2014. Jason Su

  2. Cramér-Rao Lower Bound Kingsley. Concepts Magn. Reson. 1999;11:243–276.

  3. Theory: A common formulation

  4. Theory: Fisher Information Matrix

  5. Theory: Cramér-Rao Lower Bound

  6. The Challenge: Computing the Jacobian
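The slides above fit together as follows: the Jacobian of the signal model with respect to the parameters feeds the Fisher information matrix, whose inverse lower-bounds each parameter's variance. A minimal numpy sketch, using a simplified inversion-recovery signal as a stand-in for the full MPRAGE equation and a central-difference Jacobian (all function names here are illustrative, not from the talk's code):

```python
import numpy as np

def signal(theta, TIs):
    # Simplified inversion-recovery model, a stand-in for the full
    # MPRAGE signal equation: s(TI) = M0 * (1 - 2*exp(-TI/T1))
    T1, M0 = theta
    return M0 * (1.0 - 2.0 * np.exp(-TIs / T1))

def jacobian_fd(f, theta, TIs, h=1e-6):
    # Central-difference Jacobian: rows = measurements, cols = parameters.
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((len(TIs), len(theta)))
    for p in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[p] = h * max(1.0, abs(theta[p]))
        J[:, p] = (f(theta + dp, TIs) - f(theta - dp, TIs)) / (2.0 * dp[p])
    return J

def crlb_variances(J, sigma=1.0):
    # Fisher information for i.i.d. Gaussian noise: F = J^T J / sigma^2.
    # The diagonal of F^-1 lower-bounds each parameter's variance.
    F = J.T @ J / sigma**2
    return np.diag(np.linalg.inv(F))

TIs = np.array([200.0, 800.0, 2400.0])   # inversion times in ms
theta = np.array([1300.0, 1.0])          # [T1 in ms, M0]
J = jacobian_fd(signal, theta, TIs)
var_T1, var_M0 = crlb_variances(J, sigma=0.01)
cov_T1 = np.sqrt(var_T1) / theta[0]      # coefficient of variation of T1
```

The finite-difference loop is exactly the expensive part the talk targets: each parameter costs two full model evaluations, which is what automatic differentiation later avoids.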

  7. Optimal Experimental Design • How do we design an experiment to give us the highest SNR measurement of a quantity? • If you only had one shot to examine something for the next 20 years, how would you collect the best data possible?

  8. Optimal Experimental Design Atkinson and Donev. Optimum Experimental Designs. Oxford: Oxford University Press; 1992.

  9. The Framework

  10. A Typical Setup

  11. A Typical Setup: Optimization

  12. Application: 3TI MPRAGE

  13. 3TI MPRAGE: Setup

  14. 3TI MPRAGE: Setup
  • Find TIs, TS, <acquisition time fraction>
  • Estimate T1, M0, κ
  • Cost: (σT1/T1) * sqrt(TS), the coefficient of variation of T1 normalized by acquisition time, i.e. precision efficiency
  • Fixed parameters, based on Liu's marmoset parameters: N = 64 readouts, centric encoding; α = 9 deg.; TR = 8.45 ms
  • Constraints, implemented as plugin transforms on the input variables:
    • train_length = (N-1)*TR = 532.35 ms
    • TS ≥ 2*train_length
    • TI ≤ TS − train_length
    • TI ≥ 10 ms (or higher for multi-T1)
  • Solver: brute-force search over the number of TIs/images from 2 to 5; regularization = 1e-16, checked against neighboring values 0, 1e-14, 1e-12 for consistency
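As a rough illustration of the setup slide's constraints and cost, here is a numpy sketch; `clip_to_constraints` and `cost` are hypothetical names, and the projection is a simple stand-in for the plugin transforms used in the actual solver:

```python
import numpy as np

# Fixed sequence parameters from the slide (Liu's marmoset protocol)
N, TR = 64, 8.45                 # readouts, ms
TRAIN = (N - 1) * TR             # echo-train length = 532.35 ms

def clip_to_constraints(TIs, TS):
    # Project a candidate design onto the slide's constraints:
    # TS >= 2*train_length, 10 ms <= TI <= TS - train_length.
    TS = max(TS, 2.0 * TRAIN)
    TIs = np.clip(np.asarray(TIs, dtype=float), 10.0, TS - TRAIN)
    return TIs, TS

def cost(cov_T1, TS):
    # Precision efficiency: CoV of T1 penalized by sqrt of acquisition time
    return cov_T1 * np.sqrt(TS)

# An infeasible candidate gets projected: TS is raised to 2*TRAIN and
# the TIs are clipped into [10, TS - TRAIN].
TIs, TS = clip_to_constraints([5.0, 600.0, 2000.0], 900.0)
```

Handling constraints as input transforms like this keeps the outer optimizer unconstrained, which matches the "plugin transforms" framing on the slide.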

  15. Single T1=1300ms Variable Time Fraction Equal Time Fraction

  16. Single T1=1300ms Variable Time Fraction Equal Time Fraction

  17. Comments

  18. Optimal Parameters as T1 Changes

  19. Cost Gap

  20. Multiple T1
  • To optimize over a range of T1s, we sample the range and evaluate the CRLB of a protocol at each sample point
  • The cost is then a single-value summary of these precisions
  • It is common to minimize either the mean or the maximum (i.e. worst-case) CoV
  • T1 = 1000–2000 ms (20 points)
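The range-sampling idea can be sketched as follows; `cov_T1` here is a toy stand-in for the actual CRLB evaluation, used only to show how the mean and worst-case summaries are formed:

```python
import numpy as np

def cov_T1(protocol_TIs, T1):
    # Toy stand-in for the real CRLB evaluation: pretends precision is
    # best when some TI lies near T1. Returns sigma_T1 / T1.
    TIs = np.asarray(protocol_TIs, dtype=float)
    return 0.01 + 0.05 * np.min(np.abs(TIs - T1)) / T1

T1_grid = np.linspace(1000.0, 2000.0, 20)   # 20-point sample of the range
protocol = [300.0, 1200.0, 2600.0]          # candidate TIs in ms

covs = np.array([cov_T1(protocol, t1) for t1 in T1_grid])
mean_cost = covs.mean()   # minimize the average CoV over the range...
max_cost = covs.max()     # ...or the worst case
```

Minimizing `mean_cost` favors good average precision; minimizing `max_cost` guards the worst T1 in the range, which is why the slides show both variants.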

  21. Multi T1=1000-2000 Mean Max

  22. Multi T1=1000-2000 Mean Max

  23. Multi T1=1000-2000, equal time Mean Max

  24. Multi T1=1000-2000, equal time Mean Max

  25. Multi T1=1000-2000, free FA, equal time Mean Max

  26. Multi T1=1000-2000, free FA, equal time Mean Max

  27. Next Steps • Multi κ • Multiple α recon? • ARLO?

  28. Bloch Simulation
  • One of the distinct advantages of automatic differentiation is that it can handle complex programs
  • Bloch simulation and extended phase graph analysis are ways to analyze the MR experiment using computer programs
  • More complicated mapping methods, like MR Fingerprinting, rely on simulation to describe their signal
  • This provides a way to extend the experimental design framework to more exotic pulse sequences

  29. Libraries
  • I've been using pyautodiff for AD
  • It provided the most seamless conversion of functions to derivatives, with no extra code required from the user
  • However, it is slow, which is somewhat surprising because it uses theano, which does a JIT compile to C
  • It is even slower for programs with loops
  • So I went shopping for other AD packages:
    • theano (what I've been using)
    • ad (pure Python)
    • algopy (inspired by ADOL-C?)
    • CasADi (Python frontend to a C++ library)

  30. Simple Speed Test
  • Bloch simulation is essentially a sequence of matrix multiplies and additions applied to an input time series
  • As a proxy, I ran a simple vector accumulation test
  • Others I want to try: CppAD, pyadolc
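To make the "sequence of matrix multiplies and additions" concrete, here is a minimal numpy sketch of a free-precession step in the spirit of Hargreaves's hard-pulse simulator (not his actual code; the function name and units are assumptions):

```python
import numpy as np

def free_precession(dt, T1, T2, df=0.0):
    # Relaxation plus off-resonance precession over dt milliseconds,
    # returned as a matrix A and vector b so one step is M -> A @ M + b.
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    phi = 2.0 * np.pi * df * dt / 1000.0   # df in Hz, dt in ms
    A = np.array([[ E2 * np.cos(phi), E2 * np.sin(phi), 0.0],
                  [-E2 * np.sin(phi), E2 * np.cos(phi), 0.0],
                  [ 0.0,              0.0,              E1 ]])
    b = np.array([0.0, 0.0, 1.0 - E1])     # T1 recovery toward M0 = 1
    return A, b

# A 1000-step simulation is then just a loop of multiply-and-add:
A, b = free_precession(dt=1.0, T1=1300.0, T2=80.0)
M = np.array([0.0, 0.0, -1.0])             # start from an inverted state
for _ in range(1000):
    M = A @ M + b
```

This affine-step structure is exactly what makes the simulation a good stress test for AD libraries: the loop body is trivial, so any per-operation tracing overhead dominates.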

  31. Implementation
  • Given that there are still other AD packages out there that may be better, the Bloch simulation is implemented to be modular, so any AD library can be plugged in
  • It assumes the library has the same general structure: instantiate symbolic tracer variables, then use that library's specific math functions
  • With this, I have theano and CasADi versions working
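The modular structure might look like the following sketch, where the math module is passed in so the same code runs numerically with numpy or can be traced with an AD library's namespace instead (`bloch_step` is a hypothetical name, not the talk's actual code):

```python
import numpy as np

def bloch_step(m, dt, T1, T2, mathlib):
    # One relaxation step written against a pluggable math module:
    # pass numpy for plain numeric evaluation, or an AD library's
    # namespace (e.g. casadi) so the identical code is traced symbolically
    # on tracer variables instead of floats.
    E1 = mathlib.exp(-dt / T1)
    E2 = mathlib.exp(-dt / T2)
    return [E2 * m[0], E2 * m[1], E1 * m[2] + (1.0 - E1)]

# Numeric evaluation with numpy as the backend:
m = bloch_step([1.0, 0.0, 0.0], 1.0, 1300.0, 80.0, np)
```

Because the simulation never imports a specific AD package directly, swapping backends only changes which module object is passed in, matching the plugin design described on the slide.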

  32. Bloch Simulation Speed
  • For a 1000-length input Bloch simulation:
    • Hargreaves's MEX = 0.13 ms
    • CasADi = 1 ms
    • theano = 94 ms
  • For a 1000×1000 Jacobian of the simulation:
    • Central difference = 0.13 ms * 2000 evaluations = 260 ms (half that for forward difference)
    • CasADi = 115 ms (with no loss of accuracy!)
    • theano = 2 min 54 s

  33. Should I migrate everything to CasADi?
  • Or just away from theano?
  • Ease of use?
  • I've been treating the Bloch simulation as a separate module, so its implementation can be as complex as needed
