- Type: Sub-task
- Status: Done
- Priority: Normal
- Resolution: Done
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
- Sprint: 2DDRP-2019 D
I need to find a more efficient method to estimate wavefront aberrations.
First I used the Levenberg–Marquardt (LM) algorithm, as used by Josh Meyers in his HSC work. In the early stages of the project I convinced myself that the algorithm was prone to finding only a local minimum and was giving poor results. I then switched to the emcee algorithm, but it had a similar problem. At the moment I am using a Parallel-Tempering Ensemble MCMC algorithm, which explores the parameter space more efficiently. The problem is that it is very slow and requires a large amount of computational time (e.g., ~10 hours on 28 cores for a single donut).
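For reference, below is a minimal sketch of the kind of sampling-based fit described above: an emcee ensemble sampler over a handful of Zernike coefficients, with a GalSim OpticalPSF as the forward model. The telescope numbers, image size, pixel scale, prior bounds, and Gaussian pixel-noise likelihood are all illustrative assumptions, not the actual pipeline; a parallel-tempered run would swap in a sampler such as ptemcee but keep the same log-posterior.

{code:python}
import numpy as np
import emcee
import galsim

# Illustrative telescope/image numbers (assumptions, not the real pipeline values).
LAM_NM, DIAM_M, OBSC = 700.0, 8.36, 0.61
NX = NY = 64
SCALE = 0.2  # arcsec/pixel

def render_donut(zernikes):
    """Render a defocused PSF ("donut") for Noll coefficients Z4..Z11, in waves."""
    aberrations = np.zeros(12)
    aberrations[4:12] = zernikes  # Noll indexing; entries 0-3 unused here
    psf = galsim.OpticalPSF(lam=LAM_NM, diam=DIAM_M,
                            obscuration=OBSC, aberrations=aberrations)
    return psf.drawImage(nx=NX, ny=NY, scale=SCALE).array

def log_posterior(theta, data, sigma):
    """Flat box prior plus an assumed Gaussian pixel-noise likelihood."""
    if np.any(np.abs(theta) > 2.0):  # crude prior bounds, in waves
        return -np.inf
    model = render_donut(theta)
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

# Fake "observed" donut so the demo is self-contained.
true_z = np.array([1.0, 0.1, -0.05, 0.02, 0.0, 0.0, 0.03, -0.01])
rng = np.random.default_rng(0)
sigma = 1e-4
data = render_donut(true_z) + rng.normal(0.0, sigma, (NY, NX))

ndim, nwalkers = len(true_z), 32
p0 = true_z + 1e-2 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(data, sigma))
sampler.run_mcmc(p0, 500)
print(sampler.get_chain(discard=100, flat=True).mean(axis=0))
{code}

Even this toy version makes the cost structure clear: every posterior evaluation is one full GalSim render, so the wall-clock time is dominated by the forward model, which is why option 1 below matters.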
There are several avenues to explore:
1. Speeding up the computation of individual donuts. This probably means breaking away from GalSim, which is potentially painful.
2. Improving the current code, or approaches I already know. Do I really have to use sophisticated methods such as parallel tempering? Can I get faster convergence by, e.g., fitting in stages or choosing better initial values (see the sketch after this list)? Is it really true that LM settles into the wrong local minima and cannot be used? Would nested sampling converge faster?
3. Using methods from the literature. I found two papers that give some details on how Zernike coefficients can be calculated relatively cheaply using iterative methods:
Tokovinin & Heathcote 2006
Roodman 2010 and the related DECam papers
+ Bo Xin et al. algorithm, as mentioned in their papers and the DM-Donuts Slack channel
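As a concrete version of option 2 (the sketch referenced in the list above), here is a minimal staged Levenberg–Marquardt fit using scipy.optimize.least_squares: fit only the lowest-order terms first, then refine all terms starting from that solution. It reuses the illustrative render_donut model, data, and sigma from the sampling sketch above, so the same caveats apply; this is not any of the literature methods from option 3, just a cheap test of whether better initialization rescues LM.

{code:python}
import numpy as np
from scipy.optimize import least_squares

def residuals(theta_sub, data, sigma, active, full_theta):
    """Pixel residuals with only the `active` Zernike terms varied."""
    theta = full_theta.copy()
    theta[active] = theta_sub
    return ((render_donut(theta) - data) / sigma).ravel()

def staged_lm_fit(data, sigma, ndim=8, n_low=3):
    """Stage 1: fit only the first n_low terms (defocus, astigmatism).
    Stage 2: refine all ndim terms starting from the stage-1 solution."""
    theta = np.zeros(ndim)

    low = np.arange(n_low)
    res1 = least_squares(residuals, theta[low], method="lm",
                         args=(data, sigma, low, theta))
    theta[low] = res1.x

    full = np.arange(ndim)
    res2 = least_squares(residuals, theta, method="lm",
                         args=(data, sigma, full, theta))
    return res2.x

# Usage, with `data` and `sigma` from the sampling sketch above:
# z_hat = staged_lm_fit(data, sigma)
{code}

If the staged fit recovers the same coefficients as the PT-MCMC run on real donuts, that would suggest the earlier LM failures were an initialization problem rather than a fundamental one.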