Characterize PSF using out-of-focus images
(PIPE2D-243)
| Status: | Done |
| Project: | DRP 2-D Pipeline |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Sub-task |
| Priority: | Normal |
| Reporter: | ncaplar |
| Assignee: | ncaplar |
| Resolution: | Done |
| Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Sprint: | 2DDRP-2019 D |
| Reviewers: | hassan |
| Description |
I have to find a more efficient method to estimate wavefront aberrations. I first used the Levenberg–Marquardt algorithm, as used by Josh Meyers in the HSC work. In the early stages of the project I convinced myself that the algorithm was prone to finding only a local minimum and was giving poor results. I then switched to the emcee algorithm, but it had a similar problem. At the moment I am using a Parallel-Tempering Ensemble MCMC algorithm, which explores the parameter space more efficiently. The problem is that this is very slow and takes a large amount of computational time (e.g., ~10 hours on 28 cores for a single donut). There are several avenues to explore:

1. Speeding up the computation of individual donuts. This probably means breaking out from GALSIM -> potentially painful.
2. Improving the current code, or code that I know. Do I really have to use cool methods such as parallel tempering? Can I get faster convergence by, e.g., evaluating the code in stages or setting better initial values? Is it really true that LM settles into wrong local minima and I cannot use it (see the fitting sketch after this list)? Should I use nested sampling to converge faster?
3. Using methods from the literature. I found two papers that give some details on how they calculated Zernike coefficients relatively cheaply, using iterative methods.
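To make item 2 concrete, here is a minimal, self-contained sketch of the kind of test one could run: fit a single Zernike defocus coefficient to a synthetic donut with scipy's Levenberg–Marquardt driver. Everything in it (the toy forward model, the annular pupil, the grid sizes, the one-parameter fit) is an illustrative assumption, not the ticket's actual code:

```python
import numpy as np
from scipy.optimize import least_squares

def donut_model(defocus, n=64, obscuration=0.3, pad=4):
    """Toy Fourier-optics donut: annular pupil with a Noll Z4 (defocus) term."""
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)
    pupil = (rho <= 1.0) & (rho >= obscuration)          # annular aperture
    z4 = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)             # defocus Zernike
    field = pupil * np.exp(2j * np.pi * defocus * z4)    # aberrated pupil field
    # Zero-pad before the FFT to sample the focal plane finely.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(pad * n, pad * n))))**2
    return psf / psf.sum()

# Synthetic "observed" donut with a known defocus plus a little noise.
rng = np.random.default_rng(0)
observed = donut_model(1.5) + rng.normal(0.0, 1e-7, size=(256, 256))

# Levenberg-Marquardt fit of the single defocus coefficient.
fit = least_squares(lambda p: (donut_model(p[0]) - observed).ravel(),
                    x0=[1.0], method="lm")
print(fit.x)  # should recover ~1.5
```

In this toy setting, the LM question reduces to scanning the starting point `x0` and checking whether the recovered coefficient stays stable, which is cheap to try before committing to parallel tempering.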
| Comments |
| Comment by ncaplar [ 11/Apr/18 ] |
Update as of April 10:

1. Speeding up the computation of individual donuts: this does not seem to be possible. Most of the time is spent in the FFT computation, which is already quite optimized; I tried the scipy, numpy, and galsim implementations and they are all basically the same (see the benchmark sketch after this list). The only potential avenue is to go through direct C (e.g., Cython). In the meantime, the added complexity of the code (inclusion of the convolution by the fiber, which drops toward the edges, and of the radiometric effect, i.e., the flux in the exit pupil is not uniformly distributed) has added time overhead.
2. Improving the current code, or code that I know: not fully explored, as I concentrated on getting the code working at the most basic level. At the moment I am just throwing CPU cores at the problem.
3. Using methods from the literature: same as above.
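A minimal sketch of the kind of FFT comparison mentioned in item 1. The array size, repeat count, and use of the modern `scipy.fft` module (which can spread a transform over threads via its `workers` argument) are assumptions, not the ticket's actual setup:

```python
import time
import numpy as np
import scipy.fft

rng = np.random.default_rng(0)
a = rng.standard_normal((2048, 2048)) + 1j * rng.standard_normal((2048, 2048))

for name, fft2 in [("numpy", np.fft.fft2),
                   ("scipy", scipy.fft.fft2),
                   # One knob worth checking before dropping to C/Cython:
                   ("scipy, all cores", lambda x: scipy.fft.fft2(x, workers=-1))]:
    t0 = time.perf_counter()
    for _ in range(10):
        fft2(a)  # time the forward 2-D transform, averaged over 10 runs
    print(f"{name}: {(time.perf_counter() - t0) / 10:.4f} s per transform")
```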
| Comment by ncaplar [ 12/Feb/19 ] |
I recommend closing this ticket. Some speed-up has been achieved through the evolution of the code. If I find that I am critically blocked by the speed of my code, I recommend opening a new issue.
| Comment by hassan [ 18/Mar/19 ] |
Agree with Neven's last comment: some speed improvements have been made, and further work on speed performance will be captured in future tickets.