[PIPE2D-614] Illumination estimate of the pupil should be constant across different defocuses for single spot Created: 29/Jun/20  Updated: 05/Jan/21  Resolved: 20/Nov/20

Status: Done
Project: DRP 2-D Pipeline
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: ncaplar Assignee: ncaplar
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Example_1d_extraction_large.png     PNG File Example_1d_extraction.png     PNG File Example_final_defocused.png     PNG File Example_final_focused.png     PNG File Example.png     PNG File focus1d_large.png     PNG File focus1d.png     PNG File focus_log.png     PNG File focus.png     PNG File m3_example.png     PNG File m4_example.png     PNG File p3_example.png     PNG File p4_example.png     PNG File Quality.png    
Issue Links:
Blocks
is blocked by PIPE2D-631 Illumination constant defocuses - im... Done
Relates
relates to PIPE2D-654 Create post-stamp images for Subaru d... Done
relates to PIPE2D-671 Analyze Subaru images and create a 2d... Done
relates to PIPE2D-670 Improve modeling of pupil illuminatio... Won't Fix
relates to PIPE2D-406 Skip parts of PSF modeling which are ... Done
Story Points: 6
Sprint: 2DDRP-2021 A

 Description   

Pupil illumination is the same at different levels of defocus; only the wavefront changes. At the moment, I do an iterative estimate which fits each spot at each defocus separately. This sometimes produces a consistent estimate of the illumination, but sometimes not. In particular, the FRD estimate is often discrepant at different levels of defocus. Example.png shows an example of two spots: in the upper row, the estimate is inconsistent at the two sides of defocus (the frd_s values are very different); the lower row shows an example where they are consistent. The estimation code should handle this properly, by analyzing the images at the same time.



 Comments   
Comment by ncaplar [ 18/Aug/20 ]

I have uploaded images showing the current state of the processing. All images are analyzed at once (although this can still be improved, as discussed below). The code uses an algorithm from Tokovinin & Heathcote 2006 to search for the best wavefront solution. This is the code I mentioned in PIPE2D-354 (and it is the basis for PIPE2D-553, even though it is not mentioned explicitly there).

Quality.png shows the reduced chi^2 of the images as a function of defocus. It shows the maximal chi^2 (i.e., effectively very close to just np.mean(sci_image**2/var_image)) and the chi^2 after the model is subtracted. I am also showing a line marking an improvement of a log-factor of 2.3, which is roughly the level of improvement I have been getting with single-image fitting.
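For concreteness, a minimal sketch of the two quantities plotted there, assuming sci_image and var_image are the science and variance arrays (the function name is illustrative, not the pipeline's actual API):

    import numpy as np

    def reduced_chi2(sci_image, var_image, model_image=None):
        # With no model this is the "maximal" chi^2 quoted above, effectively
        # np.mean(sci_image**2 / var_image); with a model it is the chi^2
        # after the model has been subtracted.
        model = 0.0 if model_image is None else model_image
        return np.mean((sci_image - model) ** 2 / var_image)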

I am then showing several images of the fitting at different defocus positions. One can see that I am missing some global dependence: observe how there is a systematic trend in the residuals (more red in the bottom left and top right, more blue in the bottom right and top left).

 

I am then showing the results in focus, in 2d and in 1d. Even though this is a very high signal-to-noise spot, so some deviation is expected, the results can certainly be improved.

 

At the moment the code still effectively analyzes individual images, suggests solutions for each image, and then interpolates the solution before going to the next step in the iteration. The proper way would be to analyze all images at once. 

Comment by ncaplar [ 20/Nov/20 ]

I have implemented a version of the code that is able to analyze multiple images at once in a relatively economical and sustainable fashion.

The code works in several steps:

1. Given the initial guess for the parameters, find the wavefront that maximizes the likelihood over all images. To do that, I move the wavefront parameters one by one and record the change that each move produces in the images. I then solve the linear equation that maximizes the likelihood. This assumes that the steps are small enough that the changes are linear, and that we are close enough to the real solution that making these small steps makes sense. This is a relatively expensive step, as creating the images for the change of each parameter takes time (around 30 to 40 seconds when working on 40 cores). A minimal sketch of this linearized update is given after this list.

2. There is an optimization algorithm that explores the non-wavefront parameters. While probing the non-wavefront parameters, I also try to find the best wavefront parameters without recalculating the images, i.e., assuming that the changes due to wavefront-parameter moves are the same as those calculated in step 1.

3. If parameters better than the initial ones have been found, go back to step 1 and update the full result. Iterate until converged.
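A minimal sketch of the linearized update in step 1, assuming a hypothetical render_images(params) that produces one model image per defocus position (the function name, the finite-difference step size, and the use of numpy's least-squares solver are illustrative, not the pipeline's actual implementation):

    import numpy as np

    def linearized_wavefront_update(params, render_images,
                                    sci_images, var_images, step=1e-3):
        # One linearized (Gauss-Newton style) update of the wavefront
        # parameters, fitting all defocus images at once.
        #   params        : 1d array of wavefront coefficients
        #   render_images : hypothetical function mapping params -> list of
        #                   model images, one per defocus position
        #   sci_images    : list of observed images
        #   var_images    : list of per-pixel variance images
        model0 = render_images(params)

        # Stack all images into one inverse-sigma-weighted data vector.
        w = np.concatenate([1.0 / np.sqrt(v.ravel()) for v in var_images])
        r = np.concatenate([(s - m).ravel()
                            for s, m in zip(sci_images, model0)]) * w

        # Move each parameter one by one and record the change it produces
        # in the images (the expensive part: one full rendering per parameter).
        J = np.empty((r.size, params.size))
        for i in range(params.size):
            p = params.copy()
            p[i] += step
            model_i = render_images(p)
            dm = np.concatenate([(mi - m0).ravel()
                                 for mi, m0 in zip(model_i, model0)]) / step
            J[:, i] = dm * w

        # Solve the linear system for the step that best reduces the weighted
        # residuals, assuming the response stays linear over these small moves.
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        return params + delta

Because J is kept, step 2 can re-evaluate candidate wavefront solutions for new non-wavefront parameters without re-rendering the images, as described above.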

Comment by ncaplar [ 20/Nov/20 ]

Another improvement: I have discovered that the 3 struts that hold the detector are not of the same width. In previous versions of the algorithm, they were fixed to be the same, which was creating large problems for the algorithm. I am still trying to understand which numbers should be inserted, but a preliminary test in which the vertical strut is 33% wider than the other 2 struts gives a significant improvement.
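As a toy illustration of the strut change, a pupil mask where each strut gets its own width and the vertical strut is roughly 33% wider than the other two (grid size, widths, and angles are illustrative; the real pupil model is part of the fitting code):

    import numpy as np

    def pupil_mask(n=256, strut_widths=(0.03, 0.03, 0.04),
                   angles_deg=(90.0, 210.0, 330.0)):
        # Unit-radius pupil on an n x n grid, with three struts running from
        # the centre outward; widths are in units of the pupil radius.
        # Here the vertical strut (90 deg) is ~33% wider than the other two.
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        mask = (x**2 + y**2) <= 1.0
        for width, a in zip(strut_widths, np.deg2rad(angles_deg)):
            along = x * np.cos(a) + y * np.sin(a)    # position along the strut
            across = -x * np.sin(a) + y * np.cos(a)  # perpendicular distance
            mask &= ~((along >= 0) & (np.abs(across) < width / 2))
        return mask.astype(float)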

Comment by ncaplar [ 20/Nov/20 ]

I have also added images showing the improved results:

1. Example_final_defocused - 2d results for one of the spots when defocused
2. Example_final_focused - 2d results for the same spot in focus
3. Example_1d_extraction_large - 1d result extraction
4. Example_1d_extraction - 1d result extraction, zoomed 

Comment by ncaplar [ 20/Nov/20 ]

I will now close this ticket and use this algorithm to analyze defocused data from Subaru. 

The only improvement I hope to make is a more accurate description of struts, based on conversations with Jim and fmadec. I will open a separate ticket for this. 
