[PIPE2D-631] Illumination constant defocuses - improvement of the analysis Created: 03/Sep/20 Updated: 05/Jan/21 Resolved: 20/Nov/20 |
|
| Status: | Done |
| Project: | DRP 2-D Pipeline |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Normal |
| Reporter: | ncaplar | Assignee: | ncaplar |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: | |
| Issue Links: | |
| Story Points: | 6 |
| Sprint: | 2DDRP-2021 A |
| Description |
|
As mentioned in the comments of I am currently implementing this algorithm.
|
| Comments |
| Comment by ncaplar [ 15/Sep/20 ] |
|
I have implemented an analysis that jointly fits all of the images at once, using the Tokovinin algorithm (see comments in |
| Comment by rhl [ 15/Sep/20 ] |
|
Is this all images for one (fibre, wavelength) or all images for one fibre, or for the whole focal plane? I suspect the first. |
| Comment by ncaplar [ 15/Sep/20 ] |
|
Yes, it is the first: 9 images go into this evaluation. In this case, there are 3 images on one side of defocus (+4 mm, +3.5 mm, +3 mm slit movement), 3 around focus (+0.5, 0, -0.5 mm), and 3 from the other side of defocus (-4 mm, -3.5 mm, -3 mm). It is quite fast, converging in about an hour on 120 cores - but that is because it is a brute-force, straightforward algorithm that converges only linearly and gets stuck in odd local minima. |
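The joint fit described above can be sketched as a single least-squares problem whose residual vector concatenates all 9 defocus positions, so one shared parameter set must explain every image simultaneously. This is a minimal illustration only: `render_donut` is a hypothetical, linear stand-in for the real donut renderer, and the coefficient names are invented for the sketch.

```python
import numpy as np
from scipy.optimize import least_squares

# Slit positions used in the ticket: 3 per defocus side plus 3 near focus.
DEFOCUS_MM = [+4.0, +3.5, +3.0, +0.5, 0.0, -0.5, -3.0, -3.5, -4.0]

def render_donut(coeffs, defocus_mm, shape=(16, 16)):
    """Hypothetical stand-in for the real optical model: renders one donut
    for shared parameters `coeffs` at a given slit defocus. Deterministic
    per defocus so the fake 'observations' below are reproducible."""
    rng = np.random.default_rng(abs(int(defocus_mm * 10)))
    base = rng.normal(size=shape)
    return base * (1.0 + 0.1 * coeffs[0]) + 0.05 * coeffs[1] \
        + 0.02 * coeffs[2] * defocus_mm

def joint_residuals(coeffs, observed_images):
    # One residual vector spanning all 9 images: the shared parameters
    # cannot fit one defocus position at the expense of another.
    res = [render_donut(coeffs, d) - obs
           for d, obs in zip(DEFOCUS_MM, observed_images)]
    return np.concatenate([r.ravel() for r in res])

# Fake "observed" data generated from known coefficients, for illustration.
true_coeffs = np.array([0.2, -0.1, 0.05])
observed = [render_donut(true_coeffs, d) for d in DEFOCUS_MM]

fit = least_squares(joint_residuals, x0=np.zeros(3), args=(observed,))
print(fit.x)
```

Because the toy model is linear, the solver recovers the input coefficients essentially exactly; the real renderer is nonlinear, which is where the slow linear convergence and local minima mentioned above come from.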
| Comment by ncaplar [ 13/Oct/20 ] |
|
Below I give a short overview of work I did last week. Unfortunately, I think this was somewhat of a red herring, and in the next post I will describe some of my current ideas. I wanted to see if I could determine the wavefront needed to explain the observed data by changing individual pixels in the wavefront. So instead of modifying a relatively small set of Zernike parameters, I modified a large number of individual pixels in the wavefront. In this context, I split the wavefront into 51x51 pixels. I perturbed the wavefront by a small amount in each of those pixels and recorded the change that this causes in the model image. After that, I looked for the linear combination of these perturbations that minimizes the residual against the input science image. One example of this predicted improvement is shown in "Linear_prediction_of_change_to_model.png". "Wavefront_pixel_change" shows the change to the wavefront that I applied in order to predict the improved residual. "Actual_change" shows the result of actually applying this change to the wavefront. As we can see, the outcome is very different from our nice linear prediction. I also tried to fit this proposed wavefront directly with Zernike polynomials in order to get a smooth solution to the problem. This is shown in "Zernike_interpolation_to_wavefront". The results are again unsatisfactory (Actual_change_after_Zernike_wavefront_aprox). |
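The pixel-by-pixel procedure above amounts to building a finite-difference Jacobian of the model image with respect to wavefront pixels and solving a linear least-squares problem for the pixel moves. A minimal sketch, with a toy linear forward model in place of the real renderer (here the linear prediction works exactly; with the real nonlinear renderer it need not, which is the failure seen in the attachments). The grid is smaller than the 51x51 used in the ticket to keep the sketch fast; all names are hypothetical.

```python
import numpy as np

N = 17  # 51 in the ticket; smaller here so the N*N perturbations stay cheap
rng = np.random.default_rng(0)
A = rng.normal(size=(N * N, N * N)) / N  # hypothetical linear response matrix

def model_image(wavefront):
    """Toy forward model: wavefront map -> flattened model image."""
    return A @ wavefront.ravel()

wavefront0 = rng.normal(size=(N, N))
science = model_image(wavefront0 + 0.01 * rng.normal(size=(N, N)))

# Finite-difference Jacobian: perturb each wavefront pixel in turn and
# record the change in the model image (the expensive step: N*N renders).
eps = 1e-3
base = model_image(wavefront0)
J = np.empty((base.size, N * N))
for k in range(N * N):
    dw = np.zeros(N * N)
    dw[k] = eps
    J[:, k] = (model_image(wavefront0 + dw.reshape(N, N)) - base) / eps

# Linear combination of pixel moves that best cancels the residual.
residual = science - base
delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
predicted = base + J @ delta  # the "linear prediction of change to model"
print(np.linalg.norm(science - predicted))
```

A smooth solution in the spirit of "Zernike_interpolation_to_wavefront" would then project `delta.reshape(N, N)` onto a truncated Zernike basis before re-rendering.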
| Comment by ncaplar [ 13/Oct/20 ] |
|
At the moment I think the problem is that I am fitting wavefront parameters and pupil parameters separately. So, I first propose a change to the wavefront, and then I move the pupil parameters. What happens here is that a) the initial guess for the pupil parameters is a little bit off (as one can see from the struts and detector edges being visible in the residuals), b) I propose a change to the wavefront parameters, which try very hard to compensate for these differences from the struts and detector, and c) when I then try to fit the pupil parameters, they fail to reach the correct solution because the wavefront parameters have been stretched so hard to compensate. Obviously, these need to be fit together. I have avoided that so far because changes in the pupil illumination parameters do not have the nice linear properties that wavefront parameters have (i.e., changing the amplitude of a Zernike parameter changes the flux in the final image at the same spots in a linear fashion, as long as the change is small). Moving the detector to the left or right does not have the same property. I think the solution is to determine which parts of the donut are obscured when moving the pupil parameters (e.g., moving the detector), so I have a shortcut for determining the effect that changing those parameters has on the final image, without having to regenerate the whole image each and every time the detector moves in the fitting procedure.
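The proposed shortcut can be sketched as follows: precompute each pupil sample's contribution to the image once, and treat a pupil-parameter move (e.g., a detector edge shift) as a change of a 0/1 transmission mask over those samples, so updating the image is just a re-mask and re-sum rather than a full re-render. This assumes the image is well approximated as a sum over pupil samples weighted by their transmission; the geometry and all names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, img_size = 500, 64

# Pupil sample positions in normalized pupil coordinates [-1, 1].
xy = rng.uniform(-1.0, 1.0, size=(n_samples, 2))

# Precomputed contribution of each pupil sample to the image (expensive,
# done once). In the real model this would come from optical propagation.
contrib = rng.normal(size=(n_samples, img_size, img_size))

def transmission(xy, detector_edge_x):
    # 1 where the pupil sample is unobscured; a moving "detector edge"
    # blocks everything with x > detector_edge_x. Toy geometry only.
    return (xy[:, 0] <= detector_edge_x).astype(float)

def image_for_edge(detector_edge_x):
    # Cheap update: mask + weighted sum, no re-rendering of contributions.
    t = transmission(xy, detector_edge_x)
    return np.tensordot(t, contrib, axes=1)

img_a = image_for_edge(0.8)
img_b = image_for_edge(0.7)  # moving the edge only re-masks and re-sums
print(img_a.shape)
```

The point of the design is that the cost of a pupil-parameter step drops from one full image render to one masked sum, making a joint wavefront-plus-pupil fit affordable inside the inner loop.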
|
| Comment by ncaplar [ 20/Nov/20 ] |
|
summarized in |