[PIPE2D-411] reduceArc.py does not work with --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True Created: 27/Apr/19  Updated: 01/May/19  Resolved: 30/Apr/19

Status: Done
Project: DRP 2-D Pipeline
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: ncaplar Assignee: price
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File HgArFeb.png     PNG File KrFeb.png     PNG File NeonApr.png     PNG File NeonFeb.png     File pfs_Apr30_HgAr_Feb copy.sh    
Story Points: 2
Sprint: 2DDRP-2019 D
Reviewers: ncaplar

 Description   

reduceArc.py seems to fail with the config parameters

 --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True

Note that at the moment reduceArc passes without these config parameters specified, but it creates a psfArm file that is 0 bytes for the first visit id. If more visit ids are passed, the psfArm files created for the non-initial visits have non-zero size. This is described in the comments of PIPE2D-339; perhaps fixing that issue will fix this one as well.

With pfs_pipe2d 5.0-10-gff9a74e and a checkout of tickets/PIPE2D-404, on tigress, this works:

reduceArc.py /tigress/ncaplar/ReducedData/HgArFeb_2019 --calib /tigress/ncaplar/ReducedData/HgArFeb_2019/CALIB --rerun Apr25_2019_reduce_subtract/arc --id visit=11748..11753 -j 20

while this does not:

reduceArc.py /tigress/ncaplar/ReducedData/HgArFeb_2019 --calib /tigress/ncaplar/ReducedData/HgArFeb_2019/CALIB --rerun Apr25_2019_reduce_subtract/arc --id visit=11748..11753 -j 20 --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True 
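
For reference, the two overrides are boolean fields on the reduceExposure sub-task config. A minimal pex_config-style sketch of what they correspond to (the field docs and defaults here are illustrative, not the actual drp_stella definitions):

    from lsst.pex.config import Config, Field

    class ReduceExposureConfigSketch(Config):
        # Illustrative stand-in for the real ReduceExposureConfig in drp_stella
        doSubtractContinuum = Field(dtype=bool, default=False,
                                    doc="Subtract the continuum before further processing?")
        doWriteCalexp = Field(dtype=bool, default=False,
                              doc="Write the calibrated exposure (calexp)?")

On the command line, --config reduceExposure.doSubtractContinuum=True sets the corresponding field on the reduceExposure sub-task of reduceArc.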


 Comments   
Comment by price [ 27/Apr/19 ]

I believe I've fixed this on branch tickets/PIPE2D-411 of drp_stella. (There's also a tiny fix to pfs_pipe2d which is related but not necessary to fix it.)

ncaplar, please let me know if this fixes it for you.

Comment by ncaplar [ 27/Apr/19 ]

price Good news: this fixes the problem raised in PIPE2D-339 (first-visit psfArm has 0 bytes). Bad news: it does not seem to fix the problem raised in this ticket, i.e., the failure when one tries to run reduceArc with

reduceExposure.doSubtractContinuum=True 

I have placed the output, which ends with reduceArc FATAL, at https://gist.github.com/nevencaplar/6b618c5360ac84f1c3751d2969f1b1ea

NOTE: I only checked out the ticket branch of drp_stella, and left pfs_pipe2d at tickets/PIPE2D-404.

Comment by price [ 27/Apr/19 ]

I can't reproduce this with the simulated data, so it seems to be peculiar to the LAM data. The error seems to be due to a bad wavelength solution, and because this is before we do a fit on the arc, I'm suspicious of the detectorMap. Which detectorMap are you using?
I see two in your calibs directory, but according to the calib registry the only one active is pfsDetectorMap-011748-r1.fits, and that one is indeed non-monotonic. That looks like it was put together from LAM data, but it's not clear to me how it was made. Perhaps it was over-fit or there weren't a lot of lines, leading to non-monotonicity at the ends? Do you have the log from its creation?

>>> import numpy as np
>>> import pfs.drp.stella
>>> detMap = pfs.drp.stella.DetectorMap.readFits("DETECTORMAP/pfsDetectorMap-011748-r1.fits")
>>> [np.all(ww[1:] - ww[:-1] > 0) or np.all(ww[1:] - ww[:-1] < 0) for ww in detMap.wavelength]
[True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, 
True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]
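
(For reference, the same monotonicity check wrapped as a small helper; a minimal sketch that uses only the DetectorMap.readFits and .wavelength access shown above:)

    import numpy as np
    import pfs.drp.stella

    def monotonicFibers(detMapPath):
        # Return, per fiber, whether the wavelength solution is strictly monotonic
        detMap = pfs.drp.stella.DetectorMap.readFits(detMapPath)
        diffs = [np.diff(ww) for ww in detMap.wavelength]
        return np.array([np.all(dd > 0) or np.all(dd < 0) for dd in diffs])

    # e.g.: monotonicFibers("DETECTORMAP/pfsDetectorMap-011748-r1.fits").all()
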
Comment by ncaplar [ 28/Apr/19 ]

I am not surprised that there are differences between the simulated and LAM data.

I have placed the whole script at https://gist.github.com/nevencaplar/6a01956f33bcc5e249c57f674f8ee67d; I should have done so immediately. As you can see inside it, I am using

ingestCalibs.py $TARGET --calib $TARGET/CALIB --validity 1800 \
		$OBS_PFS_DIR/pfs/camera/detectorMap-sim-1-r.fits --mode=copy --config clobber=True || exit 1

I was under the impression that this is what we were supposed to use as a starting point. I believe that naoki.yasuda is also using detectorMap-sim-1-r?

Final note: as this is a script modified from integration_test, there is a line inside with

rm -rf $TARGET

which means that if you just use it out of the box, you will recreate everything from scratch, which might not be something that you wish to do.

I do not have any extra files outside the directory (/tigress/ncaplar/ReducedData/HgArFeb_2019/) specified in the gist from the previous comment (https://gist.github.com/nevencaplar/6b618c5360ac84f1c3751d2969f1b1ea).

Comment by price [ 30/Apr/19 ]

Ah, obs_pfs/pfs/camera/detectorMap-sim-1-r.fits is bad. I hadn't remembered that it was there when I discovered it had problems. The detector maps on master in drp_stella_data are better. I'll try them out and see how they do in this case.

Comment by price [ 30/Apr/19 ]

I copied /tigress/ncaplar/ReducedData/HgArFeb_2019/CALIB, deleted the detectorMap files and entries in the calibRegistry.sqlite3 and then:

pprice@tiger2-sumire:/tigress/pprice/pipe2d-411 $ ingestCalibs.py /tigress/ncaplar/ReducedData/HgArFeb_2019 --calib CALIB ../drp_stella_data/raw/detectorMap-sim-*.fits --validity 1000 --mode=copy
pprice@tiger2-sumire:/tigress/pprice/pipe2d-411 $ reduceArc.py /tigress/ncaplar/ReducedData/HgArFeb_2019 --calib CALIB --output out --id visit=11748..11753 -j 8 --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True

And that worked just fine. I'll copy the good detector maps over, and we should be good to go.
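
(If anyone repeats this, clearing the registry entries could look roughly like the sketch below; the detectormap table name is an assumption about the calib registry schema, so inspect the tables first:

    import sqlite3

    with sqlite3.connect("CALIB/calibRegistry.sqlite3") as db:
        # List the tables to confirm the actual name before deleting anything
        print(db.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
        db.execute("DELETE FROM detectormap")  # assumed table name

followed by removing the corresponding pfsDetectorMap-*.fits files from the calib directory.)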

Comment by price [ 30/Apr/19 ]

OK, ncaplar, this is ready for review, please.

price@MacBook:~/pfs/obs_pfs/pfs/camera (tickets/PIPE2D-411=) $ cd ~/pfs/obs_pfs/
price@MacBook:~/pfs/obs_pfs (tickets/PIPE2D-411=) $ git sub
commit a1553397b7684c97ec8e525ec6463d75d7163587 (HEAD -> tickets/PIPE2D-411, origin/tickets/PIPE2D-411)
Author: Paul Price <price@astro.princeton.edu>
Date:   Mon Apr 29 15:07:25 2019 -0400

    replace bad detectorMap with better ones
    
    The pfs/camera/detectorMap-sim-1-r.fits file comes from when I
    was over-fitting the wavelength solution, and it has
    non-monotonic wavelength solutions that break continuum
    subtraction in reduceArc (reduceExposure.doSubtractContinuum=True).
    
    These detectorMaps (red and blue) were generated from the
    simulator, and I've found they are much more reliable than
    the old one.

 pfs/camera/detectorMap-sim-1-r.fits | Bin 80205120 -> 0 bytes
 pfs/camera/detectorMap-sim-b1.fits  | Bin 0 -> 1465920 bytes
 pfs/camera/detectorMap-sim-r1.fits  | Bin 0 -> 1431360 bytes
 3 files changed, 0 insertions(+), 0 deletions(-)



price@MacBook:~/pfs/drp_stella (tickets/PIPE2D-411 %=) $ git sub
commit 149a73f1b06165d0d899ca7f085817c067775933 (HEAD -> tickets/PIPE2D-411, origin/tickets/PIPE2D-411)
Author: Paul Price <price@astro.princeton.edu>
Date:   Fri Apr 26 15:22:07 2019 -0400

    SpectrumSet: fix writeFits when used through the butler
    
    The butler gives us a temporary filename that we're supposed to
    write to, and then it moves it to the correct place. We were
    writing to the correct place, and then the empty temporary file
    was being written over the top. Instead, write to the filename
    we're given.

 python/pfs/drp/stella/SpectrumSetContinued.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
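
The fix follows the butler persistence contract described in the commit message; a minimal sketch of the pattern (the class and serialisation helper below are illustrative, not the actual drp_stella code):

    class SpectrumSetSketch:
        # Illustrative stand-in for a dataset written through the butler

        def writeFits(self, filename):
            # The butler hands us a temporary filename and renames it into
            # place after we return, so write to exactly the name we are
            # given rather than recomputing the final location ourselves.
            with open(filename, "wb") as fd:
                fd.write(self.toBytes())  # hypothetical serialisation helper
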
Comment by ncaplar [ 30/Apr/19 ]

1. I have placed the outputs of the pipeline for all 4 datasets in the attached figures.

2. I ran the whole pipeline again for the HgAr data, for which I want to run reduceArc.py with

reduceExposure.doSubtractContinuum=True

I had to run it again because I inadvertently deleted the detectorMap tables when messing with calibRegistry.sqlite3.

With the new fix I still get errors; the final output is at https://gist.github.com/nevencaplar/22769c0b049aef709f3d177f2925252a. I have also attached the whole script that I used (pfs_Apr30_HgAr_Feb copy.sh).

The only difference is that I ran without flat fielding, as that takes a long time. I would not expect this to create a problem. I will let the pipeline with flat fielding run overnight and report whether it passes.

Comment by ncaplar [ 30/Apr/19 ]

The code passes. The remaining problems are to be captured in a new ticket.

Comment by price [ 30/Apr/19 ]

Thanks Neven!

Everything's merged to master.
