[PIPE2D-375] Investigate reduceArc failures on recent LAM data Created: 22/Feb/19  Updated: 15/Mar/19  Resolved: 15/Mar/19

Status: Done
Project: DRP 2-D Pipeline
Component/s: None
Affects Version/s: None
Fix Version/s: 6.0

Type: Story Priority: Normal
Reporter: price Assignee: price
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Relates
relates to PIPE2D-379 Create set of cutouts to be used to m... Done
Story Points: 2
Sprint: 2019 B, 2DDRP-2019 C
Reviewers: hassan

 Description   

ncaplar reports problems running reduceArc.py on recent LAM data:

  • Raw data in /tigress/HSC/PFS/LAM/raw/2019-02-02
  • bias=11835..11849, dark=11850..11879, ditheredflats=11535..11663:3, flats=11530..11534, science=11664..11834
  • reduceArc.py $TARGET --calib $TARGET/CALIB --rerun $RERUN/arc --id visit=11748 -j $CORES --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True
  • The visit numbers were deduced from https://people.lam.fr/madec.fabrice/pfs/ait_logbook_SM1.html
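
For concreteness, here is a minimal expansion of the reduceArc.py command above, assuming the same butler repository root used in the comments below; the rerun name and core count are placeholders, not values taken from the report:

TARGET=/tigress/HSC/PFS/LAM     # butler repository root (raw data under raw/2019-02-02)
RERUN=ncaplar/pipe2d-375        # placeholder rerun name
CORES=16                        # placeholder core count
reduceArc.py $TARGET --calib $TARGET/CALIB --rerun $RERUN/arc --id visit=11748 -j $CORES --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True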

It's possible that PIPE2D-319 fixes this without any further effort.



 Comments   
Comment by ncaplar [ 22/Feb/19 ]

Note that the science data (visits 11664..11834) is the whole defocused sequence, and I am not sure whether the code can handle defocused data. Use visit 11748 for in-focus data (this is the visit in the reduceArc.py command above).

Comment by price [ 28/Feb/19 ]

After a bunch of mucking around with ingesting images on the cluster, I ran the following:

constructBias.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/bias --cores 16 --job bias --id visit=11835..11849
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/bias/BIAS/pfsBias-2019-02-02-0-r1.fits
constructDark.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/dark --cores 16 --job dark --id visit=11850..11879
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/dark/DARK/pfsDark-2019-02-02-0-r1.fits
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/pprice/drp_stella_data/raw/detectorMap-sim-1-r.fits --mode=copy
constructFiberFlat.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/flat --cores 16 --job flat --id visit=11535..11663:3
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/flat/FLAT/pfsFiberFlat-2019-02-01-011535-r1.fits
constructFiberTrace.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/fiberTrace --cores 1 --job fiberTrace --id visit=11530
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/fiberTrace/FIBERTRACE/pfsFiberTrace-2019-02-01-011530-r1.fits

reduceArc.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/arc --id visit=11748 --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True

That last one fails with the following error:

reduceArc FATAL: Failed on dataId={'visit': 11748, 'dateObs': '2019-02-02', 'site': 'L', 'category': 'A', 'expId': 11748, 'arm': 'r', 'spectrograph': 1, 'field': 'ARC', 'ccd': 1, 'filter': 'r', 'expTime': 14.998, 'dataType': 'arc', 'taiObs': '2019-02-02', 'pfiDesignId': 0, 'slitOffset': 0.0}: AssertionError: Monotonic
Traceback (most recent call last):
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/pipe_base/16.0+1/python/lsst/pipe/base/cmdLineTask.py", line 392, in __call__
    result = task.run(dataRef, **kwargs)
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/reduceArcTask.py", line 146, in run
    results = self.reduceExposure.run([dataRef])
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/reduceExposure.py", line 220, in run
    continua = self.fitContinuum.run(spectra)
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 64, in run
    continuum = self.wrapArray(self.fitContinuum(spec), spec.fiberId)
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 86, in fitContinuum
    good = self.maskLines(spectrum)
  File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 186, in maskLines
    assert np.all(delta >= 0) or np.all(delta <= 0), "Monotonic"
AssertionError: Monotonic
reduceArc FATAL: Failed to process at least one of the components
Comment by price [ 01/Mar/19 ]

I found a couple of problems.

First, this needs an updated detectorMap: I believe the above error is caused by using one I've over-tuned, which is non-monotonic at the very ends. I've generated new detectorMaps from the simulator, and will put them into drp_stella_data as part of the SIM2D-94/SIM2D-99/PIPE2D-344/PIPE2D-316 effort.
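
To illustrate, here is a minimal numpy sketch of the kind of check that trips in fitContinuum.maskLines; the wavelength values are made up, and the point is just that a wavelength solution that turns over at the ends fails the test:

import numpy as np

# Hypothetical wavelength solution for one fiber: increasing over most of the
# detector, but turning over in the last few pixels, as an over-tuned
# detectorMap can do at the ends.
wavelength = np.concatenate([np.linspace(600.0, 970.0, 4093),
                             np.array([970.1, 970.05, 969.9])])

delta = wavelength[1:] - wavelength[:-1]
# This is the condition asserted in maskLines ("Monotonic"):
print(np.all(delta >= 0) or np.all(delta <= 0))  # False --> AssertionError in the pipeline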

Next, the list of reference lines was extremely sparse (one or two lines) because the line-list reading code thresholds on the intensity given in the NIST list, and the threshold was set above the intensity of most of the lines. Dropping the threshold gives us lots of lines.
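
As a rough sketch of the effect (the wavelengths and intensities here are invented, not the real NIST values):

# Hypothetical (wavelength [nm], NIST-style relative intensity) pairs for an HgAr lamp.
lines = [(696.5, 15), (706.7, 30), (738.4, 20), (763.5, 25),
         (772.4, 10), (794.8, 20), (811.5, 35), (912.3, 25), (1014.0, 200)]

def usable(lines, minIntensity):
    """Return the reference lines that survive the intensity threshold."""
    return [(wavelength, intensity) for wavelength, intensity in lines if intensity >= minIntensity]

print(len(usable(lines, 100)))  # threshold above most lines: only 1 line survives
print(len(usable(lines, 0)))    # threshold dropped: all 9 lines are available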

Once that was sorted, I tried both the vanilla line identification and the new one on PIPE2D-319. I found that the vanilla version is far superior in terms of the number of lines found and the RMS. I believe this is due to the enormous number of Hg lines, many of which are pretty close to each other; I had previously tested PIPE2D-319 with Ne.

Comment by price [ 09/Mar/19 ]

This works with the changes on PIPE2D-316 (which should be merged soon; note that it uses the vanilla line-identification algorithm) and the fixed detectorMap, provided you specify --config minArcLineIntensity=0.
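
Concretely, the working invocation should look something like the following (same repository and rerun as in the 28/Feb comment, combining the earlier command with the new config override):

reduceArc.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/arc --id visit=11748 --config minArcLineIntensity=0 reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True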

Comment by ncaplar [ 09/Mar/19 ]

As discussed on Slack, the Neon data to be tested are:

Comment by price [ 15/Mar/19 ]

This includes a single commit that changes the default reference line intensity threshold to be suitable for HgAr as well as Ne.

Comment by price [ 15/Mar/19 ]

Merged to master.

Comment by hassan [ 15/Mar/19 ]

The fix has been tested against the Neon data Neven mentioned in his comment: https://pfspipe.ipmu.jp/jira/browse/PIPE2D-375?focusedCommentId=15091&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15091
