[PIPE2D-375] Investigate reduceArc failures on recent LAM data Created: 22/Feb/19 Updated: 15/Mar/19 Resolved: 15/Mar/19 |
|
| Status: | Done |
| Project: | DRP 2-D Pipeline |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 6.0 |
| Type: | Story | Priority: | Normal |
| Reporter: | price | Assignee: | price |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Story Points: | 2 |
| Sprint: | 2019 B, 2DDRP-2019 C |
| Reviewers: | hassan |
| Description |
|
ncaplar reports problems running reduceArc.py on recent LAM data:
It's possible that |
| Comments |
| Comment by ncaplar [ 22/Feb/19 ] |
|
Note that science data=11664..11834 is the whole defocused sequence; I am not sure whether the code can handle defocused data. Use visit 11748 for in-focus data (which is the visit used in the reduceArc.py command). |
| Comment by price [ 28/Feb/19 ] |
|
After a bunch of mucking around with ingesting images on the cluster, I ran the following:

constructBias.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/bias --cores 16 --job bias --id visit=11835..11849
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/bias/BIAS/pfsBias-2019-02-02-0-r1.fits
constructDark.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/dark --cores 16 --job dark --id visit=11850..11879
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/dark/DARK/pfsDark-2019-02-02-0-r1.fits
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/pprice/drp_stella_data/raw/detectorMap-sim-1-r.fits --mode=copy
constructFiberFlat.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/flat --cores 16 --job flat --id visit=11535..11663:3
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/flat/FLAT/pfsFiberFlat-2019-02-01-011535-r1.fits
constructFiberTrace.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/fiberTrace --cores 1 --job fiberTrace --id visit=11530
ingestCalibs.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --validity 3000 /tigress/HSC/PFS/LAM/rerun/price/pipe2d-375/fiberTrace/FIBERTRACE/pfsFiberTrace-2019-02-01-011530-r1.fits
reduceArc.py /tigress/HSC/PFS/LAM --calib /tigress/HSC/PFS/LAM/CALIB --rerun price/pipe2d-375/arc --id visit=11748 --config reduceExposure.doSubtractContinuum=True reduceExposure.doWriteCalexp=True

That last one fails with the following error:
reduceArc FATAL: Failed on dataId={'visit': 11748, 'dateObs': '2019-02-02', 'site': 'L', 'category': 'A', 'expId': 11748, 'arm': 'r', 'spectrograph': 1, 'field': 'ARC', 'ccd': 1, 'filter': 'r', 'expTime': 14.998, 'dataType': 'arc', 'taiObs': '2019-02-02', 'pfiDesignId': 0, 'slitOffset': 0.0}: AssertionError: Monotonic
Traceback (most recent call last):
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/pipe_base/16.0+1/python/lsst/pipe/base/cmdLineTask.py", line 392, in __call__
result = task.run(dataRef, **kwargs)
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/reduceArcTask.py", line 146, in run
results = self.reduceExposure.run([dataRef])
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/reduceExposure.py", line 220, in run
continua = self.fitContinuum.run(spectra)
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 64, in run
continuum = self.wrapArray(self.fitContinuum(spec), spec.fiberId)
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 86, in fitContinuum
good = self.maskLines(spectrum)
File "/tigress/HSC/PFS/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/drp_stella/5.0/python/pfs/drp/stella/fitContinuum.py", line 186, in maskLines
assert np.all(delta >= 0) or np.all(delta <= 0), "Monotonic"
AssertionError: Monotonic
reduceArc FATAL: Failed to process at least one of the components
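
For context, the assertion that fires is in fitContinuum.maskLines and requires successive differences of an array (presumably the spectrum's wavelength solution, which comes from the detectorMap) to all share the same sign. The following is a minimal sketch of that check with an illustrative, made-up wavelength solution (not the real detectorMap or drp_stella code); it shows how a solution that turns over at the very ends trips the assertion:

import numpy as np

def check_monotonic(wavelength):
    # Same form of check as the failing assertion in maskLines:
    # all successive differences must be >= 0 or all must be <= 0.
    delta = wavelength[1:] - wavelength[:-1]
    assert np.all(delta >= 0) or np.all(delta <= 0), "Monotonic"

good = np.linspace(600.0, 970.0, 4096)  # well-behaved, strictly increasing
check_monotonic(good)                   # passes

bad = good.copy()
bad[0] = bad[1] + 0.5                   # solution turns over at the ends
bad[-1] = bad[-2] - 0.5
try:
    check_monotonic(bad)
except AssertionError as exc:
    print("AssertionError:", exc)       # prints "AssertionError: Monotonic"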
|
| Comment by price [ 01/Mar/19 ] |
|
I found a couple of problems.

First, this needs an updated detectorMap (I believe the above error is caused by using one I've over-tuned, which is non-monotonic at the very ends). I've generated new detectorMaps from the simulator, and will put them into drp_stella_data as part of the …

Next, the list of reference lines was incredibly sparse (one or two lines) because the line list reading code thresholds on the intensity in the NIST list, and the threshold was set above the intensity of most of the lines. Dropping the threshold, we get lots of lines.

Once that was sorted, I tried both the vanilla line identification and the new one on …
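
To make the line-list problem concrete, here is a minimal sketch of an intensity threshold applied to a NIST-style line list. It is illustrative only: the read_line_list helper, the wavelengths and intensities, and the threshold values are assumptions for demonstration, not the actual drp_stella reader or its defaults.

import numpy as np

# Illustrative arc line list: (wavelength [nm], relative intensity).
# Values are made up; they are not the real Ne/HgAr entries.
lines = np.array([
    (585.249, 20000.0),
    (640.225, 15000.0),
    (703.241, 8000.0),
    (724.517, 3000.0),
    (743.890, 800.0),
    (748.887, 300.0),
])

def read_line_list(lines, minIntensity):
    # Keep only lines at least as bright as minIntensity,
    # mimicking the reader's intensity threshold.
    return lines[lines[:, 1] >= minIntensity]

# A threshold set above most lines' intensities leaves a very sparse list:
print(len(read_line_list(lines, 15000.0)))  # 2
# Dropping the threshold recovers lots of lines:
print(len(read_line_list(lines, 100.0)))    # 6
|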
| Comment by price [ 09/Mar/19 ] |
|
This works with the changes on |
| Comment by ncaplar [ 09/Mar/19 ] |
|
As discussed on Slack, the Neon data to be tested are:
|
| Comment by price [ 15/Mar/19 ] |
|
This includes a single commit that changes the default reference line intensity threshold to be suitable for HgAr as well as Ne. |
| Comment by price [ 15/Mar/19 ] |
|
Merged to master. |
| Comment by hassan [ 15/Mar/19 ] |
|
The fix has been tested against the Neon data that Neven mentioned in his comment: https://pfspipe.ipmu.jp/jira/browse/PIPE2D-375?focusedCommentId=15091&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15091 |