[PIPE2D-1301] add support for reduceProfiles in generateCommands Created: 20/Sep/23 Updated: 30/Sep/23 Resolved: 30/Sep/23 |
|
| Status: | Done |
| Project: | DRP 2-D Pipeline |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Story | Priority: | Normal |
| Reporter: | price | Assignee: | sogo.mineo |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Reviewers: | price | ||||||||
| Description |
|
generateCommands.py currently uses constructFiberProfiles.py to build fiber profiles. reduceProfiles.py is an alternative fiber profile construction code that fits profiles to multiple images simultaneously, and is preferred for use on Subaru data because some cobras cannot be hidden. Please support reduceProfiles in generateCommands. |
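For illustration only, here is a minimal sketch of how a generateCommands calib block could dispatch to either profile-construction script. The function name, the block layout, and the "useReduceProfiles" key are hypothetical and do not reflect the actual generateCommands.py schema:

# Hypothetical sketch: pick the profile-construction script from the calib block.
# profilesCommand() and "useReduceProfiles" are illustrative names, not real code.
def profilesCommand(block, dataDir, calibDir, rerun):
    script = "reduceProfiles.py" if block.get("useReduceProfiles", True) else "constructFiberProfiles.py"
    cmd = [script, dataDir, f"--calib={calibDir}", f"--rerun={rerun}", "--id", block["id"]]
    for cfg in block.get("config", []):
        cmd += ["--config", cfg]
    return cmd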
| Comments |
| Comment by sogo.mineo [ 20/Sep/23 ] |
|
Does it suffice to cease using constructFiberProfiles.py and to use reduceProfiles.py instead, or should constructFiberProfiles.py be kept as an option? |
| Comment by sogo.mineo [ 20/Sep/23 ] |
|
reduceProfiles.py does not take "--batch-type=smp --cores=$n". Is it possible to modify reduceProfiles.py so that it accepts the --batch-type option, or should I make a special override for reduceProfiles.py? |
| Comment by price [ 20/Sep/23 ] |
|
I think we can replace constructFiberProfiles with reduceProfiles, but we'll need to test it on the integration test and weekly. One advantage to replacing it is that reduceProfiles writes a complete set of fiberProfiles: there's no need to run constructFiberProfiles twice and then merge the profiles. reduceProfiles uses CmdLineTask rather than being based on the usual calib construction, so it doesn't support --batch-type and --cores. |
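In practice this means the generated command needs different parallelism flags depending on the script. A minimal sketch, assuming generateCommands knows the requested core count (the function name is illustrative):

# Illustrative only: constructFiberProfiles.py is batch-based and takes
# --batch-type/--cores, while reduceProfiles.py is a plain CmdLineTask and
# takes -j for multiprocessing (as in the generated commands on this ticket).
def parallelismArgs(script, cores):
    if script == "constructFiberProfiles.py":
        return ["--batch-type=smp", f"--cores={cores}"]
    return [f"-j{cores}"]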
| Comment by sogo.mineo [ 21/Sep/23 ] |
|
Change to integration_test.yaml:
--- a/examples/integration_test.yaml
+++ b/examples/integration_test.yaml
@@ -28,15 +28,7 @@ calibBlock:
config:
- "isr.doCrosstalk=False" # No crosstalk in simulated data
fiberProfiles:
- group:
- -
- id: "field=FLAT_ODD"
- config:
- - "isr.doCrosstalk=False" # No crosstalk in simulated data
- -
- id: "field=FLAT_EVEN"
- config:
- - "isr.doCrosstalk=False" # No crosstalk in simulated data
+ id: "field=FLAT_ODD^FLAT_EVEN"
detectorMap:
id: "field=ARC"
config:
Generated command line:
reduceProfiles.py /data22a/mineo/pfswork/pfsrepos/pfs_pipe2d/TEST/INTEGRATION --calib=/data22a/mineo/pfswork/pfsrepos/pfs_pipe2d/TEST/INTEGRATION/CALIB --rerun=integration/test_calib/fiberProfiles --longlog=1 -j10 --doraise --id 'field=FLAT_ODD^FLAT_EVEN'
I get this error:

Traceback (most recent call last):
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/bin/reduceProfiles.py", line 3, in <module>
    ReduceProfilesTask.parseAndRun()
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 688, in parseAndRun
    resultList = taskRunner.run(parsedCmd)
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 240, in run
    targetList = self.getTargetList(parsedCmd)
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceProfiles.py", line 44, in getTargetList
    raise RuntimeError(f"No norm provided for spectrograph={spectrograph} arm={arm}")
RuntimeError: No norm provided for spectrograph=1 arm=b

Do I have to give "config=..." options to let reduceProfiles.py run with the integration test data? |
| Comment by price [ 21/Sep/23 ] |
|
Oh, sorry. There's an additional --normId argument, which provides a quartz exposure at full fiber density that is measured to normalise the profiles. In the case of the integration test, it should probably be something like --normId field=FLAT. |
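For reference, the "No norm provided" error above corresponds to a per-detector check along these lines; this is a paraphrase of the logic implied by the traceback, not the verbatim drp_stella source:

# Paraphrase of the check implied by the traceback: every (spectrograph, arm)
# selected by --id must have a matching --normId exposure.
def checkNorms(dataRefs, normRefs):
    norms = {(ref.dataId["spectrograph"], ref.dataId["arm"]) for ref in normRefs}
    for ref in dataRefs:
        spectrograph = ref.dataId["spectrograph"]
        arm = ref.dataId["arm"]
        if (spectrograph, arm) not in norms:
            raise RuntimeError(f"No norm provided for spectrograph={spectrograph} arm={arm}")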
| Comment by sogo.mineo [ 22/Sep/23 ] |
|
reduceProfiles.py now runs, but does not finish successfully. YAML file:
--- a/examples/integration_test.yaml
+++ b/examples/integration_test.yaml
@@ -28,15 +28,8 @@ calibBlock:
config:
- "isr.doCrosstalk=False" # No crosstalk in simulated data
fiberProfiles:
- group:
- -
- id: "field=FLAT_ODD"
- config:
- - "isr.doCrosstalk=False" # No crosstalk in simulated data
- -
- id: "field=FLAT_EVEN"
- config:
- - "isr.doCrosstalk=False" # No crosstalk in simulated data
+ id: "field=FLAT_ODD^FLAT_EVEN"
+ normId: "field=FLAT"
detectorMap:
id: "field=ARC"
config:
Generated command line:
reduceProfiles.py /data22a/mineo/pfswork/pfsrepos/pfs_pipe2d/TEST/INTEGRATION --calib=/data22a/mineo/pfswork/pfsrepos/pfs_pipe2d/TEST/INTEGRATION/CALIB --rerun=integration/test_calib/fiberProfiles --longlog=1 -j10 --doraise --id 'field=FLAT_ODD^FLAT_EVEN' --normId field=FLAT
Error message:

Traceback (most recent call last):
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-3.0.0/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-3.0.0/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 433, in __call__
    result = self.runTask(task, dataRef, kwargs)
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 504, in runTask
    return task.runDataRef(dataRef, **kwargs)
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceProfiles.py", line 190, in runDataRef
    normList = [self.processExposure(ref) for ref in normRefList]
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceProfiles.py", line 190, in <listcomp>
    normList = [self.processExposure(ref) for ref in normRefList]
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceProfiles.py", line 241, in processExposure
    return self.reduceExposure.runDataRef(dataRef)
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceExposure.py", line 261, in runDataRef
    calibs = self.getSpectralCalibs(sensorRef, exposure, pfsConfig)
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/reduceExposure.py", line 524, in getSpectralCalibs
    detectorMap = self.adjustDetectorMap.run(
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/adjustDetectorMap.py", line 69, in run
    raise RuntimeError(f"Insufficient good lines: {numGoodLines} vs {needNumLines}")
RuntimeError: Insufficient good lines: 0 vs 24
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data22a/mineo/pfswork/pfsrepos/drp_stella/bin/reduceProfiles.py", line 3, in <module>
    ReduceProfilesTask.parseAndRun()
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 688, in parseAndRun
    resultList = taskRunner.run(parsedCmd)
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 244, in run
    resultList = list(mapFunc(self, targetList))
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/pipe_base/g590c34a36e+5da9528084/python/lsst/pipe/base/cmdLineTask.py", line 47, in _runPool
    return pool.map_async(function, iterable).get(timeout)
  File "/data22a/mineo/pfswork/lsst_stack/lsst_home/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-3.0.0/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
RuntimeError: Insufficient good lines: 0 vs 24

I pushed the latest ticket branch. Do you have any suggestion? |
| Comment by price [ 23/Sep/23 ] |
|
I found two problems. First, we need to use --normId field=FLAT dither=0 in order to exclude the dithered flats. Second, I found some problems in reduceProfiles left over from an earlier API change in reduceExposure; I've put a fix on this ticket branch. |
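Continuing the hypothetical sketch from the description, the extra dither term would carry through to the generated arguments roughly like this (values taken from this ticket; the selector is split into separate key=value tokens, as on the command line):

# Illustrative continuation of the earlier sketch: the norm selector now
# excludes dithered flats, per the suggestion above.
block = {"id": "field=FLAT_ODD^FLAT_EVEN", "normId": "field=FLAT dither=0"}
args = ["--id", block["id"], "--normId"] + block["normId"].split()
# args == ['--id', 'field=FLAT_ODD^FLAT_EVEN', '--normId', 'field=FLAT', 'dither=0']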
| Comment by sogo.mineo [ 25/Sep/23 ] |
|
Thanks. The integration test now runs successfully. I have made PRs on GitHub for both pfs_pipe2d and drp_stella, but I am not sure whether I can ask you, price, to review them, since the commit to drp_stella was made by you. Whom should I ask to review them? |
| Comment by price [ 26/Sep/23 ] |
|
I tried running the weekly, since this is a big change to the workflow, and found a couple of problems. First, the configuration overrides for reduceProfiles were stripped out, which causes the weekly to fail because it can't find the NIR IPC calibrations. I've put the configuration overrides back in a new commit, but you're welcome to squash it with your commit if you like. |
| Comment by sogo.mineo [ 26/Sep/23 ] |
|
Thank you. Now I think we can close this issue. Should I ask someone else to be the reviewer? (If so, whom should I ask?) |
| Comment by price [ 26/Sep/23 ] |
|
Sorry, I'm still working on updating the weekly. I'm about to start it running before heading off to bed. I'll finish it up in the morning. |
| Comment by price [ 28/Sep/23 ] |
|
I finally got the weekly to pass. Would you please review the changes I made in pfs_pipe2d and drp_stella? |
| Comment by sogo.mineo [ 29/Sep/23 ] |
|
Thank you for your work. I made a comment on drp_stella, but everything else looks good. Could you review the commit I first made? |
| Comment by price [ 30/Sep/23 ] |
|
Your changes were great, thanks. Since everything is in order and the tests pass, I've gone ahead and merged to master. Thanks again! |