DRP 2-D Pipeline
PIPE2D-1362

Correct for the focal-plane variation of the flux calibration vector

    Details

    • Type: Task
    • Status: Done
    • Priority: Normal
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:

      Description

      After applying the fix for PIPE2D-1361, we clearly observe the spatial variation of the flux calibration vector. The variation likely has two components: a global pattern due to the global offset, rotation, and scale difference, and a fiber-to-fiber offset. We want to correct for the former with, e.g., low-order polynomials. As the seeing is a function of wavelength, a fiber offset introduces a wavelength-dependent flux loss. We thus need to allow the flux calibration vector to vary both spatially and with wavelength. The correction can be estimated by comparing the PS1 magnitudes and the synthetic PS1 magnitudes of each FLUXSTD.
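      As a rough illustration of the proposed correction (all names and the simple least-squares fitter here are hypothetical, not the drp_stella API), a low-order polynomial can be fit across the focal plane to the per-fiber offsets between PS1 and synthetic PS1 magnitudes:

      ```python
      import numpy as np

      # Simulated FLUXSTD positions (normalized focal-plane coordinates) and
      # magnitude offsets mag(PS1) - mag(synthetic PS1) for each fiber.
      rng = np.random.default_rng(0)
      x = rng.uniform(-1, 1, 50)
      y = rng.uniform(-1, 1, 50)
      offset = 0.05 + 0.02*x - 0.03*y + 0.01*x*y  # noiseless toy pattern

      # Design matrix for a 1st-order polynomial with a cross term; a real
      # correction would use a higher order and a per-wavelength fit.
      A = np.column_stack([np.ones_like(x), x, y, x*y])
      coeffs, *_ = np.linalg.lstsq(A, offset, rcond=None)

      def correction(px, py):
          """Evaluate the smooth focal-plane correction at one position."""
          return coeffs @ np.array([1.0, px, py, px*py])
      ```

      With noiseless inputs the least-squares fit recovers the generating coefficients exactly; with real data the fit would also need outlier rejection.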

        Attachments

        1. fig1_3rd_order.png (81 kB)
        2. fig1_4th_order.png (81 kB)
        3. fig1_nocorr.png (83 kB)
        4. fig2_3rd_order.png (78 kB)
        5. fig2_4th_order.png (77 kB)
        6. fig2_nocorr.png (78 kB)

            Activity

            sogo.mineo added a comment -

            I pushed another commit. I moved the FluxCalib class to focalPlaneFunction.py. Its method fitArrays() takes a parameter fitter, and all that fitArrays() does is call fitter(). After this commit, the Gen2 integration test passes.

            Gen3 test still fails:

            lsst.ctrl.mpexec.mpGraphExecutor INFO: Executed 24 quanta successfully, 0 failed and 3 remain out of total 27 quanta.
            py.warnings WARNING: /data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/utils/polynomialND.py:76: RuntimeWarning: divide by zero encountered in true_divide
              self._scale = 1.0 / (self._max - self._min)
            
            py.warnings WARNING: /data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/utils/polynomialND.py:130: RuntimeWarning: invalid value encountered in multiply
              x *= scale
            
            lsst.ctrl.mpexec.singleQuantumExecutor ERROR: Execution of task 'fitFluxCal' on quantum {instrument: 'PFS-F', exposure: 24, ...} failed. Exception RuntimeError: No good points
            Process task-{instrument: 'PFS-F', exposure: 24, ...}:
            Traceback (most recent call last):
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-3.0.0/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
                self.run()
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-3.0.0/lib/python3.8/multiprocessing/process.py", line 108, in run
                self._target(*self._args, **self._kwargs)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/mpGraphExecutor.py", line 143, in _executeJob
                quantumExecutor.execute(taskDef, quantum, butler)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/singleQuantumExecutor.py", line 135, in execute
                result = self._execute(taskDef, quantum, butler)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/singleQuantumExecutor.py", line 212, in _execute
                self.runQuantum(runTask, quantum, taskDef, butler)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/singleQuantumExecutor.py", line 580, in runQuantum
                task.runQuantum(butlerQC, inputRefs, outputRefs)
              File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/fitFluxCal.py", line 761, in runQuantum
                outputs = self.run(**inputs, pfsArmList=armInputs.pfsArm, sky1dList=armInputs.sky1d)
              File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/fitFluxCal.py", line 703, in run
                fluxCal = self.calculateCalibrations(pfsConfig, pfsMerged, pfsMergedLsf, references)
              File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/fitFluxCal.py", line 900, in calculateCalibrations
                return self.fitFocalPlane.run(
              File "/data22a/mineo/pfswork/pfsrepos/drp_stella/python/pfs/drp/stella/fitFocalPlane.py", line 109, in run
                raise RuntimeError("No good points")
            RuntimeError: No good points
            lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
            Traceback (most recent call last):
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 130, in run
                script.run(qgraphObj=qgraph, **kwargs)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/cli/script/run.py", line 187, in run
                f.runPipeline(qgraphObj, taskFactory, args)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 740, in runPipeline
                executor.execute(graph, butler)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/mpGraphExecutor.py", line 373, in execute
                self._executeQuantaMP(graph, butler)
              File "/data22a/mineo/pfswork/lsst_stack/lsst_home/stack/miniconda3-py38_4.9.2-3.0.0/Linux64/ctrl_mpexec/gb02ad94e9c+e480a1db32/python/lsst/ctrl/mpexec/mpGraphExecutor.py", line 568, in _executeQuantaMP
                raise MPGraphExecutorError(message)
            lsst.ctrl.mpexec.mpGraphExecutor.MPGraphExecutorError: Task <TaskDef(pfs.drp.stella.fitFluxCal.FitFluxCalTask, label=fitFluxCal) dataId={instrument: 'PFS-F', exposure: 24, ...}> failed, exit code=1
            

            Is this error the one that price told me about a few weeks ago?
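            For reference, the divide-by-zero warning in the log above appears when a normalization range collapses to zero. A minimal sketch (the names are illustrative, not the actual polynomialND.py code) of the failure mode and a guard:

            ```python
            import numpy as np

            def make_scale(points):
                """Per-axis normalization scale for a set of sample positions.

                If every sample shares the same value along an axis, max - min
                is zero and 1/(max - min) would be infinite, later turning
                products into NaN; guard that axis instead.
                """
                lo = points.min(axis=0)
                hi = points.max(axis=0)
                span = hi - lo
                span = np.where(span == 0.0, 1.0, span)  # guard degenerate axes
                return 1.0 / span

            # The x coordinate is identical for all points (degenerate axis).
            scale = make_scale(np.array([[0.0, 1.0], [0.0, 3.0]]))
            ```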

            price added a comment -

            Yes, I think that's the same one. It would be great if you could also fix that.

            sogo.mineo added a comment -

            Interim report: I ran the integration test with the master branch and the tickets/PIPE2D-1362 branch of drp_stella, and compared the products. The products started diverging from each other at "sky1d" or "pfsMerged". But every time I ran the integration test with the master branch of drp_stella, different "sky1d" and "pfsMerged" were output (differences in FITS headers are ignored). So the variation of "sky1d" and "pfsMerged" does not seem to be the cause of the error.

            sogo.mineo added a comment -

            The problem was that the integration test contains too few flux standards to determine a 3rd-order trivariate polynomial. I made a small change to pfs_pipe2d (so that the polynomial order would be 0), and the integration test passed. price, could you review the pull request to pfs_pipe2d?
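            The arithmetic behind "too few flux standards": assuming the fit uses all monomials up to a given total degree in three variables (an assumption about the fitter, not confirmed by the ticket), the number of coefficients is a binomial coefficient, so a 3rd-order fit needs at least 20 good points while a 0th-order fit needs only 1:

            ```python
            from math import comb

            def n_coeffs(degree, nvars=3):
                """Number of monomials of total degree <= degree in nvars variables."""
                return comb(nvars + degree, nvars)

            print(n_coeffs(3))  # → 20
            print(n_coeffs(0))  # → 1
            ```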

            sogo.mineo added a comment -

            Three branches have been merged. Thanks for reviewing.


              People

              • Assignee: sogo.mineo
              • Reporter: msyktnk (Masayuki Tanaka)
              • Reviewers: price
