[PIPE2D-863] re-generate pfsConfig for blue and yellow bundles, ingest data again Created: 28/Jun/21  Updated: 29/Jun/21  Resolved: 29/Jun/21

Status: Done
Project: DRP 2-D Pipeline
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: arnaud.lefur Assignee: arnaud.lefur
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Sprint: 2DDRP-2021 A 6

 Description   

We have found that the expected illuminated fibers for the yellow and blue bundles were incorrect. Now that the dummyCableB design has been fixed, we need to re-generate the pfsConfig files and re-ingest the large dcb-21-fibers dataset (visit=51400..63009).
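
For reference, a minimal sketch of how the illuminated fibers recorded in one of the regenerated design files can be listed. This assumes the pfs.datamodel package and its PfsDesign.read(pfsDesignId, dirName) interface; the designId shown is the orange/blue/yellow dummy design written in the comments below.

from pfs.datamodel import PfsDesign

# Hypothetical check: read a regenerated dummy design and list its fiberIds.
# 0x0000100000000101 is the design written by makeDummyCableBDesign.py for orange blue yellow (see below).
design = PfsDesign.read(0x0000100000000101, dirName="/projects/HSC/PFS/Subaru/pfsConfig")
print(sorted(design.fiberId))  # the fibers this design expects to be illuminated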



 Comments   
Comment by arnaud.lefur [ 28/Jun/21 ]

The 21-fibers and 23-fibers design files have been re-generated.

(lsst-scipipe) [afur@tiger2-sumire w.2021.26]$ makeDummyCableBDesign.py --directory /projects/HSC/PFS/Subaru/pfsConfig orange blue yellow
WARNING: VerifyWarning: Card is too long, comment will be truncated. [astropy.io.fits.card]
Wrote pfsDesign-0x0000100000000101.fits
(lsst-scipipe) [afur@tiger2-sumire w.2021.26]$ makeDummyCableBDesign.py --directory /projects/HSC/PFS/Subaru/pfsConfig orange blue yellow red2 red7
WARNING: VerifyWarning: Card is too long, comment will be truncated. [astropy.io.fits.card]
Wrote pfsDesign-0x0000101000010101.fits
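
As a quick sanity check on the two IDs above (treating the dummy pfsDesignId as a per-bundle bitmask is an assumption here, not something confirmed by the tool):

# The two pfsDesignIds written above, compared bitwise.
a = 0x0000100000000101  # orange blue yellow
b = 0x0000101000010101  # orange blue yellow red2 red7
print(hex(a ^ b))    # 0x1000010000: the bits added along with red2 and red7
print((a & b) == a)  # True: the 5-bundle ID contains every bit of the 3-bundle ID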

I tried to re-ingest the data, but it fails with "already ingested":

(lsst-scipipe) [afur@tiger2-sumire ~]$ ingestPfsImages.py /tigress/HSC/PFS/Subaru/ --pfsConfigDir=/tigress/HSC/PFS/Subaru/pfsConfig --mode=link /tigress/HSC/PFS/Subaru/raw/2021-05-12/sps/PFSA051* --config clobber=True
CameraMapper INFO: Loading exposure registry from /tigress/HSC/PFS/Subaru/registry.sqlite3
ingestPfs WARN: /tigress/HSC/PFS/Subaru/raw/2021-05-12/sps/PFSA05139411.fits: already ingested: {'site': 'S', 'category': 'A', 'visit': 51394, 'filter': 'b', 'arm': 'b', 'spectrograph': 1, 'ccd': 0, 'expTime': 8.0, 'taiObs': '2021-05-12T20:51:36.570', 'field': 'SEEING_TEST', 'dataType': 'COMPARISON', 'pfsDesignId': 1099528409104, 'slitOffset': 0.0, 'lamps': 'Ar', 'dateObs': '2021-05-12', 'dither': -0.0, 'shift': -1e-05, 'focus': 0.0, 'attenuator': -9998.0, 'photodiode': -9998.0}
ingestPfs INFO: /tigress/HSC/PFS/Subaru/raw/2021-05-12/sps/PFSA05139411.fits --<link>--> /tigress/HSC/PFS/Subaru/2021-05-12/PFSA051394b1.fits
Traceback (most recent call last):
  File "/tigress/HSC/PFS/stack/20190925/stack/miniconda3-4.5.12-1172c30/Linux64/obs_pfs/w.2021.26/bin/ingestPfsImages.py", line 3, in <module>
    PfsIngestTask.parseAndRun()
  File "/tigress/HSC/PFS/stack/20190925/stack/miniconda3-4.5.12-1172c30/Linux64/pipe_tasks/18.1.0/python/lsst/pipe/tasks/ingest.py", line 416, in parseAndRun
    task.run(args)
  File "/tigress/HSC/PFS/stack/20190925/stack/miniconda3-4.5.12-1172c30/Linux64/pipe_tasks/18.1.0/python/lsst/pipe/tasks/ingest.py", line 553, in run
    self.register.addRow(registry, info, dryrun=args.dryrun, create=args.create)
  File "/tigress/HSC/PFS/stack/20190925/stack/miniconda3-4.5.12-1172c30/Linux64/obs_pfs/w.2021.26/python/lsst/obs/pfs/ingest.py", line 439, in addRow
    conn.cursor().execute(sql, values)
sqlite3.IntegrityError: UNIQUE constraint failed: raw.site, raw.category, raw.visit, raw.filter, raw.arm, raw.spectrograph, raw.pfsDesignId

price, how do you usually proceed?

Comment by price [ 28/Jun/21 ]

Are you changing the pfsDesignId? If so, we'd need to delete the ingested rows by hand. Otherwise, --config clobber=True register.ignore=True is what you want.
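
(For the record, if the pfsDesignId had changed, deleting the ingested rows by hand would look roughly like the sketch below. The raw table and its key columns come from the UNIQUE constraint in the traceback; the visit range is illustrative, and the registry should be backed up first.)

# Sketch: remove already-ingested rows for a visit range from the Gen2 registry,
# so the files can be re-ingested. Back up registry.sqlite3 before doing this.
import sqlite3

conn = sqlite3.connect("/tigress/HSC/PFS/Subaru/registry.sqlite3")
cur = conn.cursor()
# Inspect what would be removed first (columns taken from the UNIQUE constraint above).
cur.execute("SELECT visit, arm, spectrograph, pfsDesignId FROM raw "
            "WHERE visit BETWEEN ? AND ?", (51400, 63009))
for row in cur.fetchall():
    print(row)
# Once satisfied, delete them:
# cur.execute("DELETE FROM raw WHERE visit BETWEEN ? AND ?", (51400, 63009))
# conn.commit()
conn.close()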

Comment by arnaud.lefur [ 28/Jun/21 ]

The pfsDesignId value is the same, but the interpretation from bundle name to fiberId is different.
So I went ahead and tried:

ingestPfsImages.py /tigress/HSC/PFS/Subaru/ --pfsConfigDir=/tigress/HSC/PFS/Subaru/pfsConfig --mode=link /tigress/HSC/PFS/Subaru/raw/2021-05-13/sps/PFSA0514371* --config clobber=True register.ignore=True
CameraMapper INFO: Loading exposure registry from /tigress/HSC/PFS/Subaru/registry.sqlite3
ingestPfs INFO: /tigress/HSC/PFS/Subaru/raw/2021-05-13/sps/PFSA05143711.fits --<link>--> /tigress/HSC/PFS/Subaru/2021-05-13/PFSA051437b1.fits
ingestPfs INFO: /tigress/HSC/PFS/Subaru/raw/2021-05-13/sps/PFSA05143712.fits --<link>--> /tigress/HSC/PFS/Subaru/2021-05-13/PFSA051437r1.fits

But the pfsConfig from the butler still looks wrong:

import lsst.daf.persistence as dafPersist  # Gen2 butler

repoRoot = "/tigress/HSC/PFS/Subaru"
butler = dafPersist.Butler(repoRoot)
pfsConfig = butler.get('pfsConfig', visit=51437)
pfsConfig.fiberId[pfsConfig.fiberId < 652]  # inspect the low-numbered fibers
   array([ 12,  32,  60, 110, 111, 161, 210, 223, 259, 289, 341, 347, 400,
          418, 449, 466, 518, 545, 593, 620, 641], dtype=int32)
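
One way to confirm whether the butler is still serving the stale file is to compare it against the regenerated design on disk; a sketch, assuming pfs.datamodel's PfsDesign.read and that the returned pfsConfig carries its pfsDesignId:

# Continuation of the session above: compare the butler's pfsConfig with the regenerated design.
from pfs.datamodel import PfsDesign

design = PfsDesign.read(pfsConfig.pfsDesignId, dirName="/projects/HSC/PFS/Subaru/pfsConfig")
expected = set(design.fiberId[design.fiberId < 652])
got = set(pfsConfig.fiberId[pfsConfig.fiberId < 652])
print(sorted(expected - got), sorted(got - expected))  # both empty only if the butler copy matches
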
Comment by price [ 28/Jun/21 ]

I'm guessing there's no facility to clobber the pfsConfig. That's something I could add, but it's probably not worth it given that we're going to switch the middleware soon. Can you delete the generated pfsConfig files (the originals, not the links) by hand?
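
A sketch of what that could look like (the pfsConfig-*.fits glob and the directory are assumptions to adapt; the point is to remove only real files and leave symlinks alone):

# Sketch: remove the generated pfsConfig files (real files only; leave symlinks alone).
import glob
import os

configDir = "/tigress/HSC/PFS/Subaru/pfsConfig"
for path in sorted(glob.glob(os.path.join(configDir, "pfsConfig-*.fits"))):
    if os.path.islink(path):
        continue  # keep links; only the original files need to go
    print("would remove", path)
    # os.remove(path)  # uncomment once the list looks right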

Comment by arnaud.lefur [ 29/Jun/21 ]

Okay, just did it.
I had to do it twice because I specified the wrong --pfsConfigDir the first time:
--pfsConfigDir=/tigress/HSC/PFS/Subaru/pfsConfig instead of --pfsConfigDir=/tigress/HSC/PFS/Subaru/drp/pfsDesign

fiberIds should now be correctly identified.
