[INSTRM-468] Check MCS data timings. Created: 01/Sep/18  Updated: 22/Oct/18  Resolved: 22/Oct/18

Status: Done
Project: Instrument control development
Component/s: ics_fpsActor, ics_mcsActor
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: cloomis Assignee: chihyi
Resolution: Done Votes: 0
Labels: MCS
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Long ago, at ASIAA, we confirmed that we could stuff MCS centroids into the database and retrieve them from the FPS fast enough (and faster than going through a FITS file). I don't think we are doing as well at Subaru. In any case this should be checked.



 Comments   
Comment by chihyi [ 03/Sep/18 ]

We will try to run the following test: inject ~2400 rows into a database table from one MHS actor, update the status keyword, then read the data back from another actor. Any comments?
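
A self-contained sketch of that round trip (sqlite3 stands in here for the actual Postgres database, and the table and column names are made up for illustration; the row count matches the 2394 fibers quoted later in the thread):

```python
import sqlite3
import time

N_FIBERS = 2394  # number of cobra/fiber positions per MCS frame

def time_roundtrip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE centroids (fiber_id INTEGER, x REAL, y REAL, fwhm REAL)")
    rows = [(i, 100.0 + i, 200.0 + i, 2.5) for i in range(N_FIBERS)]

    t0 = time.time()
    # "writer" actor: bulk-insert one frame's worth of centroids, then commit
    # (this is where the status keyword would be updated)
    conn.executemany("INSERT INTO centroids VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    # "reader" actor: fetch the whole frame back
    fetched = conn.execute("SELECT fiber_id, x, y, fwhm FROM centroids").fetchall()
    dt = time.time() - t0

    return fetched, dt

fetched, dt = time_roundtrip()
print(f"{len(fetched)} rows round-tripped in {dt*1000:.1f} ms")
```

Against the real Postgres server the writer and reader would of course be separate processes, but the same insert-commit-select shape applies.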

Comment by cloomis [ 04/Sep/18 ]

That's exactly what we want to confirm. Specifically, injecting using the existing code in mcsActor, and reading it back in the fpsActor.

The problem with the VM ethernet interfaces was not on the db VM, so that's not likely to be the problem, unfortunately. That would have been easy. But if the problem comes from running postgres on one VM while writing to a database stored on NFS mounted from a second VM, I am not so worried: that architecture is temporary.

Comment by cloomis [ 02/Oct/18 ]

We also need to test writes to and reads from /data, and in general the complete throughput of the system. We might need to change how files are being written.

Comment by chyan [ 03/Oct/18 ]

Here is the information from my log.

-------------------------------------------------------------

Image acquisition   0.82 s per frame
Image saving        0.20 s per frame
Image co-adding    >0.10 s

For a 1000 ms exposure there will be three readouts, one stacking, and four file writes.
The total time is 0.82x3 + 0.2x4 + 0.1 = 3.36 seconds.

Since we only save one stacked frame in the end, the timing should be 0.82x3 + 0.2 + 0.1 = 2.76 seconds.

Regarding the database I/O, I inserted 2394 fiber positions together with the measured FWHM and RMS, and repeated this 100 times. The average time for the database I/O is 1.14 seconds.
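
The two timing budgets above can be checked in a few lines (the per-step timings are taken from the table; "all slices" counts the four per-slice writes, "stacked only" the single write of the stacked frame):

```python
readout, save, coadd = 0.82, 0.20, 0.10  # seconds, from the table above
n_readouts = 3                           # for a 1000 ms exposure

all_slices   = readout * n_readouts + save * 4 + coadd  # write every slice plus the stack
stacked_only = readout * n_readouts + save * 1 + coadd  # keep only the stacked frame

print(f"all slices:   {all_slices:.2f} s")   # 3.36 s
print(f"stacked only: {stacked_only:.2f} s")  # 2.76 s
```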


Comment by cloomis [ 03/Oct/18 ]

Was that at Subaru or ASIAA?

Comment by chyan [ 04/Oct/18 ]

These tests were done at ASIAA, right before the shipment of the MCS.

Comment by cloomis [ 04/Oct/18 ]

OK, thanks. It's good to have measured timings that we know the system should be able to meet.

Comment by hassan [ 08/Oct/18 ]

cloomis: can we now resolve this ticket?

Comment by cloomis [ 08/Oct/18 ]

We have three issues, one expected, one feared, and one unexpected:

  • The time to acquire a new Gen2 frame number (~2s) needs to be hidden by running it in parallel with the exposure.
  • The time to write the new exposure needs to be shortened from the current ~1.5s. This can be dealt with in a couple of ways.
  • The time to read the camera is a fair bit longer than expected. For a 1s exposure, I'm seeing ~8.5s. Log below.

I'll work on 1 and 2, but ASIAA will probably have to look into 3.

2018-10-07 16:38:22.544Z cmds             20 CommandLink.py:121 > 2 11 d text="Exposure time now is 1000 ms."
2018-10-07 16:38:22.545Z cmdr             20 CmdrConnection.py:215 queueing command gen2(getVisit)
2018-10-07 16:38:24.621Z cmdr             20 CmdrConnection.py:209 command returned gen2 'getVisit'
2018-10-07 16:38:24.622Z cmds             20 CommandLink.py:121 > 2 11 d text="new expose: /data/mcs/2018-10-07/PFSC00006300.fits"
2018-10-07 16:38:24.622Z cmds             20 CommandLink.py:121 > 2 11 i exposureState="exposing"
2018-10-07 16:38:24.622Z cmds             20 CommandLink.py:121 > 2 11 i text="ETYPE: dark"
2018-10-07 16:38:24.623Z cmds             20 CommandLink.py:121 > 2 11 i text="slice name: /data/mcs/2018-10-07/PFSC00006300_"
2018-10-07 16:38:29.913Z cmds             20 CommandLink.py:121 > 2 11 i exposureState="reading"
2018-10-07 16:38:33.087Z cmds             20 CommandLink.py:121 > 2 11 i text="Time for exposure = 5.289644."
2018-10-07 16:38:33.087Z cmds             20 CommandLink.py:121 > 2 11 i text="Time for image loading= 3.173522."
2018-10-07 16:38:33.103Z cmds             20 CommandLink.py:121 > 2 11 d text="done: (5778, 8960)"
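
Point 1 above (hiding the ~2s Gen2 visit-number fetch behind the exposure) could be sketched as below; `get_visit` and `start_exposure` are hypothetical stand-ins for the actual gen2/mcs calls, with the delays shortened for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def get_visit():
    time.sleep(0.2)          # stands in for the ~2 s gen2 getVisit round trip
    return 6300

def start_exposure(exptime):
    time.sleep(exptime)      # stands in for the camera integration
    return "raw-frame"

with ThreadPoolExecutor(max_workers=1) as pool:
    visit_future = pool.submit(get_visit)   # fire off the Gen2 request first...
    frame = start_exposure(0.3)             # ...and expose while it is in flight
    visit = visit_future.result()           # already done by the time the exposure ends

print(f"frame {frame} -> PFSC{visit:08d}.fits")
```

The visit number is only needed when naming the output file, so fetching it concurrently removes the ~2s from the critical path entirely.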
Comment by hassan [ 09/Oct/18 ]

Pinging chyan as we need his feedback rather urgently.

Comment by cloomis [ 09/Oct/18 ]

Chatted with him a couple of hours ago. The camera timings are almost certainly OK: the coadd is being written into and read back out of the NFS-mounted code directory. Changing that to a local disk will probably restore the original timings.

Comment by hassan [ 09/Oct/18 ]

Ok thanks!

Comment by hassan [ 10/Oct/18 ]

@cloomis: has a test against a local disk been performed to confirm the original timings?

Comment by cloomis [ 10/Oct/18 ]

Better, but not good. Running with the actor in /tmp:

pfs@mhs-ics:~$ oneCmd.py mcs expose bias
2018-10-10T13:53:31.580 sent mcs expose bias
2018-10-10T13:53:33.621 mcs i exposurestate="exposing"
2018-10-10T13:53:33.631 mcs i text="ETYPE: bias"
2018-10-10T13:53:33.632 mcs i text="slice name: /data/mcs/2018-10-10/PFSC00007000_"
2018-10-10T13:53:37.241 mcs i exposurestate="reading"
2018-10-10T13:53:38.850 mcs i text="Time for exposure = 3.618860."
2018-10-10T13:53:38.851 mcs i text="Time for image loading= 1.607460."
Comment by cloomis [ 16/Oct/18 ]

There was a spurious copy of an uncompressed image to NFS; with that removed we have gained another 1.4s:

2018-10-16T00:40:09.714 sent mcs expose object expTime=1.0
2018-10-16T00:40:11.681 mcs i exposurestate="exposing"
2018-10-16T00:40:11.690 mcs i text="slice name: /data/mcs/2018-10-15/PFSC00009500_"
2018-10-16T00:40:15.533 mcs i exposurestate="reading"
2018-10-16T00:40:15.660 mcs i text="Time for exposure = 3.849244."
2018-10-16T00:40:15.660 mcs i text="Time for image loading= 0.125715."
2018-10-16T00:40:17.886 mcs i filename="/data/mcs/2018-10-15/PFSC00009500.fits"
2018-10-16T00:40:17.886 mcs : exposurestate="done"
Comment by cloomis [ 22/Oct/18 ]

This is looking OK. Closing.

Generated at Tue Apr 08 19:29:05 JST 2025 using Jira 8.3.4#803005-sha1:1f96e09b3c60279a408a2ae47be3c745f571388b.