<!-- 
RSS generated by JIRA (8.3.4#803005-sha1:1f96e09b3c60279a408a2ae47be3c745f571388b) at Sat Feb 10 16:25:05 JST 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>PFS-JIRA</title>
    <link>https://pfspipe.ipmu.jp/jira</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>8.3.4</version>
        <build-number>803005</build-number>
        <build-date>13-09-2019</build-date>
    </build-info>


<item>
            <title>[INSTRM-445] Provide 10TB disk at Subaru for MCS testing</title>
                <link>https://pfspipe.ipmu.jp/jira/browse/INSTRM-445</link>
                <project id="10300" key="INSTRM">Instrument control development</project>
                    <description>&lt;p&gt;As discussed during the PFS Servers and Infrastructure WG meeting 2018-07-25, a request by &lt;a href=&quot;https://pfspipe.ipmu.jp/jira/secure/ViewProfile.jspa?name=cloomis&quot; class=&quot;user-hover&quot; rel=&quot;cloomis&quot;&gt;cloomis&lt;/a&gt; for a 10TB disk for installing the minimum software and data for the MCS testing in September 2018 needs to be made available.&lt;/p&gt;

&lt;p&gt;Ideally this should be available by 2018-09-01 to allow Craig to install the necessary data.&lt;/p&gt;</description>
                <environment></environment>
        <key id="12666">INSTRM-445</key>
            <summary>Provide 10TB disk at Subaru for MCS testing</summary>
                <type id="3" iconUrl="https://pfspipe.ipmu.jp/jira/secure/viewavatar?size=xsmall&amp;avatarId=10518&amp;avatarType=issuetype">Task</type>
                                            <priority id="10000" iconUrl="https://pfspipe.ipmu.jp/jira/images/icons/priorities/medium.svg">Normal</priority>
                        <status id="10002" iconUrl="https://pfspipe.ipmu.jp/jira/images/icons/statuses/generic.png" description="The issue is resolved, reviewed, and merged">Done</status>
                    <statusCategory id="3" key="done" colorName="green"/>
                                    <resolution id="10000">Done</resolution>
                                        <assignee username="hiro">Yoshida, Hiroshige</assignee>
                                    <reporter username="hassan">hassan</reporter>
                        <labels>
                            <label>MCS</label>
                            <label>Subaru</label>
                            <label>subaru-personnel</label>
                    </labels>
                <created>Fri, 3 Aug 2018 12:51:31 +0000</created>
                <updated>Thu, 7 Nov 2019 01:51:09 +0000</updated>
                            <resolved>Thu, 7 Nov 2019 01:51:09 +0000</resolved>
                <component>Summit infrastructure</component>
                        <due>Fri, 31 Aug 2018 00:00:00 +0900</due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                <comments>
                            <comment id="13836" author="hassan" created="Fri, 3 Aug 2018 12:53:01 +0000"  >&lt;p&gt;&lt;a href=&quot;https://pfspipe.ipmu.jp/jira/secure/ViewProfile.jspa?name=kyono&quot; class=&quot;user-hover&quot; rel=&quot;kyono&quot;&gt;kyono&lt;/a&gt;: if this issue is not applicable to you please re-assign to the appropriate person or inform me or Tamura-san.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://pfspipe.ipmu.jp/jira/secure/ViewProfile.jspa?name=atsushi.shimono&quot; class=&quot;user-hover&quot; rel=&quot;atsushi.shimono&quot;&gt;shimono&lt;/a&gt;: please move this issue to the appropriate JIRA project if this is not the correct one (or we need to create a new project).&lt;/p&gt;</comment>
                            <comment id="13906" author="kiaina" created="Wed, 8 Aug 2018 19:48:42 +0000"  >&lt;p&gt;Hassan,&lt;/p&gt;

&lt;p&gt;A discussion of some technical requirements for this request:&lt;/p&gt;

&lt;p&gt;1.&#160; Is this 10TB of additional disk space, i.e. not from the current storage environment?&lt;/p&gt;

&lt;p&gt;2.&#160; If yes to 1, which is preferable:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Provision the best server, an R720 (possible RGA1 replacement), with JBOD 6TB disks, with ZFS or FreeNAS as an NFS server&lt;/li&gt;
	&lt;li&gt;The best server, an R720 (possible RGA1 replacement), with 2 RAID1 OS SSDs and 6 x 6TB in RAID6 (~20TB usable space), NFS-exported&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="13907" author="cloomis" created="Wed, 8 Aug 2018 19:57:32 +0000"  >&lt;p&gt;This is &lt;em&gt;purely&lt;/em&gt;&#160;a short-term resource, to cover us in the very unlikely case that the storage server fails before we implement a redundant storage system. &lt;/p&gt;

&lt;p&gt;Someone at Subaru said that there is some general-purpose NFS storage available. That would be fine. Alternatively, we could put a pair of disks in the PFS NFS server and use that. &lt;/p&gt;</comment>
                            <comment id="13908" author="cloomis" created="Wed, 8 Aug 2018 20:16:43 +0000"  >&lt;p&gt;I will state that from &lt;em&gt;my&lt;/em&gt; point of view, two of those 2xSSD + 6xHDD boxes provisioned as ZFS + NFS servers would be a good choice for a permanent system. Carve out a small intent log (ZIL) from an SSD, and get someone to buy new data disks before real observing. &lt;/p&gt;

&lt;p&gt;Yes, a properly engineered HA system would be ideal. I just don&apos;t know enough about that world in 2018 to say anything useful. The paired NFS servers, the paired VM servers, and some scripts are probably good enough. ~1 hour downtime max?&lt;/p&gt;</comment>
                            <comment id="13912" author="kiaina" created="Tue, 14 Aug 2018 18:27:11 +0000"  >&lt;p&gt;Craig,&lt;/p&gt;

&lt;p&gt;I&apos;ve been using a test server for the past year.&#160; It has 6x4TB drives running FreeNAS, which is installed on a USB stick.&#160; I&apos;m willing to put this into PFS for your usage.&lt;/p&gt;

&lt;p&gt;Kiaina&lt;/p&gt;</comment>
                            <comment id="13913" author="cloomis" created="Tue, 14 Aug 2018 18:43:21 +0000"  >&lt;p&gt;That certainly works for me, either as a short-term backup or as a longer-term server if the hardware is otherwise appropriate. I would be interested in a small SSD-based ZIL if there is any chance of carving one out; we would &lt;em&gt;need&lt;/em&gt; a ZIL for any long-term server, as well as newer/bigger drives (a fairly small expense). With a ZIL, NFS writes are instantaneous...&lt;/p&gt;</comment>
                            <comment id="13914" author="kiaina" created="Tue, 14 Aug 2018 18:52:05 +0000"  >&lt;p&gt;So basically, pulling a 4TB drive and sticking an SSD in that slot, and configuring FreeNAS?&#160; Can be done.&lt;/p&gt;</comment>
                            <comment id="13915" author="cloomis" created="Tue, 14 Aug 2018 19:03:42 +0000"  >&lt;p&gt;Mmm, I&apos;d rather have the 4+2 RAID than any possible speedup. If you have no free slots or an existing SSD, leave it as is. I think the minimum for a long-term system would be 2xSSD plus 6xHDD.&lt;/p&gt;</comment>
                            <comment id="13916" author="kiaina" created="Tue, 14 Aug 2018 23:46:07 +0000"  >&lt;p&gt;Craig,&lt;/p&gt;

&lt;p&gt;I know we promised two systems; can we dig up the notes on the purpose of the two systems again?&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;Replace the old server (rfa1)&lt;/li&gt;
	&lt;li&gt;A spare (additional) NFS server?&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;Was the expectation of the 10TB of NFS space part of #2?&lt;/p&gt;</comment>
                            <comment id="13944" author="cloomis" created="Tue, 21 Aug 2018 22:21:32 +0000"  >&lt;p&gt;Summary of conversation between CPL and CDM:&lt;/p&gt;

&lt;p&gt;This ticket was just for providing emergency storage to allow us to recover from the (very unlikely) failure of the single RAID server. For that, we only need a small amount of NFS storage which can be hosted anywhere. The estimate of 10TB was (very) excessive. All we really need is:&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;/virt, which currently holds 250GB. 500GB would be ample.&lt;/li&gt;
	&lt;li&gt;/proddata, currently 100GB. Real data will be archived, 500GB-1TB would be ample until SM1.&lt;/li&gt;
	&lt;li&gt;/software, currently 5GB. 50GB would be ample.&lt;/li&gt;
	&lt;li&gt;/home/xxx, currently 20GB. 100GB would be ample.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Frankly, 1TB would do, 2TB would be spacious. I&apos;d prefer not to use the existing core PFS machines, if only because they have been provisioned via ansible, and adding this would be a bit of a hack.&lt;/p&gt;


&lt;p&gt;In the longer term, we do need redundant storage, and &lt;a href=&quot;https://pfspipe.ipmu.jp/jira/secure/ViewProfile.jspa?name=hassan&quot; class=&quot;user-hover&quot; rel=&quot;hassan&quot;&gt;hassan&lt;/a&gt;&apos;s &lt;a href=&quot;https://sumire.pbworks.com/w/file/fetch/127236176/PFS-ICS-PRU030000-03_PFS_ICS_Hardware_Configuration_Report.pdf&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;https://sumire.pbworks.com/w/file/fetch/127236176/PFS-ICS-PRU030000-03_PFS_ICS_Hardware_Configuration_Report.pdf&lt;/a&gt; covers that.&lt;/p&gt;

&lt;p&gt;One last note. CDM has a test FreeNAS/ZFS server (~15TB net, raid-z2 (4+2), can add small ZIL cache) which they would like to evaluate with PFS usage. I think this is a good idea. And yes, we could use that either as the backup we are discussing or even as the primary NFS server depending on how it evaluates.&lt;/p&gt;</comment>
                            <comment id="16297" author="cloomis" created="Thu, 7 Nov 2019 01:51:09 +0000"  >&lt;p&gt;No longer needed, I don&apos;t think. &lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10500" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10010" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>0|s0010d:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>