[REDMINE1D-244] [RM-5497] CFHTLS training tests with high resolution photo-z added Created: 06/Jul/23 Updated: 06/Jul/23 |
|
| Status: | Open |
| Project: | 1D Redmine |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Normal |
| Reporter: | Redmine-Jira Migration | Assignee: | Redmine-Jira Migration |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: |
|
| Description |
|
Created on 2019-12-18 17:50:49 by Marie Treyer. % Done: 0

TRAINING SAMPLE: Stephane added high-resolution photo-z + lower-resolution spec-z to the initial SPEC-only catalog that Johanna and Jerome previously used for training. Here are the mag/zspec and zspec distributions ("zspec" refers to the redshifts used for training, even when they are zphot):

TRAINING TESTS: model "x" (Jo&Je settings): Given that the loss function and the other metrics on the validation samples seem to reach a minimum far sooner than iteration 300k (see fig. below), I tried these 2 things: model "u": model "v": Here's what's happening. There are 5 cross-validations for each model; the averages are shown in black.
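The early-stopping check described above (averaging the five cross-validation loss curves and locating the minimum) can be sketched roughly as follows. The loss curves here are synthetic placeholders with an assumed decay shape, not the actual training runs:

```python
import numpy as np

# Hypothetical loss curves: one per cross-validation fold, one checkpoint
# every 10k iterations up to 300k. Values are illustrative only.
iterations = np.arange(10_000, 310_000, 10_000)          # 10k .. 300k
rng = np.random.default_rng(0)
folds = np.array([0.5 * np.exp(-iterations / 8e4) + 0.1  # assumed decaying loss
                  + rng.normal(0, 0.005, iterations.size)
                  for _ in range(5)])                    # 5 cross-validations

mean_loss = folds.mean(axis=0)                           # the "black" average curve
best_iter = iterations[np.argmin(mean_loss)]             # candidate stopping point
print(f"validation loss minimum near iteration {best_iter}")
```

With real curves, `best_iter` would indicate where shorter trainings (the "u"/"v" runs) can stop without losing performance.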
INFERENCES: The performance at 160k and 200k for "u" and "v" is nearly identical, and only slightly better than at 100k. The models aren't significantly different, but the "u" (and even "v") trainings run in half the time of "x" (~4h for 1 cross-validation versus ~8h). The PDFs are also smoother. I wanted to show a random sample of PDFs as well as the distribution of local peaks (above 5%) for "x", "u" and "v", but I seem to have exceeded my quota. Can we change this? Also, Jerome is not part of this group and Johanna's address will change soon, so we need to do something about that too! |
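For the "local peaks (above 5%)" statistic mentioned in the description, a minimal peak finder over a gridded redshift PDF could look like this. The two-Gaussian PDF is a toy example, and taking the 5% threshold relative to the PDF maximum is an assumption, since the ticket does not spell out the convention:

```python
import numpy as np

# Toy redshift PDF on a grid (illustrative, not a real inference output).
z = np.linspace(0, 4, 401)
pdf = 0.8 * np.exp(-0.5 * ((z - 0.6) / 0.05) ** 2) \
    + 0.3 * np.exp(-0.5 * ((z - 2.1) / 0.08) ** 2)
pdf /= pdf.sum() * (z[1] - z[0])                 # normalise to unit area

# Local maxima: grid points higher than both neighbours and above the
# 5% threshold (here 5% of the PDF's maximum -- an assumed convention).
thresh = 0.05 * pdf.max()
interior = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:]) & (pdf[1:-1] > thresh)
peak_z = z[1:-1][interior]
print("local peaks above threshold at z =", np.round(peak_z, 2))
```

Counting `peak_z` per object gives the multimodality statistic one would histogram across the "x", "u" and "v" models.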
| Comments |
| Comment by Redmine-Jira Migration [ 06/Jul/23 ] |
|
Comment by Stephane Arnouts on 2019-12-18 18:32:02: > The models aren't significantly different but the "u" (and even "v") trainings run in half the time as "x" (~4h for 1 cross-validation versus ~8h). Also the PDFs are smoother. I wanted to show a random sample of PDFs as well as the distribution of local peaks (above 5%) for "x", "u" and "v" but i seem to have exceeded my quota. Can we change this? Also Jerome is not part of this group and Johanna's address will change soon, we need to do something about that too! Great! The results appear quite similar between the 3 versions. The "v" model also yields broader PDFs, with a better PIT in the end. I guess the stats also mix DEEP and WIDE images, which should be distinguished as well. I have added Jerome to the wiki members! |
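Since the PIT comes up in the comment above: for one object, the PIT is simply the photo-z PDF's CDF evaluated at the true spectroscopic redshift (uniform PITs over the sample indicate well-calibrated PDFs). A minimal sketch with a made-up Gaussian PDF:

```python
import numpy as np

def pit(z_grid, pdf, z_true):
    """Probability integral transform: CDF of the PDF at the true redshift."""
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                     # normalise so the CDF ends at 1
    return np.interp(z_true, z_grid, cdf)

z = np.linspace(0, 4, 401)
toy_pdf = np.exp(-0.5 * ((z - 1.0) / 0.1) ** 2)   # toy PDF peaked at z = 1.0
# True z at the PDF's centre gives a PIT close to 0.5 (up to gridding effects).
print(pit(z, toy_pdf, 1.0))
```

Histogramming these values over the validation sample is what the "better PIT" comparison between the models refers to.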
| Comment by Redmine-Jira Migration [ 06/Jul/23 ] |
|
Comment by Stephane Arnouts on 2019-12-19 08:18:57: > TRAINING TESTS: |