# Photometric Redshifts (PDR3)

## Overview

The HSC photo-z team computed photometric redshifts for PDR3 using three independent codes.  We constructed the training/validation/test samples by combining public spectroscopic redshifts, HST grism redshifts, and high-quality many-band photometric redshifts in COSMOS, with weights assigned to each object so as to reproduce the color-magnitude distributions of the HSC objects.  As in the previous releases, the photo-z’s are stored in the database, and the schema browser gives the details of each of our photo-z catalogs.  As discussed in the PDR1 photo-z paper, we suggest that zbest be used for point estimates and zrisk as an indicator of reliability.

In addition to the catalogs, we also release the redshift probability distribution functions in FITS format below (only for Mizuki).  Further descriptions of our training procedure, the data products, and the statistics plots below can be found in the release note below.  When you use the photo-z products, please cite the photo-z release papers (both PDR1 and PDR2) and the relevant code papers linked below.

## Codes

We have computed photo-z’s using several independent codes.  Here is a brief summary of our codes.

DEmP: A combination of a nearest-neighbor technique and polynomial (in practice, linear) fitting.  The redshift of each object is estimated with a linear function fitted to its 40 nearest neighbors in a ten-dimensional space (5 magnitude axes, 4 color axes, and the size derived from the SDSS shape).
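This nearest-neighbors-plus-linear-fit idea can be sketched in a few lines of numpy.  The toy training set below (two "color" features with a noiseless linear redshift relation) is invented purely for illustration; the actual DEmP implementation, feature space, and weighting differ.

```python
import numpy as np

def demp_like_estimate(features_train, z_train, features_query, k=40):
    """Estimate a redshift for one query object: find its k nearest
    neighbors in feature space, then fit a linear function z(features)
    to those neighbors and evaluate it at the query point."""
    d2 = np.sum((features_train - features_query) ** 2, axis=1)
    idx = np.argsort(d2)[:k]
    # Linear model with a constant term: z ~ A @ coeffs.
    A = np.hstack([features_train[idx], np.ones((k, 1))])
    coeffs, *_ = np.linalg.lstsq(A, z_train[idx], rcond=None)
    return np.append(features_query, 1.0) @ coeffs

# Toy example: redshift is an exact linear function of two "colors",
# so the local fit should recover it.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(500, 2))   # two invented color axes
z = 0.3 * X[:, 0] + 0.1 * X[:, 1]          # noiseless linear relation
z_hat = demp_like_estimate(X, z, np.array([1.0, 1.0]))
```

In this noiseless toy case the 40-neighbor linear fit recovers the underlying relation, so `z_hat` lands at 0.3 + 0.1 = 0.4.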

Mizuki:  Template fitting with Bayesian priors on the physical properties of galaxies.  In addition to redshifts, physical parameters such as stellar masses and SFRs are computed.  The code uses undeblended convolved fluxes scaled to total fluxes with cModel (i.e., the colors come from the convolved fluxes and the overall normalization is set by cModel).  In the Wide layer, photo-z’s are available only for objects with i < 25; all objects in Deep+UltraDeep have photo-z’s.
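In its simplest form, template fitting computes a chi-square between the observed fluxes and a grid of model fluxes over templates and redshifts, multiplies the likelihood by a prior, and marginalizes over templates to obtain P(z).  The sketch below is a minimal version of that idea with an invented two-template model grid; it is not the actual Mizuki code or its priors.

```python
import numpy as np

def template_fit_pz(obs_flux, obs_err, model_flux, prior):
    """Posterior P(z) from chi-square template fitting.
    model_flux: (n_templates, n_z, n_bands) model fluxes on a grid;
    prior: (n_templates, n_z) prior on template/redshift combinations."""
    chi2 = np.sum(((obs_flux - model_flux) / obs_err) ** 2, axis=-1)
    post = np.exp(-0.5 * (chi2 - chi2.min())) * prior
    pz = post.sum(axis=0)          # marginalize over templates
    return pz / pz.sum()

# Invented model grid: two templates with different spectral shapes,
# 5 bands, fluxes varying smoothly with redshift.
z_grid = np.linspace(0.0, 3.0, 31)
bands = np.arange(1, 6)
t0 = (1.0 + z_grid)[:, None] * bands        # template 0
t1 = (1.0 + z_grid)[:, None] * bands ** 2   # template 1
model_flux = np.stack([t0, t1])

# "Observe" template 0 at z = 1.0 (grid index 10) with 5% errors.
obs_flux = t0[10]
obs_err = 0.05 * obs_flux
pz = template_fit_pz(obs_flux, obs_err, model_flux, np.ones((2, 31)))
```

With a flat prior the posterior peaks at the true grid point, z = 1.0, because the chi-square vanishes there and grows away from it.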

DNNz: A deep-learning code (Nishizawa et al. in prep.).  The DNNz architecture consists of a multi-layer perceptron with 5 hidden layers.  The input layer can handle any type of observable; we use cModel fluxes, undeblended convolved fluxes, PSF fluxes, and the size derived from the SDSS shape.  In total, we have (3 fluxes + 1 size) × 5 bands = 20 attributes.
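A forward pass through such a network can be sketched in numpy.  Only the 20 input attributes and the 5 hidden layers come from the description above; the layer widths, the ReLU activations, the number of output redshift bins, and the softmax output are assumptions for illustration, not the actual DNNz design.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# 20 input attributes -> 5 hidden layers -> probabilities over output
# redshift bins.  Widths and the 50-bin output are invented here.
sizes = [20, 100, 100, 100, 100, 100, 50]
weights = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Forward pass: ReLU hidden layers, softmax output giving a
    normalized probability distribution over the redshift bins."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

pz = forward(rng.normal(size=20))       # one object's 20 attributes
```

The softmax output sums to one by construction, so each object gets a normalized P(z) over the output bins; training such a network (loss function, optimizer, etc.) is beyond this sketch.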

## Caveat

Note that the photo-z’s released here are for pdr3_dud and pdr3_wide.  As described on this page, the D/UD layer has been reprocessed, and the reprocessed data are available in the pdr3_dud_rev tables.  However, the HSC photo-z team has not yet computed photo-z’s for the reprocessed data (even for internal team use).  Thus, for consistency, you should use the photo-z’s here in conjunction with the photometry in pdr3_dud.

## Probability Distribution Functions

Our P(z) files are available for each field in FITS format.  Note that these are massive files!  Some important notes:

1. There is one FITS file per tract; the file name gives the tract number.
2. The 1st HDU contains P(z) and the 2nd HDU defines the redshift grid.
3. Please do not use the header keywords to define the redshift grid.  Always use the 2nd HDU.
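The layout above can be exercised with astropy.  The sketch below builds a small in-memory mock file with that layout (the HDU names, the Gaussian P(z)'s, and the grid spacing are invented; consult the real files for the actual columns and grid), then reads it back the recommended way, taking the grid from the 2nd HDU rather than from header keywords.

```python
import io
import numpy as np
from astropy.io import fits

# Mock file following the documented layout: HDU 1 holds P(z) per
# object, HDU 2 holds the redshift grid.  Everything else is invented.
z_grid = np.linspace(0.0, 7.0, 701)
pdfs = np.exp(-0.5 * ((z_grid - np.array([[0.5], [1.2]])) / 0.05) ** 2)
pdfs /= pdfs.sum(axis=1, keepdims=True)   # normalize each object's P(z)

buf = io.BytesIO()
fits.HDUList([
    fits.PrimaryHDU(),
    fits.ImageHDU(pdfs, name="PDF"),      # hypothetical HDU name
    fits.ImageHDU(z_grid, name="BINS"),   # hypothetical HDU name
]).writeto(buf)
buf.seek(0)

# Read it back: P(z) from the 1st HDU, grid from the 2nd HDU.
with fits.open(buf) as f:
    pz = f[1].data
    grid = f[2].data

z_mode = grid[np.argmax(pz, axis=1)]      # per-object peak of P(z)
```

Indexing the grid with `argmax` of each row of P(z) recovers the peaks of the two mock PDFs (0.5 and 1.2); any statistic computed this way stays consistent with the grid actually stored in the file.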