Description
Stellar disk truncations are a long-sought galactic size indicator based on the radial location of the gas-density threshold for star formation, i.e., the edge of the luminous matter in a galaxy. The study of galaxy sizes is crucial for understanding the physical processes that shape galaxy evolution across cosmic time. Current and future ultradeep, large-area imaging surveys, such as those carried out with JWST and ESA's Euclid mission, will allow us to explore the growth of galaxies and trace the limits of star formation in their outskirts.
The task of identifying disk truncations in galaxy images is, therefore, equivalent to what computer vision calls (prompted) image segmentation. Recently, the Meta AI research team published the Segment Anything Model (SAM, Kirillov et al. 2023), a deep learning model for promptable image segmentation that partitions an image into smaller components or segments. The model is designed to be highly adaptable and versatile, making it suitable for a wide range of applications.
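As a rough illustration of how such a pipeline can be wired up (a minimal sketch, not the authors' actual code: checkpoint names and paths are placeholders, and the SAM calls follow the public `segment_anything` package), a galaxy cutout first has to be cast to the 8-bit RGB array that SAM's automatic mask generator expects:

```python
import numpy as np

def to_uint8_rgb(image):
    """Rescale a float RGB cutout (H, W, 3) to the uint8 range SAM expects."""
    lo, hi = np.nanmin(image), np.nanmax(image)
    scaled = (image - lo) / (hi - lo + 1e-12)  # guard against a flat image
    return (255 * scaled).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical SAM invocation (requires the segment_anything package
    # and a downloaded model checkpoint; names below are placeholders):
    # from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
    # sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
    # masks = SamAutomaticMaskGenerator(sam).generate(to_uint8_rgb(cutout))
    cutout = np.random.default_rng(0).random((64, 64, 3))
    print(to_uint8_rgb(cutout).shape)
```

Each entry of the returned mask list is a dictionary with a binary `segmentation` array and its `area`, from which a candidate truncation mask can be selected.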
In preparation for automatically identifying disk truncations in the galaxy images that will soon be released by Euclid, we run SAM over a dataset of 1048 disk galaxies with $M_* > 10^{10} M_\odot$ and $z<1$ within the HST CANDELS fields presented in Buitrago et al. 2023 (A&A, in press). We 'euclidize' the HST galaxy images by building composite RGB images, mapping the HST H, J, and I+V filters to the R, G, and B channels, respectively. Using these images as input for SAM, we retrieve various truncation masks for each galaxy given different configurations of the input dataset (e.g., varying the stretch and normalization of the input images) and of the SAM pipeline. Finally, we compare the truncations obtained with SAM on the whole 'euclidized' dataset with the results presented in: a) Buitrago et al. 2023 (A&A, in press), in which truncations are evaluated using the radial positions of the edge in the light profiles of galaxies, inferred in a non-automated way; b) Fernández-Iglesias et al. 2023 (A&A, submitted), in which segmented images of truncations are automatically obtained using a U-Net.
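The 'euclidization' step can be sketched with NumPy alone (a hedged illustration, not the authors' pipeline: the `euclidize` function, its `stretch` parameter, and the arcsinh transform are assumptions standing in for whatever stretch and normalization the study actually varies):

```python
import numpy as np

def euclidize(h_band, j_band, iv_band, stretch=5.0):
    """Stack three HST filter images into an RGB composite
    (R=H, G=J, B=I+V), apply an arcsinh stretch to compress the
    bright galaxy core, and normalise each channel to [0, 1]."""
    rgb = np.stack([h_band, j_band, iv_band], axis=-1).astype(float)
    rgb = np.arcsinh(stretch * rgb) / np.arcsinh(stretch)
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    return (rgb - lo) / (hi - lo + 1e-12)  # per-channel normalization
```

Varying `stretch` (or swapping the arcsinh for a log or linear mapping) changes how prominent the faint outskirts appear, which is exactly the kind of input-configuration knob the abstract describes exploring.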