Speaker
Description
How much cosmological information does a cube of dark matter contain, and are we utilising the full potential of the information available within a density field? Neural summaries aim to extract all of this information, but their success depends on the availability of simulations, on the network architecture and hyperparameters, and on our ability to train the networks. Even for the simplest summary statistic, the power spectrum, we need seven layers to obtain the best possible result from the Quijote dark matter simulations, and tuning the hyperparameters of every additional layer is not feasible each time a new architecture is tried. We have therefore carried out an extensive hyperparameter search for a single-layer perceptron on P(k), matching its results to the linear-regression prediction; based on current studies of large neural networks, these hyperparameters are expected to scale to larger architectures. Our study of loss versus number of training simulations suggests that the 2000 Latin hypercube simulations currently available are not enough to reach the optimal regime. On the other hand, fitting the high-resolution, large-volume cosmological data expected from next-generation surveys such as DESI or Euclid onto even the best GPU built to date, and inferring parameter constraints from it, is almost impossible without losing some of the information available in the data. Current methods tend to use lower-resolution, smaller-volume data, and as a result a large part of the information is thrown away. We have developed a method that addresses this issue by combining sub-volumes of the density field with the power spectrum.
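As a rough illustration of the single-layer-perceptron consistency check mentioned above, the sketch below fits both an ordinary linear regression and a single linear layer trained by gradient descent to mock binned power-spectrum data. This is an assumed sketch, not the speaker's code: the array shapes, parameter names, learning rate, and synthetic data are placeholders standing in for the Quijote Latin hypercube simulations.

```
# Minimal sketch (assumed, not the speaker's code): check that a single-layer
# perceptron trained on a binned power spectrum P(k) can match an ordinary
# linear-regression prediction of the cosmological parameters once its
# hyperparameters are reasonable.  The mock data below stands in for the
# Quijote Latin hypercube simulations; all shapes and values are placeholders.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_sims, n_kbins, n_params = 2000, 128, 2                   # e.g. (Omega_m, sigma_8); placeholder sizes
theta = rng.uniform(0.1, 1.0, size=(n_sims, n_params))     # Latin-hypercube-like parameters
response = rng.normal(size=(n_params, n_kbins))
log_pk = theta @ response + 0.05 * rng.normal(size=(n_sims, n_kbins))  # mock log P(k)

n_train = 1600                                              # train / validation split
x_tr, x_va = log_pk[:n_train], log_pk[n_train:]
y_tr, y_va = theta[:n_train], theta[n_train:]

# Baseline: closed-form linear regression from log P(k) to parameters.
lin = LinearRegression().fit(x_tr, y_tr)
lin_mse = np.mean((lin.predict(x_va) - y_va) ** 2)

# Single-layer perceptron: one linear layer trained by gradient descent.
# The learning rate and number of epochs are the kind of hyperparameters
# the search described in the abstract would tune.
model = nn.Linear(n_kbins, n_params)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

xt = torch.tensor(x_tr, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.float32)
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(xt), yt)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(torch.tensor(x_va, dtype=torch.float32)).numpy()
mlp_mse = np.mean((pred - y_va) ** 2)

print(f"linear regression  validation MSE: {lin_mse:.4e}")
print(f"single-layer model validation MSE: {mlp_mse:.4e}")
```

With well-chosen hyperparameters the gradient-trained single layer should approach the closed-form linear-regression solution, which is the kind of sanity check the abstract describes before scaling the tuned settings to deeper networks.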