Deep Convolutional Mixture Density Networks
The deep convolutional mixture density network (DCMDN) is a feed-forward neural network model built by combining a convolutional neural network (CNN) and a mixture density network (MDN).
The presented software realizes an architecture to extract photometric redshift probability density functions (PDFs) directly from images, without the need for pre-classification or pre-processing of the data. Such a model makes better use of the information contained in the data and gives superior performance with respect to other models commonly used in the literature. Moreover, the DCMDN is a very general model and can be used to solve several regression problems. The algorithm is highly flexible: users can define their own architecture, changing the number and type of layers and several hyperparameters, in order to build the structure most suitable for the problem at hand.
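The defining feature of an MDN is its output layer, which maps the raw network activations to the parameters of a Gaussian mixture. The sketch below (illustrative NumPy code, not taken from the released software) shows a common parametrization: a softmax over the first K outputs yields the mixture weights, the next K outputs are used directly as means, and the last K pass through an exponential to guarantee positive standard deviations.

```python
import numpy as np

def mdn_params(raw, n_components):
    """Map a flat output vector of length 3*K from the last dense layer
    to valid Gaussian mixture parameters (illustrative sketch)."""
    assert raw.shape[0] == 3 * n_components
    logits, means, log_sigmas = np.split(raw, 3)
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()                 # mixture weights sum to 1
    sigmas = np.exp(log_sigmas)              # positivity constraint
    return weights, means, sigmas
```

Whatever convolutional layers precede it, this final mapping is what turns a plain regression network into a density estimator.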
In particular, the proposed model uses by default the continuous ranked probability score (CRPS) as its loss function, but other loss functions are available (e.g. the log-likelihood). The estimates are expressed as Gaussian mixture models representing the PDFs in redshift space. In addition to the traditional scores, the CRPS and the probability integral transform (PIT) can be calculated as performance criteria.
The code is developed in Python using the Theano library and can run in CPU or GPU (recommended) environments.
Virtual Observatory Virtual Reality
The virtual observatory (VO) and its standards have become a success story in providing uniform access to a huge amount of data sets. Those data sets contain correlations, distributions, and relations that have to be unveiled. Visualization has always been a key tool for understanding complex structures. Typically, high-dimensional information is projected onto a two-dimensional plane to create a diagnostic plot. Besides expensive stereoscopic visualization cubes, only stereoscopic displays provided an affordable tool to peek into a three-dimensional data space.
We present a low-cost immersive visualization environment that makes use of a smartphone, a game controller, and Google Cardboard. This simple equipment allows you to explore your data more natively by flying through your data space. The presented software consists of a central server application running on a computer and a client implementation performing the rendering on multiple smartphones, enabling users to inspect the data jointly. As the server application uses the VO simple application messaging protocol (SAMP), it is seamlessly integrated with other VO tools, like TOPCAT or Aladin. Access the data in the usual way and employ Virtual Observatory Virtual Reality (VOVR) to explore it.
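SAMP interoperability means any client registered with the local hub can push a table to VOVR (or TOPCAT, or Aladin) without tool-specific glue. The sketch below, based on the SAMP standard profile rather than the VOVR sources, shows the bare protocol: read the hub address from the standard ~/.samp lockfile, register over XML-RPC, and broadcast a table.load.votable notification. It assumes a SAMP hub is already running.

```python
import os
import xmlrpc.client

def broadcast_votable(table_url, name="my table"):
    """Notify all connected SAMP clients to load a VOTable.

    Sketch following the SAMP standard profile; assumes a running hub
    whose lockfile is at the default location ~/.samp.
    """
    # The lockfile holds key=value pairs, including the hub's XML-RPC URL.
    with open(os.path.expanduser("~/.samp")) as f:
        lock = dict(line.strip().split("=", 1)
                    for line in f
                    if "=" in line and not line.startswith("#"))
    hub = xmlrpc.client.ServerProxy(lock["samp.hub.xmlrpc.url"])
    reg = hub.samp.hub.register(lock["samp.secret"])
    key = reg["samp.private-key"]
    hub.samp.hub.declareMetadata(key, {"samp.name": "vovr-client-sketch"})
    hub.samp.hub.notifyAll(key, {
        "samp.mtype": "table.load.votable",
        "samp.params": {"url": table_url, "name": name},
    })
    hub.samp.hub.unregister(key)
```

In practice a library such as astropy.samp wraps these calls, but the raw exchange is small enough that any language with an XML-RPC client can join the conversation.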
VOVR server vovr.zip (2.6MB)
VOVR sources source.zip (0.1MB)
VOVR java api documentation javadoc.zip (0.5MB)
external jar files externalJars.zip (4.6MB)