Date(s) - 18/09/2017
11:00 am - 12:00 pm
Studio Villa Bosch
From Random Projections to Learning Theory to Algorithms and Back
We consider two problems in statistical machine learning, one old and one new:
(1) Given a machine learning task, what kinds of data distributions make it easier or harder? For instance, it is known that a large margin makes classification tasks easier.
(2) Given a high dimensional learning task, when can we solve it from a few random projections of the data with good-enough approximation? This is the compressed learning problem.
This talk will present results and work in progress that highlight parallels between these two problems. The implication is that random projection, a simple and effective dimensionality reduction method with origins in theoretical computer science and metric embeddings, is not only a door opener for efficient learning from large, high-dimensional data sets; it can also help us make a previously elusive fundamental problem more approachable, which leads to new algorithms for machine learning. On the flip side, this parallel allows us to broaden the guarantees that hold for compressed learning beyond those implied by compressed sensing.
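As a concrete illustration of the dimensionality reduction method named above (a sketch for context, not material from the talk itself), a Gaussian random projection in the Johnson–Lindenstrauss style approximately preserves pairwise distances; the dimensions `n`, `d`, `k` below are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example: n = 100 points in d = 10,000 dimensions,
# projected down to k = 500 dimensions.
n, d, k = 100, 10_000, 500
X = rng.standard_normal((n, d))

# Gaussian random projection matrix, scaled by 1/sqrt(k) so that
# squared Euclidean norms are preserved in expectation.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R  # compressed data: n points in k dimensions

# Pairwise distances are approximately preserved
# (Johnson–Lindenstrauss lemma).
orig_dist = np.linalg.norm(X[0] - X[1])
proj_dist = np.linalg.norm(Y[0] - Y[1])
ratio = proj_dist / orig_dist  # typically close to 1
```

The projection matrix is data-independent and cheap to draw, which is what makes the method attractive for learning from large high-dimensional data sets.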
Ata Kaban (https://www.cs.bham.ac.uk/~axk/) is currently a senior lecturer in Computer Science at the University of Birmingham, UK, and has recently started an EPSRC Fellowship. Her research interests include machine learning, data mining, and black-box optimisation in high-dimensional settings. She has published around 80 papers, won best paper awards at GECCO'13, ACML'13, and ICPR'10, and was a runner-up at CEC'15. She holds a PhD in Computer Science (2001) and a PhD in Musicology (1999). She is a member of the IEEE CIS Technical Committee on Data Mining and Big Data Analytics, and vice-chair of the IEEE CIS Task Force on High Dimensional Data Mining.
For registration, please contact Benedicta Frech: email@example.com