Hi all, Mike here.
A few months back, I introduced our new AI that can work together with volunteers to classify galaxies. It’s able to understand which galaxies, if classified by you, would best help it to learn. You and the AI have together classified tens of thousands of galaxies since we launched the new system in May.
I’m really happy to say that our paper was recently accepted for publication in the Monthly Notices of the Royal Astronomical Society!
We’ve made a few changes since the early version I shared before. I think the most interesting change is a new section applying AI fairness tools. These tools are usually used to check whether AI models make biased decisions – for example, offering fewer jobs to women. We used them to check whether our model is biased against galaxies with certain physical properties (it isn’t).
You can read the latest pre-print of the paper for free here. The (essentially identical) final publication will also be available for free from Monthly Notices once published – we’ll update this post when that happens.
The following is a blog by Yjan Gordon (@YjanGordon), a postdoc at the University of Manitoba, Canada (having recently completed a PhD at the University of Hull). Here, he describes his new paper making use of the latest Galaxy Zoo classifications.
One of the key questions I look to address in my research is that of why the black holes at the centres of some galaxies are actively feeding on matter (an active galactic nucleus, AGN for short) and why some aren’t. We know of multiple mechanisms that can trigger an AGN, from high-impact galaxy mergers to secular processes such as feeding on the matter ejected from stars over the course of their lives. However, not all AGN are created equal, and many of these objects, whilst active, are only barely so. While more powerful AGN are having a steak dinner, these weaker variants are merely snacking.
The processes that initiate these weak AGN may be different to those that fuel their more powerful cousins, or simply a scaled-down version of the same mechanisms. For example, we know that the collision of two similar-sized galaxies (known as a major merger) can trigger an AGN. A minor merger, by contrast, where a small galaxy collides with a much more massive one, may provide less fuel for an AGN, resulting in one of these weak AGN. This is exactly the question we investigate in our latest paper.
In order to test whether minor mergers are a factor in triggering weak AGN, high-quality, deep observations are needed to look for very faint merger signatures in a sample of these galaxies. To conduct our analysis we made use of the Dark Energy Camera Legacy Survey (DECaLS). This survey not only provides the deep, high-quality imaging necessary for spotting minor galactic mergers (and is far better in this regard than previous wide-field imaging surveys – see the figure below), but is also the latest survey being put to the Galaxy Zoo volunteers to obtain reliable galaxy morphologies.
A control sample of galaxies that don’t host an AGN is required, so that we can compare the fractions of weak AGN and non-AGN experiencing mergers, i.e. are mergers more frequently associated with these AGN or not? In order to control for other variables that could impact our results here, reliable morphological information is a valuable asset. For instance, spiral galaxies have a delicate structure that can be disrupted by galaxy mergers, and the presence of this morphology in a merging system can provide information about the scale or timeline of the event. One can hence see the potential for elliptical galaxies to be more likely to exhibit tidal disturbances than their more delicate spiral counterparts.
This kind of project wouldn’t be possible without the contributions of the many Galaxy Zoo volunteers providing morphological classifications on hundreds of thousands of galaxies.
When we compared the merger rates and merger scales in the weak AGN sample and the non-AGN control sample, we found a couple of compelling results.
Firstly, we found that the fraction of galaxies experiencing minor mergers was about the same in both samples. This is interesting, as it shows that minor mergers, which had long been thought to be a potential trigger for these weak AGN, are not involved in initiating weak activity of the central black hole in a galaxy.
Secondly, we found that for the least massive of these weak AGN, major mergers were significantly more common than in non-AGN. This is an unexpected result, as such major mergers might provide so much gas that any resulting AGN might be expected to be fairly powerful. Furthermore, previous research hadn’t shown any substantial evidence of this, so why are we seeing such an effect? Well, whilst major mergers are more common in these weak AGN, they still only represent a minority of the weak AGN population (~10%), and are thus not typical of the main population of weak AGN. One intriguing possibility is that these particular objects may actually be the early stages of more powerful AGN, and that as the merger progresses, and more gas falls into the galactic nucleus, the AGN will have more fuel to feed on and become a more powerful AGN. Further research is required to investigate such a hypothesis.
In this case, as is so frequent in research, not only have we answered a question about the evolution of these galaxies, but we have been presented with another.
Please keep up the great work, it really makes a difference.
Since I joined the team in 2018, citizen scientists like you have given us over 2 million classifications for 50,000 galaxies. We rely on these classifications for our research: from spiral arm winding, to merging galaxies, to star formation – and that’s just in the last month!
We want to get as much science as possible out of every single click. Your time is valuable and we have an almost unlimited pile of galaxies to classify. To do this, we’ve spent the past year designing a system to prioritise which galaxies you see on the site – which you can choose to access via the ‘Enhanced’ workflow.
This workflow depends on a new automated galaxy classifier using machine learning – an AI, if you like. Our AI is good at classifying boring, easy galaxies very fast. You are a much better classifier, able to make sense of the most difficult galaxies and even make new discoveries like Voorwerpen, but unfortunately need to eat and sleep and so on. Our idea is to have you and the AI work together.
The AI can guess which challenging galaxies, if classified by you, would best help it to learn. Each morning, we upload around 100 of these extra-helpful galaxies. The next day, we collect the classifications and use them to teach our AI. Thanks to your classifications, our AI should improve over time. We also upload thousands of random galaxies and show each to 3 humans, to check our AI is working and to keep an eye out for anything exciting.
With this approach, we combine human skill with AI speed to classify far more galaxies and do better science. For each new survey:
- 40 humans classify the most challenging and helpful galaxies
- Each galaxy is seen by 3 humans
- The AI learns to predict well on all the simple galaxies not yet classified
What does this mean in practice? Those choosing the ‘Enhanced’ workflow will see somewhat fewer simple galaxies (like the ones on the right), and somewhat more galaxies which are diverse, interesting and unusual (like the ones on the left). You will still see both interesting and simple galaxies, and still see every galaxy if you make enough classifications.
With our new system, you’ll see somewhat more galaxies like the ones on the left, and somewhat fewer like the ones on the right.
We would love for you to join in with our upgrade, because it helps us do more science. But if you like Galaxy Zoo just the way it is, no problem – we’ve made a copy (the ‘Classic’ workflow) that still shows random galaxies, just as we always have. If you’d like to know more, check out this post for more detail or read our paper. Separately, we’re also experimenting with sending short messages – check out this post to learn more.
The Galaxy Zoo team and I are really excited to see what you’ll discover. Let’s get started.
I’d love to be able to take every galaxy and say something about its morphology. The more galaxies we label, the more specific questions we can answer. When you want to know what fraction of low-mass barred spiral galaxies host AGN, suddenly it really matters that you have a lot of labelled galaxies to divide up.
But there’s a problem: humans don’t scale. Surveys keep getting bigger, but we will always have the same number of volunteers (applying order-of-magnitude astronomer math).
We’re struggling to keep pace now. When Euclid (2022), LSST (2023) and WFIRST (2025ish) come online, we’ll start to look silly.
To keep up, Galaxy Zoo needs an automatic classifier. Other researchers have used responses that we’ve already collected from volunteers to train classifiers. The best performing of these are convolutional neural networks (CNNs) – a type of deep learning model tailored for image recognition. But CNNs have a drawback. They don’t easily handle uncertainty.
When learning, they implicitly assume that all labels are equally confident – which is definitely not the case for Galaxy Zoo (more in the section below). And when making (regression) predictions, they only give a ‘best guess’ answer with no error bars.
In our paper, we use Bayesian CNNs for morphology classification. Our Bayesian CNNs provide two key improvements:
- They account for varying uncertainty when learning from volunteer responses
- They predict full posteriors over the morphology of each galaxy
Using our Bayesian CNN, we can learn from noisy labels and make reliable predictions (with error bars) for hundreds of millions of galaxies.
How Bayesian Convolutional Neural Networks Work
There are two key steps to creating Bayesian CNNs.
1. Predict the parameters of a probability distribution, not the label itself
Training neural networks is much like any other fitting problem: you tweak the model to match the observations. If all the labels are equally uncertain, you can just minimise the difference between your predictions and the observed values. But for Galaxy Zoo, some labels are more confident than others. If I observe that, for some galaxy, 30% of volunteers say “barred”, my confidence in that 30% depends massively on how many people replied – was it 4 or 40?
Instead, we predict the probability that a typical volunteer will say “Bar”, and minimise how surprised we should be given the total number of volunteers who replied. This way, our model understands that errors on galaxies where many volunteers replied are worse than errors on galaxies where few volunteers replied – letting it learn from every galaxy.
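As a toy illustration of this idea (a standalone sketch, not the paper’s actual loss code), here is a binomial negative log-likelihood in plain Python. The same mismatch between the predicted probability and the observed vote fraction is penalised more heavily when many volunteers replied than when only a few did:

```python
import math

def binomial_nll(rho, k, n):
    """Negative log-likelihood of observing k "bar" votes out of n,
    given a predicted per-volunteer probability rho (0 < rho < 1).
    A hypothetical standalone version of the idea; in the paper,
    rho is predicted by the CNN."""
    # log of the binomial coefficient C(n, k), via log-gamma
    log_binom = (math.lgamma(n + 1)
                 - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    return -(log_binom + k * math.log(rho) + (n - k) * math.log(1 - rho))

# The same 30% "bar" fraction, with the model predicting rho = 0.5:
few = binomial_nll(0.5, 3, 10)    # 3 of 10 volunteers said "bar"
many = binomial_nll(0.5, 12, 40)  # 12 of 40 volunteers said "bar"
# "many" incurs the larger loss: with more replies, the 30% observation
# is stronger evidence that rho = 0.5 is wrong.
```

The loss is smallest when `rho` matches the observed fraction, and the penalty for missing grows with the number of volunteers who replied – which is exactly the behaviour described above.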
2. Use Dropout to Pretend to Train Many Networks
Our model now makes probabilistic predictions. But what if we had trained a different model? It would make slightly different probabilistic predictions. We need to marginalise over the possible models we might have trained. To do this, we use dropout. Dropout turns off many random neurons in our model, permuting our network into a new one each time we make predictions.
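A miniature sketch of the trick (a hypothetical one-layer “network”, nothing like our actual CNN): leave dropout switched on at prediction time, so that each call randomly zeroes a different subset of weights. Repeated calls then behave like predictions from many slightly different networks, which we can average over:

```python
import math
import random

def predict_with_dropout(weights, x, drop_p=0.5, rng=random):
    """One forward pass of a toy single-layer 'network' with dropout
    left ON at prediction time. Each call randomly drops weights,
    permuting the network into a slightly different one."""
    kept = [w for w in weights if rng.random() > drop_p]
    scale = 1.0 / (1.0 - drop_p)  # rescale to preserve the expected output
    logit = scale * sum(w * x for w in kept)
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> P("bar")

random.seed(0)
weights = [0.4, -0.2, 0.7, 0.1]
# Marginalise over many dropout-permuted networks:
samples = [predict_with_dropout(weights, x=1.0) for _ in range(200)]
mc_estimate = sum(samples) / len(samples)
```

Each sample is a different network’s prediction; the spread of the samples is what gives us error bars, and their average is the marginalised (Bayesian) prediction.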
Below, you can see our Bayesian CNN in action. Each row is a galaxy (shown to the left). In the central column, our CNN makes a single probabilistic prediction (the probability that a typical volunteer would say “Bar”). We can interpret that as a posterior for the probability that k of N volunteers would say “Bar” – shown in black. On the right, we marginalise over many CNNs using dropout. Each CNN’s posterior (grey) is different, but we can marginalise over them to get the posterior over many CNNs (green) – our Bayesian prediction.
Read more about it in the paper.
Modern surveys will image hundreds of millions of galaxies – more than we can show to volunteers. Given that, which galaxies should we classify with volunteers, and which by our Bayesian CNN?
Ideally we would only show volunteers the images that the model would find most informative. The model should be able to ask – hey, these galaxies would be really helpful to learn from – can you label them for me please? Then the humans would label them and the model would retrain. This is active learning.
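One round of that loop can be sketched like this (a hypothetical miniature with toy galaxy IDs and stand-in volunteer labels; `model_score` and `active_learning_round` are names made up for illustration, and the retraining step is stubbed out):

```python
import random

def active_learning_round(pool, model_score, batch_size=100, rng=random):
    """One hypothetical round of the daily loop: rank the unlabelled
    pool by how informative the model scores each galaxy, send the top
    batch to volunteers, then retrain on their labels (stubbed here)."""
    ranked = sorted(pool, key=model_score, reverse=True)
    batch, rest = ranked[:batch_size], ranked[batch_size:]
    labels = {g: rng.random() < 0.5 for g in batch}  # stand-in volunteer votes
    # retrain(model, labels)  # in the real system: update the Bayesian CNN
    return labels, rest

random.seed(1)
pool = list(range(1000))  # toy galaxy IDs
labels, remaining = active_learning_round(pool, model_score=lambda g: g % 7)
```

Each day the pool shrinks by one batch and the model (in the real system) gets a little better at scoring what remains.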
In our experiments, applying active learning reduces the number of galaxies needed to reach a given performance level by 35-60% (see the paper).
We can use our posteriors to work out which galaxies are most informative. Remember that we use dropout to approximate training many models (see above). We show in the paper that informative galaxies are galaxies where those models confidently disagree.
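One way to score that “confident disagreement” (a sketch of a mutual-information acquisition score of this general kind, assuming the dropout-sampled predictions have already been collected; this is not a copy of the paper’s exact code):

```python
import math

def entropy(p):
    """Binary entropy in nats; 0 when the prediction is certain."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def disagreement(probs):
    """Mutual-information-style score over dropout-sampled predictions:
    entropy of the mean prediction minus the mean entropy of each
    sampled model. High only when individual models are confident
    but disagree with one another."""
    mean_p = sum(probs) / len(probs)
    return entropy(mean_p) - sum(entropy(p) for p in probs) / len(probs)

confident_agree    = disagreement([0.95, 0.96, 0.94])  # low: nothing to learn
uncertain_agree    = disagreement([0.5, 0.5, 0.5])     # low: models share the same doubt
confident_disagree = disagreement([0.05, 0.95, 0.05])  # high: worth showing to volunteers
```

Note that a galaxy where every model is merely unsure scores zero: shared doubt is not disagreement, so it is the confidently-contested galaxies that get sent to volunteers.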
This is only possible because we think about labels probabilistically and approximate training many models.
What galaxies are informative? Exactly the galaxies you would intuitively expect.
- The model strongly prefers diverse featured galaxies over ellipticals
- For identifying bars, the model prefers galaxies which are better resolved (lower redshift)
This selection is completely automatic. Indeed, I didn’t realise the lower redshift preference until I looked at the images!
I’m excited to see what science can be done as we move from morphology catalogues of hundreds of thousands of galaxies to hundreds of millions. If you’d like to know more or you have any questions, get in touch in the comments or on Twitter (@mike_w_ai, @chrislintott, @yaringal).
Excited to join in? Click here to go to Galaxy Zoo and start classifying! What could you discover?