
Visualizing the decision trees for Galaxy Zoo

This post (and visualization) is by Coleman Krawczyk, a postdoc at the ICG at the University of Portsmouth

Today we’ve added a new tool that visualizes the full decision tree for every Galaxy Zoo project from GZ2 onward (GZ1 only asked users one question, and would make for a boring visualization).  Each tree shows all the possible paths Galaxy Zoo users can take when classifying a galaxy.  Each “task” is color-coded by the minimum number of branches in the tree a classifier needs to take in order to reach that question.  In other words, it indicates how deeply buried in the tree a particular question is, a property that is helpful when scientists are analyzing the classifications.
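For readers who like to tinker, the depth colouring can be computed with a simple breadth-first search over the tree. Here's a minimal Python sketch using a made-up, simplified fragment of a GZ2-style tree; the task names and answers are illustrative only, not the actual Galaxy Zoo decision tree:

```python
from collections import deque

# Hypothetical, simplified fragment of a GZ2-style decision tree:
# each task maps its possible answers to the next task (None = end of path).
TREE = {
    "smooth_or_features": {"smooth": "how_round", "features": "edge_on", "artifact": None},
    "how_round": {"round": "anything_odd", "in_between": "anything_odd", "cigar": "anything_odd"},
    "edge_on": {"yes": "bulge_shape", "no": "bar"},
    "bulge_shape": {"rounded": "anything_odd", "boxy": "anything_odd", "none": "anything_odd"},
    "bar": {"yes": "spiral", "no": "spiral"},
    "spiral": {"yes": "anything_odd", "no": "anything_odd"},
    "anything_odd": {"yes": None, "no": None},
}

def question_depths(tree, root):
    """Minimum number of branches needed to reach each task (BFS from the root)."""
    depths = {root: 0}
    queue = deque([root])
    while queue:
        task = queue.popleft()
        for nxt in tree[task].values():
            if nxt is not None and nxt not in depths:
                depths[nxt] = depths[task] + 1
                queue.append(nxt)
    return depths

print(question_depths(TREE, "smooth_or_features"))
```

Mapping each depth to a colour then gives exactly the kind of "how deeply buried is this question" view the visualization shows.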

Galaxy Zoo has used two basic templates for its decision trees.  The first allowed users to classify galaxies as smooth, edge-on disks, or face-on disks (with bars and/or spiral arms); it was used for Galaxy Zoo 2 and the infrared UKIDSS images, and is currently being used for the SDSS data that is live on the site. The second template was designed for high-redshift galaxies, and allows users to classify galaxies as smooth, clumpy, edge-on disks, or face-on disks. This template was used for Galaxy Zoo: Hubble (GZ3) and FERENGI (artificially redshifted images of galaxies), and is currently being used for the CANDELS and GOODS images in GZ4.  Although these final three projects ask the same basic questions, there are some subtle differences between them in the questions we ask about bulge dominance, “odd” features, mergers, spiral arms, and clumps.

Visualization of the decision tree for Galaxy Zoo 2 (GZ2), by C. Krawczyk. Colors indicate the depth of a particular question within the tree.

If you ever wanted to know all the questions Galaxy Zoo could possibly ask you, head on over to the new visualization and have a look!

New paper: Galaxy Zoo and machine learning

I’m really happy to announce that a new paper based on Galaxy Zoo data has just been accepted for publication. This one is different from many of our previous works; it focuses on the science of machine learning, and how we’re improving the ability of computers to identify galaxy morphologies after being trained on the classifications you’ve provided in Galaxy Zoo. This paper was led by Sander Dieleman, a PhD student at Ghent University in Belgium.

This work was begun in early 2014, when we ran an online competition through the Kaggle data platform called “The Galaxy Challenge”. The premise was fairly simple – we used the classifications provided by citizen scientists for the Galaxy Zoo 2 project and challenged computer scientists to write an algorithm to match those classifications as closely as possible. We provided about 75,000 anonymized images + classifications as a training set for participants, and kept the same amount of data secret; solutions submitted by competitors were tested on this set. More than 300 teams participated, and we awarded prizes to the top three scores. You can see more details on the competition site.

Since completing the competition, Sander has been working on writing up his solution as an academic paper, which has just been accepted to Monthly Notices of the Royal Astronomical Society (MNRAS). The method he’s developed relies on a technique known as a neural network; these are sets of algorithms (or statistical models) in which the parameters being fit can change as they learn, and can model “non-linear” relationships between the inputs. The name and design of many neural networks are inspired by similarities to the way that neurons function in the brain.
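To make the two key ideas in that description concrete, "parameters that change as they learn" and "non-linear relationships", here is a toy neural network in plain NumPy. This is emphatically not Sander's model (his is a deep convolutional network trained on tens of thousands of images); it's a single hidden layer fit by gradient descent to a synthetic target that no linear model can capture:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # an XOR-like, non-linear target

# One hidden layer of 8 tanh units feeding a logistic output unit.
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # non-linear hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # output probability
    return p, h

def loss(p):
    # mean cross-entropy between predictions and labels
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

initial = loss(forward(X)[0])
for _ in range(3000):                        # plain gradient descent
    p, h = forward(X)
    g = (p - y) / len(y)                     # d(loss)/d(output logit)
    gh = np.outer(g, W2) * (1 - h ** 2)      # backpropagate to the hidden layer
    W2 -= 1.0 * (h.T @ g);  b2 -= 1.0 * g.sum()
    W1 -= 1.0 * (X.T @ gh); b1 -= 1.0 * gh.sum(axis=0)
final = loss(forward(X)[0])
```

The parameters (W1, b1, W2, b2) start random and are nudged by the training data until the loss drops, which is all "learning" means here.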

One of the innovative techniques in Sander’s work has been to use a model that makes use of the symmetry in the galaxy images. Consider the pictures of the same galaxy below:


A galaxy from GZ2, shown both with no rotation (left) and rotated by 45 degrees (right).

From the classifications in GZ, we’d expect the answers for these two images to be identical; it’s the same galaxy, after all, no matter which way we look at it. For a computer program, however, these images would need to be separately analyzed and classified. Sander’s work exploits this in two ways:

  1. The size of the training data can be dramatically increased by including multiple, rotated versions of the different images. More training data typically results in a better-performing algorithm.
  2. Since the morphological classification for the two galaxies should be the same, we can apply the same feature detectors to the rotated images and thus share parameters in the model. This makes the model more general and improves the overall performance.
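Point 1 can be sketched in a few lines of NumPy. This only illustrates the augmentation idea, not Sander's actual pipeline (which also crops and rescales the images and shares convolutional feature detectors across the rotated views):

```python
import numpy as np

def augment_with_rotations(image):
    """Return rotated and mirrored copies of an image. The galaxy's
    morphology label is unchanged by these transforms, so every copy
    is a 'free' extra training example."""
    views = []
    for k in range(4):                    # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        views.append(rotated)
        views.append(np.fliplr(rotated))  # mirrored view, same classification
    return views

# a fake 64x64 "galaxy" image, just for illustration
image = np.random.rand(64, 64)
batch = augment_with_rotations(image)
print(len(batch))   # 8 views of one galaxy
```

Eight label-preserving views per galaxy turns a 75,000-image training set into something several times larger at essentially no cost.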

Once all of the training data is in, Sander’s model takes images and can provide very precise classifications of morphology. I think one of the neatest visualizations is this one: galaxies along the top and bottom rows are considered “most dissimilar” by the maps in the model. You can see that it’s doing well by, for example, grouping all the loose spiral galaxies together and predicting that they are a distinct class from edge-on spirals.

From Figure 13 in Dieleman et al. (2015). Example sets of images that are maximally distinct in the prediction model. The top row consists of loose winding spirals, while the bottom row are edge-on disks.

For more details on Sander’s work, he has an excellent blog post on his own site that goes into many of the details, a lot of which are accessible even to a non-expert.

While there are a lot of applications for these sorts of algorithms, we’re particularly interested in how this will help us select future datasets for Galaxy Zoo and similar projects. For future surveys like LSST, which will contain many millions of images, we want to efficiently select the images where citizen scientists can contribute the most – either for their unusualness or for the possibility of more serendipitous discoveries. Your data are what make innovations like this possible, and we’re looking forward to seeing how these can be applied to new scientific problems.

Paper: Dieleman, Willett, & Dambre (2015). “Rotation-invariant convolutional neural networks for galaxy morphology prediction”, MNRAS, accepted.

New Images on Galaxy Zoo, Part 1

We’re delighted to announce that we have some new images on Galaxy Zoo for you to classify! There are two sets of new images:

1. Galaxies from the CANDELS survey

2. Galaxies from the GOODS survey

The general look of these images should be quite familiar to our regular classifiers, and we’ve already described them in many previous posts (examples: here, here, and here), so they may not need too much explanation. The only difference for these new images is their sensitivity: the GOODS images are made from more HST orbits and are deeper, so you should be able to see details in a larger number of galaxies than in the images previously classified.

Comparison of the different sets of images from the GOODS survey taken with the Hubble Space Telescope. The left shows shallower images from GZH with only 2 sets of exposures; the right shows the new, deeper images with 5 sets of exposures now being classified.

The new CANDELS images, however, are slightly shallower than before. The main reason that these are being included is to help us get data measuring the effect of brightness and imaging depth for your crowdsourced classifications. While they aren’t always as visually stunning as nearby SDSS or HST images, getting accurate data is really crucial for the science we want to do on high-redshift objects, and so we hope you’ll give the new images your best efforts.

Images from the CANDELS survey with the Hubble Space Telescope. Left: deeper 5-epoch images already classified in GZ. Right: the shallower 2-epoch images now being classified.

Both of these datasets are relatively small compared to the full Sloan Digital Sky Survey (SDSS) and Hubble Space Telescope (HST) sets that users have helped us with over the last several years. With about 13,000 total images, we hope they can be finished by the Galaxy Zoo community within a couple of months. We already have more sets of data prepared for as soon as these finish – stay tuned for Part 2, coming up shortly!

As always, thanks to everyone for their help – please ask the scientists or moderators here or on Talk if you have any questions!

Radio Galaxy Zoo searches for Hybrid Morphology Radio galaxies (HyMoRS): #hybrid

The first science paper on hybrid morphology radio galaxies found through the Radio Galaxy Zoo project has now been submitted!

In the paper we have revised the definition of the hybrid morphology radio galaxy (HyMoRS, or hybrid) class. In general, HyMoRS show a different Fanaroff-Riley radio morphology on either side of the active nucleus: FR I type on one side and FR II on the other side of their infrared host galaxy. But we found that this wasn’t very precise, and set out a clear definition of these sources, which is:

”To minimise the misclassification of HyMoRS, we attempt to tighten the original morphological classification of radio galaxies in the scope of detailed observational and analytical/numerical studies undertaken in the past 30 years. We consider a radio source to be a HyMoRS only if

(i) it has a well-defined hotspot on one side and a clear FR I type jet on the other, though we note the hotspots may `flicker’, that is their brightness may be rapidly variable (Saxton et al. 2002), and, in the case the radio source has a very prominent core or is highly asymmetric,

(ii) its core prominence does not suggest strong relativistic beaming nor its asymmetric radio structure can be explained by differential light travel time effects. ”

Based on this, we revisited the hybrids reported in the scientific literature and found 18 objects that satisfy our criteria. During the first year of Radio Galaxy Zoo’s operation, through our fantastic RadioTalk, you have nearly doubled this number by finding another 14 hybrids, which we now confirm! Two examples from the paper are below:

We also looked at the mid-infrared colours of hybrids’ hosts. As explained by Ivy in our last RGZ blog post (http://blog.galaxyzoo.org/2015/03/02/first-radio-galaxy-zoo-paper-has-been-submitted/), the mid-infrared colour space is defined by the WISE filter bands: W1, W2 and W3, corresponding to 3.4, 4.6 and 12 microns, respectively.

The results are below:


For those of you interested in seeing the full paper, we will post a link to a freely accessible copy once the paper is accepted by the journal and is in press! :)

Fantastic job everyone!
Anna & the RGZ science team

First Radio Galaxy Zoo paper has been submitted!

The project description and early science paper (results from Year 1) for the Radio Galaxy Zoo project has been submitted!

We find that the RGZ citizen scientists are as effective as the science experts at identifying the radio sources and their host galaxies.

Based upon our results from 1 year of operation, we find the RGZ host galaxies reside in 3 primary loci of mid-infrared colour space.  The mid-infrared colour space is defined by the WISE filter bands: W1, W2 and W3, corresponding to 3.4, 4.6 and 12 microns, respectively.

Approximately 10% of the RGZ sample reside in the mid-IR colour space dominated by elliptical galaxies, which have older stellar populations and are less dusty, hence resulting in bluer (W2-W3) colours. The second locus (where ~15% of RGZ sources are found) lies in the colour space known as the `AGN wedge’, typically associated with X-ray-bright QSOs and Seyferts. Lastly, the largest concentration of RGZ host galaxies (~30%) can be found in the third locus, usually associated with luminous infrared galaxies (LIRGs).  It should be noted that only a small fraction of LIRGs are associated with late-stage mergers.  The remainder of the RGZ host population are distributed along the loci of both star-forming and active galaxies, indicative of radio emission from star-forming galaxies and/or dusty elliptical (non-star-forming) galaxies. See the figure below for a plot of these results.
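For the curious, the three loci can be caricatured in a few lines of code. The colour cuts below are illustrative guesses for this sketch only, not the published wedge definitions from Lacy et al. (2004) or Mateos et al. (2012):

```python
def wise_locus(w1, w2, w3):
    """Crudely sort a source into the loci described above from its WISE
    magnitudes (Vega). The cuts are illustrative guesses, NOT the
    published selection criteria."""
    c12 = w1 - w2          # W1 - W2 colour
    c23 = w2 - w3          # W2 - W3 colour
    if c12 > 0.8:
        return "AGN wedge (QSO/Seyfert-like)"
    if c23 < 1.5:
        return "elliptical-like (blue W2-W3)"
    if c23 > 3.0:
        return "LIRG-like (red W2-W3)"
    return "star-forming / intermediate"

# an old, dust-free elliptical sits blue in W2-W3:
print(wise_locus(w1=10.0, w2=10.0, w3=9.2))
```

The real analysis works with the full two-dimensional colour-colour distribution (see the figure below this paragraph), but the logic of "where does this source fall in W1-W2 vs W2-W3" is the same.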

Caption: WISE colour-colour diagram, showing sources from the WISE all-sky catalog (colourmap), 33,127 sources from the 75% RGZ catalog (black contours), and powerful radio galaxies (green points) from Gürkan et al. (2014). The wedge used to identify IR colours of X-ray-bright AGN from Lacy et al. (2004) and Mateos et al. (2012) is overplotted (red dashes). Only 10% of the WISE all-sky sources have colours in the X-ray-bright AGN wedge; this is contrasted with 40% of RGZ and 49% of the Gürkan et al. (2014) radio galaxies. The remaining RGZ sources have WISE colours consistent with distinct populations of elliptical galaxies and LIRGs, with smaller numbers of spiral galaxies and starbursts.

In addition, we will also be submitting our paper on Hybrid Morphology Radio Sources (HyMoRS) in the next few days so stay tuned!

As always, thank you all very much for all your help and support and keep up the awesome work!

Cheers,
Julie, Ivy & the RGZ science team

Zooniverse at Mauna Kea, Day 6: This is the End

Ed, Chris, Sandor, and Becky in front of the telescope

Part 1, Part 2, Part 3, Part 4, Part 5

I’m not sure if we’ve been especially unlucky or if this is the norm for observing trips, but once again the weather is curtailing our telescope time. After a few hours of normal observing, clouds started to blow across the top of Mauna Kea, and now it’s raining outside the dome.

Tonight's Weather

The dip in the humidity (second from the top) shows when we were able to observe.

In the meantime, you can check out a short video tour of the dome that Becky and I shot a couple of days ago:

Tomorrow, we check out of Hale Pohaku and head down to Hilo for a night. Then I’m off to Chicago and Becky and Sandor are back to Oxford. Even with the bad weather, sleep deprivation, and static electricity, this trip has been a really great experience for me. I now know infinitely more about radio astronomy than I did before! I hope the people doing the real work were able to get all the data they needed.

A Few Notes:

Sad Becky

This sums up the general mood

  • Sandor and Becky took some sick photos around sunset, you should check out all of them.
  • When everything is terrible, you just have to let it go.
  • Thanks again to all the Galaxy Zoo volunteers, whose work made this observing trip possible for us. You are the best.

Zooniverse at Mauna Kea, Day 5: The Wind Strikes Back


Part 1, Part 2, Part 3, Part 4

After a few good days of observations, the wind has returned to ruin our fun. The CSO telescope is supposed to be closed when the wind is above 35mph. Curiously, the telescope itself doesn’t have its own anemometer, so we have to rely on readings from the other telescopes on the mountain to decide if it is safe to open the telescope building.

Feeling this entire situation was quite unsatisfactory, I decided to build my own anemometer using a clipboard with a ruler and Becky’s boot, giving you the answer to Chris’s question from earlier tonight:

Graph of Wind speed vs Deflection Angle

Shout out to Mrs Beck’s AP Physics class for my understanding of this

Using the above chart we tried to work out the wind speed. We had to do a bit of fudging: we treated the boot as a perfect cylinder (drag coefficient 0.82) that weighed about 300g, and we decided not to take the lower air pressure into account. Finally, when Sandor and I calculated it independently, we got wildly different results, so it was a futile exercise in the end. (Also, CSO: buy an anemometer.)
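For anyone who wants to repeat the futile exercise at home, the physics is just a force balance on a hanging boot: gravity pulls down, drag pushes sideways, so tan(θ) = F_drag / (mg), with F_drag = ½ρC_d A v². Solving for v gives the sketch below; the boot mass, frontal area, and sea-level air density are the same sort of rough guesses we made on the night:

```python
import math

def wind_speed(deflection_deg, mass_kg=0.3, area_m2=0.03,
               drag_coeff=0.82, rho=1.2):
    """Estimate wind speed (m/s) from the deflection angle of a boot
    hanging as a pendulum. Force balance: tan(theta) = F_drag / (m*g),
    F_drag = 0.5 * rho * Cd * A * v**2. All the defaults are guesses;
    rho is sea-level air density, which overestimates at 4200m altitude."""
    g = 9.81
    tan_theta = math.tan(math.radians(deflection_deg))
    return math.sqrt(2 * mass_kg * g * tan_theta / (rho * drag_coeff * area_m2))

# a 30-degree swing with these guesses:
v = wind_speed(30.0)
print(f"{v:.1f} m/s ({v * 2.237:.0f} mph)")
```

With these numbers a 30-degree deflection comes out around 11 m/s (roughly 24 mph), comfortably under the 35 mph closing limit. The real uncertainty, of course, is in every one of those guessed parameters, which is presumably why Sandor and I disagreed so badly.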

Sandor doing the hard work!

Since then, we’ve been playing chicken with the wind. Sometimes having to close the dome. Sometimes thinking we can be open, only to have the telescope struggle to stay on target. Sometimes we hear Meg Schwamb‘s wind tracker say “Warning High Winds”.  The conditions made us miss out on a second night of observing Comet Lovejoy, and everyone seemed pretty down for most of the night.

Around 1 or 2am the wind finally let up and we were able to start observing, so the night wasn’t a complete loss. Hopefully the weather tomorrow is better.

A Few Notes:

  • It’s really hard to get enough sleep. Sleeping at altitude is hard anyway, and adding in trying to sleep during the day gives us all points for degree of difficulty. Everyone has lovely bags around their eyes.
  • This is the last day Chris is with us. We’ll be all alone tomorrow night.
  • Sandor is succumbing to the static curse now too.
  • @GeertHub on Twitter wanted me to post a screenshot of the telescope software:
  • All the Sex & Drugs & Rock & Roll is helping us touch the sky.

Zooniverse at Mauna Kea, Day 4: Stand Back!


(Part 1, Part 2, Part 3)

Tonight we took a brief break from observing galaxies to train the Caltech Submillimeter Observatory on comet Lovejoy. I was able to help out with the observation in a real-life version of:

Wait, forgot to escape a space. Wheeeeee[taptaptap]eeeeee

(turns out they have swinging ropes in the control room, who knew?). Sandor and Becky did the actual observing work: Sandor running the telescope, and Becky doing the data reduction to produce a nice graph that Chris tweeted:

In the last post, I talked about how the telescope deals with the background noise from the Earth’s atmosphere by ‘chopping’: alternating between reading from its target and a point slightly off the target, then combining the readings to produce a measurement of the target with atmospheric interference removed. This works well for the distant galaxies we are observing, but not for the comet. Chris realized that the comet was too close and large (in a relative sense) for chopping to work: the telescope would take its noise reading while still pointing at the comet.

Instead, we used another, albeit less effective, technique for handling noise. We tuned the telescope to the frequency we were looking for (carbon monoxide), took a measurement, and then tuned it to another frequency to measure the background noise. Subtracting the noise measurement from the measurement at our target frequency gives us a clean(-ish) signal.
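In code, both noise-removal schemes come down to the same subtraction. Here's a toy model of the frequency-switching version described above, with entirely made-up numbers for the background, the line, and the noise:

```python
import numpy as np

rng = np.random.default_rng(42)
freqs = np.linspace(0, 1, 1024)     # arbitrary toy frequency axis

background = 5.0 * np.exp(-freqs)   # a smooth, made-up sky/receiver background
line = 0.8 * np.exp(-((freqs - 0.5) / 0.01) ** 2)   # the target CO line

# The "on" spectrum is tuned to the line; the "off" is tuned away from it,
# so it sees only the background (plus its own noise).
on = background + line + 0.05 * rng.standard_normal(freqs.size)
off = background + 0.05 * rng.standard_normal(freqs.size)

clean = on - off    # the background cancels, leaving the (noisy) line
```

The cost of the technique is visible in the last line: the two noise terms add in quadrature, so `clean` is noisier than either raw spectrum, which is one reason this is the "less effective" option.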

After that, the really exciting bit happened: I got to operate the telescope as we recalibrated it and got ready to point it at our first galaxy of the night. It was pretty easy, telescope operating. Even someone with a BS in Film, like me, can do it. The procedure for moving to our first source was to first pick a bright known object, aim the telescope at it, and have the telescope calibrate its pointing by taking five measurements around the source to figure out the source’s true location.

Once the positioning was calibrated, I ordered the telescope to ‘slew’ (using that new vocabulary) to the galaxy we’re observing, set the exposure time, and then had it ‘chop’. And then ‘chop’ again. And then ‘chop’ again. And again. And you get the idea. I’ve gotten to use a bunch of different cameras, but this was by far the coolest one I’ve operated.

Ed at the telescope's controls

That full-frame Red One is weaksauce next to my 10m dish

A Few Notes:

  • We ran into a computer glitch around 5 in the morning yesterday. Simon, the telescope manager, kindly helped us fix it.
  • “Watts/Hertz or Watts*m^2/Hertz” I overheard Becky saying, triggering deeply repressed memories of doing unit conversions in high school chemistry.
  • Sorry there haven’t been as many pictures recently. Stuff inside the control room doesn’t really seem to change that much from night to night.
  • There was concern about our comet observation from a collaborator. It turns out the telescope was trying to compensate for the comet’s motion as though it were a distant galaxy, so the above graph still needs a few adjustments applied to it.
  • We had some Comet Lovejoy-themed music tonight. We didn’t even look at M83.

Zooniverse at Mauna Kea, Day 3: This line is not hidden in the RMS!

Caltech Submillimeter Observatory

It occurred to me that I haven’t talked much about the telescope itself. There haven’t been any pictures of it yet either. We’re at the Caltech Submillimeter Observatory, which is basically a giant (10m) dish inside a sweet-looking disco ball on top of a dormant volcano. It observes at wavelengths somewhere in between infrared and microwave.

CSO dish with astronomers in front

The Dish (Astronomers for Scale)

We spend all of our time at the telescope in the control room with everyone hunched over a computer. I’ve learned a couple of the incantations they use to control the telescope. The first command ‘chop’ is what actually makes it record an observation. I wondered why it wasn’t called ‘listen’ or ‘observe’, but it turns out that ‘chop’ pretty accurately describes the motion of the telescope while it records.

The galaxies we’re observing are very distant and faint, and blend into the background radiation in our atmosphere. To compensate, the telescope takes a measurement of the source and then another slightly off the source. The controlling computer uses the second measurement to subtract the background noise from its measurement of the source galaxy.

The other command, ‘slew’, causes the telescope to move. When I asked where that name came from, I was given a shrug by the so-called ‘experts’ in the room. So I turned to Google, and found the dictionary definition: to ‘turn or slide violently or uncontrollably in a particular direction’, which sounds like an accurate description of how the telescope’s movement feels from the control room. It’s also originally a nautical term, which feels appropriate too.

The Astronomers

Observing is serious business. Watching people observe is not!

A few notes from the second half of last evening and this:

  • We had a small earthquake! It was exciting. It was shocking. It was only a 3.3! This is the second earthquake Chris, Sandor, and I have experienced, and it was Becky’s first. Pretty cool.
  • Apparently the observations tonight have provided some confusing results. I tried to get Chris to explain what was odd about them. Mostly due to altitude (partly due to working on this), all I could grasp was that they wanted to compare their observations to a nice looking graph with a clear regression line, and the galaxies they are observing are way off in a corner instead of along the line.*
  • Becky has a major problem with static electricity.
  • Here are some of the songs we’ve been listening to tonight (presented without judgment).
  • You can find more pictures of all the other telescopes at the top of Mauna Kea (post about all of them upcoming!) and other photos of the trip here.

* They misinterpreted the data and everything fits now!

Zooniverse at Mauna Kea, Day 2: Take That, Meteorologists!

Moon

(In which, dismayed by forecasts of 100mph winds, we go to the beach and then end up observing anyway)

A brief update on last night: we were actually able to open the telescope in the wee hours of Thursday morning. Sandor and Becky got as far as pointing the telescope and starting to calibrate it when the wind picked back up and forced us to close.

On the bright side, we enjoyed a beautiful sunrise from the top of the mountain.

We awoke late in the afternoon to emails warning us that “Summit Conditions are Extremely Dangerous” and weather predictions of 100mph winds at the top of the mountain. Thinking it would be a lost night, Becky, Sandor, and I took off for Kona, hoping to check out the ocean and maybe catch the sunset. Chris stayed behind to answer emails.

Sunset at Kona

Sooo pretty (Photo by Becky)

It was awesome. Definitely a good decision.

Back at Mauna Kea, the predicted extreme winds never materialized, and Chris and Meg Schwamb were able to open the CSO’s doors for a bit of remote observing, while the beach bums rushed back to Hale Pohaku to join. After a brief wind scare, we made the trip up the mountain to observe on site.

The team at their stations.

Hard at work! (Photo by Ed)

It turns out that radio astronomy is pretty similar to computer programming (my normal Zooniverse occupation), in that it mostly seems to involve typing obscure commands into a shell prompt and then waiting for things to happen. Unlike programming, it also involves stomach-churning shifts as the entire building moves to track the source.

During the waiting periods, I’ve tried to learn more about how the telescope works after being mesmerized by the technospeak of Simon, the telescope’s manager. One part of the telescope he seemed most eager to show us was the heterodyne receiver. After asking the real astronomers what it was, I was very disappointed to learn that it isn’t a Terminator weapon. Instead, it’s the part of the telescope’s processing pipeline that shifts the signal from the telescope to a frequency where detectors are cheap(er). Anyway, it’s certainly a cool-looking piece of equipment.
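The trick a heterodyne receiver plays is classic signal processing: multiplying the sky signal by a local oscillator produces components at the sum and difference of the two frequencies, and the (much lower) difference frequency carries the same information. Here's a toy demo with made-up kHz-scale numbers; the real receiver works at hundreds of GHz:

```python
import numpy as np

fs = 1_000_000                      # sample rate in Hz (toy numbers throughout)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
f_sky, f_lo = 230_000, 200_000      # stand-in "sky" and local-oscillator tones

# Mixing = multiplication. cos(a)cos(b) = 0.5*[cos(a-b) + cos(a+b)],
# so the product has power at 30 kHz (difference) and 430 kHz (sum).
mixed = np.cos(2 * np.pi * f_sky * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < 100_000               # crude stand-in for the low-pass filter
if_freq = freqs[low][np.argmax(spectrum[low])]
print(f"intermediate frequency: {if_freq / 1000:.0f} kHz")
```

After the low-pass filter throws away the sum term, the detector only has to handle the 30 kHz "intermediate frequency" rather than the original 230 kHz, which is exactly why the downstream electronics get cheap(er).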

The Heterodyne Receiver

It came from the Future to save the Present (Photo by Sandor)

That’s about it for me tonight. We’ll just be up here listening to some sick jams and looking at distant galaxies. Remember, you can find a bunch of pictures of the trip (not many of people observing yet, though) here.
