This object was discussed within the science team as the nature of Hanny’s Voorwerp became clear, since the color of its giant loop suggested similar emission-line properties at a larger redshift. Kevin gave it the name “Teacup” in honor of this loop. Then in March 2009, Georgia State University colleague Mike Crenshaw was here on my campus for a thesis defense. I showed him this object, and he mentioned that one of their graduate students was doing spectroscopy of active galaxies at the Lowell Observatory 1.8m telescope that week. Two nights later, Stephen Rafter from GSU obtained a long-slit spectrum crossing the loop and showed that it was, indeed, gas photoionized by an AGN. This object later featured in the Voorwerpje hunt, as one of the eight cases showing an energy deficit from the nucleus, implying that the nucleus must have faded. Indeed, this example was a major factor in showing that the Hunt project would be worthwhile.
Today’s post is also from Dr Enno Middelberg and is the second part of two explaining in more detail about radio interferometry and the techniques used in producing the radio images in Radio Galaxy Zoo.
In a previous post I explained how the similarity of the electric field at two antennas’ locations is related to the Fourier transform of the sky brightness. Unfortunately, we’re not quite there yet. You may have heard about sine and cosine functions and know that they are one-dimensional. Images, and the sky brightness distribution, however, are two-dimensional. So how can we imagine a two-dimensional Fourier transform? In this case, we have to combine 2D waves with various frequencies, amplitudes, and orientations into one image. We can make a comparison with waves on a lake. Just like a sine or cosine wave, a water wave has an amplitude and a frequency, but in addition it also has an orientation, or a direction in which it travels. Now let us think of a few people sitting around a pond or lake. Everyone kneels down to generate waves which then propagate through the water. Let us further assume that the waves are not curved, but that the crests and valleys are parallel lines. Now all these waves, with properly chosen frequencies, amplitudes, and directions, will propagate into the center of the pond, where they interfere. With just the right parameters, the interference pattern can be made to look like a 2D image. In a radio interferometer, every pair of telescopes makes a measurement which represents the properties of such a wave, and all waves combined can then be turned into an image. Let me point out that the analogy with the lake takes things a little too far: since the water waves keep moving across the lake, a potential image formed by their interference would disappear quite quickly, but I hope you get the point about interfering 2D waves.
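The idea of building an image out of oriented 2D waves can be sketched in a few lines of Python (a toy illustration with arbitrarily chosen amplitudes, phases, and frequencies, not actual interferometer data):

```python
import numpy as np

# Each 2D Fourier component is a plane wave with an amplitude, a phase,
# a spatial frequency, and an orientation -- like one set of parallel
# wave crests crossing the pond.
n = 128
y, x = np.mgrid[0:n, 0:n]

def plane_wave(amplitude, phase, fx, fy):
    """One 2D wave; (fx, fy) set its frequency and travel direction."""
    return amplitude * np.cos(2 * np.pi * (fx * x + fy * y) / n + phase)

# Summing a few such waves with different orientations starts to build
# up a 2D interference pattern, i.e. an image.
image = (plane_wave(1.0, 0.0, 3, 0)      # "horizontal" ripples
         + plane_wave(0.5, 0.7, 0, 5)    # "vertical" ripples
         + plane_wave(0.3, 1.2, 4, 4))   # diagonal ripples
print(image.shape)
```

With enough waves of properly chosen parameters, such a sum can reproduce any 128×128 image.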
To illustrate this further I have made a little movie. Let us assume that the radio sky looks just like Jean Baptiste Joseph Fourier (top left panel in the movie). I have taken this image from Wikipedia, cropped it to 128×128 pixels, and calculated its Fourier transform. The Fourier transform is an image with the same dimensions, but its pixels indicate the amplitude, phase and frequency of 2D waves which, when combined, result in an image. Then I have taken an increasing number of pixels from this Fourier transform (which ones is indicated at the top right), calculated which 2D waves they represent (bottom right), and incrementally added them into an image (bottom left). At the beginning of the movie, when only a few Fourier transform pixels are used, the reconstructed Mr. Fourier is barely recognizable; with 50 Fourier pixels added, one begins to identify a person; and with an increasing number of waves added, the image more and more resembles the input image. You should play it frame by frame, in particular at the beginning, when the changes in the reconstructed image are large. In radio interferometry, Mr. Fourier’s image is what we want (what does the sky look like?), but what we get is only the pixels shown in the upper right image. Each of these pixels, all by itself, provides information as illustrated at the bottom right, but all together they yield an image such as the one at the bottom left. And the more pixels we measure, the more accurate the image becomes.
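The movie’s procedure can be mimicked with a fast Fourier transform (a sketch using a random array in place of the Wikipedia portrait):

```python
import numpy as np

# Keep only the N strongest pixels of the Fourier transform, invert,
# and watch the reconstruction error shrink as N grows.
rng = np.random.default_rng(0)
img = rng.random((128, 128))          # stand-in for Mr. Fourier
ft = np.fft.fft2(img)

# Rank Fourier pixels by amplitude, strongest waves first.
order = np.argsort(np.abs(ft).ravel())[::-1]

def reconstruct(n_pixels):
    """Inverse-transform using only the n_pixels strongest Fourier pixels."""
    keep = np.zeros(ft.size, dtype=bool)
    keep[order[:n_pixels]] = True
    return np.fft.ifft2(np.where(keep.reshape(ft.shape), ft, 0)).real

# RMS error drops as more Fourier pixels are included.
errors = [np.sqrt(np.mean((reconstruct(n) - img) ** 2))
          for n in (50, 500, 5000, 128 * 128)]
print(errors)
```

Just as in the movie, 50 pixels give only a crude likeness, while the full set reproduces the input exactly.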
So in summary: a radio interferometer makes measurements of the similarity of the electric field at two locations, and the degree of similarity represents the Fourier transform of the sky radio brightness for the two antennas in that instant. Astronomers then reconstruct the sky brightness from all these measurements taken together – that’s also why the technique is called “synthesis imaging”, or “aperture synthesis”. And if you’ve kept reading until here without having your brain turn to mush – congratulations! This is typically the subject of lectures for advanced physics students. I’ve been learning about radio interferometry now for more than 15 years and am still discovering new and interesting bits.
I’ve used some statistical tools to analyze the spatial distribution of Galaxy Zoo galaxies and to see whether we find galaxies with particular classifications in more dense environments or less dense ones. By “environment” I’m referring to the kind of region in which these galaxies tend to be found: for example, galaxies in dense environments are usually strongly clustered in groups and clusters of many galaxies. In particular, I’ve used what we call “marked correlation functions,” which I’ve found are very sensitive statistics for identifying and quantifying trends between objects and their environments. This is also important from the perspective of models, since we think that massive clumps of dark matter are in the same regions as massive galaxy groups.
We’ve mainly used them in two papers, where we analyzed the environmental dependence of morphology and color and the environmental dependence of barred galaxies. These papers have been described a bit in this post and this post. We’ve also had other Galaxy Zoo papers about similar subjects, especially this paper by Steven Bamford and this one by Kevin Casteels.
What I loved about these projects is that we obtained impressive results that nobody else had seen before, and it’s all thanks to the many many classifications that the citizen scientists have contributed. These statistics are useful only when one has large catalogs, and that’s exactly what we had in Galaxy Zoo 1 and 2. We have catalogs with visual classifications and type likelihoods that are ten times as large as ones other astronomers have used.
What are these “marked correlation functions”, you ask? Traditional correlation functions tell us how objects are clustered relative to random clustering, and we usually write this as 1+ξ. But we have lots of information about these galaxies, more than just their spatial positions. So we can weight the galaxies by a particular property, such as the elliptical galaxy likelihood, and then measure the clustering signal. We usually write this as 1+W. Then the ratio (1+W)/(1+ξ), which is the marked correlation function M(r), tells us whether galaxies with high values of the weight are in more dense or less dense environments on average. And if 1+W = 1+ξ, or in other words M = 1, then the weight is not correlated with the environment at all.
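As a toy illustration (not the pipeline used in our papers), the standard estimator for M(r) in a separation bin is just the mean product of the marks over all galaxy pairs in that bin, divided by the square of the mean mark. The positions and marks below are random, so M(r) should come out near 1:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
pos = rng.random((200, 2))     # toy 2D galaxy positions
marks = rng.random(200)        # e.g. elliptical likelihoods per galaxy

def marked_cf(pos, marks, r_lo, r_hi):
    """M(r) = (1+W)/(1+xi) for one separation bin [r_lo, r_hi)."""
    mbar2 = marks.mean() ** 2
    prod_sum, n_pairs = 0.0, 0
    for i, j in combinations(range(len(pos)), 2):
        r = np.linalg.norm(pos[i] - pos[j])
        if r_lo <= r < r_hi:
            prod_sum += marks[i] * marks[j]
            n_pairs += 1
    return prod_sum / (n_pairs * mbar2) if n_pairs else np.nan

m_r = marked_cf(pos, marks, 0.1, 0.2)
print(m_r)  # near 1: random marks are uncorrelated with environment
```

A mark that is systematically higher in crowded regions would push M(r) above 1, which is exactly the elliptical-likelihood signal measured in the papers.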
First, I’ll show you one of our main results from that paper using Galaxy Zoo 1 data. The upper panel shows the clustering of galaxies in the sample we selected, as a function of projected galaxy separation (rp). This is something other people have measured before, and we already knew that galaxies are clustered more strongly than random. But then we weighted the galaxies by the GZ elliptical likelihood (based on the fraction of classifiers identifying the galaxies as ellipticals) and took the (1+W)/(1+ξ) ratio, which is M(rp); that’s shown by the red squares in the lower panel. When we use the spiral likelihoods instead, the blue squares are the result. This means that elliptical galaxies tend to be found in dense environments, since they have an M(rp) ratio greater than 1, while spiral galaxies are in less dense environments than average. When I first ran these measurements, I expected rather noisy results, but the measurements are very precise and far exceeded my expectations. Without many visual classifications of every galaxy, this wouldn’t be possible.
Second, using Galaxy Zoo 2 data, we measured the clustering of disc galaxies, shown in the upper panel of the plot above. Then we weighted the galaxies by their bar likelihoods (based on the fractions of people who classified them as having a stellar bar) and measured the same statistic as before. The result is shown in the lower panel, and it shows that barred disc galaxies tend to be found in denser environments than average disc galaxies! This is a completely new result: astronomers had not detected this signal before, mainly because their samples were too small, but we were able to do better with the classifications provided by Zooites. We argued that barred galaxies often reside in galaxy groups and that a minor merger or interaction with a neighboring galaxy can trigger disc instabilities that produce bars.
What kinds of science shall we use these great datasets and statistics for next? My next priority with Galaxy Zoo is to develop dark matter halo models of the environmental dependence of galaxy morphology. Our measurements are definitely good enough to tell us how spiral and elliptical morphologies are related to the masses of the dark matter haloes that host the galaxies, and these relations would be an excellent and new way to test models and simulations of galaxy formation. And I’m sure there are many other exciting things we can do too.
Today’s post comes from Dr Enno Middelberg and is the first part of two explaining in more detail about radio interferometry and the techniques used in producing the radio images in Radio Galaxy Zoo.
I have written in an earlier post about the basic idea of how to increase the resolution of a radio telescope: use many telescopes, separated by kilometers, and observe the same object with all of them. Here is a little more information about how this works.
At the very heart of an interferometer is the van Cittert-Zernike theorem: it essentially states that the degree of similarity of the electric field at two locations is a measure of the Fourier transform of the sky brightness distribution. Now that’s a big bite to swallow, but let me explain it in less confusing words: the electric field is all we can measure – radio waves are electromagnetic waves, and radio telescopes are sensitive to the electric field. We can build a radio telescope so that it produces as its output a voltage which is proportional to the electric field which the antenna receives from, e.g., a galaxy. Much of the signal will be noise from our own Milky Way, the atmosphere and the electronics which amplify the feeble signals, but a tiny little bit of the signal will be caused by radio waves from space, and both antennas will receive a little bit of these. Now suppose we have two telescopes separated by 1 km or so, and both telescopes produce such voltages which contain a little bit of this signal. The voltages are digitised and the two data streams are fed into a correlator. The correlator is a computer which takes the two data streams and calculates their correlation coefficient, an indicator of their similarity. If the two data streams have nothing in common (for example, because an inexperienced PhD student pointed the two antennas in different directions :-) ) then the correlation coefficient will be zero, which is to say that they are not similar at all. However, if the two telescopes point at the same source, the data streams will have a few bits in common, and the correlator spits out a correlation coefficient which is not zero. This is our measurement!
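The correlator’s job can be simulated in a few lines (a toy model with made-up noise levels, not real telescope voltages):

```python
import numpy as np

# Two antennas see mostly independent noise plus a small common signal
# from the sky; a third data stream shares nothing with the first.
rng = np.random.default_rng(42)
n = 100_000
sky = rng.normal(size=n)                     # the faint common radio signal
v1 = 0.3 * sky + rng.normal(size=n)          # antenna 1: signal + own noise
v2 = 0.3 * sky + rng.normal(size=n)          # antenna 2: signal + own noise
elsewhere = rng.normal(size=n)               # antenna pointed somewhere else

rho_common = np.corrcoef(v1, v2)[0, 1]       # clearly nonzero
rho_none = np.corrcoef(v1, elsewhere)[0, 1]  # consistent with zero
print(rho_common, rho_none)
```

The small but clearly nonzero coefficient from the shared signal is precisely “our measurement”; the mispointed antenna gives essentially zero.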
Now that we’ve got that out of the way, we need to talk about Fourier transforms. The van Cittert-Zernike theorem states that the correlation coefficient is a measure of the Fourier transform of the sky brightness. Now what is a Fourier transform? The Fourier transform is an ingenious way of representing a mathematical function as a sum of sine and cosine functions. That is, if I take a large number of sine and cosine functions with various (but carefully selected!) frequencies and amplitudes, then their sum will be an accurate representation of another function, for example a square wave or a sawtooth. Check out the Wikipedia page on Fourier series (which are related to Fourier transforms, but easier to understand), which has a number of nice animations to illustrate this, such as this one:
You can also play with Paul Falstad’s Java applet to see how to construct functions using sine and cosine waves interactively – very instructive! In part 2 of this post I will explain how astronomers use 2D Fourier transforms to assemble images of the radio sky.
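The square-wave example can also be reproduced numerically; this sketch sums the well-known odd-harmonic sine series for a square wave:

```python
import numpy as np

# square(t) ~ (4/pi) * [sin(t) + sin(3t)/3 + sin(5t)/5 + ...]
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
target = np.sign(np.sin(t))   # the square wave we want to represent

def partial_sum(n_terms):
    """Sum the first n_terms odd harmonics of the series."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):
        s += (4 / np.pi) * np.sin(k * t) / k
    return s

# The approximation improves as more carefully chosen sines are added.
errs = {n: np.abs(partial_sum(n) - target).mean() for n in (1, 5, 50)}
print(errs)
```

The mean error shrinks steadily with the number of terms, just as in the Wikipedia animations.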
We’re pleased to announce that Radio Galaxy Zoo has been translated into traditional character Chinese. Many thanks to the Zooniverse’s Chris Snyder for getting all the technical things set up for the translation to go live and to Mei-Yin Chou at Academia Sinica’s Institute of Astronomy & Astrophysics (ASIAA) for helping to verify the translation. What follows is an announcement describing Radio Galaxy Zoo and the translation, first in traditional character Chinese and then in English:

電波星系動物園[中文版]歡迎你的加入！在此我們欣然宣佈本計畫中文版開始啟用。感謝中央研究院天文及天文物理研究所 Dr. Meg Schwamb (Meg是參與Planet Hunters 和Planet Four計畫的科學家)以及天文推廣團隊成員黃珞文協助，再將它們和紅外圖像進行比較及匹配，這麼一來，在你的協助下，噴流和宿主星系間本來付之闕如的關聯性，未來將可建立成形。

http://radio.galaxyzoo.org/?lang=zh_tw
Welcome to Radio Galaxy Zoo (Chinese)! It is with great pleasure that we announce the launch of the Chinese version of our project. We are very grateful to Dr Meg Schwamb (from Planet Hunters and Planet Four) and Lauren Huang from Academia Sinica’s Institute of Astronomy & Astrophysics (ASIAA) for their help with translating our project from English to traditional Chinese characters.

Supermassive black holes (~several hundred million times the mass of our Sun) lie deep in the cores of many galaxies. And though we cannot directly see these black holes, we do sometimes see the huge radio jets originating from the galaxy cores. Galaxies in the radio sky can look quite different from those seen at optical wavelengths by instruments such as our own eyes. Some galaxies do not have any central radio emission but only radio jet(s) emanating outwards. Sometimes these jets are straight, but at other times they can be blobby, one-sided or bent. With very large all-sky radio surveys, we have many hundreds of thousands of radio jets and blobs that need to be matched to their host galaxies.

Therefore we invite you to see the Universe as you have never seen it before and help us map the radio sky by matching the radio jets and filaments to the galaxies (as seen in the infrared images) from whence they came.

http://radio.galaxyzoo.org/?lang=zh_tw
Radio Galaxy Zoo participants have been swamping the Science Team with an incredible number of interesting objects via Talk. Many of these are challenging our understanding of how radio galaxies work, both at their launching sites in supermassive black holes, and in the ways that the ejected jets of radio plasma interact with their environment.
We’ll be highlighting some of these curious discoveries in subsequent blogs, but here’s a recently found one that’s just “too good to be true.”
Today, we know that a galaxy’s redshift (a measure of how fast it is moving away from us; approximately, z = velocity divided by the speed of light) is an excellent indicator of distance. This is due to the overall expansion of the Universe. So a galaxy with z=0.049 is moving away at 14,700 km/s and is located about 650 million light years away, while a galaxy with z=0.26 is moving at about 78,000 km/s and is about 3 billion light years away.
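The arithmetic above is easy to reproduce with these linear approximations (H0 = 70 km/s/Mpc and the unit conversion are assumptions not stated in the post; at z = 0.26 the linear Hubble law starts to overestimate the distance, which is why a proper cosmological calculation gives the somewhat smaller 3-billion-light-year figure):

```python
# velocity ~ z * c, distance ~ velocity / H0 (Hubble's law)
C_KM_S = 299_792        # speed of light in km/s
H0 = 70.0               # assumed Hubble constant, km/s/Mpc
MPC_TO_MLY = 3.262      # 1 Mpc = 3.262 million light years

def naive_distance(z):
    v = z * C_KM_S                     # recession velocity, km/s
    d_mly = (v / H0) * MPC_TO_MLY      # distance, millions of light years
    return v, d_mly

v1, d1 = naive_distance(0.049)   # ~14,700 km/s, ~680 Mly
v2, d2 = naive_distance(0.26)
print(v1, d1, v2, d2)
```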
How, then, could two such galaxies each be a source of radio emission which appear to be connected with a thin radio filament? That’s exactly what the following picture shows, where the optical picture, in green, is from the Sloan Digital Sky Survey (SDSS), and the purple structure, outlined in white contours, is radio emission from the Faint Images of the Radio Sky at Twenty cm (FIRST, from the Very Large Array).
Radio Galaxy Zoo participants have looked at approximately 40,000 systems so far, so in such a large collection, this unusual object is likely just a coincidence, rather than some failure in our understanding of cosmic expansion. However, it would be nice to get some higher resolution radio images to see what the structure really looks like.
If you haven’t given Radio Galaxy Zoo a try yet – please join us at http://radio.galaxyzoo.org. We’re finding all kinds of fascinating new structures, while simultaneously creating a large database matching up radio emission with the supermassive black holes from which they were born.
I’m happy to announce that thanks to the hard work of more than 80,000 volunteers, we’ve recently completed classifying the infrared images of galaxies taken from the UKIDSS survey! There were more than 70,000 images of galaxies on the site that you helped to classify; once the data are reduced, one of our main goals is to compare your classifications to those from the Galaxy Zoo 2 project and study how morphology changes as a function of the wavelength in which those galaxies are observed. Melanie Galloway, a PhD student at the University of Minnesota, will be focusing on these data this summer as part of her thesis work.
Some early results have shown that, as we predicted, features like galactic bars are often more prominent in the infrared. Below is a nice example of this phenomenon: both images are of the same galaxy (SDSS J115244.84+054059.1). In the optical image on the left (from GZ2), you can see a spiral galaxy with lots of star formation, but the clumpy morphology of the gas clouds can hide the shape of the bar in the galaxy. In the UKIDSS image on the right, the blue gas clouds from star formation aren’t picked up in the infrared, and the stellar bar is much more clearly visible. This is supported by your classifications: the probability of a bar jumps from just 25% in GZ2 to 67% in GZ:UKIDSS.
This marks the third set of galaxy images we’ve completed since the relaunch of Galaxy Zoo in 2012 (following the high-redshift CANDELS images from Hubble and the artificially-redshifted images from FERENGI). There are still tens of thousands of galaxies from the SDSS left to classify in Galaxy Zoo, though, and we’ll be adding new sets of images in the coming months. Thanks again for your help, and we’ll report on the results of the UKIDSS data as it comes in!
Last October, Galaxy Zoo began including new images from the UKIDSS survey on the main site. These are many of the same galaxies that were classified in GZ2, but the images come from a completely different telescope and a different wavelength — the infrared. While there’s a lot of science we’ll be able to do comparing galaxy morphologies at different wavelengths, many volunteers have noticed artifacts (features that aren’t real astronomical objects) in the UKIDSS images that can look very different from what you’re used to seeing in the SDSS or Hubble images:
- green squares
- rings and ghosts
- grid patterns and speckles
These are only a small percentage of the images we’re looking at, but it’s important to identify them and try to separate them cleanly from the galaxies we’re classifying. So here’s our “spotter’s guide” to UKIDSS image artifacts.
All of the UKIDSS images you see in Galaxy Zoo are what we call “artificial-color” — we use images captured by the telescope’s infrared detector, and then combine the different infrared wavelengths into a single color image. For our images, we use data from the Y-band filter (1.03 microns) for the red channel, J-band filter (1.25 microns) for green, and K-band (2.20 microns) for the blue channel.
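The channel assignment described above amounts to stacking three aligned band images into an RGB cube; here is a minimal sketch, with random arrays standing in for calibrated UKIDSS cutouts:

```python
import numpy as np

rng = np.random.default_rng(7)
y_band = rng.random((128, 128))   # 1.03 microns -> red channel
j_band = rng.random((128, 128))   # 1.25 microns -> green channel
k_band = rng.random((128, 128))   # 2.20 microns -> blue channel

def to_rgb(y, j, k):
    """Normalize each band to [0, 1] and stack into an (H, W, 3) image."""
    def norm(a):
        return (a - a.min()) / (a.max() - a.min())
    return np.dstack([norm(y), norm(j), norm(k)])

rgb = to_rgb(y_band, j_band, k_band)
print(rgb.shape)  # (128, 128, 3)
```

Real pipelines also apply a nonlinear stretch so faint structure stays visible, but the band-to-channel mapping is the essential step.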
The images in Y, J, and K were taken at separate times, with different detectors and filters. So artifacts caused by changes in either the camera or the sky will often show up in only one color in the GZ images.
Some users have identified a persistent pattern in the images that looks like four little green pixels arranged in a square (looks a little like the UKIDSS logo!). This is from the J-band images.
The origin of the squares comes from the way that UKIRT processes data. Each patch of the sky is imaged in multiple exposures, and these exposures are then combined to produce the final, deeper image. So each pixel in the image comes from four different locations on the detector. In the case of the J-band images, the telescope actually took 8 different exposures during the dither pattern. For a few of the observing runs, the telescope lost the guide star that keeps it pointed at the correct location; that means the expected number of counts at the position of a bright star is lower, due to the bad frame in the interlaced data. Normally, the software algorithms in UKIDSS drop the bad frames and correct for this effect; as GZ volunteers have identified, though, there are some cases where this didn’t work perfectly. (Many thanks to UKIDSS Survey Scientist Steve Warren at Imperial College London for his help in explaining this phenomenon.)
Since the exposure pattern is in a square, the bad pixels will show up where there’s a bright star and one of the four frames is bad (meaning counts are lower than they should be). That’s the origin of the pattern showing up in some images.
As mentioned above, the telescope takes multiple exposures of each part of the sky that it images. For some of the bands, it images the same part of the sky for a second round, but offsets the pointing by an integer or half-integer number of pixels. The reason for this is to improve the angular resolution of the telescope – that is, its ability to distinguish small features in a galaxy that are normally blurred out by either the Earth’s atmosphere or the limiting power of the telescope itself.
In the final data products, images from these offset frames are combined onto a fixed pixel scale in a process called interleaving. For some sources (bright ones especially), the gridding isn’t perfect, and you can see traces of this grid pattern in the images.
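The idea of interleaving can be shown with a one-dimensional toy (the real UKIDSS processing is far more careful; this only illustrates why half-pixel offsets double the sampling):

```python
import numpy as np

def interleave_1d(frame_a, frame_b):
    """Combine two exposures, where frame_b is sampled half a pixel
    to the right of frame_a, onto a grid with twice the sampling."""
    out = np.empty(frame_a.size + frame_b.size)
    out[0::2] = frame_a   # integer-pixel samples
    out[1::2] = frame_b   # half-pixel-offset samples
    return out

a = np.array([1.0, 3.0, 5.0])   # first exposure
b = np.array([2.0, 4.0, 6.0])   # offset exposure
print(interleave_1d(a, b))      # [1. 2. 3. 4. 5. 6.]
```

When a bright source’s flux isn’t perfectly consistent between the offset frames, the combined grid shows exactly the kind of pattern volunteers have spotted.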
Another feature people have spotted are what have been called “ghosts”: regularly or irregularly shaped objects appearing in a couple of specific colors. There might be multiple causes for these, but one of the most common is an actual contaminant (a speck of dust, for example) that got into the optics of the telescope. Since the telescope isn’t designed to focus on nearby objects, the point source is distorted, usually into a ring-like shape. The color of these ghosts, like that of the green squares, depends on the band they were imaged in: red for Y-band, green for J-band, or blue for K-band.
Here’s one example: you can see the green and blue ring to the right of the galaxy in the color GZ image. The raw data (in black and white) shows the same ring in multiple locations, which tells us that it remained in the same position on the detector, but appears several times as the telescope moves over the sky.
We hope this has been useful, but please continue to discuss these in Talk and on the forums; particularly if there are any artifacts that impede your ability to make a good galaxy classification. Happy hunting, and thanks for continuing to participate with us on Galaxy Zoo.
Several of the Galaxy Zoo science team are together in Taipei this week for the Citizen Science in Astronomy workshop. If we’ve been a bit quiet it’s because we’re all working hard to get some of the more recent Galaxy Zoo classifications together from all of your clicks into information about galaxies we can make publicly available for science.
But we thought we’d take this opportunity of all being in the same place to run a live Hangout. We might end up talking a bit about the process of combining multiple clicks into classifications, as well as some of the recent Galaxy Zoo science results. And we’re of course happy to take questions, either as comments below, as Tweets to @galaxyzoo or via the Google+ interface.
We plan to do this during our lunch break – probably about 12.00pm Taipei Standard Time tomorrow (which is, if I can do my sums, 4.00am UK time, or Wednesday 5th March at 11.00pm EST, 8.00pm PST). As usual the video will also be available to watch later:
You know those odd features in some SDSS images that look like intergalactic traffic lights?
They aren’t intergalactic at all: they’re asteroids on the move in our own solar system. They move slowly compared to satellite trails (which look more like #spacelasers), but they often move quickly enough that they’ve shifted noticeably between the red, green, and blue exposures that make up the images in SDSS/Galaxy Zoo. When the images from each filter are aligned and combined, the moving asteroid dots its way colorfully across part of the image.
These objects are a source of intense study for some astronomers and planetary scientists, and the SDSS Moving Object Catalog gives the properties of over 100,000 of them. Planetary astronomer Alex Parker, who studies asteroids, has made a video showing their orbits.
I find their orbits mesmerizing, and there’s quite a lot of science in there too, with the relative sizes illustrated by the point sizes, and colors representing different asteroid compositions and families. There’s more information at the Vimeo page (and thanks to Amanda Bauer for posting the video on her awesome blog).
One of the most common questions we receive about asteroids from Galaxy Zoo volunteers is whether there will ever be a citizen science project to find them. So far, as the catalog linked above shows, the answer has been that computers are pretty good at finding asteroids, so there hasn’t been quite the need for your clicks… yet. There are some asteroids that are a little more difficult to spot, and those we’d really like to spot are quite rare, so stay tuned for a different answer to the question in the future. And in the meantime, enjoy the very cool show provided by all those little traffic lights traversing their way around our solar system.