HET Data Reduction Tips


Below are data reduction tips for each instrument. This page does not provide step-by-step instructions for reducing your data; it outlines the general data reduction procedure and points to the available tools.

Choose an instrument below. Sections for decommissioned instruments are retained for legacy purposes.

VIRUS Data Reduction Tips

As of Apr 2019, a single IFU of VIRUS data can be quickly and easily reduced with Remedy (https://github.com/grzeimann/Remedy). PIs can run it on TACC to get a quick look at their data. Full observations are also being reduced; ask for more details.

Note that there is currently no solution in place for reducing observations of targets that fully fill the focal plane of the telescope (e.g., the Andromeda galaxy). Plans to handle this type of data are underway but not yet complete.

LRS2 Data Reduction Tips

Note that both LRS2-R and LRS2-B are read out for every exposure. Each spectrograph has two chips, and each chip is read out through two amplifiers, so every exposure produces four FITS files for LRS2-R and four for LRS2-B. If your target is on LRS2-B (IFU 056), you will therefore still see eight FITS files (four for LRS2-B and four for LRS2-R); you can discard or use the LRS2-R files as you like.
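
As a quick illustration, here is a minimal Python sketch that groups the raw FITS files by amplifier. The assumption that the amplifier tag (LL, LU, RL, RU) appears in the file name follows the pattern used by the lrs2view alias shown later on this page; adjust the glob for your own directory layout.

import glob
from collections import defaultdict

# Group raw LRS2 FITS files by amplifier (LL, LU, RL, RU).
# The glob pattern and the assumption that the amplifier tag
# appears in the file name are illustrative only.
files_by_amp = defaultdict(list)
for path in sorted(glob.glob("*.fits")):
    for amp in ("LL", "LU", "RL", "RU"):
        if amp in path:
            files_by_amp[amp].append(path)
            break

for amp, paths in sorted(files_by_amp.items()):
    print(amp, ":", len(paths), "file(s)")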

Data reduction scripts

As of Jan 2019, all LRS2 data are automatically reduced by the Panacea pipeline on TACC. You will receive an email the morning after your observations are taken with instructions and a link to your reduced data. The Panacea pipeline is available at https://github.com/grzeimann/Panacea and is supported by HET Data Scientist Greg Zeimann (grzeimann@gmail.com).

Data reduction pre-2019

(The following are older notes on pre-pipeline LRS2 reduction options. As of Jan 2019, most programs should be able to use the Panacea pipeline mentioned above. Read on if you must.)

Briana Indahl has created a very useful Python reduction script for LRS2 called reduction_wrapper_lrs2.py, with a configuration file lrs2_config.py that you can edit to control which reduction steps are performed. The reduction is based on the Cure package developed for VIRUS. Levels of reduction range from basic processing through sky subtraction to creating a data cube that can be collapsed, if desired. The data cube is linearized and can be viewed with ds9 or, better, QFitsView: http://www.mpe.mpg.de/~ott/QFitsView/

A presentation about the script, which includes detailed instructions on how to find your data and run the script on the Texas Advanced Computing Center (TACC) Maverick cluster, is available at: https://speakerdeck.com/brianalane/introduction-to-the-lrs2-cure-based-reduction-pipeline We recommend you get a Maverick account and run the software there.

A key feature of this script is that it organizes all the data for a given target and performs an automatic reduction to the point where the data are linearized and can be analyzed with external tools. It identifies the target by the name entered by the HET Resident Astronomer in the target list. If the script doesn't find the target, it lists all target names from that night so you can identify your object. This is particularly helpful for new users of the instrument and those unfamiliar with integral field data.

This data reduction script produces a data cube oriented such that the parallactic angle points "up," parallel to the short axis of the IFU. No other WCS information in the FITS header is correct except the plate scale of the pixels, which are resampled onto a rectangular grid from the native hexagonal lenslet pattern. We are working on the WCS for a future update.
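
For reference, a minimal Python sketch for inspecting a reduced cube with astropy follows. The file name is hypothetical, and the assumption that the plate scale is stored in the standard CDELT keywords should be verified against your own headers.

from astropy.io import fits

# Open a reduced LRS2 data cube and inspect its header.
# "target_cube.fits" is a hypothetical file name; the CDELT
# keywords are the conventional place for the plate scale,
# but verify against your own headers.
with fits.open("target_cube.fits") as hdul:
    hdul.info()
    header = hdul[0].header
    print("Plate scale (deg/pix):",
          header.get("CDELT1"), header.get("CDELT2"))
    # Remember: the cube is oriented with the parallactic angle
    # "up"; other WCS keywords should not be trusted yet.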

As mentioned above, this software has a mode to help with sky subtraction for extended objects when no dedicated sky observation has been taken. The script can search for suitable sky data from other observations obtained that night and will adjust for the difference in exposure time. It also has an option to adjust the normalization manually.

If you are interested in receiving updates about Briana's script, please send her an email so she can add you to her distribution list (Briana Lane Indahl, blindahl(at)astro.as.utexas.edu).

UV chip before November 2016

Note that the UV chip was replaced in November 2016. In data taken before then, one UV amplifier (labeled LU) was very noisy and thus nearly useless. There were also reports of problems with the other UV amplifier (labeled LL), so treat any UV data taken before November 2016 with caution.

FITS file / IFU orientation

Instrument  Chip     Amp  Position  Notes
LRS2-B      UV       LU   top       Rotate 90 degrees
LRS2-B      UV       LL   bottom
LRS2-B      orange   RL   top       Rotate 90 degrees
LRS2-B      orange   RU   bottom
LRS2-R      red      LU   top       Rotate 90 degrees
LRS2-R      red      LL   bottom
LRS2-R      far-red  RL   top       Rotate 90 degrees
LRS2-R      far-red  RU   bottom

At the telescope we visualize the LRS2 spectra with the following command:

alias lrs2view 'ds9 -view vertgraph -view minmax yes -geometry 1200x800 -zscale -zoom 0.25 -rotate 180 *0\!:1LU* *0\!:1RL* *0\!:1LL* -rotate 180 *0\!:1RU*'

Run lrs2view 56 or lrs2view 66 (for LRS2-B and LRS2-R respectively). In the resulting display, short wavelengths are on the left and long wavelengths on the right, and the cross-dispersion (spatial) direction is oriented correctly.

LRS Data Reduction Tips

Fixing pixels

There is a bad pixel mask available on rhea in the calibrations/LRS directory. The mask was created by Joe Tufts and is based on 1x1 binned images taken by Phillip MacQueen; it has been re-binned to 2x2 (correcting for the odd number of pre-scan pixels). The file is called bpm2.fits.

Flat fielding

There is a white-light illumination flat field available on rhea in the calibrations/LRS directory. It was created by Joe Tufts and is based on 1x1 binned images taken by Phillip MacQueen; it has been re-binned to 2x2 (correcting for the odd number of pre-scan pixels). The file is called ffha2_swbin.fits.
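
A minimal Python sketch of applying these two calibration files follows. It assumes bpm2.fits is nonzero at bad pixels and that ffha2_swbin.fits can be divided directly into a 2x2-binned science frame; verify both assumptions against the files themselves. The science file name is hypothetical.

import numpy as np
from astropy.io import fits

# Apply the LRS flat field and flag bad pixels.
sci = fits.getdata("science.fits").astype(float)
bpm = fits.getdata("bpm2.fits")
flat = fits.getdata("ffha2_swbin.fits").astype(float)

sci /= np.where(flat > 0, flat, 1.0)   # flat-field correction
sci[bpm != 0] = np.nan                 # flag bad pixels for later interpolation

fits.writeto("science_ff.fits", sci, overwrite=True)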

Wavelength calibration

NOTE: The LRS_g3 grism has a dispersion that runs in the direction opposite to that of the LRS_g1 and LRS_g2 grisms.

MRS Data Reduction Tips

HRS Data Reduction Tips

The FITS Extensions

The HRS detector is actually two CCDs saved within a single FITS file using FITS image multi-extensions. The zeroth extension, test.fits[0], is the header that contains most of the telescope and instrument information. The first extension, test.fits[1], contains a short header with CCD-specific information and the red HRS (MM1) CCD. The second extension, test.fits[2], contains a short header with CCD-specific information and the blue HRS (MM1) CCD. IRAF has a package for dealing with FITS extensions: mscred. The most useful task in this package is mscsplit, which splits the extended image into three separate IRAF images with three separate headers, allowing each CCD to be reduced separately. Keep in mind that the telescope and instrument information will not be in the headers of each data frame.
There is now a script, hsplit.cl, which will split the sections and translate the header information from the [0] header into the [1] and [2] headers. This task can also correct some errors in CCDSEC, which will allow CCDPROC to work on HRS files. Please note that after a recent ICE upgrade a small change may be required to the hsplit.cl code (see the documentation in the code).
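
For those working outside IRAF, a rough Python equivalent of the splitting step is sketched below. File names are illustrative, and the CCDSEC corrections that hsplit.cl makes are not reproduced.

from astropy.io import fits

# Split a multi-extension HRS frame into one file per CCD, copying
# the telescope/instrument keywords from the [0] header into each
# output header (the job hsplit.cl performs in IRAF).
SKIP = {"SIMPLE", "BITPIX", "NAXIS", "NAXIS1", "NAXIS2",
        "EXTEND", "XTENSION", "PCOUNT", "GCOUNT", ""}

with fits.open("test.fits") as hdul:
    for i, label in ((1, "red"), (2, "blue")):
        hdu = fits.PrimaryHDU(data=hdul[i].data,
                              header=hdul[i].header.copy())
        for card in hdul[0].header.cards:
            if card.keyword not in SKIP and card.keyword not in hdu.header:
                hdu.header.append(card)
        hdu.writeto(f"test_{label}.fits", overwrite=True)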

Bias

Bias frames are very useful for removing roughly half of the bad columns found on the red HRS CCD. We suggest comparing the five bias frames taken on your night with a master bias created from all of your bias frames, to check for any change from night to night. A bias should be subtracted from the red HRS CCD. The blue HRS CCD is very clean, so bias subtraction is of little assistance there and can actually add noise if an unclean (non-master) bias is used.
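
A minimal sketch of building and applying a master bias in Python follows; the file-name patterns are assumptions.

import glob
import numpy as np
from astropy.io import fits

# Build a master bias by median-combining individual bias frames.
frames = [fits.getdata(f).astype(float)
          for f in sorted(glob.glob("bias_*.fits"))]
master_bias = np.median(np.stack(frames), axis=0)
fits.writeto("master_bias.fits", master_bias, overwrite=True)

# Subtract from a red-CCD science frame (skip this for the blue CCD).
sci = fits.getdata("science_red.fits").astype(float)
fits.writeto("science_red_b.fits", sci - master_bias, overwrite=True)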

Darks

Dark frames are very useful for removing about a quarter of the remaining bad columns found on the red HRS CCD. However, the overhead of creating a dark frame for each binning and exposure length is too great a burden for the current calibration plan. We suggest using fixpix to remove these bad columns.

Fixing pixels

Using the IRAF task fixpix (or a similar task) can aid the reduction process, particularly the tracing of apertures on the red HRS CCD. However, this should be done with care, since it is cooking your data (replacing bad columns with dubious "good" data).

Scattered Light Removal

In standard long-slit (or even slit) spectroscopy, some people do not find it necessary to remove scattered light, because it is removed along with the sky subtraction. THIS IS NOT ALWAYS TRUE FOR FIBER SPECTROSCOPY, AND IS NOT TRUE FOR LONG-SLIT SPECTROSCOPY WHEN FRINGING IS PRESENT. Because of the potentially different fiber throughputs, there may be a scaling factor in the removal of the sky light, which will inappropriately scale the scattered light. In addition, if the scattered light is not monochromatic it will not produce fringing. Thus scattered light removal should be done for all HRS spectra, and done BEFORE flat field correction. In the IRAF echelle package, the task for removing scattered light is apscatter. This task fits a 2-D surface to the scattered light: it first fits in the cross-dispersion direction, leaving out any defined apertures, and then fits the surface in the dispersion direction. Earlier versions of IRAF sometimes fail to subtract the scattered light correctly unless the task is run interactively; I suggest testing your version in both the interactive and non-interactive modes.
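
As a conceptual sketch only (not a replacement for apscatter, which additionally smooths the fitted surface along the dispersion direction), the cross-dispersion part of the fit might look like the following in Python. The aperture mask is assumed to come from your order tracing.

import numpy as np
from numpy.polynomial import chebyshev

# Fit a smooth function along each cross-dispersion column using
# only inter-order pixels, then subtract the model.
# aperture_mask is True inside the spectral orders.
def subtract_scattered(image, aperture_mask, deg=5):
    model = np.zeros_like(image, dtype=float)
    rows = np.arange(image.shape[0])
    for col in range(image.shape[1]):
        good = ~aperture_mask[:, col]
        coeffs = chebyshev.chebfit(rows[good], image[good, col], deg)
        model[:, col] = chebyshev.chebval(rows, coeffs)
    return image - model, model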

Flat Fields

NOTE: Flat calibrations taken before Feb 6, 2005 were contaminated by emission lines in the lamp itself; Li and Na emission lines are obvious in the flat fields. These should be removed using fixpix or a similar task. I have found that the first flat is often not contaminated and can be used exclusively for these regions.

Creating flat fields with the HRS is difficult and confusing. The first thing to realize is that these flats are internal to the HRS (taken through a separate calibration fiber), not taken through the science fibers. The flat will cover the same aperture as the science and sky fibers. There are several methods for creating a flat field; we will cover three:

apnormalize: This task requires that the user set apertures and trace the orders in the flat field. It can remove the blaze function but does not normalize in the cross-dispersion direction. Because the calibration fiber does not fill the slit in the same way as the science fibers, the flat will often have an inappropriate shape in the cross-dispersion direction compared to the science fibers. This has the effect of weighting your science data inappropriately. We do not recommend using apnormalize.

flat1d: This task fits a spline to every line or column (column in our case). If you set the order of the spline to a very high value, say 91, you will create a reasonable flat. This task has the advantage of being able to flatten near the edge of the CCD, where only partial orders are seen. The resulting flat will have some strange ringing at the edges of the flat aperture but is preferable to apnormalize. The task has the disadvantage of smoothing out correlated pixel variations, such as fringing at wavelengths above 670 nm.

apflatten: This task requires that the user set apertures and trace the orders in the flat field. It can remove the blaze function and normalizes in the cross-dispersion direction. When the parameters are optimized, this task can produce superb flats.

For all of the above, pay attention to the flux levels and the S/N that will result. Because the pixel-to-pixel variation in the blue is small (a few tenths of a percent) and the throughput of the flat-field lamp in the blue is low, it is quite possible to harm your data by flat-fielding with a noisy flat. I suggest setting the threshold parameter in all of the above tasks so that only data above 10,000 electrons in the summed flat are included. This threshold can be lowered in the red, where fringing can be worse than a 1% effect.
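
The reasoning is simple Poisson statistics: at 10,000 electrons the flat's fractional noise is sqrt(10000)/10000 = 1%. A small sketch that checks where a summed flat clears the threshold follows; the gain value and file name are assumptions.

import numpy as np
from astropy.io import fits

# Check where the summed flat is deep enough to be useful.
GAIN = 1.0   # e-/ADU; assumption, check your headers
flat = fits.getdata("summed_flat.fits").astype(float) * GAIN
usable = flat > 10_000                       # apply the flat only here
frac_noise = 1.0 / np.sqrt(np.clip(flat, 1, None))
print(f"{usable.mean():.1%} of pixels exceed the 10,000 e- threshold")
print(f"median flat noise in usable region: {np.median(frac_noise[usable]):.2%}")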

Finding Orders or Apertures

Because the HRS uses fibers, the aperture profiles are quite flat-topped, which makes marking the orders with the apfind task (in the IRAF echelle package) difficult. We find better success by setting the nsum parameter to a much higher value, such as 30 to 100.

Tracing Orders or Apertures

Tracing the sky fibers is difficult because of the (hopefully) low flux levels. If you don't have a sky fiber and have a high-S/N spectrum (S/N > 50), then you probably won't have any problems tracing your orders, particularly if you do a bias subtraction and either subtract a dark frame or fixpix the remaining bad columns. If you do have a sky fiber, then you have two options for tracing it:

  • trace your science fiber and offset the apertures (using the s keystroke in the apedit task). You should do this in the ALL mode (the a keystroke) and look for the sky lines. You can interactively change the row that apedit uses with the :line command.
  • use the sky flat or dome flat that should have been provided in the calibrations for your setup, and then use those apertures as reference apertures.
Background Subtraction

In traditional slit spectroscopy the background region would be set so that scattered light and sky light are removed during the extraction process. Because of the potentially different fiber throughputs, there may be a scaling factor in the removal of the sky light. Thus, for best effect, no background should be subtracted during extraction and the sky subtraction should be done later. However, for a quick-and-dirty first-order extraction the background can be used to remove most of the sky light. Whether you should subtract sky at all is a separate question, addressed in the sky subtraction section below. To set the background region interactively for a single aperture, use the b keystroke in the apedit task. The default background parameters can be set in apdefault. In addition to redefining the sample region, I suggest setting b_naverage=-1, b_niterate=3, and b_high_reject=2.5. When you run the extraction, with either apall or apsum, set the background parameter to either fit or median.

Extraction

Extraction in IRAF is done with either the apall or apsum task. We recommend that sky subtraction be done separately, so that the throughput differences between the fibers can be properly accounted for; the background should therefore be set to none. There are a number of bells and whistles that can be set to optimize the extraction, such as cleaning and weighting. If you choose to use these, the extras option should also be turned on; this will allow you to look at the unweighted, non-cleaned data in a separate band of the extracted spectra. You can help the cleaning of your data by setting the saturation parameter slightly higher than the strongest real feature (often your spectrum plus a bright sky line).

Sky Subtraction

The first question to ask is: should I subtract any sky at all? If you do not have a sky fiber, the question is moot. If you have a sky fiber but do not see any sky, you should not subtract it. If you have a sky fiber but see only sky emission lines, then you may or may not want to subtract sky. You need to determine whether you are in the read-noise or the Poisson-noise limited regime. Even though the read noise of the MM1 CCD is low (~4 electrons), there are a lot of pixels to sum over: unbinned, there are 20 pixels across the 3-arcsecond fiber. That is a lot of read noise. Consider whether subtracting the sky to get rid of a few sky lines will add significant read noise to your spectrum. If you are working in the blue on faint objects, the answer might well be yes! Sky subtract with caution.
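
To make the trade-off concrete, here is the arithmetic from the paragraph above as a short sketch (4 e- read noise, 20 pixels summed across the fiber, plus an equally noisy sky spectrum):

import math

# Read noise added by summing and by sky subtraction.
read_noise = 4.0      # electrons per pixel
npix = 20             # unbinned pixels across the 3-arcsecond fiber

extracted_rn = read_noise * math.sqrt(npix)    # ~17.9 e-
after_sky_sub = extracted_rn * math.sqrt(2)    # ~25.3 e-

print(f"read noise in extracted pixel: {extracted_rn:.1f} e-")
print(f"after subtracting an equally noisy sky spectrum: {after_sky_sub:.1f} e-")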

If you do decide to subtract the sky fiber, you should extract the sky in the same way that you extract your spectrum. The sky subtraction parameter in apsum (or apall) should be turned off, and the aperture should be the same width as your science fiber. In fact, I suggest that you use the science fiber as a template. Here is a potential set of steps:

  • apedit and aptrace the science aperture in the sky or dome flat
  • using the flat as a reference, apedit the science spectrum and check that the aperture looks good on the science fiber in all lines
  • change any of the science aperture widths interactively
  • extract the science spectrum
  • using the science spectrum as a reference, apedit and aptrace the flat again and offset to the sky fiber
  • copy the science spectrum to a new file
  • using the flat as a reference, apedit the renamed science spectrum and check that the aperture looks good on the sky fiber in all lines
  • extract the sky spectrum
  • You will probably have to scale the sky spectrum flux to that of the science fiber. This scaling is the reason you do not want the sky subtraction to remove things that do not come from the sky, e.g., scattered light, bias, and dark current. To get a good idea of the relative throughput of the science and sky fibers, look at the flux in the dome or sky flat (NOT THE INTERNAL FLAT). In principle you should just have to multiply the sky spectrum by a constant to get the subtraction correct.
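
A minimal sketch of that final scaling and subtraction in Python follows; the throughput ratio is estimated from extracted dome or sky flat spectra (not the internal flat), and all file names are hypothetical.

import numpy as np
from astropy.io import fits

# Scale the extracted sky spectrum to the science fiber's throughput
# and subtract.
sci = fits.getdata("science_spec.fits").astype(float)
sky = fits.getdata("sky_spec.fits").astype(float)

flat_sci = fits.getdata("domeflat_science_spec.fits").astype(float)
flat_sky = fits.getdata("domeflat_sky_spec.fits").astype(float)
scale = np.median(flat_sci) / np.median(flat_sky)   # relative throughput

fits.writeto("science_skysub.fits", sci - scale * sky, overwrite=True)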

It is possible (although very tricky) to subtract the sky lines and not the read noise, assuming there is no continuum sky light in your sky fiber. To do this you will need to remove the same blaze function from the sky and the science fibers. Then copy the sky frame to a second frame, reject all values below a certain level, and set them to zero; this can be done with imcombine. It is a very delicate procedure with limited uses, but it might improve the quality of your data. I suggest that before you try this, you try not subtracting the sky at all!
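
A rough sketch of the thresholding step in Python (the 5-sigma threshold and the file names are illustrative; imcombine remains the IRAF route):

import numpy as np
from astropy.io import fits

# Zero out the sky spectrum wherever it is below a threshold, so that
# only the sky lines (and not read noise) are subtracted. Assumes the
# same blaze function has already been removed from both spectra.
sci = fits.getdata("science_deblazed.fits").astype(float)
sky = fits.getdata("sky_deblazed.fits").astype(float)

threshold = 5.0 * np.std(sky)            # keep only significant sky lines
sky_lines = np.where(sky > threshold, sky, 0.0)

fits.writeto("science_skylinesub.fits", sci - sky_lines, overwrite=True)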

Wavelength Calibrations

Getting the wavelength calibration is tedious but quite straightforward. I use the ecidentify task and end up with a solution with xorder=7 and yorder=5, although I start at a much lower order until I have identified a lot of lines.

See HRS wavelength calibration for ThAr identifications by spectral order.

JCAM Data Reduction Tips


