How to run offsite reduction and reconstruction.

1st Reduction

  1. See page0004 for some descriptive info.

  2. As user highe, run the first reduction.

  3. This runs highe1 and pickup_contained.

  4. This puts the output files in /u/skdeca1c/highe/PICKUP_CONTAINED/.

2nd, 3rd and 4th Reduction

  1. Invoke the reduction like this: -R /path/to -D output_dir file.dat ...

    This puts the output into /path/to/output_dir/. You must have highc[234] and combine_decay in your PATH. Also, a directory called FLASHERS/ must exist and contain all the ``Danka Flashers'' for every DLT's worth of data to be processed.

  2. This creates highc4.contained.dlt???? files, which hold the contained events.
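The PATH and FLASHERS/ prerequisites above can be sanity-checked before launching the reduction. A minimal sketch; the function name and messages are my own, but the programs and the FLASHERS/ directory come from the step above:

```shell
#!/bin/sh
# Pre-flight check for the 2nd-4th reductions (sketch; check_prereqs is
# an invented name). Reports anything missing and returns nonzero.
check_prereqs() {
    missing=0
    for prog in highc2 highc3 highc4 combine_decay; do
        command -v "$prog" >/dev/null 2>&1 || {
            echo "missing from PATH: $prog"; missing=1; }
    done
    [ -d FLASHERS ] || { echo "missing directory: FLASHERS/"; missing=1; }
    return $missing
}

check_prereqs || echo "fix the above before running the reduction"
```

Run it from the directory you intend to reduce in, since it looks for FLASHERS/ relative to the current directory.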

Sub Run Cuts

  1. See page0006 for some info.

  2. A ``skdbnt'' ntuple must first be made using atmpd/src/highe/skdbntuple.

  3. Dump the DLTFILE value for every sub run that passes runcuts-final.kumac, strip out blanks and any PAW delimiters, and sort the result. I have scripts to do this in my work/rundb/ area of my repo (which I should commit to highe/).

  4. Use this cleaned-up file as input for check_dltfileT in highe/.

  5. This step removes duplicate events and is also a good time to collapse the gazillion reduction files into a single file.
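Step 3 above amounts to a small text-cleaning pipeline. A minimal sketch, assuming the file names shown and that the PAW delimiters are '|' and '*' (adjust both to match your actual dump); the sample input is a hypothetical stand-in for the real kumac output:

```shell
#!/bin/sh
# Sketch: clean a PAW dump of DLTFILE values into a sorted, unique list.
# dltfile.dump / dltfile.clean and the delimiter set [|*] are assumptions.

# Tiny hypothetical dump, standing in for the real kumac output:
printf '| dlt0001 |\n\n| dlt0002 |\n| dlt0001 |\n' > dltfile.dump

sed -e 's/[|*]//g' -e 's/^ *//;s/ *$//' dltfile.dump \
  | grep -v '^$' \
  | sort -u > dltfile.clean
```

The `sort -u` also drops duplicate sub-run entries, so dltfile.clean can go straight to check_dltfileT.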

Basic Reconstruction

  1. Basic reconstruction, as defined by the ca. 400-day analysis, is encapsulated in the program atmpd/src/programs/hesummary/hesummary. Run it like any tunic-based program:

    hesummary -o file

    where the input file is the output of the check_dltfileT step.

  2. The offsite ntuple is made using hentuple, in the same area as hesummary. For normal running just do:

    hentuple -r ntuple=basename hesummary_output_file

  3. This gets you evis, anisotropy, nmudk, 1-ring/multi-ring determination type things. Nothing which depends on multi-ring fitting.
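The two reconstruction commands can be chained in a small wrapper. A sketch under stated assumptions: the merged input file name and the ntuple basename are invented, and a dry-run switch is added because hesummary and hentuple exist only in the offline environment:

```shell
#!/bin/sh
# Sketch of the reconstruction chain. With DRYRUN=1 (the default here)
# the commands are only printed, so this runs anywhere; set DRYRUN=0 in
# the real environment to execute them.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

merged=highc4.contained.all          # single file from check_dltfileT (name assumed)
run hesummary -o hesum.out "$merged"
run hentuple -r ntuple=offsite hesum.out   # "offsite" basename assumed
```

The flags shown (-o, -r ntuple=...) are the ones from the steps above; any other options are per your setup.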

Last modified: Mon May 17 16:27:52 EDT 1999