Description of the keywords of DBEDIT control language.
L. Petrov
Abstract:
This document provides detailed description of syntax of the language
used for specification of control files for program dbedit which is a part of
Mark IV VLBI Analysis System CALC/SOLVE.
Table of contents:
- 1 $Input_Source
-
- 1.1 input_source.Mark3_Directory
-
-
- 1.2 input_source.DEC_Directory
-
-
- 1.3 input_source.Mark4_Directory
-
-
- 1.4 input_source.Database
-
- 2 $Input_Filter
-
- 2.1 input_filter.Primary_elim
-
-
- 2.2 input_filter.Duplicate_elim
-
-
- 2.3 input_filter.Scan_check
-
-
- 2.4 input_filter.Sort
-
-
- 2.5 input_filter.Channels
-
-
- 2.6 input_filter.Time_Window
-
-
- 2.7 input_filter.Username
-
-
- 2.8 input_filter.Rejects_Path
-
-
- 2.9 input_filter.Save_TimeRejects
-
-
- 2.10 input_filter.Aprioris
-
-
- 2.11 input_filter.UT1-PM
-
-
- 2.12 input_filter.Ephemeris
-
-
- 2.13 input_filter.Input_lcode
-
-
- 2.14 input_filter.NoInput_Obser
-
- 3 $Substitutes
-
- 3.1 substitutes.Star
-
-
- 3.2 substitutes.Site
-
- 4 $Output_Database
-
- 4.1 output_database.DB_Format
-
-
- 4.2 output_database.Database
-
-
- 4.3 output_database.History
-
-
- 4.4 output_database.Band
-
-
- 4.5 output_database.Time_Window
-
-
- 4.6 output_database.Site-Star_Cat
-
-
- 4.7 output_database.Del_Base
-
-
- 4.8 output_database.Del_Site
-
-
- 4.9 output_database.Del_Star
-
-
- 4.10 output_database.Inc_Base
-
-
- 4.11 output_database.Inc_Site
-
-
- 4.12 output_database.Inc_Star
-
-
- 4.13 output_database.NoOutput_lcode
-
-
- 4.14 output_database.Calc
-
- 5 $EOF
1 $Input_Source
This section specifies the type of input information and its name (database
name or directory name). More than one input of the same type may be
specified: several databases or several Mark-3 or Mark-4 directories.
Care should be taken when mixing different types of data.
1.1 input_source.Mark3_Directory
{Mark3_Directory <path_name>}
This keyword declares that
a) the type of input data is Mark-3 correlator output in binary IEEE format;
b) the root of directory tree is <path_name>.
The files must be in the format basically described by the A. Whitney
memo (Appendix) of April 23, 1990. [Note: Dbedit accepts both the 5 character
coded root creation times specified in that document or the 6 character codes
later adopted as a standard.] The files to be processed should reside in
a subdirectory of this directory tree, using the memo's Unix filename
conventions, and with floating point numbers in either A900 or IEEE format.
dbedit expects to find there files with extents of type 52 created by Fourfit.
Since it is assumed that the data lie in "scan sub-directories", the user should
be careful to specify the directory name one level up from that of the raw
data. "K and L" band data can also be read. (The allowable bands are
specified in the FRREGX regular expression in ../includes/param.i.)
All sub-directories of the specified directory will then be searched for raw
data. Details on the input format are included in the files "corr.nam_conv"
(the Appendix just mentioned) and the correlator documentation files
"correl.doc" and "frnge.doc", all with this file in /mk3/src/dbedit/docs.
Also note that if root files do not exist, or if the first root file has an
experiment number of zero, the experiment number is assumed zero for all data.
This facilitates handling of data from (e.g.) the VLBA correlator.
1.2 input_source.DEC_Directory
{DEC_Directory <path_name>}
The same as Mark3_Directory, except it is assumed that Fourfit ran
on DEC computers and therefore the binary output is in native DEC format
rather than the IEEE standard.
1.3 input_source.Mark4_Directory
{Mark4_Directory <path_name>}
This keyword declares that
a) the type of input data is Mark-4 correlator output in binary IEEE format;
b) the root of directory tree is <path_name>.
Dbedit expects to find data in the standard Mark-4 layout: subdirectories
named after scans grow from the <path_name> tree, and each scan subdirectory
contains a root file and a file of type 2xxx for each observation. dbedit
doesn't need files of type 1xxx or 3xxx.
1.4 input_source.Database
{Database <(keyname)>}
All input databases must conform to conventions for a data reduction
database; attempting to use a nonstandard database, such as a skeleton
or ephemeris database, will cause a crash. Note that the keyname is read
with the standard "A10,I2" format for the database name and version number.
Example:
database $80JAN02ZZ
database $89SEP17XM
database
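The "A10,I2" convention can be illustrated with a small Python sketch
(a hypothetical helper, not part of dbedit):

```python
def parse_db_keyname(field: str):
    """Split a database specifier read with the Fortran "A10,I2" format:
    the first 10 characters are the keyname, the next 2 the version
    number. A blank version field is returned here as None."""
    name = field[:10].rstrip()
    ver_str = field[10:12].strip()
    version = int(ver_str) if ver_str else None
    return name, version
```

For instance, parse_db_keyname("$89SEP17XM 3") yields the keyname
"$89SEP17XM" with version 3.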
2 $Input_Filter
The input data filter is the set of options that govern how input data is
handled. Only the first input filter block found in the control file will
be used.
2.1 input_filter.Primary_elim
{Primary_elim {[H [C or F or E or S or Q or N]] or
              [L [D or R or N]] } }
The primary duplicate elimination criterion specifies the value used to
decide which observations to drop in a block of duplicate observations.
(In case of a "tie" under the specified criterion, the criterion specified by
the Duplicate_Elim criterion below is used.) If this flag is not included the
defaults are set in the initl.f routine. Normally this default is set to pick
the observation with the highest quality code, and this is the preferred mode
of operation unless refringed data is being brought in (see below if that is
the case). E.g. the normal default is "Primary_Elim H Q".
Duplicate observations are those that have the same time, baseline, source,
and band. The criteria are summed up below, as well as the comparison sense of
"high" or "low". Both the "sense" and "criterion" must appear.
Sense Criterion Description
------------------------------------------------------
H C Largest coherent correlation.
H F Latest FRNGE processing date.
H E Highest extent number.
H S Highest signal-to-noise ratio.
H Q Highest quality code.
L D Lowest delay sigma.
L R Lowest rate sigma.
------------------------------------------------------
H or L N No primary duplicate elimination.
------------------------------------------------------
Example:
primary_elim h e
Primary_Elim L d
primary_elim H N
Primary_Elim H F <= Should be used in the following case:
Normally it is not necessary to specify the "Primary_Elim" option unless
newly fringed data is being mixed into an already existing database for the
same experiment. In this case it is important to use only the newly fringed
data whenever duplicate data exists. In many cases a channel was dropped for
particular baselines or stations during the refringing and it is important
that such data not be mixed with data using more channels.
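The two-stage selection described above can be sketched in Python; the
field names and data layout are illustrative assumptions, not dbedit's
internals:

```python
# Map each criterion letter to an (illustrative) observation field.
CRITERIA = {
    "C": "coherent_corr",   # largest coherent correlation
    "F": "frnge_date",      # latest FRNGE processing date
    "E": "extent",          # highest extent number
    "S": "snr",             # highest signal-to-noise ratio
    "Q": "qcode",           # highest quality code
    "D": "delay_sigma",     # lowest delay sigma
    "R": "rate_sigma",      # lowest rate sigma
}

def pick_duplicate(obs_list, primary=("H", "Q"), secondary=("H", "N")):
    """Return the observation kept from a block of duplicates.
    "H" keeps the highest value, "L" the lowest; "N" disables a stage."""
    def key(obs, sense_crit):
        sense, crit = sense_crit
        if crit == "N":
            return 0                      # stage contributes nothing
        val = obs[CRITERIA[crit]]
        return val if sense == "H" else -val
    return max(obs_list, key=lambda o: (key(o, primary), key(o, secondary)))
```

The secondary criterion only matters when the primary comparison is a tie,
which is exactly how the Duplicate_Elim keyword below is applied.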
2.2 input_filter.Duplicate_elim
{Duplicate_elim {[H [C or F or E or S or Q or N]] or
[L [D or R or N]] } }
The secondary duplicate elimination criterion specifies the value used to
decide which observations to drop in a block of duplicate observations with
equal quality codes. If this flag is not included the defaults are set in the
initl.f routine. The current default is for NO secondary duplicate elimination
(e.g. "Duplicate_Elim H N").
Duplicate observations are those that have the same time, baseline, source,
and band. The Primary_Elim criteria is used for every determination but the
user may choose a secondary criterion in case of a "tie" for the primary
criterion. The criteria are summed up below, as well as the comparison sense
of "high" or "low". Both the "sense" and "criterion" must appear.
Sense Criterion Description
------------------------------------------------------
H C Largest coherent correlation.
H F Latest FRNGE processing date.
H E Highest extent number.
H S Highest signal-to-noise ratio.
H Q Highest quality code.
L D Lowest delay sigma.
L R Lowest rate sigma.
------------------------------------------------------
H or L N No secondary duplicate elimination.
------------------------------------------------------
Example:
duplicate_elim h e
Duplicate_Elim L d
duplicate_elim H N
2.3 input_filter.Scan_check
{Scan_check [YES or NO]}
Scan_check causes checks of scan consistency. It checks whether there exist
observations which belong to the same scan but have different time tags.
Normally, all observations of the same scan have the same time tag. However,
experiments processed in 2000 and experiments processed with deviations from
the standard procedure may have different reference time tags. Scan_check
finds all such observations, generates the output file and stops dbedit just
before creation of the output databases. Scan_check runs after removal of the
observations specified in the NOINPUT_OBSER file, sorting and duplicate
elimination. The name of the output file is the name of the scratch file with
the ".dup" suffix appended.
This file is removed if no scans with different time tags are found.
Format of the output file:
each pair of duplicate observations is presented in three formats:
1) Fourfit-format;
2) Sumry-format;
3) Cres-format.
Fourfit format contains the full time tag up to 1 millisecond and the name of
the Fourfit file. Sumry format is similar to that generated by program Sumry,
and Cres format is similar to that generated by Solve in mode "residuals output".
"*" character in the first column means comment.
The output file is grep-able. Lines in Fourfit format have signature "A:",
lines in Sumry format "B:" and lines in Cres format have signature "C:".
Example of the output file:
1A: 1 2000.12.20-12:22:27.000 9 355-1221_1726+455/FE.S.3.opsmzf
2A: 1 2000.12.20-12:22:34.000 9 355-1221_1726+455/FE.S.1.opzjwg
1B: 17 00/12/20 12:22:27.0 FORTLEZA-WESTFORD 1726+455 S 9
2B: 19 00/12/20 12:22:34.0 FORTLEZA-WESTFORD 1726+455 S 9
1C: 17 FORTLEZA/WESTFORD 1726+455 2000.12.20-12:22:27.0 S
2C: 19 FORTLEZA/WESTFORD 1726+455 2000.12.20-12:22:34.0 S
*
1A: 2 2000.12.20-12:11:08.000 9 355-1210_0804+499/KV.X.10.opsmmi
2A: 2 2000.12.20-12:11:09.000 0 355-1210_0804+499/KV.X.2.opzjnw
1B: 50 00/12/20 12:11:08.0 KOKEE -WETTZELL 0804+499 X 9
2B: 51 00/12/20 12:11:09.0 KOKEE -WETTZELL 0804+499 X 0
1C: 50 KOKEE /WETTZELL 0804+499 2000.12.20-12:11:08.0 X
2C: 51 KOKEE /WETTZELL 0804+499 2000.12.20-12:11:09.0 X
*
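Since the output file is grep-able by signature, filtering it is
straightforward; a minimal Python sketch (a hypothetical helper, not part
of dbedit):

```python
def scan_check_lines(path, signature):
    """Return the lines of a Scan_check .dup file carrying a given
    signature ("A:" Fourfit, "B:" Sumry, "C:" Cres). Lines starting
    with "*" are comments and are skipped. The signature follows the
    pair index, e.g. "1A:" / "2A:"."""
    out = []
    for line in open(path):
        if line.startswith("*"):
            continue
        if line[1:3] == signature:
            out.append(line.rstrip("\n"))
    return out
```

An equivalent one-liner at the shell would be grep on "A:", "B:" or "C:".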
2.4 input_filter.Sort
{Sort [T or B or S or N]}
The input data is sorted by either time <T>, baseline <B>, source <S>,
or "none" <N>. The data is also sorted on a secondary field determined by the
first and all data is divided into bands. Data sorted by time gets sorted by
baseline second, and data sorted by baseline or source gets sorted by time
second. Specifying "none" actually does sort the data by time, but not by any
secondary field and the data is not divided into bands.
Example:
sort T
Sort N
Sort
2.5 input_filter.Channels
{Channels [<(integer 0-28)> <(integer 0-28)>]}
Specifying channels causes the input filter to reject any observation not
having the exact number of channels for its particular band. The first number
is the X-band number of channels, the second is the S-band. Both numbers must
appear.
Example:
channels 6 8
channels 5 8
2.6 input_filter.Time_Window
{Time_Window {[earliest or <time_specifier>]
[latest or <time_specifier>]} }
A time window is an interval of time within which the input observations
must fall. Any observations outside this interval are discarded. The two designations
"earliest" and "latest" are actually extremely small and large values,
respectively. A time specifier is a standard format for designating time down
to the second by using a string of twelve two digit integers. The format is as
follows: YYMMDDHHMMSS where YY is the last two digits of the year, MM is
the number of the month, DD is the day of the month, and HHMMSS is the time in
24 hour format. An example for January 4th, 1989, at 3:29 p.m. and 23 seconds
would be 890104152923. Notice that there can be no spaces between any digits
and all single digit values must be represented with a leading zero.
Example:
time_window earliest 891102122345
Time_Window 891103010000 891103020000
time_window earliest latest
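The YYMMDDHHMMSS specifier can be parsed as follows; this is an
illustrative sketch, and the two-digit-year pivot used here (Python's
strptime convention) is an assumption:

```python
from datetime import datetime

def parse_time_specifier(spec: str) -> datetime:
    """Parse the 12-digit YYMMDDHHMMSS time specifier used by
    Time_Window. No spaces are allowed and single-digit values must
    carry a leading zero, so the string must be exactly 12 digits."""
    if len(spec) != 12 or not spec.isdigit():
        raise ValueError("time specifier must be 12 digits: YYMMDDHHMMSS")
    return datetime.strptime(spec, "%y%m%d%H%M%S")
```

For example, "890104152923" parses to 1989-01-04 15:29:23.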
2.7 input_filter.Username
{Username <name>}
The name of the current user is placed in the output databases created by the
program. This serves as a way to track the creation of database versions on
the system. The name can only be 10 characters because it gets appended to
each history entry placed into the database.
Example:
Username GreggCooke
username Jim Ryan
2.8 input_filter.Rejects_Path
{Rejects_Path <(pathname_of_rejects_file)> }
File name where rejected records will be written. Usually rejected records
are considered garbage, but if you believe that they are a treasure you
can save them in a file with a name of your choice.
2.9 input_filter.Save_TimeRejects
{Save_TimeRejects {<pathname_of_rejected_files>} }
Observations rejected from the input filter can be saved to a specified file
for future examination. The rejection criterion, the rejected value, and
identifying information are written for each rejected observation. Since there
can be quite a few observations rejected from a time window in the input filter
the toggle save_timerejects will tell the program to include time rejected
observations as well. If the path name is left blank, the file is stored with
a program defaulted name (as set in ../includes/param.i).
NOTE: the rejects_path flag also serves as a toggle to the feature of saving
rejected observations.
Example:
rejects_path /mk3/data/rfile
rejects_path rejectfile
save_timerejects
2.10 input_filter.Aprioris
Aprioris [YES or NO or <skeleton_database_keyname>]
NO -- means not to put in the output database a priori station coordinates,
axis offset constants, ocean loading expansions, source coordinates,
etc.
YES -- means to put a priori information in the output database; the
a priori data should be taken from the default skeleton database.
If the keyword is neither YES nor NO then it is interpreted as the name
of the skeleton database which is the source of information for the aprioris.
Default is YES.
2.11 input_filter.UT1-PM
UT1-PM [YES or NO or <path_to_UT1PM_file>]
NO -- means not to put in the output database values of apriori UT1,
pole coordinates, daily offsets of nutation angles.
YES -- means to put a priori UT1, pole coordinates and (optionally) daily
nutation offsets in the output database; the a priori data should
be taken from the default binary ut1pm file.
If the keyword is neither YES nor NO then it is interpreted as the name
of the input ut1pm file which is the source of information for the aprioris.
NB: the input ut1pm file may or may not contain daily nutation offset angles,
but it must contain UT1 and pole coordinates. dbedit puts 15 values of EOP
around the date of the session.
Default is YES.
2.12 input_filter.Ephemeris
Ephemeris [YES or NO or <database_name>]
NO -- means not to put in the output database positions and velocities of
the Earth, Sun and Moon.
YES -- means to put in the output database positions and velocities of
the Earth, Sun and Moon. Positions should be taken from the default
ephemeris database.
If the keyword is neither YES nor NO then it is interpreted as the name
of the input ephemeris file which is the source of information for the aprioris.
NB: this is an obsolete feature and is left for compatibility with something
very old. Normally, we don't need this information.
Default is NO.
2.13 input_filter.Input_lcode
{Input_lcode <lcodes_file>}
This keyword specifies the name of the file which contains lcode names to be
used as an input filter. When the input source is a database (or databases),
only those lcodes which are both in the databases and in this list are read.
This list is ignored when the input source is correlator output.
Format of <lcode_file> is simple:
a) Lines starting with ## are considered comments and ignored
b) Lines which contain only blanks are considered comments and ignored
c) The first 8 characters of each line are considered the Lcode name. No
   case conversion is done.
Example:
##
## LCODES supplied by Calc 8.2, Solve and dcalc
## which are to be removed during re-Calcing
##
## L. Petrov 2000.06.05_16:07:03
##
PRT FLAG
OBCLFLGS
CALCFLGV CALC flow control flags valu def
OCE STAT Ocean loading station status.
OBCLLIST Key to standard contribs config
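The parsing rules above can be sketched in Python (a hypothetical helper,
not dbedit's actual reader):

```python
def read_lcode_list(path):
    """Read an lcode file: lines starting with "##" and blank lines
    are ignored; the first 8 characters of every other line form the
    Lcode name (blank-padded to 8 characters), with no case
    conversion."""
    lcodes = []
    for line in open(path):
        if line.startswith("##") or not line.strip():
            continue
        lcodes.append(line.rstrip("\n")[:8].ljust(8))
    return lcodes
```

Note that embedded blanks are significant: "PRT FLAG" and "OCE STAT" are
valid 8-character Lcode names.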
2.14 input_filter.NoInput_Obser
{NoInput_obser [<file_name> or NO]}
This option allows excluding a list of observations from the output file.
The observations to be excluded are listed in a file. Comment: "observation"
for this option is either the notion used in Solve ( measurements of group
delay and phase delay at a particular baseline referred to a particular time
epoch ) or a Fringe output file. The exclude list should specify the time tag,
baseline and optionally source name and Fourfit output filename.
Three formats are supported:
1) fourfit-style format;
2) sumry-style format;
3) cres-style format.
Each format assumes that the file has lines of no more than 80 characters
(longer lines are truncated). Character * in the first column means that this
line is a comment.
Fourfit-style format:
1) 1-23 -- UTC scan time tag in format yyyy.mm.dd-hh:mm:ss.sss
2) 26-26 -- Quality code
3) 28-34 -- SNR (in format f7.2 )
4) 36-* -- Fourfit filename with respect to the root file.
Example:
2000.12.20-12:22:27.000 9 34.76 355-1221_1726+455/FE.S.3.opsmzf
Comment: one or more columns separated by blank(s) may precede the first
field provided that the contents of these columns do not contain the
characters .-/
Sumry-style format:
1) 1-8 -- Scan tag date in format yy/mm/dd
2) 10-19 -- UTC time tag in format hh:mm:ss.s
3) 21-28 -- Name of the first station of the baseline
4) 29-29 -- Delimiter: -
5) 30-37 -- Name of the second station of the baseline
6) 39-46 -- Source name
7) 58-58 -- Band name (X or S)
Example:
00/12/20 12:22:27.0 FORTLEZA-WESTFORD 1726+455 S
Comment: one or more columns separated by blank(s) may precede the first
field provided that the contents of these columns do not contain the
characters .-/
Cres-style format:
1) 1-8 -- Name of the first station of the baseline
2) 9-9 -- Delimiter: /
3) 10-17 -- Name of the second station of the baseline
4) 19-26 -- Source name
5) 28-48 -- UTC time tag in format: yyyy.mm.dd-hh:mm:ss.s
6) 50-50 -- Band (X or S)
Example:
FORTLEZA/WESTFORD 1726+455 2000.12.20-12:22:34.0 S
Comment: one or more columns separated by blank(s) may precede the first
field provided that the contents of these columns do not contain the
characters .-/ , for example:
2C: 60 FORTLEZA/WESTFORD 1726+455 2000.12.20-12:22:34.0 X
1260> FORTLEZA/WESTFORD 1726+455 2000.12.20-12:22:34.0 X 72.2 102.22
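As an illustration, the cres-style fixed columns above translate into the
following Python slices (0-based, while the column numbers above are
1-based); this is a sketch, not dbedit's parser:

```python
def parse_cres_line(line: str):
    """Split a clean cres-style exclude line into its fields using the
    fixed 1-based column ranges 1-8, 10-17, 19-26, 28-48 and 50."""
    return {
        "station1": line[0:8].rstrip(),    # first station of the baseline
        "station2": line[9:17].rstrip(),   # second station (after "/")
        "source":   line[18:26].rstrip(),  # source name
        "time_tag": line[27:48].rstrip(),  # yyyy.mm.dd-hh:mm:ss.s
        "band":     line[49:50],           # X or S
    }
```

For example, parse_cres_line("FORTLEZA/WESTFORD 1726+455
2000.12.20-12:22:34.0 S") (the line unbroken) recovers the two stations,
the source, the time tag and the band.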
3 $Substitutes
dbedit is able to rename stations/sources.
Substitute source/site names are used during the output phase to change
the names of stars or stations to match those found in the skeleton database.
There can be any number of these blocks in the control file and each can
contain as many substitutions as needed.
3.1 substitutes.Star
{Star {<star_name_pair> ...} }
Sources are sometimes observed with names that differ from those
found in the skeleton database for the same object. Also, some sources are
observed that do not even exist in the skeleton database. For this reason
it is often useful to change the names of sites and sources found in input
data to what exists in the skeleton database. This is done by specifying a
name pair for each source that needs a name change. A name pair is the current
name (the name in the input data) and the target name (the name in the aprioris
- blokq.dat - file) separated by an equals sign. For example, to change the
name of NGC1052 to 0238-084 use the name pair NGC1052=0238-084. Notice that
there is no space around the equals sign. Also note that you can cram as many
name pairs on a line as will fit - but there must be space around each name
pair. Like the BSS filter, names that have spaces in them must be quoted.
Example:
star NGC1052=0238-084
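A name-pair substitution of this kind can be sketched in Python
(illustrative only; quoting of names with embedded spaces is not handled
here):

```python
def apply_name_pairs(name, pairs):
    """Apply NAME=TARGET substitution pairs, as in the Star/Site blocks.
    Pairs have no spaces around "="; names are compared exactly, and a
    name with no matching pair passes through unchanged."""
    mapping = dict(p.split("=", 1) for p in pairs)
    return mapping.get(name, name)
```

So apply_name_pairs("NGC1052", ["NGC1052=0238-084"]) returns "0238-084",
while any unlisted name is returned as-is.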
3.2 substitutes.Site
{Site {<site_name_pair> ...} }
Sites are sometimes observed with names that differ from those found
in the skeleton database for the same object. Also, some sites are observed
that do not even exist in the skeleton database. For this reason it is often
useful to change the names of sites found in input data to what exists in the
skeleton database. This is done by a name pair for each site that needs a name
change. A name pair is the current name (the name in the input data) and the
target name (the name in the aprioris - blokq.dat - file) separated by an
equals sign. For example, to change the name of Fairbanks to Gilcreek use the
name pair FAIRBNKS=GILCREEK. Notice that there is no space around the equals
sign. Also note that you can cram as many name pairs on a line as will fit --
but there must be space around each name pair. Like the BSS filter, names that
have spaces in them must be quoted.
Example:
star NGC1052=0238-084
site 'nrao85 4'='nrao85 3'
site 'greenbnk'='NRAO85 3' 'Hawaii '=Kauai westford=haystack
NOTE: Be sure to read the special note about quoting names at the
end of this file.
Also note: It is generally good practice for the experimenter to check
ahead of time that all of the sites and source names used in a given experiment
have apriori values in the blokq.dat file (and therefore the skeleton database).
However, Dbedit will list all missing sites and sources the first time an
attempt is made to process an experiment (as opposed to only listing the first
missing name each time it is run).
4 $Output_Database
Keywords in this section contain the information necessary to create
output databases, including the output database filter. Each such block must
contain at least the database name and status. There can be as many blocks
as needed but they are used in the order they appear in the file.
4.1 output_database.DB_Format
{DB_Format <format_name>}
Keyword DB_Format specifies the format of the output database. Only one
format, DBH, is currently supported.
4.2 output_database.Database
Database [(N <keyname>) or (G <letter>) or (I <keyname>) or
(O <keyname>) or - ]
The qualifier after keyword Database determines the mode in which the
output is written:
[N] or [G] a brand new database is created.
[I] the next version of one of the input databases is created.
[O] The new data will be appended to the existing database without
re-sorting and written as the next version of this database.
[-] No database is created. This qualifier may be used for verification.
If the database is brand new you can have the program generate
a keyname for it based on the date of the database, its band, and an
identifying letter [G]. The keyname of the output database must be given,
unless the program is to generate the keyname, in which case only an
identifying letter is needed. This is one way to insure that your brand new
output database gets created. Lowercase is always converted to uppercase.
Care should be taken in using qualifier O. For example, if the input was
a database and the output was a database with the same name, the
result will be ugly and unusable: the new version of the database will have
the same records twice. Usually the output databases created with keyword
O should be re-sorted by a new call of dbedit with the output database
specified by a keyword I.
Example:
Database G J
database b $89JUN09XB
database N $89JUN09XB
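The auto-generated keyname can be sketched as follows; the
"$" + YYMONDD + band + letter rule is inferred from examples such as
$89JUN09XB and is an assumption about dbedit's actual behavior:

```python
from datetime import date

# Month abbreviations used in database keynames (locale-independent).
MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]

def generate_keyname(exp_date: date, band: str, letter: str) -> str:
    """Build a keyname of the inferred form $YYMONDDbL, e.g.
    $89JUN09XB for 1989 June 09, X band, identifying letter B.
    Lowercase input is converted to uppercase, as dbedit does."""
    return "${:02d}{}{:02d}{}{}".format(exp_date.year % 100,
                                        MONTHS[exp_date.month - 1],
                                        exp_date.day,
                                        band.upper(), letter.upper())
```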
4.3 output_database.History
{History <history_text>}
If the output database is new, a history entry must be given to set up
the keyname in the catalog. A string of up to 80 characters can be stored
(although program catalog will only show the first 36 characters for the
keyname).
Example:
history NAVNET31 X-band from raw data. GGC
History QUAKE1 data bucket.
4.4 output_database.Band
Band [A or X or S]
Input data is separated by band when it is sorted, so that output
databases of a single band can be created [X,S]. Databases containing
both bands can also be created [A], but Solve doesn't support them.
This keyword is used in the output filter.
Example:
band A
Band X
4.5 output_database.Time_Window
{Time_Window [earliest or <time_specifier>]
[latest or <time_specifier> ] }
A time window can also be specified for the output filter.
Any observations outside this interval are discarded. The two designations "earliest"
and "latest" are actually extremely small and large values, respectively.
A time specifier is a standard format for designating time down to the second
by using a string of twelve two digit integers. The format is as follows:
YYMMDDHHMMSS where YY is the last two digits of the year, MM is the number
of the month, DD is the day of the month, and HHMMSS is the time in 24 hour
format. An example for January 4th, 1989, at 3:29 p.m. and 23 seconds would
be 890104152923. Notice that there can be no spaces between any digits and
all single digit values must be represented with a leading zero.
Example:
time_window 890323145000 latest
4.6 output_database.Site-Star_Cat
{Site-Star_Cat <(pathname_of_site/source_catalog)> }
Sites and sources are tracked by the site/source catalog, which relates
sites and sources to the databases that contain them. This is a particularly
useful way to manage a data set. If the path name to the catalog is left blank
the system default is used (as set in ../includes/param.i).
NOTE: this flag also serves as a toggle to turn on the feature of updating
the site/source catalog.
Example:
Site-star_Cat /mk3/data/sscat
site-star_cat mysscat
Site-Star_Cat
4.7 output_database.Del_Base
{Del_Base <baseline> ... }
Any observation that contains the target baseline will not be permitted
to pass through the output filter. Note that the order of stations within a
baseline specification is significant, e.g. 'gilcreekwestford' is
not the same as 'westfordgilcreek'.
Example:
del_Base gilcreekwestford 'nrao85 3'wettzell
del_site Kauai Gilcreek
del_site 'NRAO85 3' HARTBSTK
del_star cygx3
NOTE: Be sure to read the special note about quoting names at the
end of this file.
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.8 output_database.Del_Site
{Del_Site <station_name> ... }
Any observation that contains the target station(s) will not be permitted
to pass through the output filter.
Example:
del_site Kauai
del_site 'NRAO85 3' HARTBSTK Gilcreek
NOTE: Be sure to read the special note about quoting names at the
end of this file.
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.9 output_database.Del_Star
{Del_Star <source_name> ... }
Any observation that contains the target source(s) will not be permitted
to pass through the output filter.
Example:
del_Star cygx3
NOTE: Be sure to read the special note about quoting names at the
end of this file.
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.10 output_database.Inc_Base
{Inc_Base <baseline> ... }
Keyword Inc_Base specifies the baselines which are to be included. This
overrides any specifications for stations or baselines marked for deletion
having the same name. Baselines not on this list are included anyway. This
serves as a way to re-include baselines that may get deleted by the del_base or
del_site option. Note that the order of stations within a
baseline specification is significant, e.g. 'gilcreekwestford' is
not the same as 'westfordgilcreek'.
Example:
inc_base gilcreekwestford
NOTE: Be sure to read the special note about quoting names
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.11 output_database.Inc_Site
{Inc_Site <station name> ... }
Stations can be specifically included using this flag. This overrides any
specifications for stations or baselines marked for deletion having the same
name. It provides a way to toggle a station on and off. Note that any station
NOT in the list specified by these flags will be deleted. This serves as a way
to re-include baselines that may get deleted by the del_base or del_site
option.
Example:
inc_site Kauai Gilcreek
inc_site 'NRAO85 3' HARTBSTK
NOTE: Be sure to read the special note about quoting names
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.12 output_database.Inc_Star
{Inc_Star <source_name> ... }
Sources can be specifically included using this flag. This overrides any
specifications for stations or sources marked for deletion having the same
name. It provides a way to toggle a station or source on and off. Note that any
source NOT in the list specified by these flags will be deleted.
Example:
inc_star cygx3
NOTE: Be sure to read the special note about quoting names
MORE IMPORTANT NOTE: These names are matched against those of the
skeleton database, NOT against those of any input data.
Aliases specified in the $Substitutes section above
apply here.
4.13 output_database.NoOutput_lcode
{NoOutput_lcode [<lcodes_file> or NO]}
This keyword specifies names of the file which contains lcode names to be
used as output filter. Lcodes from that list will not be put to the output
database.
Format of <lcode_file> is simple:
a) Lines starting with ## are considered comments and ignored
b) Lines which contain only blanks are considered comments and ignored
c) The first 8 characters of each line are considered the Lcode name. No
   case conversion is done.
Example:
##
## LCODES supplied by Calc 8.2, Solve and dcalc
## which are to be removed during re-Calcing
##
## L. Petrov 2000.06.05_16:07:03
##
PRT FLAG
OBCLFLGS
CALCFLGV CALC flow control flags valu def
OCE STAT Ocean loading station status.
OBCLLIST Key to standard contribs config
4.14 output_database.Calc
{Calc {HIST <calc_history_text>}
{(<calc_keyword_setup> <value>) ... }
{RUN }
{FILE <calcon_filename> }
}
Keyword CALC specifies whether the user wants to run Calc on the database(s)
immediately upon completion of making the database. The hist flag allows the
user to input a Calc history entry for the database. The <calc_keyword_setup>
keyword allows the user to merely set up a calcon file to be used after the
Dbedit run.
The user is presented the Calc run string with the given suppression number.
The default suppression (as set in ../includes/param.i) is '0', indicating
screen output. Setting the suppression to '-1' suppresses Calc screen output.
Hint: if you are going to make a non-standard Calc run it is better to run Calc
independently.
Qualifier Run allows the user to run CALC directly from Dbedit with the
specified suppression number. The file flag allows the user to specify the path and
filename of the calcon file. The default path is the user's home directory,
and the file name is 'calcon'. The file flag must appear in the first output
database specification, otherwise the default calcon file path and filename is
used. The setup option in any output database specification overrides any
run option. The 1st example below specifies running Calc on the output database
with screen output, and to create the calcon file in the directory '/data/tmp'
under the filename 'calcon'. Since qualifier 'file' is specified this example
must be placed in the first output database specification.
Typing just 'calc' in an output database specification causes CALC to run
with full screen output using the default calcon file location (provided no
other output database specification includes the setup flag).
The calc flag list allows the user to set the various control flags used by
calc. The default value is 0 for all the flags (this is set at the end of
initl.f). The list of flags, and their functions are as defined, e.g. in the
Calc 9.1 users guide. The second example below shows the proper usage of the
flow flag options. The value after each flow flag is the value to be assigned
to that flag. The omission of a value after the KIONC flag is not an error.
Most flags are binary, so a flow flag without a value is interpreted by Dbedit
as a toggle of whatever value is the default. Notice also that specifying
a flag a second time will, of course, reset its value.
Example 1:
Calc Hist History entry for calced database.
Calc Run 0
Calc File /data/tmp/calcon
Example 2:
Calc Hist "Test of Calc under Dbedit control..."
Calc KATMC 1 KIONC KPANC 2
Calc KATMC 0
Example 3:
Calc Hist "Run when stations have no h. ocean loading"
Calc KOCEC 3
5 $EOF
If the user wishes to cut off processing of the control file at a certain
point he can use the $EOF flag. Placed anywhere in the file, it causes an
end-of-file to be sensed at that point.
Questions and comments about this guide should be directed to:
Leonid Petrov ( pet@leo.gsfc.nasa.gov )
Last update: 2001.07.11