There are several clones of SOLVE. In the mid-1980s a clone supported by a group at the Mizusawa Observatory diverged; that clone is called mSolve. It supports import and export of data in FITS Database format (which is *not* compatible with the FITS-IDI or FITS-image formats). In 2008 a clone supported by NVI Inc. diverged; it is now called cSolve. pSolve is currently supported by NASA Goddard Space Flight Center. The main differences are 1) complete support of databases in GVF (Geodetic VLBI Format), 2) replacement of the obsolete Calc program with the modern VTD (VLBI Time Delay) library that computes a priori path delays, and 3) a revision of the way flags are handled for different observables. These new features significantly simplify interactive analysis. In 2017 pSolve was converted from the Intel compiler to gfortran and from 32 bits to 64 bits, and the maximum number of estimated parameters was raised to 100,000. In 2022 a new installation utility was developed, and pSolve became a part of SGDASS (Space Geodesy Data Analysis Software Suite).
The present document explains the use of VTD/pSolve.
pSolve supports both interactive and batch [1] modes. pSolve can be used in the following modes:
Environment variable DISPLAY should be set up properly. Refer to X11 user guide.
Then you have to activate the settings defined in ~/.Xdefaults by the command
Run the program psolve_reset if you do not have pSolve scratch files. NB: Scratch files should also be re-created after each pSolve upgrade! Also make sure that your environment variable WORK_DIR has been set before running psolve_reset: echo $WORK_DIR . Usage of psolve_reset:
NB: The word database is used for describing parameters related to a given VLBI experiment, which are stored in 4-6 files. It has nothing to do with popular SQL database programs.
The file vcat.conf located in pSolve SAVE_DIR directory defines directories where all databases are located:
The keyword GVF_REP_NAMES defines the list of so-called repository names. A repository name is in upper case and is limited to 4 characters. The keyword GVF_DB_DIR defines the directory where binary sections of databases of a given repository are located. The keyword GVF_ENV_DIR defines the directory where ascii files with database envelopes of a given repository are located. The keyword VTD_CONF_FILE defines the control file for the VTD (VLBI Time Delay) library. pSolve computes theoretical path delays and partial derivatives when a database is loaded and every time the function "update of theoretical delays" (~) is invoked. Keep in mind that the VTD control file usually refers to parameters that are supposed to be updated periodically (e.g., once a day or once a week). Among them are a priori Earth Orientation Parameters, displacements caused by mass loadings, slant path delays in the neutral atmosphere, and path delays in the ionosphere. A user is supposed to launch a process that updates these parameters (vtd_apriori_update.py from the VTD library). If some parameters are not updated, pSolve will stop while loading a database and issue an error message that explains the reason. The general layout of vcat.conf is:

  GVF_REP_NAMES: ...
  GVF_DB_DIR:    directory
  GVF_ENV_DIR:   directory
  ...
  GVF_DB_DIR:    directory
  GVF_ENV_DIR:   directory
  VTD_CONF_FILE: file name
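For illustration, a minimal vcat.conf with a single repository might look like this (the repository name and paths below are hypothetical, not taken from an actual installation):

```
GVF_REP_NAMES: OBS
GVF_DB_DIR:    /vlbi/gvf/db
GVF_ENV_DIR:   /vlbi/gvf/env
VTD_CONF_FILE: /vlbi/conf/vtd.conf
```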
If pSolve stops during an attempt to load a database, you need to update
the a priori parameters. All stations and sources used in the experiment being
analyzed must be defined in the a priori station and source catalogues.
pSolve requires a terminal window of at least 80x24 characters.
NB: if you want to change terminal window size, you need to first quit
pSolve, then change the terminal size, then start pSolve. Otherwise, the program
will crash.
pSolve consists of a set of programs which call each other. In order to
launch interactive pSolve, enter a command
pSolve scratch files contain information about one or more sessions. To learn
the status of scratch files hit the key (X) from the main pSolve menu.
pSolve reports the database name, version number, total number of observations
(including non-detections), band and status.
Interactive pSolve allows the user to perform the following operations:
The first operation in the data analysis is to read the experiment and to
load it into the scratch files. Only one database can be loaded at any one time.
Option (G) or (CNTRL/G) in the OPTIN menu invokes program
GETDB, which loads the experiment in the scratch area. On launch, GETDB shows
the list of all databases in your database repository. OPTIN shows the current
repository on the 6th line. You can change the repository by hitting (R).
You can scroll the list using (ArrowUp), (Arrow Down),
(PageUp), and (PageDown) keys. When you find the database that
you want to load, position the cursor on that database and hit key
(space). However, if you have many databases in your database
directory, this method is inconvenient. As an alternative, hit key (T)
and pSolve will prompt you to enter the database name. If you enter complete
database name, including the version, pSolve will load the database
immediately. If you enter a partial database name, pSolve will show you the
list of databases with names that start with the substring that you have
entered.
Option (X) inquires about the current status of the scratch area.
The next step is to select the observable. In the case of single-band
data you can select "Group delay X-band", "Phase delay X-band", or
"Single band X-band". It is suggested that beginners use "Group
delay X-band". NB: for pSolve "X-band" means the upper band and
"S-band" means the lower band. For instance, in K/C observations,
pSolve will refer to the K-band data as "X-band" and the C-band data as
"S-band". This may sound confusing.
In the case of dual-band data you have more choices:
(Exception: there are a number of databases created in 1985--1993
where the S-band data are lost, but where the ionosphere calibration
is present. For these experiments, do not change solution type and
suppression method)
After you change the data type, re-compute theoretical path delays.
The ionosphere contribution depends on the solution type. This is a
general rule: every time you change the solution type, you need to re-compute
theoretical path delays. Failure to do so may give you erroneous results.
Check your parameterization once more. Only clock polynomials of the 0,1,2
order should be set up. All other parameters should not be activated. Then run
LSQ solution by hitting (Q). Look at the listing of the solution. Check
once more that only clock polynomials are in the solution.
Check wrms. It should not exceed 1 microsecond. If the wrms exceeds that
value it means that you have trouble, e.g., there are several very strong
outliers.
Check the clock offsets and rates (CL 0) and (CL 1). If there are stations with
clock offsets greater in absolute value than 1.0e-4 sec (100,000 nsec) and/or
stations with clock rates greater in absolute value than
1.0e-9 (100,000 D-14), you should apply an a priori clock model for
those stations. The number of digits in a float number presentation is not enough
to handle the case when adjustments to clock parameters are too large. Rounding
errors may corrupt results. To overcome this problem, an a priori clock model
is added to the theoretical delays and delay rates. If you notice that clock
offsets and clock rates exceed the limit for all stations, it means that the
clock-reference station itself has anomalous clock offset or rate. Change the
clock-reference station in that case. You can find preliminary values for a
clock model from the correlation report if it is available.
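The check above can be sketched in a few lines. This is an illustration only, not part of pSolve; the station names and clock values are made up:

```python
# Flag stations whose clock offset or rate exceeds the limits quoted above.
CLOCK_OFFSET_LIMIT = 1.0e-4   # sec  (100,000 nsec)
CLOCK_RATE_LIMIT   = 1.0e-9   # dimensionless (100,000 D-14)

def stations_needing_acm(clocks):
    """clocks: dict station -> (offset_sec, rate).
    Returns the stations that need an a priori clock model
    (pSolve supports such a model for up to 4 stations)."""
    flagged = [sta for sta, (off, rate) in clocks.items()
               if abs(off) > CLOCK_OFFSET_LIMIT or abs(rate) > CLOCK_RATE_LIMIT]
    if clocks and len(flagged) == len(clocks):
        # every station looks anomalous: suspect the clock reference station
        return "change_clock_reference"
    return flagged

clocks = {"WESTFORD": (3.0e-6, 2.0e-13),
          "KOKEE":    (2.5e-4, 5.0e-14),   # offset above the limit
          "SVETLOE":  (1.0e-7, 4.0e-9)}    # rate above the limit
print(stations_needing_acm(clocks))        # → ['KOKEE', 'SVETLOE']
```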
Find all stations with anomalous clock offsets and/or rates. Write down the
values of clock and rate for these stations. pSolve supports an a priori clock
model for up to 4 stations. Go back to OPTIN menu. Then go to the last page
of SETFL. Then hit (<). You will see the menu of SET_ACM program.
Follow the SET_ACM
manual [2].
Make one more LSQ solution after applying the a priori clock model. Clock
parameters for the stations with applied clock models should be about zero.
If there are more than 3–5 observations with incorrectly determined group
delay ambiguities, these errors corrupt the least squares solution to such an
extent that the residuals, after subtraction of the contribution of adjusted
parameters from the initial group delays, are so severely distorted that it
becomes close to impossible to determine which observations caused the
distortion by inspecting residuals. Therefore, a more sophisticated algorithm
is required.
When group delay ambiguities are resolved, the residuals that correspond
to observations affected by wrong ambiguities are aligned along lines
above or below the zero line on the plots of residuals versus time. The deviation
from the zero line is close to N * S, where S is the ambiguity spacing and N is an integer.
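Because an affected residual sits near N * S, the integer shift can be recovered by simple rounding. A minimal sketch (not part of pSolve; the numbers are illustrative):

```python
# Recover the integer number of ambiguity spacings from a residual.
def ambiguity_shift(residual_ns, spacing_ns):
    """Number of spacings N such that residual - N*S is near zero."""
    return round(residual_ns / spacing_ns)

S = 100.0  # ambiguity spacing in ns (an assumed, illustrative value)
for r in (0.3, 99.6, -200.4):
    n = ambiguity_shift(r, S)
    print(r, n, r - n * S)   # corrected residual is close to zero
```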
There is another factor that can lead to the appearance of a significant number
of observations with residuals that are aligned in plots versus time,
even after the affected observations are identified and eliminated
from the solution: errors in the fringe fitting algorithm that result in picking
up a sidelobe of the delay resolution function, or, in other words,
a secondary maximum of the Fourier transform of the visibility spectrum.
If the phase bandpass was not determined correctly, or there were instrumental
factors that distorted fringe phases at some intermediate frequencies and
the observations had relatively low SNR, the noise in the data may change
the amplitude of a maximum of the delay resolution function. A maximum that is
secondary for the undisturbed data may then have a greater amplitude than the
original main maximum. A high level of sidelobes, a narrow bandwidth of intermediate
frequencies, and a low SNR increase the chance of an error in the fringe fitting
algorithm. Observations affected by this problem have errors in delay of
1–2 main sidelobes. After suppressing these points from the least squares
solution, the residuals have a pattern that resembles errors in group delay
ambiguities. The main difference in the plots is that residuals affected by
group delay ambiguities have fixed spacings N*S, whereas observations affected
by errors in picking the maximum have residuals that are not commensurate
with S. These phenomena are called sub-ambiguities. Group delay ambiguities can
be resolved by pSolve. Sub-ambiguities cannot be resolved by pSolve alone;
pSolve only marks them as outliers. Sub-ambiguities usually can be resolved
by re-running PIMA with a narrow fringe search window. This procedure is called
re-fringing. See the section on resolving sub-ambiguities for
details. To make things complicated, observations can be affected by
both group delay ambiguities and sub-ambiguities.
There are two ways to resolve group delay ambiguities: a manual procedure and
the automatic procedure GAMB. The automatic procedure should be used, except in
rare cases that it currently does not support.
If you have a dataset produced by Fourfit, read the
guideline for resolving group delay ambiguities; otherwise follow
this document.
Steps of manual sub-ambiguities resolution:
Start by examining the baselines that contain the reference
station. Position the cursor in the small box with residuals at a given
baseline and click the left or central mouse button. When the plot of
residuals appears, we may see several horizontal lines where the
residuals concentrate.
The residual plot supports four modes of editing. The current mode is
highlighted at the top of the plot. A number of new keys are defined
in the graphic application for residual analysis with respect to
standard DiaGI.
Repeat the ambiguity shifting for all baselines with the reference
station and run the solution again.
Then display the residual plot by hitting (P). List all baselines
sequentially and resolve remaining ambiguities using F2 mode (Group
ambiguity shift). Just position the cursor close to zero delay and
hit (CentralMouse). Keys (PgUp) and (PgDn) change
the current baseline in a forward or backward direction respectively.
The next step is to inspect residuals. Set estimation of baseline-dependent
clocks: hit (C) from the menu of the last SETFL page. Menu of the
program BCLOK will be displayed. First set all baselines by hitting (W),
then deselect a clock reference station by hitting the station code.
Make a solution by hitting (Q). Look at the listing. Normally the
total wrms should be in the range [500, 1500] psec. If it exceeds 2000 psec,
it means that probably either ambiguity resolution was not successful or
there are clock breaks at one or more stations.
You should check the estimates of baseline-dependent clocks. If the estimates
exceed 1 nsec, it is an indication of remaining permanent ambiguities at
that baseline, i.e., there are no jumps in ambiguities among observations at
any baseline, but all observations at some baselines have incorrect
ambiguities, which causes triangle misclosures to be a multiple of the ambiguity
spacing. You have to get rid of permanent ambiguities.
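The misclosure signature of a permanent ambiguity can be illustrated with a toy example (not pSolve code; the delay values are made up):

```python
# A permanent ambiguity at one baseline makes the triangle misclosure
# tau_AB + tau_BC - tau_AC a multiple of the ambiguity spacing S.
def misclosure_in_spacings(tau_ab, tau_bc, tau_ac, spacing):
    """Triangle misclosure expressed in units of the ambiguity spacing."""
    return (tau_ab + tau_bc - tau_ac) / spacing

S = 100.0  # ambiguity spacing, ns (illustrative)
# A clean triangle closes exactly:
print(misclosure_in_spacings(523.0, -221.0, 302.0, S))       # → 0.0
# Baseline AB carries one permanent ambiguity (+S on all its delays):
print(misclosure_in_spacings(523.0 + S, -221.0, 302.0, S))   # → 1.0
```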
Set solution type X-band Group Delay or
S-band Group Delay on the last SETFL page in accordance with
the band affected by permanent ambiguities. Then hit key (B) and
enter program BASFE for selecting the baselines. Deselect the baselines
affected by permanent ambiguities by positioning the cursor on the line with
baseline name and hitting the (space) key. Then reset baseline-dependent clocks:
hit (O), (L), (C). You will enter the BCLOK program. Set
estimation of all baseline clocks by hitting (M) or deselect estimation
of baseline clocks by hitting (Z). Usually deselecting all baseline
clocks is desirable at this step. Then run a least squares solution by hitting
(Q). First, check that your solution is good. If the solution is
good, then restore the deselected baselines by hitting (O), (B),
(W). Then compute residuals with respect to the previous solution
by hitting (O) and (@). Then go to REPA by hitting (P)
and examine the residuals. Find the baseline whose residuals have a mean that
deviates from zero by an amount that is a multiple of the ambiguity spacing.
Enter mode F2 by hitting key (F2). Position the cursor near 0
group delay and hit (CentralMouse). If the plot bounding box does
not include zero group delay, you need to enter DiaGI mode by hitting
(Esc), then adjust the bounding box using (CentralMouse),
and then re-enter F2 mode by hitting (F2).
You may have observations with delay offsets less than the group delay
ambiguity spacing. These are sub-ambiguities, and they are caused by a wrong
choice of the maximum in the delay resolution function. There is no way to resolve
sub-ambiguities in pSolve. In order to resolve sub-ambiguities, i.e. to fix
errors of the fringe fitting procedure, you need to re-run the fringe fitting procedure.
A necessary, but not sufficient, step is to suppress all the points with
sub-ambiguities. See the section "Resolving sub-ambiguities" in
this document.
If you find a jump in the plot of residuals you may try to insert a clock
break. Be sure that it is not a jump in ambiguities at X- or S-band. In order
to insert a clock break hit (E) in the main OPTIN menu, then
list station's pages by hitting (N) or (P) till you find the
station where you are going to insert clock breaks. Then hit (*) several
times until you see a line insert. Then hit (C) and enter
the time tag of the clock break. Then a new epoch for clock polynomial appears
at the SETFL page for that station. Set the first three flags to 1 for the new
clock break. Then make a new solution and check the listing and residual plots.
Keep in mind that there should be enough observations between the start of
the session, clock break(s) and the end of the session. There should be no less
than 4 observations, otherwise your solution will be unstable or singular.
If it seems to you that the session has many clock breaks, it may indicate
other serious problems unrelated to clock behavior.
If you have a station with too few good observations (less than 5 at each
baseline), or you have a station with postfit residual scatter larger than
5 nsec you can deselect it. But it is a last resort. In general you should try
to keep as many stations/baselines as possible in the initial and intermediary
solutions, and to leave the final decision to the time of the final solution.
An analyst is able to select/deselect a station or baseline at any time during
further solutions, including batch runs; however, the data should be edited
properly, otherwise selecting baselines which were suppressed during
the initial solution might degrade the solution due to the presence of
outliers.
Settings for intermediary solution:
Now it is time to start outlier elimination/restoration. Go to the main
OPTIN page and then hit (\). You will see the menu of the program ELIM/MILE
for outlier elimination/restoration. There is extensive user documentation
about ELIM/MILE:
Set the following menu items for outliers elimination in the intermediary
solution:
Set the first line to elimination by hitting (T).
Finally, hit (P) to go ahead. ELIM will suppress outliers.
In order to quit ELIM and save the results, hit (S).
The main parameter that controls the outlier elimination procedure is
(C) Cutoff limit for outlier detection. There is no strict number that
fits all cases. Usually 5 σ is sufficient to begin with.
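Conceptually, the cutoff criterion amounts to comparing each residual with a multiple of its scatter estimate. A simplified sketch (not ELIM's actual algorithm, which also accounts for the statistics of the whole solution; numbers are illustrative):

```python
# An observation is flagged as an outlier when |residual| exceeds
# n_sigma times its scatter estimate.
def is_outlier(residual_ps, sigma_ps, n_sigma=5.0):
    return abs(residual_ps) > n_sigma * sigma_ps

print(is_outlier(180.0, 30.0))        # 6 sigma: outlier with the default cutoff
print(is_outlier(120.0, 30.0))        # 4 sigma: kept with a 5-sigma cutoff
print(is_outlier(120.0, 30.0, 3.5))   # the same point fails a 3.5-sigma cutoff
```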
You can control how well ELIM eliminated outliers by examining residuals
with the program REPA. After outlier elimination, you need to update weights.
You can do it inside ELIM by hitting (W), (I), (O).
After the weight update you can re-run ELIM.
ELIM can run in the reverse mode: restoring previously suppressed
observations. When you hit (T) in ELIM, you enter the MILE program
with the same menu. For the intermediate solution, select a Cutoff limit
for outlier detection of 2 σ.
Check your solution.
If you have a poor or bad solution you have to find the reason. First, check
a) calibration; b) parameterization; c) clock breaks. Then examine the
residuals. Check clock breaks and the resolution of group delay ambiguities.
If you are sure that residuals at baselines with a certain station are much
greater than the residuals at other baselines, you may deselect this station.
If you find that your intermediary solution is good, or at least you find the
reason why it is poor, you can move to the final solution.
Then update weights upon ELIM completion. Hit (W) on the
ELIM menu. You will see the UPWEI menu. Set the floor to 10.0 psec by hitting
(L) and then hit (I). Then go back to ELIM by hitting
(O) in the UPWEI menu and execute outlier elimination once more.
If you notice that χ2/ndf (ndf stands for the number of
degrees of freedom) went noticeably away from 1.0 (e.g. below
0.95), update weights once more and then again execute outliers
elimination.
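The idea behind the weight update can be sketched as follows: find an additive noise term q such that, with per-observation variances s_i^2 + q^2, the ratio χ2/ndf of the residuals becomes 1. This is only a conceptual illustration of quadratic reweighting, not UPWEI's actual code; the residuals and uncertainties are made up:

```python
# Quadratic reweighting: solve chi^2/ndf = 1 for the additive noise term q.
def chi2_ndf(res, sig, q):
    """chi^2 per degree of freedom with variances sig_i^2 + q^2."""
    return sum(r * r / (s * s + q * q) for r, s in zip(res, sig)) / len(res)

def find_floor(res, sig, q_lo=0.0, q_hi=1.0e4):
    """Bisection: chi^2/ndf decreases monotonically as q grows."""
    for _ in range(200):
        q = 0.5 * (q_lo + q_hi)
        if chi2_ndf(res, sig, q) > 1.0:
            q_lo = q
        else:
            q_hi = q
    return 0.5 * (q_lo + q_hi)

res = [40.0, -55.0, 30.0, -45.0]   # postfit residuals, ps (illustrative)
sig = [20.0, 25.0, 15.0, 20.0]     # formal uncertainties, ps
q = find_floor(res, sig)
print(round(chi2_ndf(res, sig, q), 3))   # → 1.0
```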
Look at the Clock Constraint Statistics in the
bottom part of the listing. Look at the station which produces
the minimal RMS. If this station is not a station a) with a clock
break or b) which observed less than 75% of the total time, you
can take it as the new clock reference station. NB: if
you changed the clock reference station you have to reset
flags for estimation of baseline dependent clocks anew. Make
a new solution and look at the results. If NRMS becomes smaller
for many stations you made a better choice. You may repeat
this procedure several times. In general the choice of clock
reference station influences results only marginally except
the case when the station with anomalous clock behavior was
taken as a clock reference station.
Look at the Clock Constraint Statistics in the
bottom part of the listing. If the NRMS share is more than
1.30 for some station(s), it is necessary to investigate
the reason.
Look at the plot of segmented parameters. Call the program MDLPL
by hitting (/) on the main OPTIN menu. Information
about the usage of MDLPL can be found in the
manual of MDLPL_PLUS [8].
Look at the estimates of the clock function modeled by a linear
spline. The clock function describes the behavior of the H-maser and
instrumental noise. If you see a smooth curve with
a quasi-diurnal or a quasi-semidiurnal period, you should not
worry. A noisy, saw-like curve is an indication of strong
instrumental errors.
You can raise the value of sigma of the constraints imposed
on the clock of an individual station (to make constraint
less strong): go to the last page of the SETFL menu, then
hit (") key. Key (*) toggles modes: site
dependent constraints versus session dependent constraints
(common for all stations). Set site dependent
constraints, position the cursor on the value of the
constraint for the station of interest, then hit the space bar.
SETFL will ask you to enter the value.
If you have a smooth curve of the clock function, you can set
a sigma of the constraint which results in an NRMS of the clock
function with a share of about 1.00 (the constraint sigma is about
the same as the RMS of the clock function). If you have
a sawtooth-like, noisy clock function, then a stiffer
constraint (smaller sigma of constraint) should be imposed:
a constraint which results in an NRMS of the clock function with
a share in the range [1.5, 2.0].
As a rule of thumb baseline-dependent clocks should remain
only for the baselines which produce adjustments exceeding
3 formal uncertainties. Significant baseline-dependent
clocks may occur when some channels at station(s) were dropped
in final fringing. Estimates of baseline-dependent clocks
which exceed 1 nsec indicate incorrectly resolved group
delay ambiguities or sub-ambiguities. Reset the flags of
estimation of baseline-dependent clocks by invoking BCLOK
from the last page of the SETFL menu by hitting (C).
Invoke ELIM by hitting (\) from the main OPTIN menu.
Hit (W) in the ELIM menu in order to invoke UPWEI.
Then hit (D) (Display current weights). A list of
UPWEI statistics will be displayed on your screen. You can
navigate this list by using (ArrowUp),
(ArrowDown), (PageUp) and (PageDown)
keys. Hitting (Q) allows you to leave the mode of
displaying statistics and return to the UPWEI menu.
Examine χ2/ndf column in the source section.
χ2/ndf is the ratio of the sum of the squares
of the weighted residuals over the used observations of the
specific source to its mathematical expectation. Values of
χ2/ndf which are significantly greater than
1.0 indicate that the scatter of residuals for observations
of that source is greater than average. This may be due to
wrong a priori position of the source, significant contribution
of source structure, pointing errors and so on. These sources
are good candidates for coordinate adjustments.
χ2/ndf statistics is not representative if
the source had less than 3-5 good observations and we should
not try to adjust positions of such sources unless we have
another evidence that a priori coordinates of this source were
poor or the source has a noticeable apparent proper motion.
We should try to estimate positions of the sources with
χ2/ndf > 1.5 under condition that the number of
observations is higher than 5.
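The selection rule above can be written down directly. A small sketch (not pSolve code; the source names and statistics are made up):

```python
# Select sources for position adjustment: chi^2/ndf > 1.5 and more
# than 5 used observations.
def sources_to_adjust(stats, chi2_limit=1.5, min_obs=5):
    """stats: dict source -> (chi2_ndf, n_obs)."""
    return [src for src, (chi2, nobs) in stats.items()
            if chi2 > chi2_limit and nobs > min_obs]

stats = {"0059+581": (0.9, 40),   # residual scatter is nominal
         "3C418":    (2.4, 12),   # candidate: large chi2/ndf, enough obs
         "1803+784": (3.1, 4)}    # too few observations: not representative
print(sources_to_adjust(stats))   # → ['3C418']
```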
Write down the names of the sources with large
χ2/ndf. Leave
UPWEI display statistic mode by hitting (Q),
then leave UPWEI by hitting (O) and hit (O)
once more to leave ELIM.
Then hit (S) in the main OPTIN menu and you will see
another SETFL menu for setting source coordinate
estimation flags. List the menu by hitting the keys (B)
and (P), and find the sources whose positions you
decided to adjust. Set 1 in the field Right
ascension and 1 in the field declination, e.g.
Run a solution. Look at the bottom of the listing. Leave
the estimation flag set for the sources with estimates greater
than 3 sigmas in declination or in right ascension. If you
have sources which produced adjustments less than
2.5-3.0 sigmas, unset the flag for coordinate estimation for
those sources.
Call MDLPL program by hitting (/) from the main
OPTIN menu. Then hit (R) in the main MDLPL_PLUS menu.
Examine the plot of residuals + clock function. If you
don't see a break in the plot of residuals where you have
inserted clock breaks, and you find in the listing that
the adjustments to clock breaks are insignificant, remove the clock
break(s). On the contrary, if you see a noticeable jump in the
plot of postfit residuals + clock function, you
should try to insert a new clock break at the epoch of the
jump.
Update weights after ELIM completion. Hit (W) at the ELIM
menu. You will see UPWEI menu. Set floor to 10.0 psec by hitting
(L) and then hit (I). Then go back to ELIM by hitting
(O).
Now hit (T) to set the restoration mode. Set the cutoff limit for
outlier detection to 4.0 sigma (or 3.5 sigma if the session has more than
5000 observations) and then proceed with restoration of the observations
which were previously suppressed by hitting the key (P).
Then set the elimination mode, set the cutoff limit to 3.5 sigma, and
eliminate outliers once more. After outlier elimination, update weights
and proceed with outlier elimination once more.
Then set the flag for
estimation of coordinates of those sources and make a new solution.
Then you may wish to eliminate outliers among these observations. One
of the options is to do it manually in REPA, another way is to do it
semi-automatically in ELIM. You can set the flag Confirm each
action by hitting (N) in ELIM menu. ELIM will ask for
confirmation before suppression of each observation in this mode.
If there are large outliers among the observations, their presence
in the solution distorts parameter estimates and the residuals.
If you have few observations of that source (say, less than 10),
automatic outlier elimination procedure may not correctly identify
bad observation. Therefore, be careful when you restore observations
of the source with positions being estimated. Estimation of source
position makes solution less robust and more sensitive to outliers.
Set user partial program CABLE_PART: hit (<) on the main
OPTIN menu and then enter the name of the program: CABLE_PART. Run
a solution. The cable cal admittance will be computed for all stations in
this mode. An admittance of about 1.0 means that the cable cal is OK;
an admittance of -1.0 means that the cable cal is OK but its sign is wrong.
Values around zero indicate that the cable cal is probably wrong. If you
find that estimation of cable cal admittance improves fit by more than
2-3% (wrms is less, χ2/ndf becomes less) you can decide
to set the flag "not calibrate for cable cal" for some stations
permanently. However as a rule of thumb you should leave cable
calibration unless you have clear evidence that cable calibration
at certain stations degrades fit.
Don't forget to deactivate the CABLE_PART program and to set the flag
cable cal for all stations except the ones where it degrades the
fit. In order to deactivate the user partial program CABLE_PART, hit
(<) from the main OPTIN menu again and then hit the (Return)
key in reply to Enter name of user partial program
(Return if none):.
Resolving sub-ambiguities is done differently for two cases: when the
amplitude of the secondary maximum in the delay resolution function is less than
0.96 and when it is greater than 0.96. Below we consider the case when the
secondary maximum is less than 0.96 and the experiment is processed with PIMA.
We first suppress all outliers. When we are satisfied with the solution, we
store the residuals. First hit (L), then hit (A) to set
Print residu(A)ls: ON. Then hit (O) and (;) to rewind
the spool file, and then hit key (C) in order to set
(C)hange Spooling current: on. Check that the spool file was
rewound, spooling is on, and print residuals is ON. Hit (Q) to run
the least squares solution. Hit (space) twice in order to get the listing,
then hit (O) and hit CNTRL/U in order to save the database.
You need to save the database with version > 1.
After that, terminate pSolve by hitting (T). Examine the spool file.
You will see a residual section. Check that you have the listing for only one
run. Copy the spool file into /vlbi/$exp/$exp_$band_init.spl file.
Here $exp is the lower-case experiment name and $band is the lower-case band.
For instance, /vlbi/bp192b0/bp192b0_c_init.spl . Check that 1) the PIMA
control file is /vlbi/$exp/$exp_$band_pima.cnt; 2) the keyword Band is the same
as $band, but in upper case; 3) EXP_NAME and EXP_CODE in file
/vlbi/$exp/$exp.desc are $exp in lower case; 4) DB_NAME in /vlbi/$exp/$exp.desc
is the 10-character-long GVF database name that pSolve just processed.
Run script pima_samb.csh:
If your experiment has two bands, you create residual files for each
band separately. Then you first run pima_samb.csh for the lower band
(i.e. S-band for X/S observations) and specify no_db as the
fourth argument. Then you run pima_samb.csh for the higher band and
specify db as the third argument. Then load the database
and run ELIM and MILE first with the lower band, then with the
higher band, and then with the ionosphere-free linear combination of the
two bands.
What happens under the hood?
pima_samb.csh calls the program samb. The program samb reads the residual file
and computes a narrow search window over group delay for the consecutive
fringe search. The center of the search window corresponds to the
group delay predicted on the basis of the least squares solution. The accuracy
of the path delay prediction is within 2–3 wrms of the postfit residuals,
typically 100–300 ps. The fourth argument of pima_samb.csh defines
the semi-width of the fringe search window. If the fourth argument is omitted,
a meaningful default is used.
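Conceptually, the window computation can be sketched as follows. This is a simplification, not samb's actual code; the choice of 3 wrms for the semi-width and all numbers are assumptions for illustration:

```python
# Narrow fringe search window: centered on the predicted group delay,
# with a semi-width of a few wrms of the postfit residuals.
def search_window(predicted_delay_ns, wrms_ps, n_wrms=3.0):
    """Return (low, high) bounds of the group delay search window in ns."""
    semi_width_ns = n_wrms * wrms_ps * 1.0e-3   # ps -> ns
    return (predicted_delay_ns - semi_width_ns,
            predicted_delay_ns + semi_width_ns)

# 100 ps wrms gives a +/-0.3 ns window around the predicted delay:
lo, hi = search_window(1234.5, 100.0)
print(lo, hi)
```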
pima_samb.csh generates a control file for PIMA with the narrow fringe
search window for each observation marked as an outlier in the residual
file, i.e. < or R in the 8th column. Then it runs that control file.
This forces PIMA to pick up the maximum of the delay resolution function
within the specified window. Upon completion, pima_samb.csh runs
the PIMA task mkdb and creates version 1 of the database file in GVF format.
Finally, pima_samb.csh runs the program gvf_supr_promote . Since you lowered
the SNR detection limit during re-fringing, you need to carry the flag
of detection into the latest database version. The program gvf_supr_promote
does that.
Last update: 2015.12.05_22:35:35
Interactive pSolve
Overview
Interactive pSolve communicates with the user by means of a set of menus. The
majority of programs display menus in a text window and prompt you to enter
a command. Each command has a one-letter code and an active field on the
screen. A user has two ways to activate a command: either position the cursor
on the active field and then hit the space bar (space key), or hit the command code
letter. Command codes are enclosed in parentheses and emphasized in
bold in
this manual. The parentheses themselves are not part of a command code. Many
programs have sub-menus which operate the same way.
Data loading
pSolve is able to read data in GVF format. Although pSolve still
supports two other obsolete formats, MARK-3 DBH and the superfile format, the use of
data in these formats is not explained in this document. Support of Mark-3 DBH
and superfile formats will be removed in the near future.
Initial solution
The initial solution is carried out when the experiment is analyzed for the
first time. The purposes of the initial solution are:
The sequence of operations:
Initial settings
Hit (L) key. You should see at the bottom "Last page Parms used
/ Max parms available:". Select solution type and suppression method.
If you have a database generated by PIMA, you will find that the suppression
method is set to SUPMET__META. If your database was just converted from
Mark-3 DBH, and you loaded it to VTD/pSolve for the first time, it
may have a different suppression method. Then you need to change the
suppression method to SUPMET__META. Hit key (') and change suppression
method to SUPMET__META.
Setting initial parameterization
Go to the SETFL last page menu by hitting (L) in the OPTIN menu. Turn the
estimation of site coordinates off by hitting (#), set all EOP
flags to 0 (off), set the nutation estimation flag to zero by hitting (.).
Then set the clock parameters for all stations except the station
taken as a clock reference. Hit the key (E), then you'll see the "site
menu" of the SETFL program. It is irrelevant which station is taken as
the clock reference station at this step, except in the following cases: a) when there are too few
good observations (say, less than 8) at that station; b) when the network is split into
two independent subnetworks. Set the clock polynomial flags to 1 for the first
three terms (clock shift, clock drift, frequency drift), e.g.,
Setting a priori clock values.
If all clock offsets and rates are below the limit, you can skip this step.
Group delay ambiguity resolution.
The next step is to check whether you have to resolve group delay ambiguities
at both bands. If your database was generated by PIMA, you rarely have
ambiguities in group delays. If your database was generated by Fourfit and then
converted to GVF format with the program mark3_to_gvf, you may have group delay
ambiguities. Group delay ambiguities are an artifact of the Fourfit algorithm.
Due to the way Fourfit works, group delays are not determined uniquely
but as τ + N·S, where S is the so-called ambiguity spacing and N
is an arbitrary integer. The group delay ambiguity spacing is
the reciprocal of the minimal frequency separation between
intermediate frequencies if the delay is determined with Fourfit, and
the reciprocal of the resolution of the visibility spectrum
if the delay is determined with PIMA. Fourfit and PIMA set the
initial value of N so as to keep the residual group delay, i.e. the
difference between the estimated group delay and the theoretical group delay,
below 1/2 of the ambiguity spacing. But the accuracy of the theoretical model
used by Fourfit is often not sufficient; as a result, the initial value is
often wrong, and for a subset of observations it needs to be changed.
This process is called group delay ambiguity resolution. Since the spectral
resolution is much lower than the minimal frequency separation between
intermediate frequencies, the group delay ambiguity spacing for delays determined
with PIMA is much greater, typically 1–10 microseconds, which is usually
significantly greater than the errors in the a priori model.
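The arithmetic of an ambiguity shift can be sketched as follows (a minimal illustration, not pSolve code; the function name and units are invented for this example):

```python
def resolve_group_delay_ambiguity(residual, spacing):
    """Shift a residual group delay by an integer number N of
    ambiguity spacings S so that its magnitude does not exceed S/2.
    residual, spacing: seconds.  Returns (corrected residual, N)."""
    n = round(residual / spacing)          # integer ambiguity correction
    return residual - n * spacing, n

# Example: a 100 ns spacing (typical for Fourfit-derived delays)
# and a 230 ns residual resolve to N = 2, leaving a 30 ns residual.
corrected, n = resolve_group_delay_ambiguity(230e-9, 100e-9)
```

Fourfit and PIMA perform the equivalent step against their own theoretical model; the manual resolution described below is needed only when that initial N is wrong.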
Manual elimination of observations affected by sub-ambiguities.
Manual group delay ambiguity resolution is an alternative technique.
Group delay ambiguities are resolved.
REPA supports four modes: ESC, F1, F2, F3. The current mode is
highlighted at the top of the plot window. Keys (ESC),
(F1), (F2), (F3) switch the mode. The operations
bound to the mouse buttons (LeftMouse), (CentralMouse),
(RightMouse) depend on the mode. In all modes, the
(Backspace) key undoes the previous point
suppression status toggle or ambiguity resolution operation. All operations
in the current mode are stored in a stack and can be undone.
Changing the mode clears the stack.
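The undo behavior described above can be modeled as a per-mode stack of inverse operations (a hypothetical sketch of the assumed behavior, not REPA's actual implementation):

```python
class RepaUndoStack:
    """Sketch of REPA-style undo handling (assumed semantics):
    each edit pushes its inverse; switching mode clears the stack."""

    def __init__(self):
        self.mode = "ESC"
        self._stack = []

    def apply(self, op, undo_op):
        op()                       # perform the edit
        self._stack.append(undo_op)  # remember how to undo it

    def undo(self):
        if self._stack:            # Backspace: undo the latest edit
            self._stack.pop()()

    def set_mode(self, mode):
        if mode != self.mode:
            self.mode = mode
            self._stack.clear()    # changing the mode clears the stack
```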
Manual re-distribution of permanent group delay ambiguities
First you have to decide which band is affected by permanent ambiguities.
If the baseline-dependent clocks have an adjustment that is a multiple of the group delay
ambiguity spacing at X-band, then X-band is affected. A permanent ambiguity at S-band
will contribute (fX/fS)² ≈ 12 times less.
Thus, if you see baseline-dependent clock estimates less than one group delay
ambiguity spacing but still larger than 1 nsec, it is an indication that
there are S-band permanent ambiguities.
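The factor of ~12 follows from the ionosphere-free linear combination of X- and S-band delays. A quick check, assuming nominal band frequencies fX = 8.2 GHz and fS = 2.3 GHz (the exact values depend on the frequency setup of the experiment):

```python
f_x, f_s = 8.2e9, 2.3e9   # assumed nominal X/S band reference frequencies, Hz

# Ionosphere-free combination: tau = (fX^2*tauX - fS^2*tauS) / (fX^2 - fS^2).
# Leverage of a delay error (e.g. an unresolved ambiguity) at each band:
c_x = f_x**2 / (f_x**2 - f_s**2)   # X-band leverage, about 1.09
c_s = f_s**2 / (f_x**2 - f_s**2)   # S-band leverage, about 0.09

ratio = c_x / c_s                  # equals (f_x/f_s)**2, about 12.7
```

An S-band permanent ambiguity is thus attenuated by roughly an order of magnitude in the combined observable, which is why it shows up as a sub-spacing offset in the baseline-dependent clocks.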
Inspection of residuals
Now you have to inspect the residuals baseline by baseline. The purpose is
to check the quality of the data, check whether the ambiguities were resolved correctly,
and check whether clock breaks have to be inserted. Call the program REPA
by hitting (P) from the OPTIN menu. (NB: REPA conflicts with
some X applications that grab colors, such as Netscape. You should close such
applications before running REPA.)
Intermediary solution
The intermediary solution is carried out upon completion of the initial solution
when the experiment is analyzed for the first time. The purpose of the intermediary
solution is to remove strong outliers at the earlier steps. The intermediary
solution uses an incomplete parameterization. Such a parameterization facilitates
suppression of outliers and of observations which look like outliers.
Now make a solution by hitting (Q). Look at the listing. Check the
parameterization: you should estimate a) station positions (except for the
reference station); b) clocks; c) atmospheric path delay; d) baseline-dependent
clocks, and nothing more.
Final solution
The final solution is carried out either during the first analysis of the
experiment or during re-analysis of the data. It is assumed that the initial and
intermediary solutions have already been made. The final solution may be done
for different purposes. One of the objectives is to obtain a so-called quick
solution. The purpose of a quick solution:
min_obs for one source: 2
min_obs for one station: 5
min_obs for one baseline: 4
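The thresholds above are minimum counts of usable observations per source, station, and baseline. A hypothetical sketch of how such a filter could be applied (the data layout and function name are invented for illustration; pSolve applies these limits internally):

```python
from collections import Counter

MIN_OBS = {"source": 2, "station": 5, "baseline": 4}

def filter_min_obs(obs):
    """obs: list of dicts with 'source', 'sta1', 'sta2' keys.
    Keep only observations whose source, both stations, and baseline
    all meet the minimum observation counts."""
    src = Counter(o["source"] for o in obs)
    sta = Counter()
    bas = Counter()
    for o in obs:
        sta[o["sta1"]] += 1
        sta[o["sta2"]] += 1
        bas[frozenset((o["sta1"], o["sta2"]))] += 1
    return [o for o in obs
            if src[o["source"]] >= MIN_OBS["source"]
            and sta[o["sta1"]] >= MIN_OBS["station"]
            and sta[o["sta2"]] >= MIN_OBS["station"]
            and bas[frozenset((o["sta1"], o["sta2"]))] >= MIN_OBS["baseline"]]
```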
Database update
Some parameters related to the solution, such as group and phase delay
ambiguities, suppression status, clock and atmosphere parameterization,
baseline-dependent clock status, and reweighting parameters, can be saved in the
database. Database update is the final step of analysis. Hit (U) or
(CNTRL/U). You have a choice: either update the current version of
the database (option 1) or create a new database with an incremented version
counter (option 2). Usually, option 2 is used when the experiment is
analyzed for the first time. This makes it possible to start the analysis anew if an
error in the analysis is found. It is rarely needed to have a version counter
greater than 2.
Resolving sub-ambiguities
If the fringe fitting procedure picked a maximum in the Fourier
transform of visibilities that is not the global maximum, the estimate of the
group delay is wrong. The placement of the secondary maximum depends on the frequency
sequence. A good frequency sequence has a secondary maximum amplitude
in the range 0.5–0.8, provided there are no systematic phase offsets
between intermediate frequencies. The probability of picking a wrong maximum
is then low even at a marginal SNR of 6. However, in the presence of systematic phase
offsets and/or losses of IFs, the secondary maximum may become close to, or even
exceed, the amplitude of the main maximum.
If the excessive phase distortion is persistent over the experiment, the points
that correspond to observations where fringe fitting picked a secondary maximum
are aligned along horizontal lines on plots of residual group delays. These
points are called sub-ambiguities.
References
Some user documentation related to pSolve.
This document was prepared by
Leonid Petrov