What do sigma a-priori, sigma a-posteriori, null hypothesis and test value mean?
Roughly explained, the situation is as follows:
- A-priori standard deviation -> errors assumed for the survey (set for the observation groups in the adjustment configuration)
- Sigma a-priori -> scaling factor for numerical reasons; should be chosen in the order of magnitude of the a-priori standard deviations
- A-posteriori standard deviation -> accuracy measure obtained from adjustment (result of processing)
- Null-hypothesis -> the assumption that the standard deviations a-priori and a-posteriori are identical; i.e. we executed the survey with the assumed accuracy
- Test value -> (sigma a-posteriori)^2 divided by (sigma a-priori)^2. If both are equal the result is "1", which is the desired value.
If < 1: your a-priori assumption was too pessimistic; if > 1: the obtained result is not as good as your assumptions.
- In principle, if the test value is unequal to "1" the null-hypothesis is rejected. However, statistics allow a tolerance which depends on the redundancy: with large redundancy the threshold is tight, with low redundancy it is wide (interpretation: with a tight threshold we know exactly how good we are, with a wide one we have no idea).
Technical: the calculated test value is compared with a redundancy-dependent critical value derived from a statistical distribution (either the Chi^2- or the Fisher-distribution); see the sketch below.
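To make this concrete, here is a minimal sketch (not the program's actual code; the function name, the diagonal weight matrix and the two-sided 5 % test are my assumptions) of computing the test value and its redundancy-dependent acceptance interval from the Chi^2 distribution:

```python
# Sketch only: global test of the variance factor (sigma a-posteriori vs. a-priori).
# Assumed inputs: residuals v, a-priori standard deviations of the observations,
# and the redundancy (number of observations minus number of unknowns).
import numpy as np
from scipy.stats import chi2

def global_variance_test(v, sigma_obs, redundancy, sigma0_prior=1.0, alpha=0.05):
    """Return the test value, the Chi^2 acceptance interval and the test decision."""
    p = (sigma0_prior / np.asarray(sigma_obs)) ** 2       # diagonal weights (uncorrelated obs.)
    vtpv = np.sum(p * np.asarray(v) ** 2)                  # weighted sum of squared residuals
    sigma0_post_sq = vtpv / redundancy                     # (sigma a-posteriori)^2
    test_value = sigma0_post_sq / sigma0_prior ** 2        # should be close to 1 under H0
    # acceptance interval: the larger the redundancy, the tighter it gets
    lower = chi2.ppf(alpha / 2, redundancy) / redundancy
    upper = chi2.ppf(1 - alpha / 2, redundancy) / redundancy
    return test_value, (lower, upper), lower <= test_value <= upper
```

For example, with redundancy 50 the acceptance interval is roughly 0.65 to 1.43, while with redundancy 3 it stretches from about 0.07 to 3.1, which is exactly the "tight vs. wide threshold" behaviour described above.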
Intermediate explanation: the whole purpose of this procedure is to evaluate the quality of the result, i.e. to identify gross errors. Gross errors are those observations that deviate from the result's average by more than three times the standard deviation (3-sigma). This doesn't say anything about a good or bad result; it only says whether an observation fits the rest. If the observations scatter widely we get large standard deviations (interpretation: inaccurate results), but we'll still get a result. If we assumed this "bad" quality beforehand (-> large sigma a-priori = sigma a-posteriori), the null-hypothesis will be accepted (see above); i.e. the result will be accepted and will enter the database.
If we obtain a tight threshold (large redundancy) we will be able to identify outliers more reliably and can eliminate those observations or de-weight them (e.g. via robust adjustment, sketched below). That's actually the reason why robust adjustment doesn't make sense with low redundancy.
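As a purely illustrative sketch of the de-weighting idea (Huber-type weights; the function name and the choice k = 3 are my own picks, not necessarily the program's robust scheme), the weights for the next iteration could be derived from the normalized corrections explained further below:

```python
# Illustrative only: down-weight observations whose normalized correction (NC,
# see below) exceeds a threshold k, as done in iteratively re-weighted adjustment.
import numpy as np

def huber_reweight(nc, k=3.0):
    """Return multiplicative weight factors for the next adjustment iteration."""
    nc = np.abs(np.asarray(nc, dtype=float))
    w = np.ones_like(nc)                 # inliers keep their full weight
    mask = nc > k
    w[mask] = k / nc[mask]               # the larger the NC, the smaller the weight
    return w

# Example: NC = 6 keeps only half of its original weight, NC <= 3 keeps all of it.
print(huber_reweight([0.5, 2.9, 6.0]))   # -> [1.  1.  0.5]
```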
Summary: these accuracy assumptions have no impact on the obtained coordinates; they only serve for the evaluation of the result.
- NC (normalized correction, see page 2) -> the correction obtained from the adjustment divided by its a-priori standard deviation. If sigma a-priori is large the NC value will tend to be small; if it is small, the NC value will increase. Now, what we usually expect is many results close to the correct result (i.e. small corrections) and a few far away (large corrections or even gross errors) -> Gaussian bell curve.
This means the normalization will "hit gross errors harder" -> easy identification!
Technical: if NC exceeds "3" you can interpret the observation as a gross error (if I remember correctly the exact critical value is "3.29", which corresponds to a significance level of 0.1 %); see the sketch below.
Consequence: this observation should be neglected, i.e. removed from the set of observations (in a robust adjustment it will be de-weighted in the iterations).
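A minimal sketch of this check (names and the simple element-wise formula are my assumptions; the software may additionally account for the redundancy share of each observation):

```python
# Sketch following the definition above: NC = correction / a-priori standard
# deviation; flag NC > 3 as a suspected gross error.
import numpy as np

def normalized_corrections(corrections, sigma_prior, threshold=3.0):
    """Return the NC values and the indices of suspected gross errors."""
    nc = np.abs(np.asarray(corrections) / np.asarray(sigma_prior))
    suspects = np.where(nc > threshold)[0]
    return nc, suspects

# Example: a 10 mm correction against an assumed 2 mm standard deviation gives
# NC = 5 and is flagged; a 2 mm correction against the same assumption passes.
nc, suspects = normalized_corrections([0.002, 0.010], [0.002, 0.002])
print(nc, suspects)   # -> [1. 5.] [1]
```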
Regarding the problematic adjustment:
- Only one set of angles was measured -> no redundancy for any object point. The only redundancy comes from the known coordinates of the ref-points, and not even on these points do we have redundancy in the observations (technically, i.e. statistically, the known coordinates are treated as observations; that's the only reason why the adjustment works at all).
In general this means we have no quality information on the observations.
- The a-posteriori sigma is significantly larger than the a-priori assumption. In order to get the null-hypothesis accepted, either the a-priori values need to be increased or the gross errors must be removed, e.g. by no longer assigning ref-status (which would lead to even less redundancy).
- Can you identify at least two points as being critical?
- Which one is actually erroneous, or whether even both are, is not clear (see the opposing algebraic signs).
- Whether this is due to a false observation or caused by the low quality of the ref-point's coordinates is not clear, since we don't have repeated measurements.
- By increasing sigma a-priori the NC values will probably drop below "3" and the statistical indication of gross errors will fail.
To answer your questions briefly:
- Adjust sigma a-priori (and the a-priori standard deviations) until the test value is close to "1"; the null-hypothesis will then be accepted (see the small example after this list).
- Check whether the "standard error of position" is acceptable (NOTE: bad results will also be accepted statistically, see above).
- Gross errors have an impact on sigma a-posteriori. If your null hypothesis is usually accepted with a proven constellation, first check for gross errors (NC > 3) and remove potentially erroneous observations.
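For the first point, here is a tiny worked example (my own invented numbers): since the test value is (sigma a-posteriori)^2 / (sigma a-priori)^2, multiplying sigma a-priori by the square root of the current test value brings the test value back to "1".

```python
# Worked example with invented numbers: rescale sigma a-priori so that the
# test value becomes 1.
import math

test_value = 4.0        # e.g. sigma a-posteriori turned out twice as large as assumed
sigma_prior = 0.001     # current sigma a-priori (1 mm, arbitrary)

sigma_prior_new = sigma_prior * math.sqrt(test_value)              # -> 0.002 (2 mm)
new_test_value = test_value * (sigma_prior / sigma_prior_new)**2   # -> 1.0
print(sigma_prior_new, new_test_value)
```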
Honestly, I think the presented constellation is hard to interpret on a statistical basis because of the minimal amount of data. It would help a lot to combine e.g. three sets of angles. The number of ref-points is sufficient; of course, based on the report I cannot evaluate their distribution over the horizon. I don't know whether the ref-point coordinates stem from a local zero-measurement or have been provided from "outside". In any case the coordinates require checking.