Commit 437d222a authored by Luc Maisonobe

Completed on-line documentation.

parent 052847d4
<!--- Copyright 2013-2014 CS Systèmes d'Information
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
DEM intersection
------------
The page [technical choices](./technical-choices.html) explains how Rugged goes from an on-board pixel
line-of-sight to a ground-based line-of-sight arriving in the vicinity of the ellipsoid entry point. At
this step, we have a 3D line defined near the surface and want to compute where it exactly traverses the
Digital Elevation Model surface. There is no support for this computation at Orekit library level;
everything is done at Rugged library level.
As this part of the processing lies in an inner loop, it must use fast algorithms. Depending
on the conditions (line-of-sight skimming over the terrain near field of view edges or diving directly in
a nadir view), some algorithms are more suitable than others. This computation is isolated in the smallest
programming unit possible in the Rugged library, and an interface is defined with several different
implementations from which the user can select.
Four different algorithms are predefined in Rugged:
* a recursive algorithm based on Bernardt Duvenhage's 2009 paper
[Using An Implicit Min/Max KD-Tree for Doing Efficient Terrain Line of Sight Calculations](http://researchspace.csir.co.za/dspace/bitstream/10204/3041/1/Duvenhage_2009.pdf),
* an alternate version of the Duvenhage algorithm using a flat-body hypothesis,
* a basic scan algorithm sequentially checking all pixels in the rectangular array defined by the Digital Elevation Model entry and exit points,
* a no-operation algorithm that ignores the Digital Elevation Model and uses only the ellipsoid.
It is expected that other algorithms, such as line stepping (perhaps using the Bresenham line algorithm), will be added later.
The Duvenhage algorithm with full consideration of the ellipsoid shape is the baseline approach for operational
computation. The alternate version of the Duvenhage algorithm with the flat-body hypothesis does not really save anything
meaningful in terms of computation, so it should only be used for testing purposes. The basic scan algorithm is only
intended as a reference that can be used for validation and tests. The no-operation algorithm can be used for
fast, low-accuracy computation needs without changing the complete data product work-flow.
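To make this selection concrete, the sketch below shows one way the choice could be expressed as an enumerate mirroring the list above; the type and constant names are illustrative assumptions, not the verified Rugged identifiers.

```java
/** Illustrative enumerate mirroring the predefined intersection algorithms listed above.
 *  All names here are assumptions for the sake of the example, not the verified Rugged API.
 */
public enum IntersectionAlgorithmId {

    /** Duvenhage recursive algorithm, with full consideration of the ellipsoid shape (operational baseline). */
    DUVENHAGE,

    /** Duvenhage variant using the flat-body hypothesis (testing purposes only). */
    DUVENHAGE_FLAT_BODY,

    /** Exhaustive scan of the rectangular array between DEM entry and exit points (validation and tests only). */
    BASIC_SCAN,

    /** No-operation algorithm ignoring the DEM and using only the ellipsoid (fast, low accuracy). */
    IGNORE_DEM_USE_ELLIPSOID

}
```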
DEM loading
-----------
As the min/max KD-tree structure is specific to the Duvenhage algorithm, and as the algorithm is hidden behind
a generic interface, the tree remains an implementation detail the user should not see. The min/max KD-tree structure is
therefore built directly at Rugged level, only when the Duvenhage algorithm has been selected to perform localization computation.
On the other hand, Rugged is not expected to parse DEM files, so the algorithm relies on the raw data being passed by the upper
layer. In order to pass these data, a specific callback function is implemented in the mission specific interface layer and
registered to Rugged, which can call it to retrieve parts of the DEM, in the form of small cells. The implicit KD-tree is then
built from leaves to root and cached.
Global architecture
-------------------
Rugged is an intermediate level mission-independent library. It relies on
the Orekit library and on the Apache Commons Math library. It is itself
intended to be used from a mission-specific interface by one or more
image processing applications.
![architecture](../images/rugged-architecture.png)
The Java platform provides the runtime environment, the Apache Commons
Math library provides the mathematical algorithms (3D geometry, root
solvers ...), the Orekit library provides the space flight dynamics
computation (frames transforms, orbits and attitude propagation and
interpolation ...). The Rugged library itself provides the algorithms
dealing with line-of-sight intersection with Digital Elevation Models
in a mission-independent way. Rugged does not parse the DEM models itself,
nor does it perform image processing. Mission-dependent parts (including
Digital Elevation Model parsing or instrument viewing model creation) remain
under the responsibility of the Rugged caller, typically through a mission-specific
library used by several image processing applications.
This architecture allows both the image processing application and the mission
specific interface to be as independent as possible from space flight dynamics and
geometry. These parts can therefore be implemented by image processing specialists.
The application itself can even be written in a programming language such as C++; there is
no need for the Java language at this level. It is expected that the mission specific
interface is written using the Java language to simplify data exchanges with the lower
layers and avoid complex data conversion. Data conversion is performed only between the
image processing application and the interface layer, and is limited to very few high
level functions with few primitive types (raw arrays for pixels or ground coordinates).
The Rugged library is developed in the Java language and has full access to the Orekit and
Apache Commons Math libraries. It is designed and developed by space flight dynamics and
geometry specialists, with support from the image processing specialists for the API definition.
Functional Breakdown
--------------------
The following table sorts the various topics among the different layers.
Topic                              | Layer                         | Comment
:----------------------------------|:------------------------------|:-----------------------------------------------------------------------------------------------------
Sensor to ground mapping           | Rugged                        | Direct localization is the base feature provided
Ground to sensor mapping           | Rugged                        | Inverse localization is another base feature provided
Individual pixels                  | Rugged                        | The API supports any number of pixels, defined by their individual lines of sight provided by the caller
Optical path                       | Interface                     | The folded optical path inside the spacecraft is taken into account by computing an overall transform combining all inside reflections, so each pixel position and line of sight can be computed later on by a single translation and rotation with respect to the spacecraft center of mass
Line time-stamping                 | Interface/Rugged              | The caller must provide a simple time-stamping model (typically linear) that will be applied
Orbit and attitude interpolation   | Orekit                        | Both simple interpolation from time-stamped position samples and full orbit propagation are available, thanks to the Orekit streamlined propagator architecture
CCSDS Orbit/Attitude file parsing  | Orekit                        | This is supported as long as standard CCSDS Orbit Data Messages (CCSDS 502.0-B-2) and CCSDS Attitude Data Messages (CCSDS 504.0-B-1) are used
Custom Orbit/Attitude file parsing | Interface                     | Custom files can be loaded by mission specific readers, and the list of orbit/attitude states can be provided to Orekit, which is able to handle interpolation from these sample data
Frames transforms                  | Orekit                        | Full support for all classical reference inertial and Earth frames is already provided by Orekit (including the legacy EME2000, MOD, TOD, but also the more modern GCRF, ICRF, TIRF or exotic frames like TEME or Veis1950, as well as several versions of ITRF)
IERS data correction               | Orekit                        | All frame transforms support the full set of IERS Earth Orientation Parameters corrections, including of course the large DUT1 time correction, but also the smaller corrections to the older IAU-76/80 or newer IAU-2000/2006 precession-nutation models as well as the polar wander. The frames level accuracy is at sub-millimeter level
Grid-post elevation model          | Rugged                        | Only raster elevation models are supported
Triangulated Irregular Network elevation model | Not supported     | If vector elevation models are needed, they must be converted to raster form in order to be used
Geoid computation                  | Not in version 1              | The first version only supports Digital Elevation Models computed with respect to a reference ellipsoid. If needed, this feature could be added after version 1, either at Rugged or Orekit level, using Orekit gravity fields
Time-dependent deformations        | Interface/Rugged              | The caller must supply a simple line-of-sight model (typically polynomial) that will be applied
Calibration                        | Image processing or interface | The calibration phase remains at the mission-specific caller level (pixels geometry, clock synchronization …); the caller is required to provide the already calibrated lines of sight
DEM file parsing                   | Interface                     | The elevation models are dedicated to the mission and there are several formats (DTED, GeoTIFF, raw data …). Rugged only deals with raw elevation on small latitude/longitude cells
<!--- Copyright 2013-2014 CS Systèmes d'Information
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
Overview
--------
The top level design describes the various libraries and their interactions. The lowest level
corresponding to the Apache Commons Math library is not shown here for clarity.
The following sequence and class diagrams show the three most important functions: initialization
of the libraries, direct localization and inverse localization.
### Initialization
The user of the Rugged library is responsible for providing the main program and a mission-specific
Digital Elevation Model loader, in the form of a class implementing the Rugged TileUpdater interface.
The user also creates a LineSensor containing the geometry of the pixels lines-of-sight, then creates
an instance of the top-level Rugged class and provides it with the created objects as well as the selected
options for algorithm, ellipsoid and frame choices.
![initialization class diagram](../images/design/initialization-class-diagram.png)
The Rugged instance will store everything and create the various objects defining the configuration
(creating the algorithm, ellipsoid and frames from the identifiers provided by the user). Using simple
enumerates for frames or ellipsoid allows a simpler interface for regular users who are not space flight
dynamics experts. For expert use, these objects can also be created directly and passed to Rugged
if the predefined identifiers do not cover specific needs. As shown in the following figure, several line sensors can be
added to a single Rugged instance; this is intended for computing correlation grids, when images coming from
two different sensors are expected to be accurately combined.
![initialization sequence diagram](../images/design/initialization-sequence-diagram.png)
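The sketch below illustrates how such an initialization could look from the caller's side. It uses the class names mentioned in this section (Rugged, TileUpdater, LineSensor), but the constructor signature, the enumerate constants and the helper objects are assumptions made for illustration only; they may not match the actual Rugged API.

```java
// Illustration only: the constructor arguments, enumerate constants and helper
// objects below are assumptions based on the description above, not the verified API.
TileUpdater updater = new MissionSpecificTileUpdater();  // mission-specific DEM loader (hypothetical class)
LineSensor  sensor  = buildLineSensor();                 // pixels positions and lines-of-sight (hypothetical helper)

Rugged rugged = new Rugged(updater,                  // DEM loading callback
                           8,                        // maximum number of tiles kept in the cache (assumed)
                           AlgorithmId.DUVENHAGE,    // DEM intersection algorithm
                           EllipsoidId.WGS84,        // reference ellipsoid identifier
                           InertialFrameId.EME2000,  // inertial frame identifier
                           BodyRotatingFrameId.ITRF, // Earth frame identifier
                           positionVelocitySamples,  // orbit sample to interpolate (assumed)
                           attitudeSamples);         // attitude sample to interpolate (assumed)

// several line sensors may be added to the same instance, e.g. to build correlation grids
rugged.addLineSensor(sensor);
```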
### Direct localization
Direct localization is called a large number of times by the application, once for each sensor line.
The application only provides image processing related data to the configured Rugged instance, i.e. the
line number, and it expects the geodetic coordinates of the ground points corresponding to each pixel
in the sensor line. The Rugged instance will delegate conversions between frames to an internal
SpacecraftToObservedBody converter, the conversions between Cartesian coordinates and geodetic coordinates
to an internal ExtendedEllipsoid object, and the computation of the intersection with the Digital Elevation
Model to the algorithm that was selected by the user at configuration time.
![direct localization class diagram](../images/design/direct-localization-class-diagram.png)
The pixel-independent computations (orbit and attitude interpolation, Earth frame to inertial frame transforms,
transforms composition) are performed only once per date inside the caching combined transform provider set up
at initialization time, and the resulting transform is applied to all pixels in the line, thus saving a lot of
computing power.
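From the application point of view, a typical sequence of per-line calls could look like the following sketch; the method name, the designation of the sensor by name, and the helper `process` are assumptions derived from the description above, not the verified signature.

```java
// Illustration only: method name and arguments are assumed from the description above.
for (double line = firstLine; line < lastLine; ++line) {
    // one call per sensor line; the result is one ground point per pixel of the line
    GeodeticPoint[] groundPoints = rugged.directLocalization("mainSensor", line);
    for (int p = 0; p < groundPoints.length; ++p) {
        process(line, p,
                groundPoints[p].getLatitude(),   // radians
                groundPoints[p].getLongitude(),  // radians
                groundPoints[p].getAltitude());  // meters
    }
}
```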
The innermost loop is the correction of each pixel, which is split into the line-of-sight to ellipsoid intersection,
followed by the Digital Elevation Model intersection. The callback to the mission specific interface to
retrieve DEM raw data is called from the inner loop but is expected to be triggered only infrequently thanks to a
caching feature implemented at Rugged library level.
![direct localization sequence diagram](../images/design/direct-localization-sequence-diagram.png)
The following figure describes the algorithm used for tile selection and how the underlying intersection algorithm
(Duvenhage in this example) is called for one tile:
![duvenhage top loop activity diagram](../images/design/duvenhage-top-loop-activity-diagram.png)
The recommended Digital Elevation Model intersection algorithm is the Duvenhage algorithm. The following figure
describes how it is implemented in the Rugged library.
![duvenhage inner recursion activity diagram](../images/design/duvenhage-inner-recursion-activity-diagram.png)
### Inverse localization
Inverse localization is called a large number of times by the application, typically on a regular grid in some
geographic reference like UTM. The application only provides image processing related data, i.e. the geodetic
coordinates of the ground points, and expects the coordinates of the corresponding pixel (both line number and
pixel number). The pixel-independent computations (orbit and attitude interpolation, Earth frame to inertial
frame transforms, transforms composition) are performed only once per line and cached across successive calls to
inverse localization, thus greatly improving performance.
![inverse localization sequence diagram](../images/design/inverse-localization-sequence-diagram.png)
The computation is performed in several steps. The line to which the points belong is first searched using a dedicated
solver taking advantage of the first time derivatives automatically included in Orekit transforms. It can therefore set
up a model of the angle between the target point and the mean sensor plane, and compute in only two or three
iterations the exact crossing of this plane, and hence the corresponding line number. Then, the position of this
crossing along the line is searched using a general purpose solver available in Apache Commons Math. As all coordinates
are already known in the spacecraft frame at this stage, no conversions are performed and this solver finds the corresponding
pixel very quickly. The last two steps accurately refine the previous results, which can be important when
the various pixels in the line sensor do not really form an exact line and therefore when the previous computations, which
were done using a mean plane, do not represent reality. These final fixes are simple to do because instead of providing
simple values as results, the first step in fact provides a Taylor expansion, thus allowing the result to be slightly shifted
at will.
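As an illustration of the call pattern, a hedged sketch follows; the method name, the search interval arguments and the result type are assumptions based on this description, not the verified signature.

```java
// Illustration only: method name, arguments and result type are assumed from the description above.
GeodeticPoint target = new GeodeticPoint(FastMath.toRadians(54.0),  // latitude
                                         FastMath.toRadians(19.0),  // longitude
                                         0.0);                      // altitude (m)

// search the sensor pixel observing this ground point between two candidate lines
SensorPixel sp = rugged.inverseLocalization("mainSensor", target, minLine, maxLine);
System.out.format("line %.3f, pixel %.3f%n", sp.getLineNumber(), sp.getPixelNumber());
```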
Focus point on Digital Elevation Model loading
----------------------------------------------
The Digital Elevation Model is used at a very low level in the Rugged library, but read at a high level in the mission
specific interface library. The following design has been selected in order to allow the lower layer to delegate the
implementation of the loading to the upper layer, and to avoid too many calls. The driving principle is to set up a cache
for DEM tiles, keeping a set of recently used tiles in memory up to a customizable maximum number of tiles, and asking for
new tiles when what is in memory does not cover the region of interest.
![DEM loading class diagram](../images/design/dem-loading-class-diagram.png)
The cache and the tiles themselves are implemented at Rugged library level. The loader is implemented at mission specific
interface level, by implementing the TileUpdater interface, which defines a single updateTile method. When this updateTile
method is called by the cache, one of its argument is an UpdatableTile instance that must be updated. The implementation
must first call once the setGeometry method to set up the global geometry of the tile (reference latitude and longitude,
latitude step size, longitude step size, number of rows and columns in the raster), and then call the setElevation method
for each element of the raster. The loader can therefore avoid allocating by itself a large array that would in any case
be reallocated by the tile. The loader only sees interfaces in the API and doesn't know anything about the real specialized
tiles that are used under the hood. Different DEM intersection algorithms can use different tile implementations without
any change to the mission specific interface. One example of this independence is the Duvenhage algorithm:
in addition to the raw elevation grid, the tile also contains a min/max KD-tree, so there are both a dedicated specialized
tile and a corresponding TileFactory in use when this algorithm is run.
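A minimal sketch of a mission-specific loader following this description is shown below. The overall structure (one setGeometry call, then one setElevation call per raster element) follows the text above, but the package names, the exact parameter order, and the RawRaster reader helper are assumptions introduced for illustration.

```java
import org.orekit.rugged.api.TileUpdater;     // package names are assumptions
import org.orekit.rugged.api.UpdatableTile;

/** Sketch of a mission-specific DEM loader implementing the TileUpdater callback. */
public class MissionSpecificTileUpdater implements TileUpdater {

    public void updateTile(double latitude, double longitude, UpdatableTile tile) {

        // locate and read the raw raster covering (latitude, longitude);
        // RawRaster is a hypothetical mission-specific helper, not part of Rugged
        RawRaster raster = RawRaster.readCellContaining(latitude, longitude);

        // global geometry of the tile: reference corner, step sizes, raster dimensions
        tile.setGeometry(raster.getMinLatitude(), raster.getMinLongitude(),
                         raster.getLatitudeStep(), raster.getLongitudeStep(),
                         raster.getLatitudeRows(), raster.getLongitudeColumns());

        // elevation of each raster element; the tile handles its own storage
        for (int i = 0; i < raster.getLatitudeRows(); ++i) {
            for (int j = 0; j < raster.getLongitudeColumns(); ++j) {
                tile.setElevation(i, j, raster.getElevation(i, j));
            }
        }

    }

}
```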
<!--- Copyright 2013-2014 CS Systèmes d'Information
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
Earth frames
------------
As Rugged is built on top of Orekit and Apache Commons Math, all the flight dynamics and
mathematical computations are delegated to these two libraries and the full available accuracy
is used. This implies for example that when computing frames conversions between the inertial
frame and the Earth frame, the complete set of IERS Earth Orientation Parameters (EOP)
corrections is applied if the IERS files are available. This may lead to results slightly
different from those produced by some other geometry correction libraries that are limited
to the older equinox-based paradigm (Mean Of Date and True Of Date), apply only DUT1 and pole
wander corrections and ignore the other Earth Orientation Parameters corrections. The expected
difference with such libraries is due to the missing corrections (δΔε and δΔψ for equinox-based
paradigm) to the IAU-1980 precession (Lieske) and nutation (Wahr) models used in the legacy MOD
and TOD frames. The error is a combination of an offset, a global drift and several periodic terms.
This error was small in the 80's but is much higher now as it has reached the 3 meters level since
mid-2013. The error is steadily increasing, as the old precession and nutation models are not accurate
enough for current needs and are drifting with respect to real Earth motion.
![precession/nutation error](../images/precession-nutation-error.png)
These legacy models are very old and have not been recommended by IERS since 2003. IERS currently
still provides the corrections for these models, but there is no guarantee it will do so indefinitely,
as it now provides corrections with respect to the newer and more accurate models. The newer frames
are based on a non-rotating origin paradigm and on different precession and nutation models (IAU-2000/2006),
which are much more accurate. The corresponding corrections (δx/δy, not to be confused with the xp/yp
pole wander) are smaller because the precession and nutation models are better than the former ones.
As Rugged delegates computation to Orekit, the full set of corrections (DUT1, pole wander, lod, δΔε/δΔψ
or δx/δy) are automatically loaded and applied. The final accuracy obtained when all EOP are considered
is at sub-millimeter level in position, and the expected difference with libraries ignoring δΔε and δΔψ
is at a few meters level, Rugged being the more accurate one.
Rugged is not limited to the legacy MOD and TOD frames and can use the newer IERS recommended frames as well.
From a user perspective, this is completely unnoticeable, as the user simply selects an Earth frame as an existing
predefined object by name, and doesn't have to care about the transforms and corrections. In fact, at Rugged
level there is not even a notion of precession, nutation or EOP corrections. The only interfaces used are the
inertial and Earth frames and the date. From these three elements, Orekit computes all the geometrical transforms,
including both the theoretical motion models and the IERS corrections, thus greatly simplifying the computation.
As a summary, Rugged may give results slightly more accurate than other geometric correction
libraries, and is compatible with both the legacy frames and the newer frames.
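The corresponding Orekit calls are straightforward. The following sketch (assuming Orekit data such as the UTC-TAI history and EOP files have been configured beforehand, and using a purely illustrative date) retrieves the full transform between the inertial and Earth frames; all available corrections are applied automatically.

```java
import org.orekit.frames.Frame;
import org.orekit.frames.FramesFactory;
import org.orekit.frames.Transform;
import org.orekit.time.AbsoluteDate;
import org.orekit.time.TimeScalesFactory;
import org.orekit.utils.IERSConventions;

public class FramesExample {
    public static void main(String[] args) throws Exception {
        // assumes Orekit data (UTC-TAI history, EOP files ...) has been configured beforehand
        Frame eme2000 = FramesFactory.getEME2000();
        Frame itrf    = FramesFactory.getITRF(IERSConventions.IERS_2010, true); // true: simplified EOP interpolation (tidal effects ignored)
        AbsoluteDate date = new AbsoluteDate(2014, 6, 1, 12, 0, 0.0, TimeScalesFactory.getUTC());

        // full transform including precession, nutation, Earth rotation, pole wander and EOP corrections
        Transform inertialToEarth = eme2000.getTransformTo(itrf, date);
        System.out.println("rotation rate: " + inertialToEarth.getRotationRate().getNorm() + " rad/s");
    }
}
```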
Position and attitude
---------------------
The global geometry of the image depends on the spacecraft position and attitude. Both are obtained using any
Orekit provided propagators. Thanks to the architecture of the Orekit propagation framework, propagation can
be either a true propagation from an initial state (which is interesting in mission analysis and simulation
use cases) or can be an interpolation from a loaded ephemeris. From the caller point of view, there are no
differences between the two cases, as an ephemeris is a special case of propagator, using interpolation from its
loaded sample. Support for CCSDS-based ephemerides is already provided by Orekit, and it is possible to build
ephemerides from lists of states if a dedicated loader is developed to parse mission-specific files.
When ephemeris interpolation is selected as the underlying propagator, the number of points used for the
interpolation is specified by the user, so a simple linear model is possible but higher-degree interpolation is
also available. The interpolation retains the raw state format, so if an ephemeris contains circular orbital
parameters, interpolation will be done using these parameters, whereas if the ephemeris contains position and velocity,
interpolation will be done using position and velocity. As velocity is the time derivative of position, in the latter case
a Hermite interpolation is performed, thus preserving derivative consistency.
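The Hermite part can be illustrated with Apache Commons Math directly; this is only a conceptual sketch of the underlying mathematics with made-up sample values, not the Orekit implementation. Position samples and their velocity derivatives are fed to the same interpolator, so the interpolated trajectory is consistent with both.

```java
import org.apache.commons.math3.analysis.interpolation.HermiteInterpolator;

public class HermiteSketch {
    public static void main(String[] args) {
        // conceptual sketch: interpolate position coordinates using velocity as first derivative
        HermiteInterpolator interpolator = new HermiteInterpolator();

        // addSamplePoint(time, value, firstDerivative): here value = {x, y, z}, derivative = {vx, vy, vz}
        interpolator.addSamplePoint(0.0,
                                    new double[] { 7000.0e3,      0.0, 0.0 },
                                    new double[] {      0.0,   7500.0, 0.0 });
        interpolator.addSamplePoint(10.0,
                                    new double[] { 6999.6e3,  75000.0, 0.0 },
                                    new double[] {    -80.0,   7499.0, 0.0 });

        // interpolated position at an intermediate date
        double[] p = interpolator.value(5.0);
        System.out.format("x = %.1f m, y = %.1f m%n", p[0], p[1]);
    }
}
```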
Dedicated algorithms are implemented in Orekit to deal with quaternion interpolation. Direct polynomial
interpolation of the four quaternion components does not work reliably, and even less so if only linear interpolation
is performed, even when normalization is applied afterwards. The first reason for this bad behaviour is the very crude accuracy
of linear-only models. The second reason is that although quaternions Q1 and -Q1 represent the same rotation, interpolating
components between Q1 and Q2 or between -Q1 and Q2 leads to completely different rotations, and the quaternions in an ephemeris
will typically have one sign change per orbit at some random point. The third reason is that instead of doing an
interpolation that respects the quaternion constraint, the interpolation departs from the constraint first and attempts to
recover afterwards in a normalization step. Orekit uses a method based on Sergeï Tanygin's paper
[Attitude interpolation](http://www.agi.com/downloads/resources/white-papers/Attitude-interpolation.pdf) with slight
changes to use modified Rodrigues vectors as defined in Malcolm D. Shuster's
[A Survey of Attitude Representations](http://www.ladispe.polito.it/corsi/Meccatronica/02JHCOR/2011-12/Slides/Shuster_Pub_1993h_J_Repsurv_scan.pdf),
although attitude is still represented by quaternions in Orekit (Rodrigues vectors are used only for interpolation).
These changes avoid a singularity at π. Some other refinements have been added to also avoid another singularity at
2π, but these refinements are mostly useful either for spin-stabilized spacecraft with high rotation rates or for
interpolation over large time spans when the attitude spans more than a full turn, so they will probably not be
triggered in the context of Earth observation spacecraft.
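The sign issue mentioned above is easy to demonstrate with Apache Commons Math rotations. This small illustrative sketch interpolates components halfway between two close attitudes, once with consistent signs and once after flipping the sign of the first quaternion; the second case yields a rotation far away from both samples.

```java
import org.apache.commons.math3.geometry.euclidean.threed.Rotation;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;
import org.apache.commons.math3.util.FastMath;

public class QuaternionSignSketch {

    /** Linear interpolation of quaternion components at t = 0.5, followed by normalization. */
    private static Rotation midPoint(Rotation r1, Rotation r2) {
        return new Rotation(0.5 * (r1.getQ0() + r2.getQ0()),
                            0.5 * (r1.getQ1() + r2.getQ1()),
                            0.5 * (r1.getQ2() + r2.getQ2()),
                            0.5 * (r1.getQ3() + r2.getQ3()),
                            true); // normalize the result
    }

    public static void main(String[] args) {
        // two attitudes differing by a 1 degree rotation about the Z axis
        Rotation q1 = new Rotation(Vector3D.PLUS_K, FastMath.toRadians(10.0));
        Rotation q2 = new Rotation(Vector3D.PLUS_K, FastMath.toRadians(11.0));

        // -q1 represents exactly the same rotation as q1 ...
        Rotation minusQ1 = new Rotation(-q1.getQ0(), -q1.getQ1(), -q1.getQ2(), -q1.getQ3(), false);

        // ... but interpolating components between q1/q2 and between -q1/q2 gives very different results
        System.out.println("consistent signs: "
                           + FastMath.toDegrees(Rotation.distance(midPoint(q1, q2), q2)) + " deg from q2");
        System.out.println("flipped sign:     "
                           + FastMath.toDegrees(Rotation.distance(midPoint(minusQ1, q2), q2)) + " deg from q2");
    }

}
```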
The different interpolation scheme is however expected to lead to only very small differences in numerical accuracy
in the traditional cases with respect to simple linear interpolation on quaternion components followed by normalization.
The reason for this unexpectedly good behaviour is that in traditional image processing applications, the step sizes
used for the quaternions are often very small. The bad behaviour of linear interpolation of quaternion components appears
only for step sizes above one minute, which are seldom used in image processing.
As a summary, Rugged relies on either propagation or interpolation, at the user's choice, and attitude interpolation is much
more sophisticated than linear interpolation of quaternion components, but no differences are expected at this level,
except for simpler development and validation as everything is readily implemented and validated in Orekit.
Optical path
------------
### Inside spacecraft
At spacecraft level, the optical path is folded due to the various reflections and positions of the sensors with respect
to the spacecraft center of mass. This path is considered fixed in spacecraft frame, i.e. no time-dependent or thermal
deformation effects are considered. Following this assumption, the path can be virtually unfolded using the laws of optical
geometry and replaced by a straight line in spacecraft vicinity, with virtual pixels locations and lines of sights defined
by simple vectors with respect to the center of mass. As both position and orientation are considered, this implies that
the pixels are not considered exactly co-located with the spacecraft center of mass; the offset is typically of meter order
of magnitude. If for example we consider a 3m long spacecraft with an instrument on the front (+X) side, the offset would be
about 1.5m if the center of mass were at spacecraft mid-length.
This path unfolding is done once at geometry loading by the interface layer above the Rugged library, so all further computations
are done with simple straight lines. Of course, if the spacecraft definition file does not include position information, only
the various reflections are taken into account and the sensor is considered to be co-located with the spacecraft center of mass.
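The unfolded model therefore reduces, for each pixel, to a position offset and a line-of-sight direction expressed in the spacecraft frame. The sketch below is purely illustrative (the class name and numerical values are made up); it shows how the interface layer could compose the reflections into one rotation and apply it once per pixel line-of-sight.

```java
import org.apache.commons.math3.geometry.euclidean.threed.Rotation;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;
import org.apache.commons.math3.util.FastMath;

public class UnfoldedOpticalPathSketch {
    public static void main(String[] args) {
        // overall effect of the folded optical path, combined once at geometry loading:
        // a fixed rotation (all mirror reflections composed together) and a fixed offset
        // of the sensor with respect to the spacecraft center of mass (illustrative values)
        Rotation combinedReflections = new Rotation(Vector3D.PLUS_J, FastMath.toRadians(20.0));
        Vector3D sensorOffset        = new Vector3D(1.5, 0.0, 0.0); // sensor ~1.5 m ahead of center of mass

        // nominal line-of-sight of one pixel as seen at the instrument entrance
        Vector3D rawLineOfSight = new Vector3D(0.0, 0.0, 1.0);

        // virtual pixel: a straight line defined by a position and a direction in spacecraft frame
        Vector3D pixelPosition    = sensorOffset;
        Vector3D pixelLineOfSight = combinedReflections.applyTo(rawLineOfSight).normalize();

        System.out.println("pixel position      : " + pixelPosition);
        System.out.println("pixel line-of-sight : " + pixelLineOfSight);
    }
}
```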
### Free space travel
As pixel/ground mapping is computed, all intermediate geometric computations (attitude, orbit, precession, nutation, EOP
corrections, Earth rotation, pole wander) are combined into a couple of accurate Transform instances. These transforms
are then applied a few thousand times to convert every pixel's line-of-sight into the Earth frame. The reason for this computation
scheduling is that the transform between inertial frame and Earth frame is computation-intensive and only depends on the date, so
factoring it out of the pixels loop is a huge speed-up. As Orekit provides a way to combine several Transform instances together
first and apply them to positions and directions later, a lot of computation steps can be saved by also including all conversions
up to the spacecraft frame.
As observation satellites are about 800km above ground, the light coming from the ground points they look at left Earth about
2.7ms before arriving at the sensors. This implies that the exact position of the ground point must be computed at an earlier
time than the position of the spacecraft. The expected difference can be predicted as the rotation of Earth during the 2.7ms
light travel time; it is about 1.2m at the equator, in the East-West direction. This effect is compensated by applying the so-called
light-time correction.
![light-time correction](../images/light-time-correction.png)
The delay is computed for each pixel as the travel time is shorter for pixels looking in the nadir direction than for pixels
looking at the edge of field of view. As Orekit frame transforms automatically include a local Taylor expansion of the transform,
compensating the differential Earth rotation during this 2.7ms delay is done without recomputing the full precession/nutation model,
so the computation savings explained in the paragraphs above are still available when this compensation is applied.
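The order of magnitude quoted above can be checked with a back-of-the-envelope computation, sketched below with rounded constants: the light travel time over roughly 800 km, and the distance the Earth surface rotates at the equator during that time.

```java
public class LightTimeOrderOfMagnitude {
    public static void main(String[] args) {
        final double slantRange      = 800.0e3;     // m, near-nadir range for a ~800 km orbit
        final double speedOfLight    = 299792458.0; // m/s
        final double equatorialSpeed = 465.0;       // m/s, Earth surface rotation speed at the equator (rounded)

        double travelTime  = slantRange / speedOfLight;    // about 2.7 ms
        double groundShift = equatorialSpeed * travelTime; // about 1.2 m, East-West

        System.out.format("light travel time: %.2f ms%n", 1.0e3 * travelTime);
        System.out.format("ground shift:      %.2f m%n", groundShift);
    }
}
```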
Aberration of light is another phenomenon that must be considered. Aberration of light is the apparent shift in direction of an
incoming light ray when seen from a sensor that is itself moving. This shift is independent of the motion of the source of the light;
it depends only on the current velocity of the sensor at the time of arrival. It is a composition of two velocities, the velocity of
light and the velocity of the sensor. This composition can be computed simply in classical mechanics or with a slightly more complex
equation with relativistic effects. As spacecraft velocities are limited, classical mechanics is sufficient for an accurate correction.
This is a large effect that can reach a 20m shift once projected on the ground for classical Earth observation missions.
As shown in the next figure, from the spacecraft point of view, the light incoming from the ground point seems to come from a fictitious
point “ahead” of the real point.
![aberration of light correction](../images/aberration-of-light-correction.png)
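The 20m figure can also be recovered by a rough computation, sketched below with rounded values: the aberration angle is approximately the ratio of the spacecraft velocity to the speed of light, projected on the ground from the orbit altitude.

```java
public class AberrationOrderOfMagnitude {
    public static void main(String[] args) {
        final double spacecraftVelocity = 7500.0;      // m/s, typical LEO orbital velocity (rounded)
        final double speedOfLight       = 299792458.0; // m/s
        final double altitude           = 800.0e3;     // m

        double aberrationAngle = spacecraftVelocity / speedOfLight; // about 25 microradians
        double groundShift     = altitude * aberrationAngle;        // about 20 m, along track

        System.out.format("aberration angle: %.1f microradians%n", 1.0e6 * aberrationAngle);
        System.out.format("ground shift:     %.1f m%n", groundShift);
    }
}
```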
As a side note, aberration of light and light time correction can be linked or considered to be two aspects of a similar phenomenon,
even in classical (non-relativistic) physics. It depends on the frame in which we look at the various elements. If the source is
moving and the observer is at rest (i.e. we do the computation in the observer frame), then there is only light time correction and
aberration of light is zero. If the source is at rest and the observer is moving (i.e. we do the computation in source frame),
then there is only aberration of light (this is how aberration of light was first experimentally identified, in the context of
astronomy, considering the motion of Earth from where astronomers observe stars) and light time correction is zero. In the Rugged
context, both source and observer move with respect to the inertial frame in which the correction computation is performed: the source
moves due to Earth rotation, and the observer moves due to the spacecraft orbit. So in the Rugged context, both phenomena exist and should be
compensated. Some other systems may consider only one of the two phenomena and still produce accurate results, simply by computing the
correction in either the Earth or spacecraft frame and considering the motion of the other part as a relative motion combining both Earth
and spacecraft: it is really only a matter of point of view.
Both light-time correction and aberration of light correction are applied in the Rugged library for greater accuracy, but both can be
ignored (independently) at the user's choice. One use case for ignoring these important corrections is for validation purposes and comparison
with other libraries that do not take them into account. This use case is by definition restricted to validation phases and
should not apply to operational systems. Another use case for ignoring light-time correction and aberration of light correction occurs
when the effect is explicitly expected to be compensated at a later stage in the image processing chain, most probably using a
posteriori polynomial models. This use case can occur in operational products. It seems however better to compensate these effects early,
as they can be computed to full accuracy with a negligible computation overhead.
Arrival on ellipsoid
--------------------
Once a pixel line-of-sight is known in Earth frame, computing its intersection with a reference ellipsoid is straightforward using an
instance of OneAxisEllipsoid. The Orekit library computes this intersection as a GeodeticPoint instance on the ellipsoid surface.
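A minimal Orekit sketch of this step is shown below; it assumes Orekit data has been configured beforehand, and the position, line-of-sight and date values are purely illustrative.

```java
import org.apache.commons.math3.geometry.euclidean.threed.Line;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;
import org.orekit.bodies.GeodeticPoint;
import org.orekit.bodies.OneAxisEllipsoid;
import org.orekit.frames.Frame;
import org.orekit.frames.FramesFactory;
import org.orekit.time.AbsoluteDate;
import org.orekit.time.TimeScalesFactory;
import org.orekit.utils.Constants;
import org.orekit.utils.IERSConventions;

public class EllipsoidIntersectionSketch {
    public static void main(String[] args) throws Exception {
        // assumes Orekit data (UTC-TAI history, EOP files ...) has been configured beforehand
        Frame itrf = FramesFactory.getITRF(IERSConventions.IERS_2010, true);
        OneAxisEllipsoid ellipsoid =
                new OneAxisEllipsoid(Constants.WGS84_EARTH_EQUATORIAL_RADIUS,
                                     Constants.WGS84_EARTH_FLATTENING,
                                     itrf);

        // pixel line-of-sight already expressed in Earth frame (illustrative values):
        // spacecraft position and a unit direction pointing roughly towards the ground
        Vector3D position = new Vector3D(7178.0e3, 0.0, 0.0);
        Vector3D los      = new Vector3D(-1.0, 0.1, 0.2).normalize();
        Line lineOfSight  = new Line(position, position.add(los), 1.0e-10);

        AbsoluteDate date = new AbsoluteDate(2014, 6, 1, 12, 0, 0.0, TimeScalesFactory.getUTC());
        GeodeticPoint gp  = ellipsoid.getIntersectionPoint(lineOfSight, position, itrf, date);
        System.out.format("lat %.6f rad, lon %.6f rad, alt %.3f m%n",
                          gp.getLatitude(), gp.getLongitude(), gp.getAltitude());
    }
}
```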
The line-of-sight is a straight line in Cartesian 3D space, but once converted to geodetic coordinates (latitude, longitude,
altitude), it is not a straight line anymore. Assuming the line-of-sight remains a straight line in this space and can be defined by
computing only two points introduces yet another error, which is transverse to the line-of-sight and reaches its maximum value roughly
at the middle point. This assumption is a flat-body assumption, i.e. it corresponds to locally approximating the ellipsoid by its tangent
plane. The error is the sagitta due to the bending of the real line-of-sight in the geodetic space.
![flat-body interpolation error](../images/flat-body-interpolation-error.png)
This error depends on the diving angle of the line-of-sight with respect to local vertical. It is zero for a diving angle of 90 degrees
(i.e. a pure nadir line-of-sight) and increases as the diving angle decreases. It can reach tremendous values (hundreds of meters or
more) for almost tangential observations. The previous figure shows the amplitude of the error as a function of both the diving angle and
the azimuth of the observation. It was computed for a ground point at intermediate latitude (about 54 degrees North, in Poland), and using
the two base points for the line-of-sight segment at 8000 meters altitude and -400 meters altitude.
The Rugged library fully computes the shape of the line-of-sight throughout its traversal of the Digital Elevation Model when the Duvenhage
algorithm (see next section) is used for DEM intersection. For testing purposes, another version of the algorithm assuming the flat-body
hypothesis is also available (i.e. it considers the line-of-sight to be a straight line in latitude/longitude/altitude coordinates), but its use
is not recommended. The computing overhead of properly using the ellipsoid shape is of the order of 3%, so ignoring it for the sake of
performance is not worthwhile.
Errors compensation summary
---------------------------
The following table summarizes the error compensations performed in the Rugged library which are not present in some other geometry correction libraries:
origin                                                      | amplitude  | location                 | comment
:------------------------------------------------------------|:-----------|:-------------------------|:--------------------------------------------------------------
δΔε and δΔψ corrections for precession and nutation models  | > 3m       | horizontal shift         | up-to-date precession and nutation models are also available
quaternion interpolation                                    | negligible | line-of-sight direction  | the effect is important for step sizes above 1 minute
instrument position                                         | 1.5m       | along track              | coupled with attitude
light time correction                                       | 1.2m       | East-West                | pixel-dependent
aberration of light                                         | 20m        | along track              | depends on spacecraft velocity
flat-body                                                   | 0.8m       | across line-of-sight     | error increases a lot for large fields of view
src/site/resources/images/aberration-of-light-correction.png (26 KiB)
src/site/resources/images/flat-body-interpolation-error.png (137 KiB)
src/site/resources/images/light-time-correction.png (24.1 KiB)
src/site/resources/images/precession-nutation-error.png (7.15 KiB)
</bannerRight>
<body>
  <menu name="Rugged">
    <item name="Overview"            href="/index.html"          />
    <item name="Getting the sources" href="/sources.html"        />
    <item name="Building"            href="/building.html"       />
    <item name="FAQ"                 href="/faq.html"            />
    <item name="License"             href="/license.html"        />
    <item name="Downloads"           href="/downloads.html"      />
    <item name="Changes"             href="/changes-report.html" />
    <item name="Contact"             href="/contact.html"        />
  </menu>
  <menu name="Design">
    <item name="Overview"                href="/design/overview.html"                />
    <item name="Technical choices"       href="/design/technical-choices.html"       />
    <item name="Digital Elevation Model" href="/design/digital-elevation-model.html" />
    <item name="Preliminary design"      href="/design/preliminary-design.html"      />
  </menu>
  <menu name="Development">
    <item name="Contributing" href="/contributing.html"  />
    <item name="Guidelines"   href="/guidelines.html"    />
    <item name="Javadoc"      href="/apidocs/index.html" />
  </menu>
  <menu ref="reports"/>
</body>