PDS_VERSION_ID = PDS3
DATA_SET_ID = "JNO-J/SW-JAD-2-UNCALIBRATED-V1.0"
RECORD_TYPE = STREAM
OBJECT = TEXT
PUBLICATION_DATE = 2024-12-30
NOTE = "ERRATA.TXT reflects any known deficiencies
with files contained on volume JNOJAD_2002."
END_OBJECT = TEXT
END
ERRATA
This file contains a list of all deficiencies and irregularities that are
known to exist at the time of the publication date above. Any errors
detected with this volume will be published on subsequent volumes in this
volume set.
------------------------------------------------------------------------------
[Added 2020-Aug-27]
An incorrect ISSUES value was provided in the Version 01 files for 3 records
(one record in each of 3 files) of the ion files in this dataset:
File: JAD_L20_HLC_ION_LOG_2017037_V01.DAT
The record with TIMESTAMP_WHOLE of 539633323 (UTC = 2017-037T06:05:28.801)
should have ISSUES = 0 (not ISSUES = 1024).
All records in that file should be ISSUES = 0.
File: JAD_L20_HLC_ION_LOG_2017056_V01.DAT
The record with TIMESTAMP_WHOLE of 541255391 (UTC = 2017-056T00:39:54.506)
should have ISSUES = 0 (not ISSUES = 1024).
All records in that file should be ISSUES = 0.
File: JAD_L20_HLC_ION_TOF_2017188_V01.DAT
The record with TIMESTAMP_WHOLE of 552721029 (UTC = 2017-188T17:33:38.093)
should have ISSUES = 0 (not ISSUES = 1024).
All records in that file should be ISSUES = 0.
These will be corrected in any Version 02 (or greater) files.
The above 3 TIMESTAMP_WHOLE values also have the same incorrect ISSUES flag
in the Level 3 Version 02 files in PDS dataset JNO-J/SW-JAD-3-CALIBRATED-V1.0.
[The level 3 version 03 (or higher) files have the corrected ISSUES flag for
these TIMESTAMP_WHOLE values, but see the ERRATA.TXT file of that dataset.]
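A user working with already-downloaded Version 01 records could apply this
correction along the following lines. This is a minimal sketch: it assumes
the records have already been parsed into dict-like objects (the hypothetical
`corrected_issues` helper and that record representation are not part of the
dataset; the binary record layout is defined by the dataset's labels).

```python
# The three TIMESTAMP_WHOLE values listed above, whose ISSUES flag should
# be 0 rather than 1024 in the Version 01 files.
BAD_TIMESTAMPS = {539633323, 541255391, 552721029}

def corrected_issues(record):
    """Return the ISSUES value a record should carry per this erratum.

    `record` is assumed to be a dict-like, already-parsed JADE record
    with TIMESTAMP_WHOLE and ISSUES fields.
    """
    if record["TIMESTAMP_WHOLE"] in BAD_TIMESTAMPS and record["ISSUES"] == 1024:
        return 0
    return record["ISSUES"]
```

Records not listed in this erratum are passed through unchanged.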
------------------------------------------------------------------------------
NOTE
2023-10-12: Instrument name corrected to be
INSTRUMENT_NAME = JOVIAN AURORAL DISTRIBUTIONS EXPERIMENT
instead of
INSTRUMENT_NAME = JOVIAN AURORAL PLASMA DISTRIBUTIONS EXPERIMENT
(i.e., without PLASMA).
------------------------------------------------------------------------------
[Added 2024-Oct-27]
A bug was found in the decompression code used to go from Level 1 to Level 2
data. However, it only seems to affect High Rate Science data, and perhaps 6
to 12 1-second records per perijove on average (i.e. perhaps a few JADE-E, a
few TOF and a few ion species or ion Logical records). The effect is to insert
4 (typically) random numbers from 0-255 into the uncompressed DATA object
while it is still 1-byte integers, before it is scaled up to two-byte or
four-byte integers.
For instance, if a High Rate Science JADE-E record is affected, the Level 2
DATA object of size 64 x 51 will have 4 of its 3264 elements set to random
numbers, and those 4 elements will be adjacent to each other.
As of August 30th, 2024, the JADE production chain has been improved to make
this bug easier to identify, but the bug is still present and will not be
fixed. All High Rate Science data on the PDS before this date may be affected.
For files processed after this date, instead of four (typically) random
numbers, the four numbers are all 255, which will show up in the Level 2 data
(be it of type two-bytes or four-bytes) as a very large number and hopefully
stand out to the user. Given that it affects so little data, there are no
plans to fix it, and we also do not know how to fix it (or whether a fix
would require a flight software update).
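A user could screen post-August-2024 files for this signature along the
following lines. This is a minimal sketch using a hypothetical
`find_suspect_runs` helper: it scans a flattened DATA object for a run of
four or more adjacent elements equal to a sentinel value. The actual value
to search for in Level 2 data depends on how the 1-byte 255 is scaled up,
which this sketch leaves as an input; and since 255 is also a valid data
value, a hit is only a hint of the bug, not proof.

```python
def find_suspect_runs(data, sentinel, run_length=4):
    """Return start indices of runs of `run_length` or more adjacent
    elements equal to `sentinel` in a flattened DATA object."""
    runs = []
    i = 0
    n = len(data)
    while i < n:
        if data[i] == sentinel:
            j = i
            # Walk to the end of this run of sentinel values.
            while j < n and data[j] == sentinel:
                j += 1
            if j - i >= run_length:
                runs.append(i)
            i = j
        else:
            i += 1
    return runs
```

For an affected JADE-E record, the input would be the 64 x 51 DATA object
flattened to a list of 3264 values.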
The root cause is either in the onboard JADE data compression code, or in the
ground decompression C code used by the JADE team, which was provided to the
team by the spacecraft manufacturer. On the ground, the JADE production code
creates an array of the appropriate size and passes it to the decompression
code, which fills in each element of the array as it decompresses, then
passes the array back. However, with this bug, the decompression code skips
some indices (silently, without error), so whatever 1-byte number happened to
be in that memory location when the array was created remains. The values
that happen to be at those memory locations when the array is created are
essentially random.
Our code improvement is to make the 1-byte array as before, but then set every
element of the array to 255 before passing it to the decompression routine.
The decompression code may still skip some of the indices, but now we know
those skipped indices have a value of 255. But do remember that an observed
value of 255 in the data is valid and possible too.
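The improvement described above can be sketched as follows. This simulates
the behavior in Python rather than the team's actual C code: the
`safe_decompress` wrapper and the `decompress_into` callable are hypothetical
stand-ins for the real production and decompression routines.

```python
SENTINEL = 255  # also a valid data value, so a hint of the bug, not proof

def safe_decompress(compressed, out_size, decompress_into):
    """Decompress into a pre-filled buffer so skipped indices are visible.

    Previously the output buffer was left uninitialized (malloc-like
    behavior), so indices the decompressor silently skipped held random
    memory contents.  Pre-filling every element with the sentinel makes
    any skipped index identifiable afterwards.
    """
    out = bytearray([SENTINEL] * out_size)
    decompress_into(compressed, out)  # may silently skip some indices
    return out
```

With this change, a skipped index reads back as 255 instead of whatever
happened to be in memory.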
This bug was identified when we processed data to an L2 file and then
reprocessed it: the two outputs were occasionally different for a few bytes
(out of a file often several gigabytes in size) when this bug hit, due to
those skipped elements getting different random numbers each time.
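That detection method amounts to a byte-by-byte comparison of two outputs
produced from the same input. A minimal sketch, with a hypothetical
`differing_offsets` helper (the team's actual comparison tooling is not
described here):

```python
def differing_offsets(run_a, run_b):
    """Return the byte offsets at which two equal-length outputs of the
    same processing run disagree (non-empty when the bug hit differently
    in each run)."""
    return [i for i, (x, y) in enumerate(zip(run_a, run_b)) if x != y]
```

An empty result for every pair of reprocessing runs would indicate
deterministic output; a handful of adjacent differing offsets matches the
signature of this bug.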
Since this bug is in the Level 2 data, it is also present in the Level 3 and
Level 5 datasets (each of which is derived from the lower-level data).