Accessing and verifying ADL "BLOB" files

Data formats, HDF5, XML profiles, etc.

Post by rayg »

I have an alpha-test version of an "adl_blob.py" module for mapping ADL data "blobs" (binary large objects), along with a patch that lets the glance dataset verification tool make use of it. The module handles naturally-aligned blobs and can map both big-endian and little-endian files transparently, revealing them as pythonic data structures. To do this, it parses a narrow subset of the ADL XML corresponding to a given BLOB file and builds the structure definitions using standard "ctypes" and the numpy ctypes extension.
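
For a quick taste, mapping a blob and pulling an array out of it looks like this (paths as in the module's self-test, which uses the ATMS FSDR test data from the ADL distribution):

Code: Select all

import numpy as np
import adl_blob

# map a naturally-aligned blob using its ADL XML description
fsdr = adl_blob.map('ATMS_FSDR.xml', 'ATMS-FSDR')
# fields become attributes; array fields convert cleanly to numpy
data = np.array(fsdr.correctedRayleighsTemperature[:])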

It is by no means feature-complete or rigorously tested, but some may find it useful. Future revisions will cover more of the XML schema, and may allow limited transcoding to/from NetCDF to give other tools (e.g. MATLAB/IDL) a read/write bridge. Please note that this is not officially supported or endorsed by JPSS and should be considered user-community contributed utility scripts.
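
For the curious, here is a naive sketch of what such a NetCDF bridge could look like using netcdf4-python. This is illustrative only, not the eventual interface; the per-field dimension names are made up on the fly rather than taken from the XML, and scalar fields are skipped:

Code: Select all

import numpy as np
import netCDF4
import adl_blob

def blob_to_nc(xml_path, blob_path, nc_path):
    # map the blob, then copy each array field into a NetCDF variable
    data = adl_blob.map(xml_path, blob_path)
    nc = netCDF4.Dataset(nc_path, 'w')
    for name, _ctype in data._fields_:
        arr = np.array(getattr(data, name))
        if arr.ndim == 0:
            continue  # scalar fields skipped in this sketch
        dims = []
        for axis, size in enumerate(arr.shape):
            dim_name = '%s_dim%d' % (name, axis)
            nc.createDimension(dim_name, size)
            dims.append(dim_name)
        var = nc.createVariable(name, arr.dtype, tuple(dims))
        var[:] = arr
    nc.close()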

Using the glance adapter, I've started doing verification runs on some of the test output, statistically comparing the big-endian truth output provided with the ADL against test data generated locally. An example of elementary 'stats' output for one variable comparison is below. More elaborate HTML and graphical test reports can also be generated; I'll try to post some mapped-product and multi-channel examples soon.

adl_blob.py needs Python 2.5-2.7 with numpy, and the (very limited) self-test uses matplotlib to display some ATMS-FSDR data. My patched version of glance can be found at http://www.ssec.wisc.edu/~rayg/dist/alpha/ for the moment. You'll need to have the adl_blob.py module in your Python library path (PYTHONPATH). glance requires a number of modules, including pycdf, pyhdf, h5py, matplotlib, and mako, and may be most easily handled with a copy of the "holyhandgrenade" python distribution we use. Ubuntu/Debian prebuilt packages may exist for the majority of the dependencies as well.

As I get more pieces of this working and tested, I'll post a more complete package and wiki some more documentation.

Code: Select all

# link in the test outputs and XML from the ADL 2.0 install
ln -s /path/to/ADL2.0/ADL/data/output
ln -s /path/to/ADL2.0/ADL/xml
# tag the big-endian truth outputs with a .BE suffix
for fn in output/viirsCalTruthOutputs/*SDR; do ln -s $fn $(basename $fn).BE; done
# link the locally generated SDR blobs and their XML descriptions
ln -s output/VIIRS*SDR .
ln -s xml/VIIRS*SDR.xml .
# compare truth vs. local output with the patched glance
FORMAT=jpss_adl glance stats VIIRS-M9-SDR.BE VIIRS-M9-SDR >stats.txt
'stats' report excerpt --

Code: Select all

--------------------------------
Bt_refl

Finite Data Statistics
  a_finite_count: 2457600
  a_finite_fraction: 1.0
  b_finite_count: 2457600
  b_finite_fraction: 1.0
  common_finite_count: 2457600
  common_finite_fraction: 1.0
  finite_in_only_one_count: 0
  finite_in_only_one_fraction: 0.0

General Statistics
  a_missing_value: nan
  b_missing_value: nan
  epsilon: 0.0
  epsilon_percent: None
  max_a: 65535
  max_b: 65535
  min_a: 58
  min_b: 58
  num_data_points: 2457600
  shape: (768, 3200)
  spatially_invalid_pts_ignored_in_a: 0
  spatially_invalid_pts_ignored_in_b: 0

Missing Value Statistics
  a_missing_count: 0
  a_missing_fraction: 0.0
  b_missing_count: 0
  b_missing_fraction: 0.0
  common_missing_count: 0
  common_missing_fraction: 0.0

NaN Statistics
  a_nan_count: 0
  a_nan_fraction: 0.0
  b_nan_count: 0
  b_nan_fraction: 0.0
  common_nan_count: 0
  common_nan_fraction: 0.0

Numerical Comparison Statistics
  correlation: 1.0
  diff_outside_epsilon_count: 344
  diff_outside_epsilon_fraction: 0.000139973958333
  max_diff: 1
  mean_diff: 0.000139973958333
  median_diff: 0.0
  mismatch_points_count: 344
  mismatch_points_fraction: 0.000139973958333
  perfect_match_count: 2457256
  perfect_match_fraction: 0.999860026042
  r-squared correlation: 1.0
  rms_diff: 0.0
  std_diff: 0.0118302310047
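
For orientation, the key mismatch numbers above come from simple elementwise comparisons. Here is a minimal numpy sketch of rough equivalents (not glance's actual implementation), where a and b are the truth and locally generated arrays:

Code: Select all

import numpy as np

def mismatch_stats(a, b, epsilon=0.0):
    # rough equivalents of a few 'Numerical Comparison Statistics' fields
    diff = b.astype(np.float64) - a.astype(np.float64)
    outside = np.abs(diff) > epsilon
    return {'diff_outside_epsilon_count': int(outside.sum()),
            'diff_outside_epsilon_fraction': outside.mean(),
            'max_diff': np.abs(diff).max(),
            'mean_diff': diff.mean(),
            'perfect_match_fraction': (diff == 0).mean()}

The full adl_blob.py (r83) source follows: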

Code: Select all


#!/usr/bin/env python
# encoding: utf-8
"""
adl_blob.py
Copyright 2011, University of Wisconsin Regents.
Licensed under GNU Public License (GPL) v3. See http://www.gnu.org/licenses/gpl-3.0-standalone.html

Parse ADL-generated XML files describing data structures
	XML validation is not required
Create python numpy+ctypes representations of one or more data structures from the XML parse tree
	Properly handle natural packed BLOBs in native endianness
Allow read-write access to BLOB files as Pythonic data structures, including numpy multidimensional arrays where appropriate with the following preliminary interface:
	map( adl-xml-pathname, blob-pathname, optional-writable-flag, optional-byteorder-flag ) => data structure
	create( adl-xml-pathname, optional-blob-pathname, optional-byteorder-flag ) => data structure
Transcode ADL blobs conforming to a given XML spec and a natural-packed BLOB in native-endian format to NetCDF files.
	Effectively adlxml + blob => netcdf3 or netcdf4, netcdf4 or netcdf3 + adlxml => blob
	This will be a ‘naive’ transcoding with little additional metadata other than the version of the library used to transcode, and minimal provenance information identifying the BLOB and XML data used.
Be usable both as a library and as a standalone program requiring the python runtime
	Python 2.6 or newer, 64-bit compiled on Linux or Darwin (OS X)
	numpy 1.3 or newer
	netcdf4-python 0.9.3 or newer when used for NetCDF transcoding
FUTURE functionality
	Allow transcoding of BLOBs between endiannesses.
	Allow direct access of non-native endian files for read-only access
	Allow direct access of non-native endian files for read-write access
	Allow alternate packing (non-natural) to be specified for transcoding input (but not output).
	Allow alternate packing (non-natural) to be specified for read-only access
	Mark-up BLOB files with provenance metadata as filesystem extended attributes (would require python xattr module) for bookkeeping purposes.
"""
__author__ = 'R.K.Garcia <rayg@ssec.wisc.edu>'
__version__ = '$Id: adl_blob.py 83 2011-03-14 20:31:28Z rayg $'
__docformat__ = 'Epytext'

import os,sys,logging
import xml.etree.ElementTree as ET
import ctypes as c
import numpy as np
import numpy.ctypeslib as npc
import mmap
from pprint import pformat

LOG = logging.getLogger(__name__)

# use different ctypes base classes to handle endianness
BIG_ENDIAN = c.BigEndianStructure
LITTLE_ENDIAN = c.LittleEndianStructure
NATIVE_ENDIAN = c.Structure

# dictionary of types that aren't covered by numpy
# #include <iostream>
# using namespace std;
# int main()
# {
#     bool a[4];
#     cout << int(sizeof(bool)) << endl;
#     cout << int(sizeof(a) / 4) << endl;
# }
TYPEMAP = { 'bool' : c.c_byte,
            'UInt8': c.c_uint8,  # bug in numpy 1.3 makes us need to do this manually
            'Int8' : c.c_int8 }


def ctype_from_str(typename):
    "return an appropriate ctypes-compatible type for a given ADL typename e.g. Float32"

    assert( type(typename)==str )
    # take advantage of numpy including data types matching spelling, except lowercase
    ctype = TYPEMAP.get(typename, None)
    if ctype is not None:
        return ctype
    ctor = vars(np).get(typename.lower())
    # FUTURE: do this without constructing a temporary object, it's kinda crufty
    ctype = type(npc.as_ctypes(ctor()))
    LOG.debug('%r found to be %r' % (typename, ctype))
    return ctype

    
def Dimension(node):
    "return name, width for a dimension node"
    def _(name, type=str):
        return type(node.find(name).text)
    name = _('Name')
    min_index = _('MinIndex',int)
    max_index = _('MaxIndex',int)
    if min_index!=max_index:
        LOG.warning('MinIndex != MaxIndex in Dimension')
    return name, max_index    

def Field(node, dims = None):
    "return a name, ctypes representation for a field xml node"
    assert(node.tag=='Field')
    def _(name, type=str):
        return type(node.find(name).text)
    name = _('Name')
    symbol = node.find('Symbol')
    if symbol is not None:
        LOG.debug('using %s as symbol instead of %s' % (symbol.text, name))
        name = symbol.text
    offset = _('FieldOffset', int)
    num_dims = _('NumberOfDimensions', int)
    dim_info = [Dimension(x) for x in node.getchildren() if x.tag=='Dimension']
    LOG.debug('dimension info: %r' % dim_info)
    ctype = _('DataType', ctype_from_str)
    num_data = _('NumberOfData', int)
    # fillvalue = _('InitialFill', data_type)
    if num_dims:
        from operator import mul        
        # compound each dimension using reduce
        LOG.debug('dimension reduction of %r' % dim_info)
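        # in ctypes, multiplying a type by n yields an n-element array type, so this
        # builds e.g. ((c_float * 3200) * 768) with the last XML dimension varying fastest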
        ctype = reduce(mul, reversed([x[1] for x in dim_info]), ctype)
        if dims is not None:
            dims.update(dict(dim_info))            
    # attrs = dict(offset = offset, fillvalue = fillvalue)
    return name, ctype

def ProductData(node, dims = None):
    """return a series of (name, ctypes representation) generated from a ProductData node 
    and optionally mark up dimension dictionary with dimension names and sizes
    """
    assert(node.tag=='ProductData')
    def _(name, type=str):
        return type(node.find(name).text)
    name = _('DataName')
    LOG.debug('processing ProductData %r' % name)
    field_type = _('ProductFieldType')
    # if field_type != 'Regular':
    #     LOG.warning('%s is %s and not a Regular field type' % (name, field_type))
    num_dims = _('NumberOfDimensions', int)
    num_fields = _('NumberOfFields', int)
    LOG.debug('%s has %d fields' % (name, num_fields))
    for child in node.getchildren():
        if child.tag != 'Field': 
            LOG.debug('skipping %s while looking for Fields' % child.tag)
            continue
        fname, fctype = Field(child, dims=dims)
        LOG.debug('field name is %s' % fname)
        yield fname, fctype

def NPOESSDataProduct(xml, base_class=NATIVE_ENDIAN, context=None):
    "return (name, ctypes representation) of a NPOESSDataProduct node"
    assert(xml.tag=='NPOESSDataProduct')

    dimensions = dict()
    fields = list()

    for node in xml.getchildren():
        if node.tag != 'ProductData': 
            LOG.debug('skipping %s' % node.tag)
            continue
        fields += list(ProductData(node, dimensions))

    name = xml.find('ProductName').text
    LOG.info('%s has %d fields' % (name,len(fields)))
    LOG.info( ', '.join(x[0] for x in fields) )
    LOG.debug(pformat(list(enumerate(fields))))
    LOG.debug('dimensions:\n%s' % pformat(dimensions))
    
    class _adl_struct_(base_class):
        _fields_ = fields
            
    return name, _adl_struct_


def from_file(xml_pathname, endian = NATIVE_ENDIAN, *product_names):
    """return name, ctypes structure definition for a given ADL product XML schema
    """
    xml = ET.fromstring(file(xml_pathname, 'rt').read())
    # FUTURE: multiple products in one file?
    # return [tuple(x) for x in NPOESSDataProduct(xml) if not product_names or (x[0] in product_names)]
    return NPOESSDataProduct(xml, endian)

def map(xml_pathname, blob_pathname, writable=False, endian=NATIVE_ENDIAN):
    """map a BLOB conforming to an XML specification
    e.g. map( 'ATMS_FSDR.xml', 'ATMS-FSDR' )
    optionally, map as read-write (writable=True)
    byte_order (not yet implemented) allows mapping of BIG_ENDIAN, LITTLE_ENDIAN, NATIVE_ENDIAN
    default byte_order is NATIVE_ENDIAN
    """
    # map file as readwrite and mmap access as write-through, or 
    # readonly and copy-on-write for read-only mode
    fflags, aflags = ('rb+', mmap.ACCESS_WRITE) if writable else ('rb', mmap.ACCESS_COPY)
    # open the file and map it as a buffer
    fp = file(blob_pathname, fflags)
    mm = mmap.mmap(fp.fileno(), 0, access=aflags)
    # parse the XML
    name, struct = from_file(xml_pathname, endian)
    # use ctypes to map a read-only copy or a read-write direct mmap
    data = struct.from_buffer(mm)
    # hang the mmap and its file from the data structure to hold reference count
    data._file = fp
    data._mmap = mm
    # add the sync operation; FUTURE consider doing this with multiple inheritance in from_file
    def sync(fp=fp if writable else None):
        if fp:
            fp.flush()
    data.sync = sync
    return data

def create(xml_pathname, blob_pathname = None, endian = NATIVE_ENDIAN):
    raise NotImplementedError

def test1():
    fsdr = map('ATMS_FSDR.xml', 'ATMS-FSDR')    
    data = np.array(fsdr.correctedRayleighsTemperature[:])
    from pylab import plot, title, grid, show, figure
    figure()
    plot( data[0].transpose() )
    title('corrected rayleigh temperature ATMS-FSDR ADL 2.0 test scanline 0')
    grid()
    fsdr.sync()
    return fsdr

def test2():
    fsdr = map('ATMS_FSDR.xml', 'ATMS-FSDR.BE', endian = BIG_ENDIAN)    
    data = np.array(fsdr.correctedRayleighsTemperature[:])
    from pylab import plot, title, grid, show, figure
    figure()
    plot( data[0].transpose() )
    title('corrected rayleigh temperature ATMS-FSDR.BE ADL 2.0 test scanline 0')
    grid()
    fsdr.sync()
    return fsdr


def main():
    import optparse
    usage = """
%prog [options] adl-xml-filename

"""
    parser = optparse.OptionParser(usage)
    parser.add_option('-t', '--test', dest="self_test",
                    action="store_true", default=False, help="run self-tests")  
    parser.add_option('-v', '--verbose', dest='verbosity', action="count", default=0,
                    help='each occurrence increases verbosity 1 level through ERROR-WARNING-INFO-DEBUG')
    (options, args) = parser.parse_args()

    # make options a globally accessible structure, e.g. OPTS.
    global OPTS
    OPTS = options

    levels = [logging.ERROR, logging.WARN, logging.INFO, logging.DEBUG]
    logging.basicConfig(level = levels[options.verbosity])

    if options.self_test:
        test1()
        test2()
        from pylab import show
        show()
        return 2


    if not args: 
        parser.error( 'incorrect arguments, try -h or --help.' )
        return 9

    # split multiple filenames into a list if provided
    xml_filenames = args[0].split('+')
    
    # build a dictionary of data structures
    strux = dict(from_file(xml_filename) for xml_filename in xml_filenames)
    LOG.debug(repr(strux))
    LOG.info( 'found structures: %s' % ', '.join(strux.keys()) )
    
    # FIXME: transcode to/from netcdf

    return 0

if __name__=='__main__':
    sys.exit(main())        

Re: Accessing and verifying ADL "BLOB" files

Post by geoffc »

I have used Ray Garcia's adl_blob.py to visualise the BLOB files for the VIIRS Cloud Mask IP (cloud
mask and cloud phase) and the VIIRS Aerosol Optical Thickness IP (aerosol optical thickness and
Angstrom exponent).

The BLOB files used are present in the ADL Virtual Appliance. The reference files are in the
"/home/adl_user/ADL/data/output/*TruthOutputs" directories, and assuming you have run the
"/home/adl_user/support/runDemos" script, there will be new BLOB files in the "/home/adl_user/ADL/data/output"
directory.

In our case the xml file, and the reference and new BLOB files for the VIIRS Cloud Mask IP are...

Code: Select all

/home/adl_user/ADL/xml/VIIRS_CM_IP.xml
/home/adl_user/ADL/data/output/cloudMaskTruthOutputs/VIIRS-CM-IP
/home/adl_user/ADL/data/output/VIIRS-CM-IP
and the xml file, and the reference and new BLOB files for the VIIRS Aerosol Optical Thickness IP are...

Code: Select all

/home/adl_user/ADL/xml/VIIRS_AEROS_OPT_THICK_IP.xml
/home/adl_user/ADL/data/output/aerosolTruthOutputs/VIIRS-Aeros-Opt-Thick-IP
/home/adl_user/ADL/data/output/VIIRS-Aeros-Opt-Thick-IP
With the above files, now we can ingest some data with adl_blob.py, and plot the results. Below is some
python code to do so (assumes adl_blob.py is in the current directory)...

Code: Select all

import numpy as np
from numpy import ma as ma

from matplotlib import pyplot as ppl

import adl_blob as adl

ADL_PATH = "/home/adl_user/ADL"

### VIIRS Cloud Mask 

VCMIP_xmlFile = ADL_PATH+"/"+"xml/VIIRS_CM_IP.xml"
VCMIP_OrigOutputBlobFile = ADL_PATH+"/"+"data/output/cloudMaskTruthOutputs/VIIRS-CM-IP"
VCMIP_OutputBlobFile = ADL_PATH+"/"+"data/output/VIIRS-CM-IP"

vcmOrigObj = adl.map(VCMIP_xmlFile,VCMIP_OrigOutputBlobFile, endian=adl.LITTLE_ENDIAN)
vcmObj = adl.map(VCMIP_xmlFile,VCMIP_OutputBlobFile, endian=adl.LITTLE_ENDIAN)

vcmOrig = np.reshape(vcmOrigObj.vcm0[:],(768,3200))
vcm = np.reshape(vcmObj.vcm0[:],(768,3200))

# Bit mask and shift to get the Cloud Mask
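# 12 == 0b1100 selects bits 2-3 of the first cloud mask byte; >> 2 gives values 0..3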
vcmOrig = np.bitwise_and(vcmOrig, 12) >> 2
vcm = np.bitwise_and(vcm, 12) >> 2

fig = ppl.figure()
ax1 = ppl.subplot(211)
im1 = ax1.imshow(ma.masked_less(vcmOrig,0),interpolation='nearest',vmin=0,vmax=3)
txt1 = ax1.set_title('ADL VIIRS Cloud Mask (original)')
ppl.setp(ax1.get_xticklabels(), visible=False)
ppl.setp(ax1.get_yticklabels(), visible=False)

ax2 = ppl.subplot(212)
im2 = ax2.imshow(ma.masked_less(vcm,0),interpolation='nearest',vmin=0,vmax=3)
txt2 = ax2.set_title('ADL VIIRS Cloud Mask')
ppl.setp(ax2.get_xticklabels(), visible=False)
ppl.setp(ax2.get_yticklabels(), visible=False)

cb = ppl.colorbar(im2,orientation='horizontal')
ppl.show()
fig.savefig("VIIRS-CM-IP_compare.png",dpi=300)
ppl.close('all')

### VIIRS Cloud Phase

phaseOrig = np.reshape(vcmOrigObj.vcm5[:],(768,3200))
phase = np.reshape(vcmObj.vcm5[:],(768,3200))

# Bit mask and shift to get the Cloud Phase
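# 7 == 0b0111 selects bits 0-2; the >> 0 shift is a no-op, kept for symmetry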
phaseOrig = np.bitwise_and(phaseOrig, 7) >> 0
phase = np.bitwise_and(phase, 7) >> 0

fig = ppl.figure()
ax1 = ppl.subplot(211)
im1 = ax1.imshow(ma.masked_less(phaseOrig,0),interpolation='nearest',vmin=0,vmax=7)
txt1 = ax1.set_title('ADL VIIRS Cloud Phase (original)')
ppl.setp(ax1.get_xticklabels(), visible=False)
ppl.setp(ax1.get_yticklabels(), visible=False)

ax2 = ppl.subplot(212)
im2 = ax2.imshow(ma.masked_less(phase,0),interpolation='nearest',vmin=0,vmax=7)
txt2 = ax2.set_title('ADL VIIRS Cloud Phase')
ppl.setp(ax2.get_xticklabels(), visible=False)
ppl.setp(ax2.get_yticklabels(), visible=False)

cb = ppl.colorbar(im2,orientation='horizontal')
ppl.show()
fig.savefig("VIIRS-CM-IP_phase_compare.png",dpi=300)
ppl.close('all')

### VIIRS Aerosol Optical Thickness

AOTIP_xmlFile = ADL_PATH+"/"+"xml/VIIRS_AEROS_OPT_THICK_IP.xml"
AOTIP_OrigOutputBlobFile = ADL_PATH+"/"+"data/output/aerosolTruthOutputs/VIIRS-Aeros-Opt-Thick-IP"
AOTIP_OutputBlobFile = ADL_PATH+"/"+"data/output/VIIRS-Aeros-Opt-Thick-IP"

aotOrigObj = adl.map(AOTIP_xmlFile,AOTIP_OrigOutputBlobFile, endian=adl.BIG_ENDIAN)
aotObj = adl.map(AOTIP_xmlFile,AOTIP_OutputBlobFile, endian=adl.LITTLE_ENDIAN)

aotOrig = np.reshape(aotOrigObj.faot550[:],(768,3200))
aot = np.reshape(aotObj.faot550[:],(768,3200))

fig = ppl.figure()
ax1 = ppl.subplot(211)
im1 = ax1.imshow(ma.masked_less(aotOrig,0.),interpolation='nearest',vmin=0.,vmax=1.)
txt1 = ax1.set_title('ADL VIIRS Aerosol Optical Thickness (original)')
ppl.setp(ax1.get_xticklabels(), visible=False)
ppl.setp(ax1.get_yticklabels(), visible=False)

ax2 = ppl.subplot(212)
im2 = ax2.imshow(ma.masked_less(aot,0.),interpolation='nearest',vmin=0.,vmax=1.)
txt2 = ax2.set_title('ADL VIIRS Aerosol Optical Thickness')
ppl.setp(ax2.get_xticklabels(), visible=False)
ppl.setp(ax2.get_yticklabels(), visible=False)

cb = ppl.colorbar(im2,orientation='horizontal')
ppl.show()
fig.savefig("VIIRS-Aeros-Opt-Thick-IP_compare.png",dpi=300)
ppl.close('all')


### VIIRS Aerosol Angstrom Exponent

angexpOrig = np.reshape(aotOrigObj.angexp[:],(768,3200))
angexp = np.reshape(aotObj.angexp[:],(768,3200))

fig = ppl.figure()
ax1 = ppl.subplot(211)
im1 = ax1.imshow(ma.masked_less(angexpOrig,-800.),interpolation='nearest')
txt1 = ax1.set_title('ADL VIIRS Aerosol Angstrom Exponent (original)')
ppl.setp(ax1.get_xticklabels(), visible=False)
ppl.setp(ax1.get_yticklabels(), visible=False)

ax2 = ppl.subplot(212)
im2 = ax2.imshow(ma.masked_less(angexp,-800.),interpolation='nearest')
txt2 = ax2.set_title('ADL VIIRS Aerosol Angstrom Exponent')
ppl.setp(ax2.get_xticklabels(), visible=False)
ppl.setp(ax2.get_yticklabels(), visible=False)

cb = ppl.colorbar(im2,orientation='horizontal')
ppl.show()
fig.savefig("VIIRS-Aeros-Opt-Thick-IP_angExp_compare.png",dpi=300)
ppl.close('all')
[Attachment VIIRS-CM-IP_compare.png: VIIRS Cloud Mask from IDPS and ADL BLOB files.]
[Attachment VIIRS-CM-IP_phase_compare.png: VIIRS Cloud Phase from IDPS and ADL BLOB files.]
[Attachment VIIRS-Aeros-Opt-Thick-IP_compare.png: VIIRS Aerosol Optical Thickness from IDPS and ADL BLOB files.]

Re: Accessing and verifying ADL "BLOB" files

Post by rayg »

Here's a freshened adl_blob r89.

Bugfix: Ignores empty Symbol entries (contributed by R.Hudson@UW)
Feature: Naive export of MATLAB .mat files from big- or little-endian naturally aligned BLOB files.
Feature: XML identifiers containing spaces are renamed with "_" so they don't require getattr()-style access; a WARNING is logged when this happens.

Example of exporting MAT files:

Code: Select all

# prerequisites - scipy, numpy, python2.5 or newer; use HHG on Linux or fink HHG on Mac
export PATH=/opt/hhg-200911-x86_64/bin:$PATH

# run script, assume input BLOB is big-endian (-B), convert matlab using XML as guidebook
# -v -vv -vvv options are optional debug output to screen
python adl_blob.py -vv -B -M inst_sdr_NPP001212028121.mat \
    /path/to/data/xml/INST_SDR.xml \
    /path/to/data/data/inst/NPP001212028121/INST-SDR 

# test parse the XML if the above fails
python adl_blob.py -vvv /path/to/data/xml/INST_SDR.xml
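To sanity-check an export, the resulting .mat file can be read back with scipy; variable names match the blob field names, and the exporter's provenance keys start with an underscore:

Code: Select all

from scipy.io import loadmat

mat = loadmat('inst_sdr_NPP001212028121.mat')
# hide loadmat's __header__ etc. and the exporter's '_adl_*' provenance keys
print(sorted(k for k in mat if not k.startswith('_')))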
There's also a -h option for a help listing, as well as some inline documentation within the module (and a couple of basic unit test patterns). I plan to expand that to be more in line with Python standard module documentation.

This tool should work for SDRs, EDRs, and Verified RDRs, but not "straight" RDRs, which are mostly CCSDS.

Code: Select all

#!/usr/bin/env python
# encoding: utf-8
"""
adl_blob.py
Copyright 2011, University of Wisconsin Regents.
Licensed under GNU Public License (GPL) v3. See http://www.gnu.org/licenses/gpl-3.0-standalone.html

Parse ADL-generated XML files describing data structures
	XML validation is not required
Create python numpy+ctypes representations of one or more data structures from the XML parse tree
	Properly handle natural packed BLOBs in native endianness
Allow read-write access to BLOB files as Pythonic data structures, including numpy multidimensional arrays where appropriate with the following preliminary interface:
	map( adl-xml-pathname, blob-pathname, optional-writable-flag, optional-byteorder-flag ) => data structure
	create( adl-xml-pathname, optional-blob-pathname, optional-byteorder-flag ) => data structure
Transcode ADL blobs conforming to a given XML spec and a natural-packed BLOB in native-endian format to NetCDF files.
	Effectively adlxml + blob => netcdf3 or netcdf4, netcdf4 or netcdf3 + adlxml => blob
	This will be a ‘naive’ transcoding with little additional metadata other than the version of the library used to transcode, and minimal provenance information identifying the BLOB and XML data used.
Be usable both as a library and as a standalone program requiring the python runtime
	Python 2.6 or newer, 64-bit compiled on Linux or Darwin (OS X)
	numpy 1.3 or newer
	netcdf4-python 0.9.3 or newer when used for NetCDF transcoding
FUTURE functionality
	Allow transcoding of BLOBs between endiannesses.
	Allow direct access of non-native endian files for read-only access
	Allow direct access of non-native endian files for read-write access
	Allow alternate packing (non-natural) to be specified for transcoding input (but not output).
	Allow alternate packing (non-natural) to be specified for read-only access
	Mark-up BLOB files with provenance metadata as filesystem extended attributes (would require python xattr module) for bookkeeping purposes.
"""
__author__ = 'R.K.Garcia <rayg@ssec.wisc.edu>'
__version__ = '$Id: adl_blob.py 89 2011-03-23 22:35:49Z rayg $'
__docformat__ = 'Epytext'

import os,sys,logging
import xml.etree.ElementTree as ET
import ctypes as c
import numpy as np
import numpy.ctypeslib as npc
import mmap
from pprint import pformat

LOG = logging.getLogger(__name__)

# use different ctypes base classes to handle endianness
BIG_ENDIAN = c.BigEndianStructure
LITTLE_ENDIAN = c.LittleEndianStructure
NATIVE_ENDIAN = c.Structure

# dictionary of types that aren't covered by numpy
# #include <iostream>
# using namespace std;
# int main()
# {
#     bool a[4];
#     cout << int(sizeof(bool)) << endl;
#     cout << int(sizeof(a) / 4) << endl;
# }
TYPEMAP = { 'bool' : c.c_byte,
            'UInt8': c.c_uint8,  # bug in numpy 1.3 makes us need to do this manually
            'Int8' : c.c_int8 }


def ctype_from_str(typename):
    "return an appropriate ctypes-compatible type for a given ADL typename e.g. Float32"

    assert( type(typename)==str )
    # take advantage of numpy including data types matching spelling, except lowercase
    ctype = TYPEMAP.get(typename, None)
    if ctype is not None:
        return ctype
    ctor = vars(np).get(typename.lower())
    # FUTURE: do this without constructing a temporary object, it's kinda crufty
    ctype = type(npc.as_ctypes(ctor()))
    LOG.debug('%r found to be %r' % (typename, ctype))
    return ctype

def sanitize_field_name(name):
    "correct DataNames to be proper symbols"
    if ' ' in name:
        new_name = name.replace(' ', '_')
        LOG.warning("renaming %s as %s" % (name, new_name))
        return new_name
    return name
    
def Dimension(node):
    "return name, width for a dimension node"
    def _(name, type=str):
        return type(node.find(name).text)
    name = _('Name')
    min_index = _('MinIndex',int)
    max_index = _('MaxIndex',int)
    if min_index!=max_index:
        LOG.warning('MinIndex != MaxIndex in Dimension')
    return name, max_index    

def Field(node, dims = None):
    "return a name, ctypes representation for a field xml node"
    assert(node.tag=='Field')
    def _(name, type=str):
        return type(node.find(name).text)
    name = sanitize_field_name(_('Name'))
    symbol = node.find('Symbol')
    if (symbol is not None) and (symbol.text is not None):
        LOG.debug('using %s as symbol instead of %s' % (symbol.text, name))
        name = symbol.text
    offset = _('FieldOffset', int)
    num_dims = _('NumberOfDimensions', int)
    dim_info = [Dimension(x) for x in node.getchildren() if x.tag=='Dimension']
    LOG.debug('dimension info: %r' % dim_info)
    ctype = _('DataType', ctype_from_str)
    num_data = _('NumberOfData', int)
    # fillvalue = _('InitialFill', data_type)
    if num_dims:
        from operator import mul        
        # compound each dimension using reduce
        LOG.debug('dimension reduction of %r' % dim_info)
        ctype = reduce(mul, reversed([x[1] for x in dim_info]), ctype)
        if dims is not None:
            dims.update(dict(dim_info))            
    # attrs = dict(offset = offset, fillvalue = fillvalue)
    return name, ctype

def ProductData(node, dims = None):
    """return a series of (name, ctypes representation) generated from a ProductData node 
    and optionally mark up dimension dictionary with dimension names and sizes
    """
    assert(node.tag=='ProductData')
    def _(name, type=str):
        return type(node.find(name).text)
    name = _('DataName')
    LOG.debug('processing ProductData %r' % name)
    field_type = _('ProductFieldType')
    # if field_type != 'Regular':
    #     LOG.warning('%s is %s and not a Regular field type' % (name, field_type))
    num_dims = _('NumberOfDimensions', int)
    num_fields = _('NumberOfFields', int)
    LOG.debug('%s has %d fields' % (name, num_fields))
    for child in node.getchildren():
        if child.tag != 'Field': 
            LOG.debug('skipping %s while looking for Fields' % child.tag)
            continue
        fname, fctype = Field(child, dims=dims)
        LOG.debug('field name is %s' % fname)
        yield fname, fctype

def NPOESSDataProduct(xml, base_class=NATIVE_ENDIAN, context=None):
    "return (name, ctypes representation) of a NPOESSDataProduct node"
    assert(xml.tag=='NPOESSDataProduct')

    dimensions = dict()
    fields = list()

    for node in xml.getchildren():
        if node.tag != 'ProductData': 
            LOG.debug('skipping %s' % node.tag)
            continue
        fields += list(ProductData(node, dimensions))

    name = xml.find('ProductName').text
    LOG.info('%s has %d fields' % (name,len(fields)))
    LOG.info( ', '.join(x[0] for x in fields) )
    LOG.debug(pformat(list(enumerate(fields))))
    LOG.debug('dimensions:\n%s' % pformat(dimensions))
    
    class _adl_struct_(base_class):
        _fields_ = fields
            
    return name, _adl_struct_


def from_file(xml_pathname, endian = NATIVE_ENDIAN, *product_names):
    """return name, ctypes structure definition for a given ADL product XML schema
    """
    xml = ET.fromstring(file(xml_pathname, 'rt').read())
    # FUTURE: multiple products in one file?
    # return [tuple(x) for x in NPOESSDataProduct(xml) if not product_names or (x[0] in product_names)]
    return NPOESSDataProduct(xml, endian)

def map(xml_pathname, blob_pathname, writable=False, endian=NATIVE_ENDIAN):
    """map a BLOB conforming to an XML specification
    e.g. map( 'ATMS_FSDR.xml', 'ATMS-FSDR' )
    optionally, map as read-write (writable=True)
    byte_order (not yet implemented) allows mapping of BIG_ENDIAN, LITTLE_ENDIAN, NATIVE_ENDIAN
    default byte_order is NATIVE_ENDIAN
    """
    # map file as readwrite and mmap access as write-through, or 
    # readonly and copy-on-write for read-only mode
    fflags, aflags = ('rb+', mmap.ACCESS_WRITE) if writable else ('rb', mmap.ACCESS_COPY)
    # open the file and map it as a buffer
    fp = file(blob_pathname, fflags)
    mm = mmap.mmap(fp.fileno(), 0, access=aflags)
    # parse the XML
    name, struct = from_file(xml_pathname, endian)
    # use ctypes to map a read-only copy or a read-write direct mmap
    data = struct.from_buffer(mm)
    # hang the mmap and its file from the data structure to hold reference count
    data._file = fp
    data._mmap = mm
    # add the sync operation; FUTURE consider doing this with multiple inheritance in from_file
    def sync(fp=fp if writable else None):
        if fp:
            fp.flush()
    data.sync = sync
    return data

def create(xml_pathname, blob_pathname = None, endian = NATIVE_ENDIAN):
    raise NotImplementedError

def test1():
    fsdr = map('ATMS_FSDR.xml', 'ATMS-FSDR')    
    data = np.array(fsdr.correctedRayleighsTemperature[:])
    from pylab import plot, title, grid, show, figure
    figure()
    plot( data[0].transpose() )
    title('corrected rayleigh temperature ATMS-FSDR ADL 2.0 test scanline 0')
    grid()
    fsdr.sync()
    return fsdr

def test2():
    fsdr = map('ATMS_FSDR.xml', 'ATMS-FSDR.BE', endian = BIG_ENDIAN)    
    data = np.array(fsdr.correctedRayleighsTemperature[:])
    from pylab import plot, title, grid, show, figure
    figure()
    plot( data[0].transpose() )
    title('corrected rayleigh temperature ATMS-FSDR.BE ADL 2.0 test scanline 0')
    grid()
    fsdr.sync()
    return fsdr


def transcode_to_matlab(mat_filename, xml_filename, blob_filename, endian = NATIVE_ENDIAN):
    import scipy.io.matlab as mio
    data = map(xml_filename, blob_filename, endian = endian)
    fields = set( x[0] for x in data._fields_ )
    LOG.info('grabbing fields %r' % fields)
    mdict = dict()
    for key in fields:
        mdict[key] = np.array(getattr(data, key))
    mdict['_adl_xml_filename'] = xml_filename
    mdict['_blob_filename'] = blob_filename
    mdict['_adl_blob_version'] = '$Id: adl_blob.py 89 2011-03-23 22:35:49Z rayg $'
    LOG.debug(pformat(mdict))
    mio.savemat( mat_filename, mdict )


def main():
    import optparse
    usage = """
-- check XML parsing for one or more files
%prog [options] xmlfilename {xmlfilename...}

-- transcode blob to MAT file
%prog [options] -M matfilename xmlfilename blobfilename
"""
    parser = optparse.OptionParser(usage)
    parser.add_option('-t', '--test', dest="self_test",
                    action="store_true", default=False, help="run self-tests")
    parser.add_option('-B', '--big', dest="big_endian",
                    action="store_true", default=False, help="BLOB is big-endian format (default native)")
    parser.add_option('-L', '--little', dest="little_endian",
                    action="store_true", default=False, help="BLOB is little-endian format (default native)")
    parser.add_option('-M', '--matlab', dest="matlab",
                    action="store_true", default=False, help="convert to matlab: syntax MATfilename XMLfilename BLOBfilename")
    parser.add_option('-v', '--verbose', dest='verbosity', action="count", default=0,
                    help='each occurrence increases verbosity 1 level through ERROR-WARNING-INFO-DEBUG')
    (options, args) = parser.parse_args()

    # make options a globally accessible structure, e.g. OPTS.
    global OPTS
    OPTS = options

    levels = [logging.ERROR, logging.WARN, logging.INFO, logging.DEBUG]
    logging.basicConfig(level = levels[options.verbosity])

    if options.self_test:
        test1()
        test2()
        from pylab import show
        show()
        return 2

    if not args: 
        parser.error( 'incorrect arguments, try -h or --help.' )
        return 9

    if options.matlab:    
        endian = BIG_ENDIAN if options.big_endian else LITTLE_ENDIAN if options.little_endian else NATIVE_ENDIAN 
        mat, xml, blob = args
        transcode_to_matlab(mat, xml, blob, endian=endian)
    else:
        # split multiple filenames into a list if provided
        xml_filenames = args[0].split('+')
        
        # build a dictionary of data structures
        strux = dict(from_file(xml_filename) for xml_filename in xml_filenames)
        LOG.debug(repr(strux))
        LOG.info( 'found structures: %s' % ', '.join(strux.keys()) )
    
    return 0

if __name__=='__main__':
    sys.exit(main())        

Re: Accessing and verifying ADL "BLOB" files

Post by rayg »

I've added a glance starter write-up at https://groups.ssec.wisc.edu/groups/goe ... at-support. It has a basic run-through of producing text "stats" reports from glance using adl_blob.

Minor documentation fixes only in r90, which is linked in the above article.

Re: Accessing and verifying ADL "BLOB" files

Post by rayg »

In working toward a VM-embedded glance and data analysis tools, I've updated the write-up above to include a minimal Fedora Core "yum" install of enough packages to run glance. I've also updated the linked glance alpha version to 20110411-adl-rkg and eliminated some more dependencies that are currently unnecessary for ADL usage.

Re: Accessing and verifying ADL "BLOB" files

Post by geoffc »

I have knocked up a command-line front end to Ray Garcia's adl_blob.py, called adlBlob2hdf5.py. This utility can be used to:
  • list the datasets contained in an ADL BLOB file
  • write one, or all, datasets in an ADL BLOB file to an HDF5 file
adl_blob.py and adlBlob2hdf5.py can be obtained from...

https://groups.ssec.wisc.edu/users/rayg ... brary/view
https://groups.ssec.wisc.edu/users/geof ... f5.py/view

Dependencies for adl_blob.py include Python 2.5-2.7 with numpy, and the (very limited) self-test uses matplotlib to display some ATMS-FSDR data. You'll need to have adl_blob.py in your Python library path (PYTHONPATH). adlBlob2hdf5.py will further require PyTables (http://www.pytables.org/moin) to handle HDF5 output, and can be executed in the following ways...

Code: Select all

$> python adlBlob2hdf5.py -x XMLFILE -b BLOBFILE -l

# ...or, after making the script executable:

$> chmod 755 adlBlob2hdf5.py

$> ./adlBlob2hdf5.py -x XMLFILE -b BLOBFILE -l
As an example, we will list the contents of the BLOB file VIIRS-M4-FSDR, which is described by the XML file VIIRS_M4_FSDR.xml...

Code: Select all

$> python adlBlob2hdf5.py -x VIIRS_M4_FSDR.xml -b VIIRS-M4-FSDR -l

        Dataset        radiance is of type c_float_be_Array_3200_Array_768
        Dataset           Btemp is of type c_float_be_Array_3200_Array_768
        Dataset       scan_mode is of type c_ubyte_Array_48
        Dataset            mode is of type c_ubyte
        Dataset        padByte1 is of type c_ubyte
        Dataset        padByte2 is of type c_ubyte
        Dataset        padByte3 is of type c_ubyte
        Dataset       act_scans is of type c_int_be
        Dataset numOfMissingPkts is of type c_int_be_Array_48
        Dataset numOfBadCheckSum is of type c_int_be_Array_48
        Dataset numOfDiscardedPkts is of type c_int_be_Array_48
        Dataset QF1_VIIRSMBANDSDR is of type c_ubyte_Array_3200_Array_768
        Dataset    QF2_SCAN_SDR is of type c_ubyte_Array_48
        Dataset    QF3_SCAN_RDR is of type c_ubyte_Array_48
        Dataset    QF4_SCAN_SDR is of type c_ubyte_Array_768
        Dataset QF5_GRAN_BADDETECTOR is of type c_ubyte_Array_16
In the above output, each line follows the pattern "Dataset <dataset> is of type <ctype>", where <dataset> is the
name used to access the contents of that dataset.

Say that we now want to write out the contents of the dataset "radiance" to the file "VIIRS-M4-FSDR_radiance.h5"...

Code: Select all

$> python adlBlob2hdf5.py -x VIIRS_M4_FSDR.xml -b VIIRS-M4-FSDR -d radiance -e little -o VIIRS-M4-FSDR_radiance.h5

$> h5dump -n VIIRS-M4-FSDR_radiance.h5                                                                                                       
HDF5 "VIIRS-M4-FSDR_radiance.h5" {
FILE_CONTENTS {
 group      /
 dataset    /radiance
 }
}

$> h5dump -d /radiance VIIRS-M4-FSDR_radiance.h5
HDF5 "VIIRS-M4-FSDR_radiance.h5" {
DATASET "/radiance" {
   DATATYPE  H5T_IEEE_F32LE
   DATASPACE  SIMPLE { ( 768, 3200 ) / ( 768, 3200 ) }
   DATA {
   (0,0): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,8): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,16): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,24): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,32): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,40): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,

...

   (0,976): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,984): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,992): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,1000): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (0,1008): 61.655, 63.5404, 64.2615, 64.1941, 64.9728, 66.3473, 68.2211,
   (0,1015): 68.2689, 68.6146, 68.9891, 68.9889, 70.6887, 68.4122, 68.5081,
   (0,1022): 64.5, 60.5749, 59.9684, 60.0549, 57.879, 56.9254, 56.9156,
   (0,1029): 57.1466, 58.0902, 58.9566, 58.9276, 58.109, 57.8007, 57.7717,
   (0,1036): 57.3671, 58.4358, 59.8796, 59.8506, 60.1681, 59.4845, 59.4363,
   (0,1043): 56.6437, 54.9673, 54.63, 54.5527, 54.7838, 55.2655, 55.2846,
   (0,1050): 53.9161, 50.6959, 50.7344, 50.715, 52.3733, 53.3468, 53.3852,
   (0,1057): 52.1705, 51.6015, 53.0571, 53.0184, 52.8062, 52.1312, 52.1793,

...

  (767,3156): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3163): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3170): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3177): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3184): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3191): -999.7, -999.7, -999.7, -999.7, -999.7, -999.7, -999.7,
   (767,3198): -999.7, -999.7
   }
   ATTRIBUTE "CLASS" {
      DATATYPE  H5T_STRING {
            STRSIZE 6;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "ARRAY"
      }
   }
   ATTRIBUTE "FLAVOR" {
      DATATYPE  H5T_STRING {
            STRSIZE 5;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "numpy"
      }
   }
   ATTRIBUTE "TITLE" {
      DATATYPE  H5T_STRING {
            STRSIZE 1;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): ""
      }
   }
   ATTRIBUTE "VERSION" {
      DATATYPE  H5T_STRING {
            STRSIZE 4;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "2.3"
      }
   }
}
}
If instead we wish to output the entire contents of the BLOB file...

Code: Select all

$> python adlBlob2hdf5.py -x $XML -b $BLOB -d all -e little -o VIIRS-M4-FSDR_all.h5

$> h5dump -n VIIRS-M4-FSDR_all.h5 
HDF5 "VIIRS-M4-FSDR_all.h5" {
FILE_CONTENTS {
 group      /
 dataset    /Btemp
 dataset    /QF1_VIIRSMBANDSDR
 dataset    /QF2_SCAN_SDR
 dataset    /QF3_SCAN_RDR
 dataset    /QF4_SCAN_SDR
 dataset    /QF5_GRAN_BADDETECTOR
 dataset    /act_scans
 dataset    /mode
 dataset    /numOfBadCheckSum
 dataset    /numOfDiscardedPkts
 dataset    /numOfMissingPkts
 dataset    /padByte1
 dataset    /padByte2
 dataset    /padByte3
 dataset    /radiance
 dataset    /scan_mode
 }
}
Examination of the HDF5 file attributes will reveal the BLOB and XML file names, as well as other things...

Code: Select all

$> h5dump -A VIIRS-M4-FSDR_all.h5

HDF5 "VIIRS-M4-FSDR_all.h5" {
GROUP "/" {
   ATTRIBUTE "TITLE" {
      DATATYPE  H5T_STRING {
            STRSIZE 42;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "HDF5 file derived from ADL v2.0 BLOB file."
      }
   }
   ATTRIBUTE "_adlBlob2hdf5_version" {
      DATATYPE  H5T_STRING {
            STRSIZE 54;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "$Id: adlBlob2hdf5.py 000 2011-05-20 hh:mm:ssZ geoffc $"
      }
   }
   ATTRIBUTE "_adl_blob_version" {
      DATATYPE  H5T_STRING {
            STRSIZE 48;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "$Id: adl_blob.py 107 2011-04-20 16:31:11Z rayg $"
      }
   }
   ATTRIBUTE "_adl_xml_filename" {
      DATATYPE  H5T_STRING {
            STRSIZE 17;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "VIIRS_M4_FSDR.xml"
      }
   }
   ATTRIBUTE "_blob_filename" {
      DATATYPE  H5T_STRING {
            STRSIZE 13;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
      DATASPACE  SCALAR
      DATA {
      (0): "VIIRS-M4-FSDR"
      }
   }
...
}
}
For a list of the available options for adlBlob2hdf5.py, use the "-h" switch...

Code: Select all

$> python adlBlob2hdf5.py -h
Usage: adlBlob2hdf5.py [mandatory args] [options]

Provides a front end to adl_blob.py, primarily to transcode
ADL blob files to HDF5 and/or flat binary files.

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -v, --verbose         each occurrence increases verbosity 1 level from
                        ERROR: -v=WARNING -vv=INFO -vvv=DEBUG

  Mandatory Arguments:
    At a minimum these arguments must be specified

    -x XMLFILE, --xml_file=XMLFILE
                        The full path of the ADL XML file describing the BLOB
                        contents
    -b BLOBFILE, --blob_file=BLOBFILE
                        The full path of the ADL BLOB file

  Extra Options:
    These options may be used to customize behaviour of this program.

    -l, --list          List the datasets contained in this blob file, and
                        exit.
    -d DATASET, --dataset=DATASET
                        The name of the dataset to transcode. [default: none]
    -f TRANSFORMAT, --format=TRANSFORMAT
                        The possible transcoding formats. Possible values
                        are... ['hdf5', 'binary']
    -e ENDIAN, --endian=ENDIAN
                        The endianess of the input blob file. Possible values
                        are... ['big', 'little']
    -o OUTPUTFILE, --output_file=OUTPUTFILE
                        The full path of the transcoded output file. [default:
                        out.h5]
Make sure to provide feedback if you find this utility useful, or if you have any suggestions. It is likely that HDF5 transcoding
functionality will eventually be provided in adl_blob.py itself (it already makes MATLAB *.mat files).
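
In the meantime, if you already have h5py installed (one of glance's dependencies), a one-off dump along the same lines takes only a few lines of Python. This is a hypothetical sketch, not the actual adlBlob2hdf5.py implementation:

Code: Select all

import numpy as np
import h5py
import adl_blob

def blob_to_h5(xml_path, blob_path, h5_path, endian=adl_blob.NATIVE_ENDIAN):
    # map the blob using its XML description, then copy each field out
    data = adl_blob.map(xml_path, blob_path, endian=endian)
    h5 = h5py.File(h5_path, 'w')
    h5.attrs['_adl_xml_filename'] = xml_path
    h5.attrs['_blob_filename'] = blob_path
    for name, _ctype in data._fields_:
        h5[name] = np.array(getattr(data, name))
    h5.close()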
Geoff P. Cureton, PhD
Cooperative Institute for Meteorological Satellite Studies
University of Wisconsin-Madison
1225 W. Dayton St.
Madison WI 53706, USA
Phone: +1 608 890 0706