Joe Sirott | 6 Jan 20:56 2009

Netcdf data sharing Web application available

I'm pleased to announce a beta version of a new Web application
(unnamed as yet!) that allows you to upload and store your CF- or
COARDS-compliant netCDF-3 datasets on our servers. You can also add data
from OPeNDAP servers, or from a netCDF file that is available via
HTTP. Once your datasets are uploaded, you can share your data with
the public (if desired), and authorized users will be able to
visualize or download subsets of your data using our DChart
(http://dapper.pmel.noaa.gov) user interface.

For more info, take a look at the sign-up page:
   http://dapper.pmel.noaa.gov/dchart/admin/newAccount.html

or take a look at the FAQ:
   http://dapper.pmel.noaa.gov/dchart/admin/help/

Examples of datasets that are known to work are the CF and COARDS sample
datasets available from Unidata (with the exception of RUC.nc) at:
   http://www.unidata.ucar.edu/software/netcdf/examples/files.html

Feel free to contact me if you encounter any problems.

Cheers, Joe Sirott

(Continue reading)

John Storrs | 16 Jan 13:18 2009

NetCDF4 for Fusion Data

We are evaluating HDF5 and NetCDF4 as archive file formats for fusion research
data. We would like to use the same format for experimental (shot-based) data
and modelling code data, to get the benefits of standardisation (one API to
learn, one interface module to write for visualization tool access, etc.). A
number of fusion modelling codes use NetCDF. NetCDF for experimental data
will be new though, so far as I know. I've found some problems in shot data
archiving tests that need to be resolved before it can be considered further.

MAST (Mega-Amp Spherical Tokamak) shot data (from magnetic sensors etc) is
mostly digitized in the range 1kHz to 2MHz. MAST shots are currently less
than 1 second in duration, but 5-second shots are foreseen (some other
experiments have much longer shot times). We use up to 96-channel digitizers.
Acquisition start time and sample period are common to a digitizer, but the
number of samples per channel sometimes varies - that is, some channels may
be sampled for a longer time than others. Channel naming is hierarchical.

There are two NetCDF-related issues here. The first is how to store the
channel data, the second how to store time, both efficiently of course. We
want per-variable compression. We don't want uninitialised value padding in
variable data, even if it would be efficiently compressed. In the normal case
where acquisition start time and sample period are common to all channels in a
dataset, we would prefer to define just one dimension rather than many, even
if channel data array sizes vary.

NetCDF4 tests with a single fixed dimension, writing varying amounts of data
to uncompressed channel variables, show that the variables are written to
the archive file with padding, even in no_fill mode. The file size is
independent of the amount of data written.
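
In outline, the test was of this shape (a sketch with illustrative names
and sizes, not the actual test code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <netcdf.h>

  #define CHK(e) do { int s = (e); if (s != NC_NOERR) { \
      fprintf(stderr, "%s\n", nc_strerror(s)); exit(1); } } while (0)

  int main(void)
  {
      int ncid, dimid, varid, old_fill;
      size_t start = 0, count = 1000;             /* write 1000 of 100000 */
      float *buf = calloc(count, sizeof(float));

      CHK(nc_create("nofill_test.nc", NC_NETCDF4, &ncid));
      CHK(nc_set_fill(ncid, NC_NOFILL, &old_fill));
      CHK(nc_def_dim(ncid, "t", 100000, &dimid)); /* fixed, maximum size */
      CHK(nc_def_var(ncid, "ch1", NC_FLOAT, 1, &dimid, &varid));
      CHK(nc_enddef(ncid));
      CHK(nc_put_vara_float(ncid, varid, &start, &count, buf));
      CHK(nc_close(ncid));  /* as reported: file size reflects the full
                               dimension, not the 1000 samples written */
      free(buf);
      return 0;
  }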

NetCDF4 tests with a single unlimited dimension work for very small dimension
(Continue reading)

Jeff Whitaker | 16 Jan 16:48 2009

Re: NetCDF4 for Fusion Data

John Storrs wrote:
<snip>
(Continue reading)

Ed Hartnett | 16 Jan 17:27 2009

Re: NetCDF4 for Fusion Data

John Storrs <john.storrs <at> ukaea.org.uk> writes:

<snip>

I wonder if you could send the CDL of the test files you've come up
with (i.e. run ncdump -h on the files).
(Continue reading)

Mark V | 17 Jan 07:16 2009

Re: NetCDF4 for Fusion Data

On Sat, Jan 17, 2009 at 2:48 AM, Jeff Whitaker <jswhit <at> fastmail.fm> wrote:
> John Storrs wrote:

<snip>

>> Coming to storage of the time coordinate variable. If we actually store the
>> data, it will need to be in a double array to avoid loss of precision.
>> Alternatively we could define the variable as an integer with a double scale
>> and offset. Both of these sound inefficient. Traditionally we store this
>> type of data as a (sequence of) triple: start time, time increment, count.
>> Clearly we can do that within a convention, expanding it in reader code.
>> How should we handle this?
>>
>>
>
> The standard way is to create a time variable with units "<time
> increments> since <start time>".
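
For example, a minimal sketch of that convention in the C API (the units
string, rate and sizes are illustrative):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <netcdf.h>

  #define CHK(e) do { int s = (e); if (s != NC_NOERR) { \
      fprintf(stderr, "%s\n", nc_strerror(s)); exit(1); } } while (0)

  int main(void)
  {
      int ncid, dimid, timeid;
      size_t i, start = 0, count = 1000;
      const char *units = "microseconds since 2009-01-16 12:00:00";
      int *ticks = malloc(count * sizeof(int));

      for (i = 0; i < count; i++)
          ticks[i] = (int)(2 * i);      /* 2 us period = 500 kHz sampling */

      CHK(nc_create("time_test.nc", NC_NETCDF4, &ncid));
      CHK(nc_def_dim(ncid, "time", count, &dimid));
      CHK(nc_def_var(ncid, "time", NC_INT, 1, &dimid, &timeid));
      CHK(nc_put_att_text(ncid, timeid, "units", strlen(units), units));
      CHK(nc_enddef(ncid));
      CHK(nc_put_vara_int(ncid, timeid, &start, &count, ticks));
      CHK(nc_close(ncid));
      free(ticks);
      return 0;
  }

Reader code then reconstructs absolute times from the units attribute, so
per-sample doubles need not be stored.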

I'm relatively new to HDF5/netCDF...

If storage size is the primary concern, and read and write time don't
matter, is there any sense in adding dimensions to the dataset to store
time, say dimensions for hr, min, sec, msec?

Mark

> HTH,
>
> -Jeff
>> Your comments would be appreciated.
(Continue reading)

John Storrs | 19 Jan 16:10 2009

Re: NetCDF4 for Fusion Data

Ed, Jeff

Thanks for your prompt and useful comments.

I attach minimal C code showing the 'unlimited dimension' problem I reported.
It takes forever to run when the chunk definition call is disabled, but runs
fine when the call is enabled. I also attach a synthetic cdl file, generated
from a data acquisition configuration file, showing typical data
organisation. Our current archive file format (an old in-house system)
allows '/' as a character in flat variable names. Initially with NetCDF/HDF5
we'll keep the old names, mapping them onto the group hierarchy.
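
In outline (a sketch with illustrative names, sizes and chunk length, not
the attached code verbatim), the chunk definition call in question is
nc_def_var_chunking:

  #include <stdio.h>
  #include <stdlib.h>
  #include <netcdf.h>

  #define CHK(e) do { int s = (e); if (s != NC_NOERR) { \
      fprintf(stderr, "%s\n", nc_strerror(s)); exit(1); } } while (0)

  int main(void)
  {
      int ncid, dimid, varid;
      size_t chunklen = 65536;              /* samples per chunk */
      size_t start = 0, count = 2000000;    /* e.g. 1 s at 2 MHz */
      float *buf = calloc(count, sizeof(float));

      CHK(nc_create("unlim_test.nc", NC_NETCDF4, &ncid));
      CHK(nc_def_dim(ncid, "t", NC_UNLIMITED, &dimid));
      CHK(nc_def_var(ncid, "ch1", NC_FLOAT, 1, &dimid, &varid));
      /* disabling this call is what makes the write take forever */
      CHK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, &chunklen));
      CHK(nc_enddef(ncid));
      CHK(nc_put_vara_float(ncid, varid, &start, &count, buf));
      CHK(nc_close(ncid));
      free(buf);
      return 0;
  }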

Further comments below:
(Continue reading)

John Storrs | 20 Jan 17:43 2009

Re: NetCDF4 for Fusion Data

Hi Ed

I've uncovered a couple of problems:

(1) Variables in an explicitly defined group, using the same 'unlimited'
dimension but of different initialized sizes, result in an HDF error when
ncdump is run (without flags) on the generated NetCDF4 file. No problems are
reported when the file is generated (all netcdf call return values are
checked in the usual way). The dimension is defined in the root group. Try
writing data of size S to one variable, and size < S to the next. This error
isn't seen if the variables are all in the root group. In that case, ncdump
fills all variables to the maximum size, which I suppose is a feature and
not a bug. An ncdump flag to disable this behaviour would be useful.
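
A minimal repro sketch of (1) (illustrative names and sizes; 'magnetics' is
a made-up group name):

  #include <stdio.h>
  #include <stdlib.h>
  #include <netcdf.h>

  #define CHK(e) do { int s = (e); if (s != NC_NOERR) { \
      fprintf(stderr, "%s\n", nc_strerror(s)); exit(1); } } while (0)

  int main(void)
  {
      int ncid, grpid, dimid, v1, v2;
      size_t start = 0, n1 = 1000, n2 = 500;    /* size S and size < S */
      float *buf = calloc(1000, sizeof(float));

      CHK(nc_create("grp_test.nc", NC_NETCDF4, &ncid));
      CHK(nc_def_dim(ncid, "t", NC_UNLIMITED, &dimid)); /* root group */
      CHK(nc_def_grp(ncid, "magnetics", &grpid));
      CHK(nc_def_var(grpid, "ch1", NC_FLOAT, 1, &dimid, &v1));
      CHK(nc_def_var(grpid, "ch2", NC_FLOAT, 1, &dimid, &v2));
      CHK(nc_enddef(ncid));
      CHK(nc_put_vara_float(grpid, v1, &start, &n1, buf));
      CHK(nc_put_vara_float(grpid, v2, &start, &n2, buf));
      CHK(nc_close(ncid));   /* no errors here; then: ncdump grp_test.nc */
      free(buf);
      return 0;
  }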

(2) I'm doing tests in C and C++, using the C API in both cases. In some
function prototypes (e.g. nc_def_dim), char pointers are not defined as
const, causing g++ 4.3 to complain when a string constant argument is used.
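
For example (a stand-in prototype, not the real netCDF header):

  /* a char * parameter (no const) makes g++ warn when a string
     literal is passed */
  void def_dim(char *name) { (void)name; }

  int main(void)
  {
      def_dim("time");   /* g++ 4.3: deprecated conversion from string
                            constant to 'char*' */
      return 0;
  }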
(Continue reading)

John Caron | 20 Jan 19:03 2009

Re: NetCDF4 for Fusion Data

> Are there any GUI data viewers for NetCDF4 along the lines of hdfview which
> cope at least with groups and unlimited dimensions?

Try toolsUI, available at:

  http://www.unidata.ucar.edu/software/netcdf-java/

Make sure you use the 4.0 version. The webstart version is at:

  http://www.unidata.ucar.edu/software/netcdf-java/v4.0/webstart/netCDFtools.jnlp

Go to the "viewer" tab and open the file. Select and right-click on a
variable name for the context menu. Sorry, the documentation is nonexistent.

Let me know if you see any problems - I'm trying to finish it up.

(Continue reading)

Valliappa Lakshmanan | 20 Jan 21:32 2009

Re: How to read grib2 files using Netcdf Java API?

John,

I downloaded the latest netcdf-4.0.jar and tried it.  I still get the same problem.  Here's the stacktrace, in case the line numbers help isolate the issue:

    [java] 2009-01-20 14:26:46,867 [main] WARN  org.wdssii.ncingest.Grib2Ingest - java.io.IOException: Cant read /scratch2/lakshman/modelmining/WRFPRS_GrbF21.grib2: not a valid NetCDF file.
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:628)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:335)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:305)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:292)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:280)

The code that throws the exception is simply:
     NetcdfDataset.open("/scratch2/..../filename.grib2");

My classpath consists of the following jar files:

commons-logging.jar
jdom_1.1.jar
jnotify-0.91.jar
jstl.jar
junit.jar
log4j-1.2.14.jar
netcdf-4.0.jar
servlet-api.jar
slf4j-api-1.1.0.jar
slf4j-log4j12-1.1.0.jar
spring.jar
standard.jar

thanks
Lak


On Fri, Dec 12, 2008 at 6:06 PM, John Caron <caron <at> unidata.ucar.edu> wrote:
Hi Lak:

I'm not having any trouble opening it from either 4.0 or the latest 2.2. Can you recheck your classpath? Also, 4.0 has fewer bugs in it at this point, so you might try it.

Valliappa Lakshmanan wrote:
> John, thanks for looking at this. The file (74MB) is now at:
>
> http://cimms.ou.edu/~lakshman/data/WRFPRS_GrbF21.grib2
>
> Lak
>
>
> On Fri, Dec 12, 2008 at 1:28 PM, John Caron <caron <at> unidata.ucar.edu> wrote:
>
>     Hi Lak:
>
>     NetcdfDataset.open() should have worked.
>
>     Can you send us the file, and we'll check why it's not working.
>
>     Valliappa Lakshmanan wrote:
>     > I know there must be a very simple way to do this, but I can't seem to
>     > find any example or documentation.
>     >
>     > I want to read a Grib2 file using the Java Netcdf API and tried:
>     >
>     > File file = new File("/tmp/WRFPRS_GrbF21.grib2");  // a grib2 file
>     > NetcdfFile ncfile = NetcdfDataset.open(file.getAbsolutePath());
>     >
>     > but get an IOException stating: Cant read /tmp/WRFPRS_GrbF21.grib2:
>     > not a valid NetCDF file.
>     >
>     > I tried using ucar.dt.grid.GridDataSet.open() and got the same result.
>     >
>     > In case it was a classpath/jar problem, I added toolsUI.jar from
>     > netcdf 2.2.22 to my classpath (has grib.jar). Same result.
>     > What am I missing?
>     >
>     > thanks
>     > Lak

Valliappa Lakshmanan | 20 Jan 21:33 2009

Obsolete logging dependency in toolsUI.jar

I tried  toolsUI-4.0.jar because I was having problems getting earlier versions of the netcdf API to read Grib2 files.


However, the toolsUI version seems to have a logging problem, and when I run my code (which uses log4j via Apache Commons), I get the following exception:

  [java] Exception in thread "main" java.lang.IncompatibleClassChangeError: Class org.apache.log4j.Logger does not implement the requested interface org.slf4j.Logger
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:635)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:335)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:305)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:292)
     [java]     at ucar.nc2.NetcdfFile.open(NetcdfFile.java:280)

(I do have slf4j-api-1.1.0.jar and slf4j-log4j12-1.1.0.jar in my classpath.)

Searching the internet for the error (see
http://www.nabble.com/Trying-to-use-log4j-over-slf4j-td20249172.html)
indicates that it's because toolsUI.jar bundles the contents of nlog4.jar,
which is obsolete.

I can avoid the error by using netcdf-4.0.jar (which is what I did), but thought I'd report the problem.

Lak

