
ParVis: Parallel Analysis Tools and New Visualization Techniques for Ultra-Large Climate Data Sets

The large volume of data produced by today's climate models is overwhelming a decades-old visualization workflow. Traditional methods for visualizing climate output have not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature-richness of high-resolution simulations.

The bottleneck in producing climate model images is not always the drawing of the image on the screen but rather the calculations that must be performed on the climate model output before an image can be made. ParVis will speed up this post-processing, or analysis, phase of climate visualization by developing a new Parallel Gridded Analysis Library (ParGAL) that will vastly improve the speed of climate data analysis compared to current serial tools. ParGAL will build on existing software libraries and will permit calculations to be performed in parallel on many different numerical grids. We will interface ParGAL with the NCAR Command Language (NCL), a widely used tool for climate data analysis and visualization.
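To illustrate the kind of serial analysis ParNCL is meant to accelerate, here is a minimal sketch of an NCL script that computes a time mean and then a zonal mean of a temperature field. The file name, the variable name T, and the dimension ordering (time, lev, lat, lon) are assumptions for illustration only, not part of any ParVis release.

{{{
; zonal_mean.ncl -- illustrative serial NCL analysis of the kind ParNCL
; is designed to run in parallel. File and variable names are hypothetical.
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/contributed.ncl"

begin
  f = addfile("cam_history.nc", "r")   ; open a NetCDF history file
  t = f->T                             ; temperature, assumed (time,lev,lat,lon)
  tavg  = dim_avg_n_Wrap(t, 0)         ; average over time -> (lev,lat,lon)
  zonal = dim_avg_n_Wrap(tavg, 2)      ; average over lon  -> (lev,lat)
  printVarSummary(zonal)               ; inspect the result
end
}}}

The same script runs unchanged under ordinary NCL; the intent of ParNCL is to execute such scripts under MPI so that the reads and averages are distributed across processes (see the Release Notes for the actual invocation).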

We are also exploring the use of existing tools such as Swift and Pagoda to bring immediate speed-ups to the climate model post-processing workflow.

GET UPDATES: If you would like to receive announcements about ParVis software releases, beta-test opportunities, and other information, please subscribe to the parvis-ann mailing list.

GET HELP: If you need help using ParVis software or have a question, subscribe to the parvis-users mailing list.

1.0.0 RELEASE: ParNCL 1.0.0 will be released soon! See the Release Notes for more information about the 1.0.0 release, and the 1.0.0b2 Release Notes and 1.0.0b1 Release Notes for the earlier beta releases.

Questions about ParVis? Contact Robert Jacob: jacob at mcs.anl.gov


ParGAL/ParNCL development

We will be building ParGAL on top of the MOAB, Intrepid, and PNetCDF libraries. ParNCL is an application built using the ParGAL library.

Information for ParGAL developers

Information for ParNCL developers

Datasets

GridConventions

GridPictures

HeaderExamples

NCLExamples

Discretizations

Current workflow improvements

Meetings

  • KickoffMeeting -- September 29-30, 2010, at Argonne National Laboratory.
  • SecondMeeting -- April 11-12, 2011, at the National Center for Atmospheric Research.
  • ThirdMeeting -- September 21, 2011, in Washington, D.C., as part of the DOE ESM PIs' meeting.
  • FourthMeeting -- March 22-23, 2012, at Argonne National Laboratory.
  • FifthMeeting -- October 25-26, 2012, at the National Center for Atmospheric Research.

Hardware Model

The hardware model we envision for post-processing of high-resolution output is a "Data Analysis Center" (as defined in this report from a workshop on climate at the exascale): computational infrastructure optimized for and dedicated to the task of analyzing and visualizing large, complex data. One possible model is the Argonne Leadership Computing Facility (ALCF), which has several petabytes of disk storage, with no per-user quotas, attached to its IBM Blue Gene/P computer. The same disk is attached to a data analysis and visualization (DAV) cluster called "Eureka," consisting of 100 dual quad-core (2.0 GHz Xeon) servers, each with dual NVIDIA Quadro graphics cards. In this model, the original climate output remains at the ALCF, or wherever it is generated, and the DAV work is performed on multiple nodes of Eureka (or an equivalent resource) accessing the same physical disk. Eureka and its attached, shared disk are not yet a data analysis center only because today's climate DAV tools are unable to take advantage of more than one node of Eureka at a time.

For development, we will also be using Fusion, a Linux cluster supported by Argonne's Laboratory Computing Resource Center. Fusion has 320 nodes, each with dual quad-core 2.53 GHz Intel Nehalem processors (2,560 cores in total). Each node has 36 GB of memory, and 16 "fat" nodes have 64 GB. The interconnect is InfiniBand QDR with 4 GB/s per link.


Project members

  • PI: Robert Jacob
  • Co-PIs: Mark Hereld, Don Middleton, Rob Ross, Ian Foster
  • Site contacts: Pavel Bochev (SNL), Karen Schuchardt (PNNL), Don Middleton (NCAR), Kwan-Liu Ma (UC Davis)
  • Co-Is: Tim Tautges, Mike Wilde, Rob Latham, Jay Larson, Jayesh Krishna, Xiabing Xu, Sheri Mickelson (Argonne); David Brown, Richard Brownrigg, Mary Haley, Dennis Shea, Wei Huang, Mariana Vertenstein (NCAR); Kara Peterson, Mark Taylor (SNL); Jian Yin (PNNL).


Advisory Panel

To help this project maintain a focus on providing tools for climate scientists, we have formed an advisory panel consisting of users of data analysis and visualization software who are involved in each of the main areas within climate modeling.

  • Atmosphere: David Randall (CSU), Bill Gustafson (PNNL)
  • Ocean: Gokhan Danabasoglu (NCAR)
  • Sea-ice: Cecilia Bitz (Univ. Washington)
  • Land: David Lawrence (NCAR)

This research is sponsored by the Office of Biological and Environmental Research of the U.S. Department of Energy's Office of Science.


Background

The workshop report "Challenges in Climate Change Science and the Role of Computing at the Extreme Scale" (pages 25-33) provides background on the issues we're addressing.

The original call for proposals was LAB10-05: Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets.


Trac Starting Points

For a complete list of local wiki pages, see TitleIndex.

