Thursday, May 13, 2010

Parallel Processing for Imaging Applications

Conference EI111

Part of program track on Image Processing

This conference has an open call for papers.

Conference Chairs

John D. Owens, Univ. of California, Davis; I-Jong Lin, Hewlett-Packard Labs.; Yu-Jin Zhang, Tsinghua Univ. (China)

Program Committee

Yen-Kuang Chen, Intel Corp.; Ngai-Man Cheung, Stanford Univ.; Ajay Divakaran, Sarnoff Corp.; Mei Han, Google Inc.; Michael Houston, Advanced Micro Devices, Inc.; Wen-Mei Hwu, Univ. of Illinois at Urbana-Champaign; Christopher R. Johnson, The Univ. of Utah; Kurt W. Keutzer, Univ. of California, Berkeley; Ron Kimmel, Technion-Israel Institute of Technology (Israel); David P. Luebke, NVIDIA Corp.; Thomas Malzbender, Hewlett-Packard Labs.; Marilyn C. Wolf, Georgia Institute of Technology; Robert A. Ulichney, Hewlett-Packard Labs.

Due Dates:

  • Abstract (500 words) and Summary (200 words): 28 June 2010
  • Manuscript for On-site Proceedings: 15 November 2010

Papers submitted to this conference should fuse parallel implementation design principles, under physical constraints, with an understanding of imaging applications.

Imaging translates information into and out of the visual system using today's computation engines of choice: digital electronic systems. While scalar architectures are no longer scaling at historical rates, we see an explosion in the total number of connected computation devices and in the ways that hardware architectures and parallel programming environments make these devices work in concert and in parallel. From the computing cloud and map-reduce programming models, to multi-core CPUs, to the regular layout of graphics processing units (GPUs), to the increasing capacity of FPGA fabrics, a range of parallel architectures and programming environments is available to designers and researchers for solving computationally complex problems in efficient (and often real-time) imaging applications.
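As a concrete illustration of the kind of data-parallel imaging work the conference targets, the sketch below (not taken from the call itself; the image size, buffer names, and grayscale kernel are illustrative assumptions) assigns one CUDA thread per pixel to convert an RGB image to grayscale:

// Minimal sketch: one GPU thread per pixel, illustrating a low-dependence,
// per-pixel imaging operation. Image size and buffer contents are assumed.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Each thread reads one interleaved RGB pixel and writes one luma value.
__global__ void rgbToGray(const unsigned char* rgb, unsigned char* gray,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = (y * width + x) * 3;                  // interleaved RGB index
    float luma = 0.299f * rgb[i] + 0.587f * rgb[i + 1] + 0.114f * rgb[i + 2];
    gray[y * width + x] = (unsigned char)luma;
}

int main()
{
    const int width = 1920, height = 1080;        // assumed test size
    std::vector<unsigned char> hostRgb(width * height * 3, 128);
    std::vector<unsigned char> hostGray(width * height);

    unsigned char *devRgb, *devGray;
    cudaMalloc((void**)&devRgb, hostRgb.size());
    cudaMalloc((void**)&devGray, hostGray.size());
    cudaMemcpy(devRgb, hostRgb.data(), hostRgb.size(), cudaMemcpyHostToDevice);

    dim3 block(16, 16);                           // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    rgbToGray<<<grid, block>>>(devRgb, devGray, width, height);

    cudaMemcpy(hostGray.data(), devGray, hostGray.size(), cudaMemcpyDeviceToHost);
    printf("first gray pixel: %d\n", hostGray[0]);

    cudaFree(devRgb);
    cudaFree(devGray);
    return 0;
}

Because each output pixel depends only on its own input pixel, the kernel needs no inter-thread synchronization; this low data dependence is what makes per-pixel imaging operations such a natural fit for GPU hardware.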

Under physical constraints such as power, speed, and cost, the data throughput and degree of data dependence of imaging applications suggest a good match with parallel architectures; conversely, the choice of parallel architecture often reflects the structure of the imaging problem the application targets. This duality between imaging problem definition and parallelism implies that an efficient parallel implementation of an imaging algorithm offers insight into the mind's internal imaging computation, and also that measures of parallel efficiency can formalize the definition of many imaging problems. This conference explores this duality through new parallel designs for imaging, as well as through architectures and design tools that optimize parallelism in imaging algorithms.

We expect papers in this conference to combine principles and techniques for parallelism, such as:

  • cloud computing
  • GPU computing
  • high-level parallel programming constructs
  • design tools for extracting parallelism
  • efficient, scalable architectures
  • memory hierarchy design for parallel systems
  • metrics for parallelism and capacity planning
  • efficient algorithm mapping onto parallel hardware
  • algorithmic classification by efficient parallel architecture
  • algorithms for parallel scheduling and resource allocation

or other novel parallel programming techniques, constructs, abstractions, and implementations, with an understanding of imaging applications such as:

  • teleconferencing
  • medical imaging
  • remote sensing
  • image fusion
  • spectral imaging
  • volumetric imaging
  • compression
  • halftoning
  • color rendering
  • raster image processing
  • image analysis
  • computer vision
  • document analysis
  • forensics
  • resampling
  • computational optics
  • other novel imaging applications
