
May 24 - 27, 2005
University of Maryland, College Park Campus


Confirmed Invited Speakers:

Chris Allen. . . John Bell. . . Alan Gara. . . Roland Glowinski. . .
William Henshaw. . . Stephen Jardin. . . Steven Orszag. . . Gerhard Wellein

Chris Allen, University of Bristol, United Kingdom -- Dr. Allen is leader of the Computational Aerodynamics research group at the University of Bristol, UK. His research field is the development of computational methods for fluid flow, particularly unsteady aerodynamics. Applications include steady and unsteady flow about wings and rotor blades, unsteady incompressible flows, aerodynamic-structural coupling, unsteady potential flows, high-speed combustion modelling and unsteady vortex methods.

Parallel Simulation of Lifting Rotor Wakes

It is well known that numerical diffusion inherent in all CFD codes severely compromises the resolution of high flow gradients. This is a serious problem for rotor flow simulation, where capture of the vortical wake is essential. Hover simulation requires the capture of several turns of the tip vortices to compute accurate blade loads, resulting in the requirement for fine meshes away from the surface and a long numerical integration time for this wake to develop. Forward flight simulation also requires accurate capture of the vortical wake but, depending on the advance ratio, fewer turns need to be captured, as the wake is swept downstream. However, not only must the entire domain be solved, rather than the single blade for hover, but the wake is now unsteady, and so an unsteady solver must be used, which is not only more expensive than the steady solver used for hover, but can easily result in even higher numerical diffusion of the wake. Hence, it is extremely expensive to simulate these flows. This problem is considered here. Of particular interest is the development and capture of the unsteady vortical wake and, hence, a grid dependence study has been performed. To this end, the simulation code has been parallelised to allow the use of very fine meshes.
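The smearing the abstract refers to can be seen even in one dimension. The sketch below (not from the talk; grid size, CFL number and pulse shape are all invented for illustration) advects a sharp pulse with a first-order upwind scheme, whose built-in numerical diffusion decays a peak that exact advection would preserve -- the same mechanism that smears a captured tip vortex on a coarse mesh.

```python
import numpy as np

# Illustrative sketch: first-order upwind advection of a sharp pulse.
# All parameters are invented; this is not the speaker's solver.
nx = 200
c, dx = 1.0, 1.0 / 200
dt = 0.5 * dx / c                     # CFL number 0.5
u = np.zeros(nx)
u[40:60] = 1.0                        # sharp "vortex-like" profile
peak0 = u.max()

for _ in range(200):
    # upwind difference; its truncation error acts like added viscosity
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])

# Exact advection would move the pulse unchanged; upwind decays the peak
# while conserving the total "mass" of the profile.
print(peak0, u.max())
```

Higher-order schemes and locally refined meshes reduce this artificial diffusion, which is why the grid dependence study and parallelisation for fine meshes are central to the talk.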



John Bell, Lawrence Berkeley National Laboratory, USA -- Dr. Bell received his B.S. (1975) degree from the Massachusetts Institute of Technology and his M.S. (1977) and Ph.D. (1979) degrees from Cornell University, all in mathematics. He is currently a Senior Staff Scientist and Group Leader for the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory. Prior to joining LBNL, he held positions at the Lawrence Livermore National Laboratory, Exxon Production Research and the Naval Surface Weapons Center. Dr. Bell's research focuses on the development and analysis of numerical methods for partial differential equations arising in science and engineering. He has made contributions in the areas of finite difference methods, numerical methods for low Mach number flows, adaptive mesh refinement, interface tracking and parallel computing. He has also worked on the application of these numerical methods to problems from a broad range of fields including combustion, shock physics, seismology, flow in porous media and astrophysics.

Parallel Adaptive Low Mach Number Simulation of Turbulent Combustion

Numerical simulation of reacting flows with comprehensive kinetics is one of the most demanding areas of computational fluid dynamics. High-fidelity modeling requires accurate fluid mechanics, detailed models for multicomponent transport and detailed chemical mechanisms. Spatial and temporal requirements based on integration of the compressible reacting Navier Stokes equations on a uniform mesh lead to prohibitive estimates for the computational cost. In this talk we describe methodology to reduce these requirements based on combining a low Mach number formulation with a local adaptive mesh refinement algorithm. The low Mach number equations, which are structurally similar to the incompressible Navier-Stokes equations, enable a larger time step while adaptive refinement reduces the number of degrees of freedom needed to represent the solution. However, integration of the low Mach number equations on adaptive grids introduces additional complexity into the discretization. We will discuss a variety of algorithmic and implementation issues that must be addressed in developing this approach. We will present an application of the resulting methodology to the simulation of a laboratory-scale turbulent premixed methane flame and provide comparisons between simulation and experiment.
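The refinement idea behind the talk can be caricatured in one dimension: flag cells where the solution gradient is steep (a stand-in for a flame front) and subdivide only those, so degrees of freedom concentrate where the solution demands them. This toy sketch is invented for illustration (threshold and profile assumed), not the speakers' algorithm:

```python
import numpy as np

# Toy 1-D adaptive refinement sketch: refine only near a sharp front.
# Profile, threshold and grid are assumptions for illustration.
x = np.linspace(0.0, 1.0, 65)
T = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))   # sharp "flame front" profile

grad = np.abs(np.diff(T)) / np.diff(x)        # cell-face gradient estimate
flagged = grad > 5.0                          # refinement criterion (assumed)

# Refine flagged cells by inserting their midpoints; keep coarse cells as-is.
new_x = [x[0]]
for i in range(len(x) - 1):
    if flagged[i]:
        new_x.append(0.5 * (x[i] + x[i + 1]))
    new_x.append(x[i + 1])
new_x = np.array(new_x)

print(len(x), len(new_x))   # the grid grows only near the front
```

A real block-structured AMR code flags whole patches, builds a grid hierarchy and synchronizes fluxes between levels, but the cost saving comes from the same principle: only a small neighborhood of the front is refined.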



Alan Gara, IBM, USA -- Dr. Gara is a research staff member at the IBM T.J. Watson Research Center. He is the chief system architect of the BlueGene/L supercomputer and also leads efforts in peta-scale computing at IBM. Dr. Gara received his Ph.D. in physics from the University of Wisconsin, Madison in 1987. He is a 1998 Gordon Bell recipient for the QCDOC machine, a custom supercomputer optimized for Quantum Chromodynamics. He joined IBM Research in 1999 and has been leading high performance computing architecture and design efforts.

The BlueGene/L Supercomputer Architecture with a Look to the Future

The BlueGene/L supercomputer has been developed by IBM as a research project focused on reducing the cost of supercomputing. This machine has attracted a great deal of interest and has resulted in IBM expanding the scope of this project such that there are now systems deployed at a number of supercomputing centers. This talk will give an overview and an update on the BlueGene/L project including some early application results.

Technology challenges of the future promise to result in a very different design point for supercomputing. The forces and possible directions that these forces will push computer architectures will be discussed. The impact on applications will also be addressed.



Roland Glowinski, University of Houston, USA -- Dr. Glowinski is the Cullen Professor of Mathematics and Mechanical Engineering at the University of Houston. In addition, he is Docent Professor of Computational and Applied Mathematics at the University of Jyvaskyla, Finland; Professor Emeritus at the University Pierre and Marie Curie (Paris VI), France; and Adjunct Professor of Computational and Applied Mathematics at Rice University. He received a B.S. in Mathematics, Physics and Chemistry from Ecole Polytechnique, Paris, France (1960), an M.S. in Electrical Engineering from Ecole Nationale Superieure des Telecommunications, Paris, France (1963) and a Ph.D. in Mathematics from the University of Paris VI, France (1970). He is the recipient of numerous honors, awards and prizes, including Knight of the French Order of the Academic Palms, Knight of the French Order of the Legion of Honor, election to the French National Academy of Technology, the 1996 Marcel Dassault Prize from the French National Academy of Sciences, the 1988 Seymour Cray Prize, and the 2004 SIAM Von Karman Prize. He is the author of six books and more than 320 articles on scientific computing.

Domain Embedding Methods for Particulate Flow

The main goal of this talk is to discuss the numerical simulation of particulate flow, in the particular case of incompressible viscous fluids mixed with rigid solid particles. The following issues will be addressed:

(i) Modeling of the particulate flow, the key idea here being to consider the mixture as a unique continuum.

(ii) Reformulation in a fixed flow region via a domain embedding technique.

(iii) Space-time discretization by finite element approximations and operator-splitting techniques.

(iv) Parallelization of the numerical schemes.

The results of the numerical simulations of 2-D and 3-D flows involving hundreds to thousands of particles will be presented.
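The splitting idea in item (iii) can be illustrated with the simplest possible example, invented here and not taken from the talk: advance du/dt = (A + B)u by solving for the two "operators" in separate, simpler substeps each time step (Lie splitting). With scalar decay rates the operators commute, so the split solution matches the exact one:

```python
import numpy as np

# Toy illustration of Lie operator splitting (parameters assumed).
# du/dt = -(a + b) u is advanced by treating -a u and -b u in
# separate substeps; each substep is solved exactly.
a, b = 1.0, 2.0
y, t, dt = 1.0, 0.0, 1e-3

for _ in range(1000):              # integrate to t = 1
    y = y * np.exp(-a * dt)        # substep 1: du/dt = -a u
    y = y * np.exp(-b * dt)        # substep 2: du/dt = -b u
    t += dt

exact = np.exp(-(a + b) * t)
print(y, exact)
```

For the non-commuting operators of a real particulate-flow problem (advection, diffusion, incompressibility, rigid-body constraints), splitting introduces an O(dt) error per step but lets each subproblem be attacked with a method suited to it, which is exactly the appeal of the scheme outlined in items (i)-(iv).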




William Henshaw, Lawrence Livermore National Laboratory, USA -- Dr. Henshaw is an applied mathematician in the Computational Mathematics group in the Center for Applied Scientific Computing (CASC). His research interests lie in the area of the numerical solution of partial differential equations. He is a participant in the Overture project, an object-oriented framework for the solution of PDEs on overlapping grids. He has worked on grid generation and the numerical solution of the Navier-Stokes equations. Dr. Henshaw earned his Ph.D. in Applied Mathematics from the California Institute of Technology in 1985. He also holds a B.Math. from the University of Waterloo, with a double major in Applied Mathematics and Computer Science. He joined Lawrence Livermore National Laboratory in 1998.

Solving the Compressible and Incompressible Navier-Stokes Equations on Moving and Adaptive Overlapping Grids

Techniques for the solution of partial differential equations on overlapping grids will be described. An overlapping grid consists of a set of structured component grids that cover a domain and overlap where they meet. Overlapping grids provide an effective approach for developing efficient and accurate approximations for complex, possibly moving geometry. Topics to be addressed include solving the high-speed reacting Euler equations, the incompressible Navier-Stokes equations and solving elliptic equations with the multigrid algorithm. Recent developments coupling moving grids and adaptive mesh refinement will also be described.
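The coupling at the overlap can be sketched in one dimension (an invented toy, not the Overture machinery): two structured component grids cover the domain and overlap on a small interval, and each grid fills its interior boundary values by interpolating from the other grid's data.

```python
import numpy as np

# Hypothetical 1-D overlapping-grid sketch (grids and field assumed).
f = np.sin                        # field the two grids jointly represent

xa = np.linspace(0.0, 0.6, 31)    # component grid A
xb = np.linspace(0.5, 1.0, 26)    # component grid B; overlap is [0.5, 0.6]
ua, ub = f(xa), f(xb)

# Each grid's interpolation boundary lies inside the other grid, so its
# value is taken by linear interpolation from the other grid's data.
ua_bc = np.interp(xa[-1], xb, ub)   # A's right boundary, from B
ub_bc = np.interp(xb[0], xa, ua)    # B's left boundary, from A

err_a = abs(ua_bc - f(xa[-1]))
err_b = abs(ub_bc - f(xb[0]))
print(err_a, err_b)                 # small interpolation errors
```

In practice each component grid is a curvilinear, boundary-fitted patch in 2-D or 3-D, and the interpolation stencils are regenerated whenever a grid moves or is adaptively refined, which is what makes the approach attractive for moving geometry.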



Stephen Jardin, Princeton Plasma Physics Laboratory, USA -- Dr. Jardin is the Principal Research Physicist and Deputy Head of the Plasma Science and Technology Department in the Princeton University Program in Plasma Physics. He is also a lecturer with Rank of Professor in the Astrophysics Department at Princeton University. His research interests include computational physics, magnetohydrodynamics and tokamak design. He earned his Ph.D. in Astrophysics from Princeton University (1976), an M.S. in Physics and an M.S. in Nuclear Engineering from MIT (1978) and holds a B.S. in Engineering Physics from the University of California (1970).

Finite Element Calculations of the Magnetohydrodynamics of Magnetic Fusion Devices and Magnetic Reconnection

Many aspects of the physics of toroidal magnetic fusion experiments can be described by a set of "Extended Magnetohydrodynamic" (E-MHD) equations for the evolution of the fluid-like quantities describing the high-temperature plasma and the magnetic field. We have had success in developing programs for solving these equations that scale to over 1000 processors using a mixed representation with finite elements in the cross-sectional plane, and either high-order finite differences or a spectral representation in the toroidal angle. Initial efforts utilized linear finite elements, but it is now recognized that higher-order elements offer significant advantages. A new effort to solve these E-MHD equations with a fully implicit algorithm utilizing high-order elements with C1 continuity will be described. This leads to a compact representation and efficient solution algorithm. Examples will also be presented of model problems in magnetic reconnection.



Steven Orszag, Yale University, USA -- The Percey F. Smith Professor of Mathematics, Dr. Orszag specializes in computational fluid dynamics, turbulence theory and numerical analysis. He is also noted for his work in applied mathematics, and his research has had an impact on aeronautics, weather forecasting and the electronic chip manufacturing industry. In computational fluid dynamics, he achieved the first successful computer simulations of three-dimensional turbulent flows. He also developed methods that provide a fundamental theory of turbulence. Another primary research interest has been the development of techniques for the simulation of electronic chip manufacturing processes, some of which have been applied extensively throughout the industry. His accomplishments in the area of spectral methods include the introduction of fast surface harmonic transform methods for global weather forecasting and filtering techniques for shock wave problems. He earned his Ph.D. at Princeton University and his B.S. degree at the Massachusetts Institute of Technology.

Some Small Ideas on the Large Turbulence Problem

In this talk, we shall discuss the status of computing complex turbulent flows at high Reynolds numbers. A variety of applications will be given, especially for external and internal aerodynamic flows involving today's and tomorrow's automobiles. Emphasis will be given to lattice-based techniques for very-large eddy simulation (VLES) and to their extension to provide good data at 'all' scales.



Gerhard Wellein, University of Erlangen-Nuremberg, Germany -- Dr. Wellein is head of the HPC group at the computing center of the University of Erlangen-Nuremberg. He works on performance optimization of large-scale technical and scientific applications, with a special focus on methods and algorithms in CFD and many-particle physics. In this context he also studies the interplay between the architectural concepts of modern supercomputers and application performance. As an HPC consultant he supports the procurement process for large-scale supercomputer systems at several supercomputing centers in Germany. Dr. Wellein received a diploma in Physics (1994) and a Ph.D. in Physics (1998) from the University of Bayreuth, Germany. He is a member of the Bavarian Network for High Performance Computing (KONWIHR).

Architecture and Performance of Terascale Computers

The tremendous compute power and memory resources of terascale computers offer new possibilities in scientific computing. In particular, large clusters with Intel or AMD processors are widespread nowadays and have frequently replaced traditional supercomputer architectures. This development has provoked an intense dispute over whether the "dinosaurs" of High Performance Computing, such as the well-known CRAY vector machines, can be fully replaced by cluster systems or massively parallel computers with thousands of processors. Briefly introducing the latest NEC/CRAY vector machines, the SGI Altix architectures and a typical cluster solution, the talk will highlight the different architectural approaches used in terascale systems. Performance characteristics and potential pitfalls of these systems will be discussed using a parallel Lattice-Boltzmann application code. Considering both price and application performance, a surprising picture can emerge in the discussion of cost efficiency in terascale systems.
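As a rough illustration of the benchmark class mentioned above (not the speaker's code; grid size, relaxation time and initial condition are all invented), a minimal D2Q9 lattice-Boltzmann step shows the stream-and-collide structure whose heavy, regular memory traffic makes such codes a sensitive probe of a machine's memory bandwidth:

```python
import numpy as np

# Minimal D2Q9 lattice-Boltzmann BGK sketch on a periodic box.
# All parameters are assumptions for illustration.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)            # D2Q9 weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])         # lattice velocities
nx = ny = 32
tau = 0.8                                           # relaxation time (assumed)

# Start at rest from a small density bump, populations at equilibrium.
rho0 = 1.0 + 0.01*np.exp(-((np.arange(nx)[:,None]-16)**2 +
                           (np.arange(ny)[None,:]-16)**2)/8.0)
f = w[:,None,None] * rho0[None,:,:]                 # 9 x nx x ny populations

def step(f):
    rho = f.sum(axis=0)
    ux = (f * c[:,0,None,None]).sum(axis=0) / rho
    uy = (f * c[:,1,None,None]).sum(axis=0) / rho
    usq = ux**2 + uy**2
    for i in range(9):                              # BGK collide
        cu = c[i,0]*ux + c[i,1]*uy
        feq = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        f[i] += -(f[i] - feq)/tau
    for i in range(9):                              # stream along c_i
        f[i] = np.roll(np.roll(f[i], c[i,0], axis=0), c[i,1], axis=1)
    return f

mass0 = f.sum()
for _ in range(10):
    f = step(f)
print(mass0, f.sum())                               # mass is conserved
```

Every time step reads and writes all nineteen-per-cell... rather, all nine populations per cell with little arithmetic per byte moved, so sustained performance tracks memory bandwidth far more than peak FLOP rate, which is why vector machines, Altix systems and commodity clusters separate so clearly on this workload.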






Last updated 04/14/2005

Site design by Teri Deane, TDE, Inc.