What “Grand Challenge Problems” Will We Be Able to Solve with the Next-Gen Computing Platforms?
Our lead users work on really hard and interesting problems. At our Pioneers meeting in early October, Tom Morgan gave us a glimpse into the project he has been involved with: helping design the Cyclops/C64 computer chips in a research collaboration among IBM Research, the University of Delaware, and the Institute for Mathematics and Advanced Supercomputing at Polytechnic University in Brooklyn, New York.
Tom is currently working there on his doctorate in mathematics. The IBM architect is Monty Denneau. For those of you who have followed trends in scientific computing, the Cyclops architecture is an alternative to the approach being taken in the more highly publicized Blue Gene project. In a paper entitled “Evaluation of a Multithreaded Architecture for Cellular Computing,” Denneau’s team describes the architecture this way:
“Cyclops is a new architecture for high performance parallel computers being developed at the IBM T. J. Watson Research Center. The basic cell of this architecture is a single-chip SMP system with multiple threads of execution, embedded memory, and integrated communications hardware. Massive intra-chip parallelism is used to tolerate memory and functional unit latencies. Large systems with thousands of chips can be built by replicating this basic cell in a regular pattern” [1].
The breaking news is that this “processor in memory” C64 chip should be in actual production early in 2007. The single chip will be replicated 32,000 times in the “final” massively parallel configuration. This aggressively simple architecture will deliver petaflop performance, one quadrillion double-precision floating-point operations per second, at a total system cost of under $10 per gigaflop. But equally important, the new architecture will support up to one million concurrent threads. Wow! Oh, and it will be programmable using general-purpose parallel scientific computing approaches.
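To put those figures in perspective, here is a quick back-of-envelope check in Python. The inputs are the numbers quoted above; the per-chip breakdown is our own arithmetic, not an IBM spec:

```python
# Back-of-envelope check of the numbers quoted above.
PETAFLOP = 1e15  # one quadrillion floating-point operations per second
GIGAFLOP = 1e9

# A petaflop is one million gigaflops...
gigaflops_per_petaflop = PETAFLOP / GIGAFLOP  # 1,000,000

# ...so at under $10 per gigaflop, a full petaflop system
# comes in at under $10 million.
max_system_cost = 10 * gigaflops_per_petaflop

# Spread across 32,000 chips, the aggregate implies roughly
# 31 gigaflops per chip.
flops_per_chip = PETAFLOP / 32_000

print(f"{gigaflops_per_petaflop:,.0f} gigaflops in a petaflop")
print(f"under ${max_system_cost:,.0f} total system cost")
print(f"~{flops_per_chip / GIGAFLOP:.0f} gigaflops per chip")
```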
All of a sudden, really big processing problems are within range. Now what?
Tom Morgan explained that this machine is intended to work on “Grand Challenge” problems. According to Wikipedia, a grand challenge problem “exhibits at least the following characteristics:
- The problem is demonstrably hard to solve, requiring several orders-of-magnitude improvement in the capability required to solve it.
- The problem cannot be unsolvable. If it provably can't be solved, then it can't be a Grand Challenge. Ideally, quantifiable measures that indicate progress toward a solution are also definable.
- The solution to a Grand Challenge problem must have a significant economic and/or social impact.
Another, simpler definition is:
A grand challenge problem is one that cannot be solved in a reasonable amount of time with today's computers.
Fundamental scientific problems currently being explored generate increasingly complex data, require more realistic simulations of the processes under study, and demand greater and more intricate visualizations of the results. These problems often require numerous large-scale calculations and collaboration among people across multiple disciplines and locations.
The following are some examples of Grand Challenge problems:
- Applied Fluid Dynamics
- Meso- to Macro-Scale Environmental Modeling
- Ecosystem Simulations
- Biomedical Imaging and Biomechanics
- Molecular Biology
- Molecular Design and Process Optimization
- Cognition
- Fundamental Computational Sciences
- Grand-Challenge-Scale Applications
- Nuclear power and weapons simulations” [2]
Tom Morgan expects one of the first commercial applications of the Cyclops/C64 will be to exploit this massive parallelism as a co-processor to standard imaging systems to support real-time medical imaging. Tom explained, “There are lots of medical imaging applications that will now be possible, but the easy one to describe is high-resolution, real-time MRI. The idea would be to have a live movie coming out of an MRI machine, rather than the current fairly low-resolution image that takes minutes to create.”
On the research side, one of the first applications will no doubt be the “protein folding” problem. According to Tom, “This involves simulating from first principles, from the equations of physics, what happens when two proteins interact. The motivation here is the study of drugs and disease. If these sorts of simulations were possible, there are big implications for the study of disease and for drug discovery.

“Unfortunately, this is a very big problem. This paper has an estimate that, even with a petaflop computer, it would take three years to simulate 100 microseconds of reaction.”
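To get a feel for where an estimate like that comes from, here is a rough sketch. The femtosecond timestep is a standard molecular-dynamics figure, not a number from the paper, so treat the result as illustrative:

```python
# Rough sketch of why 100 microseconds takes years, even at a petaflop.
# The ~1 femtosecond timestep is a conventional molecular-dynamics
# figure, not taken from the cited paper.
SECONDS_PER_YEAR = 3.15e7
PETAFLOP = 1e15  # floating-point operations per second

sim_time = 100e-6  # 100 microseconds of simulated reaction
timestep = 1e-15   # ~1 femtosecond per integration step

steps = sim_time / timestep                  # ~1e11 integration steps
total_ops = 3 * SECONDS_PER_YEAR * PETAFLOP  # ops in three petaflop-years

print(f"{steps:.0e} integration steps")
print(f"~{total_ops / steps:.0e} ops per step implied")
```

In other words, every one of those roughly hundred billion steps has to evaluate the physics of the entire system, which is where the trillion or so operations per step go.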
So somewhere between real-time MRI and protein folding lurks a whole host of complex problems that we have not been able to solve for lack of the sheer computer brainpower required to simulate them. Maybe it’s time to start thinking about the “grand challenges” that you’re passionate about.
Free Computing Cycles and Resources for Solving Optimization Problems
Many of our clients in financial services, logistics, and other industries wrestle with hard optimization problems. They’re trying to optimize picking, packing, and shipping; routing; or financial algorithms. Tom Morgan has also worked recently with state-of-the-art optimization programs in his “floor planning” work for the Cyclops/C64 chip (laying out and wiring the major blocks of circuit elements on the chip).
He explained that he has learned that “optimizing doesn't work in all the places you might like it to. It works well for ‘smooth, stable’ problems, ones where local properties are a good predictor of more distant properties (like a smooth hill: if you are downhill right here, there's a good chance you will continue downhill for a while). It doesn't work so well for assignment, matching, packing, or scheduling, where the answer is discrete (either you put the box in this rack, or you don't).”
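Tom’s “smooth hill” intuition is easy to see in code. Here is a minimal sketch of our own (not Tom’s code): a plain gradient step walks down a smooth one-variable function, while a toy packing problem offers no gradient to follow and leaves you enumerating combinations.

```python
import itertools

# Smooth, stable problem: locally downhill predicts globally downhill,
# so a simple gradient step finds the minimum of f(x) = (x - 3)^2.
def grad_descent(grad, x=0.0, rate=0.1, steps=100):
    for _ in range(steps):
        x -= rate * grad(x)
    return x

x_min = grad_descent(lambda x: 2 * (x - 3))
print(f"smooth: minimum near x = {x_min:.4f}")  # ~3.0

# Discrete problem: either the box goes in the rack or it doesn't.
# There is no gradient, so here we just enumerate every subset of
# boxes that fits -- fine for 5 boxes, hopeless for 5,000.
boxes = [4, 3, 2, 2, 1]  # box sizes
capacity = 7
best = max(
    (combo for r in range(len(boxes) + 1)
           for combo in itertools.combinations(boxes, r)
           if sum(combo) <= capacity),
    key=sum,
)
print(f"discrete: best packing {best}, total {sum(best)}")
```

The enumeration works for five boxes but blows up combinatorially, which is exactly why discrete problems resist the hill-descending machinery.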
However, Tom and his grad students found a wonderful free resource that provides access to some of the best optimization software available: the NEOS Server for Optimization. Apparently, there are no restrictions on the kinds of work you may do using this system, including developing and testing optimization routines for use in for-profit applications. In fact, one of Tom’s graduate students who went to work for a financial firm on Wall Street proudly showed off a program he had developed using NEOS that enables him to determine a competitor’s investment portfolio. Tom also recommends AMPL, A Modeling Language for Mathematical Programming, as the best programming environment to use.
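For readers who want to try NEOS programmatically, the server exposes an XML-RPC interface. The sketch below is ours, not Tom’s; the endpoint URL, job-XML layout, and category/solver names follow NEOS’s published interface as we understand it, and the tiny AMPL model is purely illustrative, so check the current NEOS documentation before relying on any of it.

```python
import time
import xmlrpc.client

# NEOS XML-RPC endpoint per its documentation -- verify before use.
neos = xmlrpc.client.ServerProxy("https://neos-server.org:3333")
print(neos.ping())  # "NeosServer is alive" when the server is up

# A job is an XML document naming the solver category, solver, and
# input format, with the model inline. This little AMPL LP is our own
# illustrative example; neos.listCategories() and
# neos.listSolversInCategory("lp") enumerate the real options.
job_xml = """<document>
  <category>lp</category>
  <solver>CPLEX</solver>
  <inputMethod>AMPL</inputMethod>
  <model><![CDATA[
    var x >= 0;
    var y >= 0;
    maximize profit: 3*x + 2*y;
    subject to capacity: x + y <= 4;
  ]]></model>
  <data></data>
  <commands><![CDATA[
    solve;
    display x, y;
  ]]></commands>
</document>"""

job_number, password = neos.submitJob(job_xml)

# Poll until the job finishes, then fetch the solver's output.
while neos.getJobStatus(job_number, password) != "Done":
    time.sleep(5)
print(neos.getFinalResults(job_number, password).data.decode())
```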
What’s next in optimization? Tom is working on ways to apply the optimizers that work on smooth, stable problems to discrete, unstable problems. For more information, email Tom Morgan.