[HPGMG Forum] Acceptable rounding errors

Benson Muite benson.muite at ut.ee
Mon Aug 3 19:26:51 UTC 2015

Jed and Mark,

Thanks for your answers.  My hope would be a specification that gets
machine architects, algorithm designers, and computational scientists
talking to each other; none of them on their own is likely to enable
efficient problem solving, which is what the benchmark should be a very
rough indicator of. This may be too much to ask, though.

If you fix the algorithm too much, how do you make HPGMG more
representative than a combination of simpler computer system benchmarks
such as STREAM and MPI benchmarks?


On 8/3/15 9:21 PM, Jed Brown wrote:
> Benson Muite <benson.muite at ut.ee> writes:
>> Fixing the algorithm seems not such a good idea. A level of accuracy
>> for the final result is a better specification; then encourage people
>> to explain or post the implementations they have used. This doesn't
>> mean CS people need to work on the algorithmic/math part, just that
>> they will likely have more choices when deciding what to use to
>> optimize and obtain results.
> I'd agree with this if we were hosting an algorithm competition or
> trying to determine the best architecture for solving a particular
> science/engineering problem.  But this is a benchmark that is supposed
> to be representative of a broad range of applications while being
> simpler than any of them.
>> It would also mean that they may be better able to advise people
>> running multigrid-based applications on which algorithms work well on
>> their systems. Finally, this allows comparison of FE and FV
>> implementations.
> If I gave you a smooth elliptic problem specification defined in a box,
> along with a target accuracy, you'd probably consider full spectral or
> other very high-order methods.  But such methods are more fragile in the
> sense that they don't generalize to less smooth problems or those with
> complicated geometry, for example.  Moreover, the computational
> structure is significantly different from the low-order methods that are
> used for those more general problems encountered in production.
> We could perhaps choose problem data (like coefficients and forcing)
> such that third or fourth order methods deliver the best results, but
> even that usually depends on the target accuracy (thus problem size and
> machine size).  I.e., as target accuracy changes, you *should* change
> the discretization and algorithms.
> Even if we specify the fine-grid discretization, if the problem is small
> enough, one could use a higher order method to solve the problem to
> target accuracy on a much coarser grid, then simply interpolate to the
> target fine grid and use the answer.
> We don't want tricks like this in a benchmark for computers because it
> makes the benchmark less representative and it makes it more complicated
> for people to understand and optimize.  With HPGMG, we've tried to make
> the algorithm as lean as possible in that there shouldn't be easy
> shortcuts that would require arbitrary legislation to exclude.  But I
> don't think it makes sense to eliminate rules about the basic
> algorithmic structure.
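As a concrete illustration of the shortcut described above (solve to
target accuracy on a coarse grid with a high-order method, then merely
interpolate to the target fine grid), here is a minimal sketch in
Python/NumPy. The problem data (a smooth periodic Poisson problem with
forcing sin x) and all names are invented for this sketch; this is not
HPGMG code.

```python
import numpy as np

def spectral_poisson(f_hat):
    """Solve u'' = -f in Fourier space (zero-mean solution)."""
    n = f_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers 0,1,...,-1
    u_hat = np.zeros_like(f_hat)
    nz = k != 0
    u_hat[nz] = f_hat[nz] / k[nz] ** 2    # -(ik)^2 u_hat = -f_hat
    return u_hat

n_coarse, n_fine = 16, 1024
x_c = 2 * np.pi * np.arange(n_coarse) / n_coarse
x_f = 2 * np.pi * np.arange(n_fine) / n_fine

# Smooth forcing chosen so the exact solution is u(x) = sin(x).
f_c = np.sin(x_c)
u_hat_c = spectral_poisson(np.fft.fft(f_c))

# FFT-based interpolation: zero-pad the coarse spectrum to the fine grid,
# i.e. never actually solve anything on the fine grid.
u_hat_f = np.zeros(n_fine, dtype=complex)
half = n_coarse // 2
u_hat_f[:half] = u_hat_c[:half]
u_hat_f[-half:] = u_hat_c[-half:]
u_f = np.real(np.fft.ifft(u_hat_f)) * (n_fine / n_coarse)

err = np.max(np.abs(u_f - np.sin(x_f)))
print(err)  # near machine epsilon, despite no fine-grid solve
```

For smooth data like this, the interpolated answer already meets any
reasonable fine-grid accuracy target at a tiny fraction of the cost,
which is exactly why such tricks have to be excluded by rule rather
than by accuracy specification alone.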

Research Fellow of Distributed Systems
Institute of Computer Science
University of Tartu
J.Liivi 2, 50409
Tartu, Estonia
