[HPGMG Forum] Acceptable rounding errors

Brian Van Straalen bvstraalen at lbl.gov
Mon Aug 3 19:43:09 UTC 2015


  There are two roles that HPGMG is filling.  One is as a computer platform benchmark, and the other is as a codesign research problem.  If you want to benchmark a platform, then you need to stay within the well-defined algorithm as specified.  You are free to implement in other programming models and different execution models (PGAS, threading, tasks, pipelines, replication, communication avoidance, code transformations, etc.) but not to play with the math.  I think that leaves a space of variation that is not easily captured by a set of simpler STREAM and MPI benchmarks.

    As a codesign vehicle, you can explore precision, compression, smoothing schemes, etc.  It is possible that some of these novel algorithmic changes could be shown to be equivalent to the benchmark test and be used in a ranking, but that would probably be decided on a case-by-case basis.  I kind of look forward to a paper presenting an HPGMG code that significantly improves a platform's performance but does not strictly satisfy our present submission requirements.  That would give us a data point and a challenge to study.
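
    As a concrete (and entirely hypothetical) example of the precision axis, one could run the smoother sweeps in single precision while keeping the solution and residual in double.  A minimal NumPy sketch for a 1D Poisson model problem -- an illustration of the idea, not HPGMG code:

import numpy as np

def jacobi_smooth_mixed(u, f, h, iters=4):
    """Weighted-Jacobi smoothing for -u'' = f on a 1D grid.

    The sweeps run in single precision (the codesign experiment);
    the caller keeps u and f in double.
    """
    u32 = u.astype(np.float32)
    f32 = f.astype(np.float32)
    w = np.float32(2.0 / 3.0)       # standard weighted-Jacobi damping
    h2 = np.float32(h * h)
    for _ in range(iters):
        # u_i <- (1-w) u_i + (w/2) (u_{i-1} + u_{i+1} + h^2 f_i)
        u32[1:-1] = ((1.0 - w) * u32[1:-1]
                     + 0.5 * w * (u32[:-2] + u32[2:] + h2 * f32[1:-1]))
    return u32.astype(np.float64)   # hand back to the double-precision cycle

def residual(u, f, h):
    """Double-precision residual r = f + u'' at interior points."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

    Whether a variant like that could be ranked would hinge on showing that its errors stay within the benchmark's acceptable rounding tolerance.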

   Brian


> On Aug 3, 2015, at 12:26 PM, Benson Muite <benson.muite at ut.ee> wrote:
> 
> Jed and Mark,
> 
> Thanks for your answers.  My hope would be a specification that gets
> machine architects, algorithm designers, and computational scientists
> talking to each other - none of them alone is likely to enable
> efficient problem solution, which is what the benchmark should be a
> (very rough) indicator of. This may be too much to ask, though.
> 
> If you fix the algorithm too much, how do you make HPGMG more
> representative than a combination of simpler computer-system benchmarks
> such as STREAM and MPI benchmarks?
> 
> Benson
> 
> On 8/3/15 9:21 PM, Jed Brown wrote:
>> Benson Muite <benson.muite at ut.ee> writes:
>>> Fixing the algorithm seems not such a good idea. A level of accuracy
>>> for the final result is a better specification; then encourage people
>>> to explain or post the implementations they have used. This doesn't
>>> mean CS people need to work on the algorithmic/math part, just that
>>> they will likely have more choices of what to optimize and how to
>>> obtain results.
>> I'd agree with this if we were hosting an algorithm competition or
>> trying to determine the best architecture for solving a particular
>> science/engineering problem.  But this is a benchmark that is supposed
>> to be representative of a broad range of applications while being
>> simpler than any of them.
>> 
>>> It would also mean that they may be better able to advise people
>>> running multigrid-based applications on which algorithms work well
>>> on their systems. Finally, this would allow comparison of FE
>>> (finite element) and FV (finite volume) implementations.
>> If I gave you a smooth elliptic problem specification defined in a box,
>> along with a target accuracy, you'd probably consider full spectral or
>> other very high-order methods.  But such methods are more fragile in the
>> sense that they don't generalize to less smooth problems or those with
>> complicated geometry, for example.  Moreover, the computational
>> structure is significantly different from the low-order methods that are
>> used for those more general problems encountered in production.
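>> 
>> (To make "fragile" concrete -- a toy NumPy demo, not a benchmark
>> computation: Chebyshev interpolation converges geometrically for an
>> analytic function but stalls the moment the function merely has a kink.)
>> 
>> import numpy as np
>> 
>> def cheb_interp_error(f, n):
>>     """Max error of degree-n Chebyshev interpolation of f on [-1, 1]."""
>>     xk = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points
>>     c = np.polynomial.chebyshev.chebfit(xk, f(xk), n)
>>     x = np.linspace(-1.0, 1.0, 2001)
>>     return np.abs(np.polynomial.chebyshev.chebval(x, c) - f(x)).max()
>> 
>> smooth = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # analytic (Runge function)
>> kinked = lambda x: np.abs(x)                    # only C^0
>> 
>> for n in [8, 16, 32, 64]:
>>     print(n, cheb_interp_error(smooth, n), cheb_interp_error(kinked, n))
>> # smooth: error shrinks geometrically; kinked: decays only like O(1/n),
>> # so the extra order buys nothing once smoothness is lost.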
>> 
>> We could perhaps choose problem data (like coefficients and forcing)
>> such that third or fourth order methods deliver the best results, but
>> even that usually depends on the target accuracy (thus problem size and
>> machine size).  I.e., as target accuracy changes, you *should* change
>> the discretization and algorithms.
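>> 
>> (The standard back-of-envelope version of that tradeoff: for a method
>> of order $p$ in $d$ dimensions, error scales as
>> $\varepsilon \sim C_p h^p$ and the number of unknowns as
>> $N \sim h^{-d}$, so reaching a target accuracy costs
>> \[ N(\varepsilon) \sim (C_p/\varepsilon)^{d/p}, \]
>> and the cost-minimizing order $p$ shifts upward as $\varepsilon$
>> tightens, once the constants $C_p$ are accounted for.)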
>> 
>> Even if we specify the fine-grid discretization, if the problem is
>> small enough, one could use a higher-order method to solve the problem
>> to target accuracy on a much coarser grid, then simply interpolate to
>> the target fine grid and use the answer.
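>> 
>> (That shortcut in miniature -- a hypothetical SciPy sketch, with the
>> coarse "solution" taken exact to stand in for a converged high-order
>> coarse solve of a smooth problem:)
>> 
>> import numpy as np
>> from scipy.interpolate import BarycentricInterpolator
>> 
>> u = lambda x: np.sin(np.pi * x) * np.exp(x)   # smooth model solution
>> 
>> n_coarse = 32
>> k = np.arange(n_coarse + 1)
>> # Chebyshev nodes on [0, 1] keep polynomial interpolation stable.
>> x_coarse = 0.5 * (1.0 - np.cos(np.pi * k / n_coarse))
>> interp = BarycentricInterpolator(x_coarse, u(x_coarse))
>> 
>> x_fine = np.linspace(0.0, 1.0, 4097)          # the "target" fine grid
>> print(np.abs(interp(x_fine) - u(x_fine)).max())
>> # roughly 1e-13: fine-grid accuracy with no fine-grid solve at all.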
>> 
>> We don't want tricks like this in a benchmark for computers because they
>> make the benchmark less representative and more complicated for people
>> to understand and optimize.  With HPGMG, we've tried to make
>> the algorithm as lean as possible in that there shouldn't be easy
>> shortcuts that would require arbitrary legislation to exclude.  But I
>> don't think it makes sense to eliminate rules about the basic
>> algorithmic structure.
> 
> 
> --
> Research Fellow of Distributed Systems
> Institute of Computer Science
> University of Tartu
> J.Liivi 2, 50409
> Tartu, Estonia
> http://kodu.ut.ee/~benson
> 
> 
> _______________________________________________
> HPGMG-Forum mailing list
> HPGMG-Forum at hpgmg.org
> https://hpgmg.org/lists/listinfo/hpgmg-forum
Brian Van Straalen         Lawrence Berkeley Lab
BVStraalen at lbl.gov         Computational Research
(510) 486-4976             Division (crd.lbl.gov)



