[HPGMG Forum] Acceptable rounding errors

Benson Muite benson.muite at ut.ee
Mon Aug 3 15:45:49 UTC 2015


Hi,

Fixing the algorithm seems not such a good idea. A level of accuracy for
the final result is a better specification; people can then be encouraged
to explain or post the implementations they have used. This doesn't mean
CS people need to work on the algorithmic/math part, just that they will
likely have more choices when deciding what to use to optimize and obtain
results with. It would also mean that they may be better placed to advise
people running multigrid applications on which algorithms work well on
their systems. Finally, it allows comparison of FE and FV implementations.
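
To make the suggestion concrete, here is a minimal sketch (in C, with
made-up names; check_result and the tolerance are illustrative, not part
of any HPGMG specification) of an acceptance test that constrains only
the accuracy of the final result, not the algorithm used to reach it:

    #include <stdio.h>

    /* Illustrative acceptance test: the benchmark publishes a reference
     * error norm and a relative tolerance per problem size; any
     * implementation whose final discrete error norm is within the
     * tolerance passes, regardless of the algorithm or precision used. */
    int check_result(double error_norm, double reference_norm, double rtol)
    {
        if (error_norm <= reference_norm * (1.0 + rtol)) {
            printf("PASS: error %.3e within %.1e of reference %.3e\n",
                   error_norm, rtol, reference_norm);
            return 0;
        }
        printf("FAIL: error %.3e exceeds tolerance\n", error_norm);
        return 1;
    }

The open question in the thread below is then how tight rtol can be made
before legitimate implementation differences (mixed precision, FMA, trig
libraries) start causing false failures.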

Benson

------------------------------

Message: 2
Date: Mon, 3 Aug 2015 08:04:23 -0700
From: Mark Adams <mfadams at lbl.gov>
To: Brian Van Straalen <bvstraalen at lbl.gov>
Cc: HPGMG Forum <hpgmg-forum at hpgmg.org>
Subject: Re: [HPGMG Forum] Acceptable rounding errors
Message-ID: <CADOhEh4T65zVtPM+aXR4SuCT9h4HA-5Yd05Cf5pywdJBH7poqQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Fri, Jul 31, 2015 at 2:01 PM, Brian Van Straalen <bvstraalen at lbl.gov> wrote:
>> I've been wondering about how to automate some level of correctness
>> testing in HPGMG.  I'm trying to figure out whether there are computable
>> limits on how much a compliant implementation *could* deviate from other
>> compliant implementations.
>>
>> The concern is not trivial.  I've spent some time re-reading the
>> Precimonious paper (eecs.berkeley.edu/~rubio/includes/sc13.pdf), and I
>> realize that it would not be hard to make a faster version of FMG using
>> mixed precision.  There have been papers over the last few years using
>> 4-byte AMG as a preconditioner, and that seems to work well for many
>> applications.
>>
> I have been thinking that we would legislate double precision everywhere.
> We need to legislate things like the order of prolongation and the
> smoother algorithm.  It is a design decision as to what spaces we allow
> to be optimized.  HPGMG is so simple that we need to defend against
> optimizations (games, if you like) that are not readily available to most
> apps.  I am thinking that mixing precision is not a space that we want to
> open up.
>
> Also, I have always wanted the math to be precisely defined so that the
> CS people, including center staff who are running and optimizing, do not
> have to think about math and can just think about making stuff run fast.
> (HPCG does not share this philosophy, so perhaps I am in the minority
> here.)
>
>
>> FPGA computing platforms and accelerators will also want to push this
>> envelope.
>>
>> This comes on top of other issues like faster FMAs that might not be
>> totally standard, or fast division.  Our initial condition is also based
>> on trig functions, which are subject to the Table Maker's Dilemma; the
>> older x86 fsin instruction had wide variability.  We could try to
>> mitigate these effects (supply the trig function code expressed in basic
>> operations, as in Hart and Cheney, and replace divisions in the reference
>> code with multiplications when we know they are correct, e.g. replace
>> 1/(h*h) with DIM*DIM).  Or perhaps we do none of these things and see
>> what kind of results people dare to submit.
>>
>> I would guess the TOP500 effort is aware of this, given the introduction
>> of mixed-precision LINPACK in the last few years (like GHPL).
>>
>> So, thoughts?
>>
>> Brian
>>
>>
>> Brian Van Straalen         Lawrence Berkeley Lab
>> BVStraalen at lbl.gov         Computational Research
>> (510) 486-4976             Division (crd.lbl.gov)
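
The division-versus-multiplication point can likewise be made concrete.
The two routines below are mathematically equivalent scalings of a
residual but may round differently; whether both are acceptable is the
kind of question a result-accuracy specification would have to settle.
(The DIM*DIM identity assumes a unit domain with h = 1.0/DIM, as the
quoted text suggests; the function names are made up.)

    #include <stddef.h>

    /* Divide by h*h on every element ...                               */
    void scale_div(double *r, size_t n, double h)
    {
        for (size_t i = 0; i < n; i++)
            r[i] = r[i] / (h * h);
    }

    /* ... or multiply by a precomputed constant that equals 1/(h*h) in
     * exact arithmetic when h = 1.0/DIM on a unit domain.              */
    void scale_mul(double *r, size_t n, int DIM)
    {
        const double h2inv = (double)DIM * (double)DIM;
        for (size_t i = 0; i < n; i++)
            r[i] = r[i] * h2inv;
    }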




More information about the HPGMG-Forum mailing list