[HPGMG Forum] HPGMG release v0.1

Sam Williams swwilliams at lbl.gov
Mon Jun 9 13:37:04 UTC 2014

Are you referring to the time-to-solution figure or the DOF/s figure?

Currently, the x-axis is synonymous with problem size, i.e., 128^3, 256^3, 384^3, ... 5120^3.  Thus you can comment on the differences in time-to-solution for the same problem size on differing processor architectures, with the caveat that there are 128^3 DOF per NUMA node.

If you change this, then at a given x-coordinate, it would be showing time-to-solution for
 128x128x128 on Mira / K
 128x128x256 on Edison/Peregrine
 128x256x256 on Hopper
I'm not sure this helps.
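
To make the mismatch above concrete, here is a small sketch (my own arithmetic, not part of the original thread) of why the per-compute-node subdomain differs: with 128^3 DOF pinned per NUMA node, each doubling of NUMA domains per compute node doubles the local box along one axis.  The function name and the power-of-two restriction are illustrative assumptions.

```python
def per_node_box(numa_per_node, base=128):
    """Per-compute-node subdomain, assuming base^3 DOF per NUMA node
    and a power-of-two NUMA count (illustrative sketch only)."""
    dims = [base, base, base]
    axis = 2
    n = numa_per_node
    while n > 1:
        assert n % 2 == 0, "sketch handles powers of two only"
        dims[axis] *= 2  # double the slowest-varying remaining axis
        axis -= 1
        n //= 2
    return tuple(dims)

print(per_node_box(1))  # (128, 128, 128) -- Mira / K
print(per_node_box(2))  # (128, 128, 256) -- Edison / Peregrine
print(per_node_box(4))  # (128, 256, 256) -- Hopper
```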

Conversely, if you want a figure that shows DOF/s as a function of the number of nodes, then there are still a few questions...
 - are 27 processes (= 27 NUMA nodes) on Hopper 6.75 nodes or 7 nodes?
 - do you want a figure (or table) that simply shows max DOF/s (at any scale)?
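
The 6.75-vs-7 question above can be sketched as follows (my own arithmetic, not from the thread; the assumption is that a Hopper compute node holds 4 NUMA domains, so 27 single-NUMA-node processes span 6.75 nodes of hardware but would be charged as 7 whole nodes):

```python
import math

NUMA_PER_NODE = 4  # assumed for Hopper; Edison would be 2

def compute_nodes(numa_nodes, numa_per_node=NUMA_PER_NODE, round_up=True):
    """Convert a NUMA-node count to compute nodes.

    round_up=True reports whole nodes (what an allocation charges);
    round_up=False reports the fraction of hardware actually used.
    """
    exact = numa_nodes / numa_per_node
    return math.ceil(exact) if round_up else exact

print(compute_nodes(27))                  # 7    (whole nodes charged)
print(compute_nodes(27, round_up=False))  # 6.75 (fraction of hardware)
```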

On Jun 8, 2014, at 11:46 PM, Jed Brown <jed at jedbrown.org> wrote:

> Brian Van Straalen <bvstraalen at lbl.gov> writes:
>> I know that my own host site NERSC uses hours*nodes*cores/node which
>> would seem to indicate people are core-counting, but perhaps Edison is
>> the last of the truly Fat core platforms we will see and we will go
>> back to allocation awards being in units of nodes.
> NERSC has a different charge factor for Edison versus Hopper (and
> different factors for different queues).  The x axis is arbitrary,
> serving only to quantify run size in units that can be interpreted
> separately for each machine.  Slope and maximum value are the quantities
> that are meaningful to compare.
> Sam, I think that counting NUMA nodes, while principled and relevant to
> the implementation, is ultimately confusing and prone to
> misinterpretation.  Would you mind regenerating this figure with x axis
> representing compute nodes (the unit in which users of the machine
> count)?
> _______________________________________________
> HPGMG-Forum mailing list
> HPGMG-Forum at hpgmg.org
> https://hpgmg.org/lists/listinfo/hpgmg-forum
