[HPGMG Forum] HPGMG release v0.1

Constantinos Evangelinos cevange at us.ibm.com
Sun Jun 8 14:54:21 UTC 2014


In my mind at least, users in most cases ask a queuing system for nodes,
since node sharing is discouraged for obvious reasons. So nodes seem to me
to be the most useful x-axis choice. Cores are problematic because (a) they
get added in large block increments and (b) they stretch the axis a lot,
even before considering GPUs or the relatively wimpy cores in BG/Q and
Xeon Phi.

Constantinos

Sent from my iPhone so please excuse any typing errors

> On Jun 7, 2014, at 7:01 PM, "Sam Williams" <swwilliams at lbl.gov> wrote:
>
> Nominally, there would be a paragraph describing the setup for a figure.
> For this data, the x-axis is what is colloquially defined today as a NUMA
> node on the Cray machines.  There is one process per NUMA node.  Thus, for
> all of these machines, there is one process per chip.
>
> K = 1 process per compute node, 8 threads per process
> BGQ = 1 process per compute node, 64 threads per process
> Edison = 2 processes per compute node, 12 threads per process
> Peregrine = 2 processes per compute node, 12 threads per process
> Hopper = 4 processes per compute node, 6 threads per process
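The configurations above imply different per-node concurrency on each machine. A minimal sketch (not part of the original message) tabulating the quoted figures:

```python
# Per-compute-node configuration as quoted in the email above:
# machine -> (processes per node, threads per process)
configs = {
    "K":         (1, 8),
    "BGQ":       (1, 64),
    "Edison":    (2, 12),
    "Peregrine": (2, 12),
    "Hopper":    (4, 6),
}

for machine, (procs, threads) in configs.items():
    # Total software threads per compute node.
    print(f"{machine}: {procs} x {threads} = {procs * threads} threads/node")
```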
>
>
>
>
> On Jun 7, 2014, at 3:51 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> >
> >  I submit that even nodes or “sockets” is actually not completely
> > unambiguous.
> >
> > On Jun 7, 2014, at 5:39 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> >
> >> On Sat, Jun 7, 2014 at 3:35 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>>
> >>> On Jun 7, 2014, at 5:31 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> >>>
> >>>> On Sat, Jun 7, 2014 at 3:26 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>>>>
> >>>>> The use of multicore processor == sockets as the independent
> >>>>> variable in the plot of aggregate performance seems arbitrary.
> >>>>> Though people should not use this kind of plot to compare machines,
> >>>>> they will. Now just change sockets to nodes and boom, suddenly the
> >>>>> machines compare very differently (since some systems have two
> >>>>> sockets per node and some one). Should cores be used instead? Or
> >>>>> hardware threads? Or cores scaled by their clock speed? Or hardware
> >>>>> floating point units (scaled by clock speed)? Or number of
> >>>>> instruction decoders? Power usage? Cost? etc. Maybe have a dynamic
> >>>>> plot where one can switch the independent variable by selecting
> >>>>> from a menu or moving the mouse over choices…?
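The sockets-vs-nodes point can be illustrated numerically. The machines and performance figures below are hypothetical, chosen only to show how the ranking can flip between one machine with two sockets per node and another with one:

```python
# Hypothetical machines (names and numbers are illustrative, not measured):
# (name, aggregate performance, total sockets, sockets per node)
systems = [
    ("TwoSocketNodes", 100.0, 200, 2),
    ("OneSocketNodes", 120.0, 200, 1),
]

for name, perf, sockets, sockets_per_node in systems:
    nodes = sockets // sockets_per_node
    print(f"{name}: {perf / sockets:.2f} per socket, {perf / nodes:.2f} per node")

# Per socket the one-socket machine looks faster (0.60 vs 0.50), but per
# node the two-socket machine wins (1.00 vs 0.60).
```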
> >>>>
> >>>  Yes, but how do we measure power? The actual amount being pulled
> >>> from the “wall socket”? Is that possible? Like the various hardware
> >>> features you mention, I wouldn’t trust anything the vendor says about
> >>> power.
> >>
> >> Assuming you run on more than one node, just use the total machine
> >> power that is used by Green500.  Granted, that is not ideal since it
> >> won't be measured for the same code, but at least there is a
> >> well-defined procedure for measuring it and hopefully it is at least
> >> roughly comparable between systems.
> >>
> >> But I agree that power is nearly as hard to get exactly right as
> >> anything else besides counting nodes.  That is about the only
> >> independent variable that seems unambiguous.
> >>
> >> Jeff
> >>
> >>>> The last suggestion is obviously the best one since it is the most
> >>>> general, but I think power is the best choice of independent
> >>>> variable. Most of the other hardware features are bad choices because
> >>>> it is very hard to determine some of these.  What is the clock speed
> >>>> of an Intel socket that does dynamic frequency scaling?  How do you
> >>>> count cores on a GPU?  NVIDIA's core-counting methodology is complete
> >>>> nonsense...
> >>>>
> >>>> Best,
> >>>>
> >>>> Jeff
> >>>>
> >>>>
> >>>>> On Jun 7, 2014, at 4:27 PM, Jed Brown <jed at jedbrown.org> wrote:
> >>>>>
> >>>>>> Mark Adams <mfadams at lbl.gov> writes:
> >>>>>>> We are pleased to announce that hpgmg.org and the associated
> >>>>>>> mailing list hpgmg-forum at hpgmg.org are officially available.
> >>>>>>
> >>>>>> Thanks, Mark.  To help kick off the discussion, I would like to
> >>>>>> call attention to our recent blog posts describing "results".
> >>>>>>
> >>>>>> The most recent announces the v0.1 release and includes a Kiviat
> >>>>>> diagram comparing the on-node performance characteristics of CORAL
> >>>>>> apps and several benchmarks running on Blue Gene/Q.
> >>>>>>
> >>>>>> https://hpgmg.org/2014/06/06/hpgmg-01/
> >>>>>>
> >>>>>>
> >>>>>> This earlier post shows performance on a variety of top machines:
> >>>>>>
> >>>>>> https://hpgmg.org/2014/05/15/fv-results/
> >>>>>>
> >>>>>>
> >>>>>> We are interested in better ways to collect and present the
> >>>>>> comparison data as well as any characteristics that you think are
> >>>>>> important.
> >>>>>>
> >>>>>>
> >>>>>> In addition to the general principles on the front page, some
> >>>>>> further rationale is given at:
> >>>>>>
> >>>>>> https://hpgmg.org/why/
> >>>>>>
> >>>>>> None of this is set in stone and we would be happy to discuss any
> >>>>>> questions or comments on this list.
> >>>>>>
> >>>>>>
> >>>>>> Please encourage any interested colleagues to subscribe to this
> >>>>>> list:
> >>>>>>
> >>>>>> https://hpgmg.org/lists/listinfo/hpgmg-forum
> >>>>>> _______________________________________________
> >>>>>> HPGMG-Forum mailing list
> >>>>>> HPGMG-Forum at hpgmg.org
> >>>>>> https://hpgmg.org/lists/listinfo/hpgmg-forum
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Jeff Hammond
> >>>> jeff.science at gmail.com
> >>>> http://jeffhammond.github.io/
> >>>
> >>
> >>
> >>
> >> --
> >> Jeff Hammond
> >> jeff.science at gmail.com
> >> http://jeffhammond.github.io/
> >
>
>

