[HPGMG Forum] what is a compute resource

Theodore Omtzigt theo at stillwater-sc.com
Sun Jun 8 15:15:24 UTC 2014


For von Neumann architectures, the resources that actually support
compute state should be what quantifies a 'processor'.

In hyperthreaded architectures, the front end (decode to dispatch) is
duplicated and the back end (functional units) is time-sliced to better
utilize the silicon.

In many-core architectures, which have very long pipelines to memory,
the front-end state (IP, control registers, etc.) is duplicated so that
the hardware can multiplex many threads to fill the memory pipelines.
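
To make the distinction concrete, here is a minimal sketch (Linux-only,
assuming /proc/cpuinfo is available; hwloc is the more robust way to do
this) that reports the three counts people conflate when they say
'processor': sockets, physical cores, and hardware threads:

    def cpu_topology(path="/proc/cpuinfo"):
        sockets, cores = set(), set()
        hw_threads = 0
        block = {}

        def flush():
            nonlocal hw_threads, block
            if block:
                hw_threads += 1
                sock = block.get("physical id", "0")
                sockets.add(sock)
                cores.add((sock, block.get("core id", str(hw_threads))))
            block = {}

        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:          # a blank line ends one logical-CPU entry
                    flush()
                else:
                    key, _, value = line.partition(":")
                    block[key.strip()] = value.strip()
        flush()                       # in case the file lacks a trailing blank
        return {"sockets": len(sockets),
                "physical cores": len(cores),
                "hardware threads": hw_threads}

    print(cpu_topology())

On a 2-socket, 12-core-per-socket, 2-way SMT node this reports 2 / 24 /
48, which is exactly the ambiguity being discussed below.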

The reason this is important is that these architectures are designed to
be 'used' that way. Stated differently, the queuing that goes on in the
hardware only achieves optimal resource utilization when there is enough
concurrency available. Secondly, an architecture family is typically
designed to scale across a spectrum of hardware resource implementations,
so the only element that stays constant is the architecture's definition
of the compute abstraction. If you use that to report machine
organization, you give hardware designers much better guidance on what
to optimize for HPC.
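
To put a number on 'enough concurrency': by Little's law, the traffic
that must be kept in flight to saturate a memory pipeline is roughly
bandwidth times latency. A back-of-the-envelope sketch, where the
bandwidth and latency figures are purely illustrative rather than
measurements of any particular machine:

    # Little's law: concurrency (bytes in flight) = bandwidth * latency.
    def required_concurrency(bandwidth_gbs, latency_ns, line_bytes=64):
        """Outstanding cache-line requests needed to sustain the bandwidth."""
        bytes_in_flight = bandwidth_gbs * 1e9 * latency_ns * 1e-9
        return bytes_in_flight / line_bytes

    # Illustrative: 200 GB/s of bandwidth behind 300 ns of effective latency
    # needs on the order of a thousand outstanding cache lines.
    print(round(required_concurrency(200, 300)))   # ~938 lines in flight

Divide that by the number of outstanding misses a single hardware thread
can sustain and you get the thread count the hardware is asking the
application to supply.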

>> On Jun 7, 2014, at 7:01 PM, "Sam Williams" <swwilliams at lbl.gov> wrote:
>>
>> Nominally, there would be a paragraph describing the setup for a figure.
>> For this data, the x-axis is what is colloquially defined today as a NUMA
>> node on the Cray machines.  There is one process per NUMA node.  Thus, for
>> all of these machines, there is one process per chip.
>> K = 1 process per compute node, 8 threads per process
>> BGQ = 1 process per compute node, 64 threads per process
>> Edison = 2 processes per compute node, 12 threads per process
>> Peregrine = 2 processes per compute node, 12 threads per process
>> Hopper = 4 processes per compute node, 6 threads per process
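
As a quick sanity check on these configurations, the hardware contexts
each setup occupies per compute node is just processes per node times
threads per process (the numbers below are copied from the list above):

    configs = {          # machine: (processes per node, threads per process)
        "K":         (1, 8),
        "BGQ":       (1, 64),
        "Edison":    (2, 12),
        "Peregrine": (2, 12),
        "Hopper":    (4, 6),
    }
    for machine, (procs, threads) in configs.items():
        print(f"{machine:10s} {procs} x {threads:2d} = {procs*threads:3d} threads/node")
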
>>
>>
>>
>>
>> On Jun 7, 2014, at 3:51 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>> I submit that even nodes or "sockets" is actually not completely
>>> unambiguous.
>>> On Jun 7, 2014, at 5:39 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>> On Sat, Jun 7, 2014 at 3:35 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>>> On Jun 7, 2014, at 5:31 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>>>> On Sat, Jun 7, 2014 at 3:26 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>>>>> The use of multicore processor == sockets as the independent variable
>>>>>>> in the plot of aggregate performance seems arbitrary. Though people
>>>>>>> should not use this kind of plot to compare machines, they will. Now
>>>>>>> just change sockets to nodes and boom, suddenly the machines compare
>>>>>>> very differently (since some systems have two sockets per node and
>>>>>>> some one). Should cores be used instead? Or hardware threads? Or cores
>>>>>>> scaled by their clock speed? Or hardware floating point units (scaled
>>>>>>> by clock speed)? Or number of instruction decoders? Power usage? Cost?
>>>>>>> etc., etc. Maybe have a dynamic plot where one can switch the
>>>>>>> independent variable by selecting from a menu or moving the mouse over
>>>>>>> choices ...
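
One way to realize that 'dynamic plot' is to keep the aggregate result
alongside every resource count and let the viewer pick the divisor. A
minimal sketch, where the machine names and all counts are placeholders
rather than benchmark data:

    machines = {
        # perf is an aggregate figure of merit (e.g. DOF/s); counts per machine
        "MachineA": {"perf": 1.0e12, "nodes": 1000, "sockets": 2000,
                     "cores": 24000, "hw_threads": 48000, "watts": 2.0e6},
        "MachineB": {"perf": 8.0e11, "nodes": 500,  "sockets": 500,
                     "cores": 32000, "hw_threads": 128000, "watts": 1.5e6},
    }

    def normalized(metric):
        """Performance per unit of the selected resource."""
        return {name: m["perf"] / m[metric] for name, m in machines.items()}

    for metric in ("nodes", "sockets", "cores", "hw_threads", "watts"):
        print(metric, normalized(metric))
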
>>>>> Yes, but how do we measure power? The actual amount being pulled from
>>>>> the "wall socket"? Is that possible? Like the various hardware features
>>>>> you mention, I wouldn't trust anything the vendor says about power.
>>>> Assuming you run on more than one node, just use the total machine
>>>> power that is used by Green500.  Granted, that is not ideal since it
>>>> won't be measured for the same code, but at least there is a
>>>> well-defined procedure for measuring it and hopefully it is at least
>>>> roughly comparable between systems.
>>>>
>>>> But I agree that power is nearly as hard to get exactly right as
>>>> anything else besides counting nodes.  That is about the only
>>>> independent variable that seems unambiguous.
>>>>
>>>> Jeff
>>>>
>>>>>> The last suggestion is obviously the best one since it is the most
>>>>>> general, but I think power is the best choice of independent variable.
>>>>>> Most of the other hardware features are bad choices because it is very
>>>>>> hard to determine some of these.  What is the clock speed of an Intel
>>>>>> socket that does dynamic frequency scaling?  How do you count cores on
>>>>>> a GPU?  NVIDIA's core-counting methodology is complete nonsense...
>>>>>>
>>>>>> Best,
>>>>>>
>>>>>> Jeff
>>>>>>
>>>>>>
>>>>>>> On Jun 7, 2014, at 4:27 PM, Jed Brown <jed at jedbrown.org> wrote:
>>>>>>>
>>>>>>>> Mark Adams <mfadams at lbl.gov> writes:
>>>>>>>>> We are pleased to announce that hpgmg.org and the associated mailing
>>>>>>>>> list hpgmg-forum at hpgmg.org is officially available.
>>>>>>>> Thanks, Mark.  To help kick off the discussion, I would like to call
>>>>>>>> attention to our recent blog posts describing "results".
>>>>>>>>
>>>>>>>> The most recent announces the v0.1 release and includes a Kiviat
>>>>>>>> diagram comparing the on-node performance characteristics of CORAL
>>>>>>>> apps and several benchmarks running on Blue Gene/Q.
>>>>>>>>
>>>>>>>> https://hpgmg.org/2014/06/06/hpgmg-01/
>>>>>>>>
>>>>>>>>
>>>>>>>> This earlier post shows performance on a variety of top machines:
>>>>>>>>
>>>>>>>> https://hpgmg.org/2014/05/15/fv-results/
>>>>>>>>
>>>>>>>>
>>>>>>>> We are interested in better ways to collect and present the
>>>>>>>> comparison data as well as any characteristics that you think are
>>>>>>>> important.
>>>>>>>>
>>>>>>>>
>>>>>>>> In addition to the general principles on the front page, some further
>>>>>>>> rationale is given at:
>>>>>>>>
>>>>>>>> https://hpgmg.org/why/
>>>>>>>>
>>>>>>>> None of this is set in stone and we would be happy to discuss any
>>>>>>>> questions or comments on this list.
>>>>>>>>
>>>>>>>>
>>>>>>>> Please encourage any interested colleagues to subscribe to this list:
>>>>>>>> https://hpgmg.org/lists/listinfo/hpgmg-forum
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Jeff Hammond
>>>>>> jeff.science at gmail.com
>>>>>> http://jeffhammond.github.io/
>>>>
>>>>
>>>> --
>>>> Jeff Hammond
>>>> jeff.science at gmail.com
>>>> http://jeffhammond.github.io/

-- 
*Dr. E. Theodore L. Omtzigt*
CEO and Founder
Stillwater Supercomputing, Inc.
office US: EST (617) 314 6424, PST (415) 738 7387
mobile US: +1 916 296-7901
mobile EU: +31 6 292 000 50
3941 Park Drive, Suite 20-354
El Dorado Hills, CA 95762
USA