[HPGMG Forum] Performance Versatility slides from Monday

John Shalf jshalf at lbl.gov
Sun Nov 30 03:44:33 UTC 2014

That is the problem.


On Nov 28, 2014, at 11:54 AM, Jeff Hammond <jeff.science at gmail.com> wrote:

> Good luck finding $ data on any of the big machines :-)
> Jeff 
> Sent from my iPhone
> On Nov 28, 2014, at 11:47 AM, Theodore Omtzigt <theo at stillwater-sc.com> wrote:
>> I would argue that, yes, the machine would be less efficient if it is placed in a less efficient environment: that is a metric looked at from a program-management point of view. As DOF/J and DOF/$ are more 'program' metrics than machine metrics, I don't think it is unreasonable to sample the overall program space.
>> Having said that, I do see the problem of a highly ambiguous number and the potential for undermining the whole HPGMG concept as people misinterpret the metric and then go off the deep end arguing unrelated issues.
>> I don't care one way or another, and DOF/J and DOF/$ can be derived by interested parties if they have a J or $ measurement, so nothing lost except a little elbow grease by not providing it as a standard metric.
>> Theo
>> On 11/28/2014 11:04 AM, Jeff Hammond wrote:
>>> This incorrectly assumes different facilities have the same amount of power and dollar overhead above the machine operating costs.
>>> Total operating budget reflects what it takes to execute the machine's mission, not what it takes to deliver HPGMG performance. If it takes more people to execute an NNSA mission than an NSF one, do you seriously think that should cause that machine to be less efficient by your metric?
>>> As unpleasant as it is for scientists to not know everything, you are all going to have to accept that in this case. This information is, like the smell of the breeze on Mars, not knowable in the foreseeable future.
>>> Jeff
>>> Sent from my iPhone
>>>> On Nov 28, 2014, at 6:57 AM, Theodore Omtzigt <theo at stillwater-sc.com> wrote:
>>>> On the DOF/J and DOF/$, could we start with the baseline of amortized
>>>> power and cost of the facility per year, then prorate it to the run-time
>>>> of the benchmark? Power and cost of the facility per year should be
>>>> easily obtained from the respective powers that be. It would accurately
>>>> reflect all the peripheral power (networking gear, storage gear,
>>>> security gear, lights, offices of sys admins and security, CRACs, etc.)
>>>> and peripheral costs (facility real-estate, backup power capex and
>>>> amortization, possible interest payments, etc.)
>>>> A supercomputer as a capital asset would need to be budgeted for in this
>>>> way anyway, so there is a whole data collection machinery already in
>>>> place to get you those numbers. From the application run team's
>>>> perspective it may look a bit funny, as the J and $ are not going to be
>>>> proportional to the number of cores or nodes, but it would be reflective
>>>> of the actual power and $ needed to deliver the performance.
>>>> By and large, this would be a good step forward to understand the
>>>> proportional dynamics of DOF/J and DOF/$.
>>>> Theo
>>>> _______________________________________________
>>>> HPGMG-Forum mailing list
>>>> HPGMG-Forum at hpgmg.org
>>>> https://hpgmg.org/lists/listinfo/hpgmg-forum
>> -- 
>> Dr. E. Theodore L. Omtzigt
>> CEO and Founder
>> Stillwater Supercomputing, Inc.
>> office US: EST (617) 314 6424, PST (415) 738 7387
>> mobile US: +1 916 296-7901
>> mobile EU: +31 6 292 000 50
>> 3941 Park Drive, Suite 20-354
>> El Dorado Hills, CA 95762
>> USA
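Theo's amortized-proration idea above can be sketched as follows. This is a minimal illustration, not an HPGMG definition: the function name, the facility figures, and the unit conversions are all assumptions made for the example.

```python
# Sketch of the proposed proration: amortize the facility's annual power
# and cost, then prorate by the benchmark's run-time. All names and
# numbers below are illustrative assumptions, not HPGMG-defined values.

SECONDS_PER_YEAR = 365 * 24 * 3600

def prorated_metrics(dof_solved, runtime_s,
                     facility_kwh_per_year, facility_cost_per_year):
    """Return (DOF/J, DOF/$) from facility-level annual figures."""
    joules_per_year = facility_kwh_per_year * 3.6e6   # 1 kWh = 3.6e6 J
    fraction = runtime_s / SECONDS_PER_YEAR           # share of the year
    energy_j = joules_per_year * fraction             # prorated energy
    cost = facility_cost_per_year * fraction          # prorated cost
    return dof_solved / energy_j, dof_solved / cost

# Hypothetical example: 1e12 DOF solved in a 600 s run at a facility
# drawing 50 GWh/yr with a $40M/yr total budget.
dof_per_j, dof_per_dollar = prorated_metrics(1e12, 600, 50e6, 40e6)
```

As Theo notes, the resulting J and $ are not proportional to the cores or nodes used; they fold in all the peripheral power and cost of the facility.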

