I would argue that, yes, the machine would appear less efficient if it
is placed in a less efficient environment, and more efficient in a more
efficient one: that is the metric viewed from a program-management
point of view. As DOF/J and DOF/$ are more 'program' metrics than
machine metrics, I don't think it is unreasonable to sample the overall
program space.
Having said that, I do see the problem of a highly ambiguous number
and the potential of undermining the whole HPGMG concept if people
misinterpret the metric and then go off the deep end arguing about it.
I don't care one way or another: DOF/J and DOF/$ can be derived by
interested parties if they have a J or $ measurement, so nothing is
lost, except a little elbow grease, by not providing it as a standard
metric.
On 11/28/2014 11:04 AM, Jeff Hammond wrote:
This incorrectly assumes different facilities have the same amount of power and dollar overhead above the machine operating costs.
The total operating budget reflects the machine's mission, not what it takes to deliver HPGMG performance. If it takes more people to execute an NNSA mission than an NSF one, do you seriously think that should make that machine less efficient by your metric?
As unpleasant as it is for scientists to not know everything, you all are going to have to accept that in this case. This information is, like the smell of the breeze on Mars, not knowable in the foreseeable future.
On Nov 28, 2014, at 6:57 AM, Theodore Omtzigt <email@example.com> wrote:
On the DOF/J and DOF/$, could we start with the baseline of amortized
power and cost of the facility per year, then prorate it to the run-time
of the benchmark? The power and cost of the facility per year should be
easily obtainable from the respective powers that be. It would accurately
reflect all the peripheral power (networking gear, storage gear,
security gear, lights, offices of sys admins and security, CRACs, etc.)
and peripheral costs (facility real-estate, backup power capex and
amortization, possible interest payments, etc.)
A supercomputer as a capital asset would need to be budgeted for in this
way anyway, so there is a whole data collection machinery already in
place to get you those numbers. From the application run team's
perspective it may look a bit funny, as the J and $ are not going to be
proportional to the number of cores or nodes, but it would be reflective
of the actual power and $ needed to deliver the performance.
By and large, this would be a good step forward toward understanding
the proportional dynamics of DOF/J and DOF/$.
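
To make the proration concrete, here is a minimal sketch in Python.
Every name and figure in it is hypothetical, invented purely to
illustrate the arithmetic; the real inputs would come from the
facility's budget and power records:

    # Sketch of the proposed proration; all figures below are made up.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def prorated_metrics(dof, runtime_s, annual_energy_j, annual_cost_usd):
        """Prorate annual facility energy and cost to the benchmark
        run-time, then form DOF/J and DOF/$."""
        fraction = runtime_s / SECONDS_PER_YEAR
        energy_j = annual_energy_j * fraction   # Joules attributed to the run
        cost_usd = annual_cost_usd * fraction   # dollars attributed to the run
        return dof / energy_j, dof / cost_usd

    # Example: a 60 s HPGMG run solving 1e12 DOF at a facility that
    # draws 10 MW year-round and costs $50M/year in total to operate.
    dof_per_j, dof_per_usd = prorated_metrics(
        dof=1e12, runtime_s=60.0,
        annual_energy_j=10e6 * SECONDS_PER_YEAR,
        annual_cost_usd=50e6)
    print("DOF/J = %.3g, DOF/$ = %.3g" % (dof_per_j, dof_per_usd))

Note that the J and $ scale only with run-time here, not with the
number of nodes used, which is exactly the "funny-looking" behavior
mentioned above.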
--
Dr. E. Theodore L. Omtzigt
CEO and Founder
Stillwater Supercomputing, Inc.
office US: EST (617) 314 6424, PST (415) 738 7387
mobile US: +1 916 296-7901
mobile EU: +31 6 292 000 50
3941 Park Drive, Suite 20-354
El Dorado Hills, CA 95762