[HPGMG Forum] Performance Versatility slides from Monday
jeff.science at gmail.com
Fri Nov 28 16:04:01 UTC 2014
This incorrectly assumes that different facilities have the same power and dollar overhead above the machine's operating costs.
The total operating budget reflects not what it takes to deliver HPGMG performance, but rather the machine's mission. If it takes more people to execute an NNSA mission than an NSF one, do you seriously think that should make that machine less efficient by your metric?
As unpleasant as it is for scientists not to know everything, you are all going to have to accept it in this case. This information is, like the smell of the breeze on Mars, not knowable in the foreseeable future.
> On Nov 28, 2014, at 6:57 AM, Theodore Omtzigt <theo at stillwater-sc.com> wrote:
> On the DOF/J and DOF/$, could we start with the baseline of amortized
> power and cost of the facility per year, then prorate it to the run-time
> of the benchmark? The annual power and cost of the facility should be
> easily obtainable from the respective powers that be. They would accurately
> reflect all the peripheral power (networking gear, storage gear,
> security gear, lights, offices of sysadmins and security, CRACs, etc.)
> and peripheral costs (facility real estate, backup-power capex and
> amortization, possible interest payments, etc.).
> A supercomputer, as a capital asset, would need to be budgeted for in this
> way anyway, so a whole data-collection machinery is already in
> place to get you those numbers. From the application run team's
> perspective this may look a bit funny, as the J and $ will not be
> proportional to the number of cores or nodes, but it would reflect
> the actual power and dollars needed to deliver the performance.
> By and large, this would be a good step toward understanding the
> proportional dynamics of DOF/J and DOF/$.
> HPGMG-Forum mailing list
> HPGMG-Forum at hpgmg.org
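The prorating step in the quoted proposal amounts to simple arithmetic; a minimal sketch follows (the function name and all figures are hypothetical illustrations, not measurements from any real facility):

```python
# Sketch of the quoted proposal: take a facility's amortized annual
# power draw and cost, prorate the cost to the benchmark's run time,
# and report DOF/J and DOF/$ for that run. All numbers are made up.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def prorated_metrics(dof, runtime_s, facility_power_w, annual_cost_usd):
    """Return (dof_per_joule, dof_per_dollar) for one benchmark run.

    facility_power_w: average whole-facility draw in watts, including
        networking, storage, security gear, lights, CRACs, etc.
    annual_cost_usd: fully amortized yearly facility cost (real estate,
        backup-power capex and amortization, interest payments, ...).
    """
    energy_j = facility_power_w * runtime_s
    cost_usd = annual_cost_usd * (runtime_s / SECONDS_PER_YEAR)
    return dof / energy_j, dof / cost_usd

# Example: a 10 MW facility costing $100M/year, running a 1e12-DOF
# benchmark for one hour.
dof_per_j, dof_per_usd = prorated_metrics(1e12, 3600, 10e6, 100e6)
```

As the quoted message notes, the resulting J and $ are charged at the whole-facility rate, so they will not scale with the number of cores or nodes the run actually used.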