In foggy Santa Barbara today, a group of scientists and industry leaders gathered to discuss new technologies, energy-efficiency policies, and the university research environment contributing to both.
As we all know, 80% of energy needs are now supplied by burning hydrocarbons (oil, coal, and natural gas).
Fred Chong, head of the Computing Solutions Group at the Santa Barbara Institute of Energy Efficiency, laid out the plans for his team of more than 20 researchers and how they see their research mandate.
“We want more than
Goal #1 is data centers that are 10x more efficient within the next five years, achieved by making energy use proportional to load. The pathway: virtualization and consolidation software, energy-aware data management and network protocols, and new server architectures.
In a description of the milestones achieved at Google, Luiz Barroso talked about energy-proportional computing. The idea is simple: no work, no power consumed; some work, some power consumed; lots of work, lots of power consumed. Since we know that processors are often idle, energy use could be halved, and so could peak data center power. We get there with better-designed, energy-proportional components. He concluded that energy proportionality should be a first-order design goal, because data centers have different needs than handhelds.
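Barroso's point can be illustrated with a toy model (my own sketch; the wattage numbers are illustrative assumptions, not Google's figures). A conventional server draws a large fraction of its peak power even when idle, while an ideal energy-proportional server draws power in line with its load:

```python
# Toy comparison of a conventional vs. an energy-proportional server.
# All numbers are illustrative assumptions, not measured values.

def conventional_power(load, peak_w=300.0, idle_fraction=0.6):
    """Conventional server: draws ~60% of peak power even at zero load."""
    idle_w = peak_w * idle_fraction
    return idle_w + (peak_w - idle_w) * load

def proportional_power(load, peak_w=300.0):
    """Ideal energy-proportional server: power scales linearly with load."""
    return peak_w * load

# Servers often sit well below full utilization.
load = 0.3
print(conventional_power(load))  # about 216 W
print(proportional_power(load))  # about 90 W, less than half
```

At 30% load, the proportional design draws less than half the power of the conventional one, which is exactly the halving Barroso describes.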
Feng Zhao, Assistant Managing Director of Microsoft Research Asia, began with "The Power Spectrum." Computing on a dime (10⁻² watts) sits nine orders of magnitude in power below computing in a warehouse (10⁷ watts), with tradeoffs in energy and performance across the scale. He talked about an ongoing Microsoft project called "Data Center Genome," focused on software and hardware sensors. The 10,000 wireless sensors manufactured for Microsoft are deployed across MS data centers to save energy and improve operational efficiency by collecting, archiving, and understanding operations data.
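The "collect, archive, understand" loop can be sketched in a few lines (a hypothetical example; the rack names, temperatures, and alert threshold are my assumptions, not Data Center Genome details):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings from wireless temperature sensors:
# (rack_id, temperature in Celsius)
readings = [
    ("rack-01", 24.5), ("rack-01", 25.1),
    ("rack-02", 31.8), ("rack-02", 32.4),
    ("rack-03", 22.0), ("rack-03", 22.6),
]

# Aggregate per-rack averages to spot hot spots that waste cooling energy.
by_rack = defaultdict(list)
for rack, temp in readings:
    by_rack[rack].append(temp)

HOT_THRESHOLD_C = 30.0  # assumed alert threshold
for rack, temps in sorted(by_rack.items()):
    avg = mean(temps)
    status = "HOT" if avg > HOT_THRESHOLD_C else "ok"
    print(f"{rack}: {avg:.1f} C {status}")
```

The real system operates at far larger scale, but the principle is the same: turn raw sensor streams into operational knowledge the facility can act on.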
Tens of millions of concurrent users are on Windows MSN Messenger Connection Services at any time. The backend servers, in clusters of 60, handle authentication, address books, and more concurrently before a connection is established. This means server loads fluctuate over time, and the system must handle peak load worldwide. Provisioning those machines for peak load, or parking them in very low-power states, is the goal, to yield significant energy savings. "It is tricky to do. Repurposing and consolidating workloads is difficult. While average utilization is predictive, black swan events, where everyone goes online at once, require a robust buffer and require load-balancing and load-skewing strategies. We can save 30% in energy when we get the algorithms right," Feng Zhao said.
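A drastically simplified version of that provisioning logic, keep enough servers at full power for the predicted load plus a buffer for surges, and let the rest sleep, might look like this (my sketch; the cluster size of 60 is from the talk, but the per-server capacity and buffer factor are assumptions):

```python
import math

CLUSTER_SIZE = 60                  # backend servers per cluster (from the talk)
CONNECTIONS_PER_SERVER = 500_000   # assumed capacity per server
SPIKE_BUFFER = 1.25                # assumed 25% headroom for "black swan" surges

def servers_to_keep_active(predicted_connections):
    """Return how many servers stay at full power; the rest can sleep."""
    needed = math.ceil(predicted_connections * SPIKE_BUFFER
                       / CONNECTIONS_PER_SERVER)
    # Always keep at least one server up, never exceed the cluster.
    return min(max(needed, 1), CLUSTER_SIZE)

# Off-peak: only part of the cluster needs to stay awake.
active = servers_to_keep_active(10_000_000)
print(active, "active,", CLUSTER_SIZE - active, "sleeping")  # 25 active, 35 sleeping
```

The hard part Zhao alludes to is not this arithmetic but the prediction and the skewing: steering new connections onto a subset of servers so the others can actually drain and power down.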
Microsoft's "Joule Meter" details software activity and hardware components in an energy profile. Different models have been tried for different machines, involving trace collection and resulting profiles for estimation errors, application energy, component energy, and application/component breakdowns. Having these models makes virtualization strategies more successful.
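One common simple form for this kind of profile is a linear model over component utilizations (my sketch of the general technique, not Microsoft's actual model; the coefficients are illustrative and would normally be fitted from collected traces):

```python
# Simple linear power model of the kind used for software energy profiling:
#   P = P_idle + a_cpu*u_cpu + a_disk*u_disk + a_net*u_net
# Coefficients are illustrative assumptions; in practice they are fitted
# per machine from trace collection.

MODEL = {
    "p_idle_w": 60.0,   # baseline power when idle
    "a_cpu": 90.0,      # added watts at 100% CPU utilization
    "a_disk": 10.0,     # added watts at 100% disk utilization
    "a_net": 5.0,       # added watts at 100% network utilization
}

def estimate_power(u_cpu, u_disk, u_net, model=MODEL):
    """Estimate instantaneous power draw from component utilizations (0-1)."""
    return (model["p_idle_w"]
            + model["a_cpu"] * u_cpu
            + model["a_disk"] * u_disk
            + model["a_net"] * u_net)

def estimate_energy_joules(samples, interval_s=1.0):
    """Integrate estimated power over utilization samples taken every interval_s."""
    return sum(estimate_power(*s) for s in samples) * interval_s

# Attribute energy to an application from its per-second utilization trace.
trace = [(0.5, 0.1, 0.05), (0.8, 0.0, 0.10), (0.2, 0.0, 0.00)]
print(estimate_energy_joules(trace), "joules over 3 seconds")
```

With a per-application model like this, a scheduler can compare the energy cost of packing workloads onto fewer machines, which is what makes the virtualization strategies more successful.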
For Feng Zhao, making energy a "first-class citizen in design" means considering energy complexity, acknowledging that there are many opportunities to exploit relevant power knobs at multiple layers of systems and apps, and thinking holistically across workload, performance, and energy.
Have a green day!