Energy Implications of Economizer Use in California Data Centers

Arman Shehabi 1, Srirupa Ganguly 2,*, Kim Traber 3, Hillary Price 3, Arpad Horvath 1, William W. Nazaroff 1, Ashok J. Gadgil 2

1 Department of Civil and Environmental Engineering, University of California, Berkeley, CA 94720
2 Indoor Environment Department, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
3 Rumsey Engineers, Oakland, CA 94607
* Author to whom correspondence should be addressed. E-mail: srg

Abstract

In the US, data center operations currently account for about 61 billion kWh/y of electricity consumption, which is more than 1.5% of total demand. Data center energy consumption is rising rapidly, having doubled in the last five years. A substantial portion of data-center energy use is dedicated to removing the heat generated by the computer equipment. Data-center cooling load might be met with substantially reduced energy consumption through the use of air-side economizers. This energy-saving measure, however, has been shown to expose servers to an order-of-magnitude increase in indoor particle concentrations, with an unquantified increase in the risk of equipment failure. An alternative energy-saving option is the use of water-side economizers, which do not affect the indoor particle concentration but require additional mechanical equipment and tend to be less beneficial in high-humidity areas. Published research has presented only qualitative benefits of economizer use, providing industry with inadequate information on which to base design decisions. Energy savings depend on local climate and the specific building-design characteristics. In this paper, based on building energy models, we report energy savings for air-side and water-side economizer use in data centers in several climate zones in California. Results show that in terms of energy savings, air-side economizers consistently outperform water-side economizers, though the performance difference varies by location. Model results also show that conventional humidity restrictions must be relaxed or removed to gain the energy benefits of air-side economizers.

Introduction

Data centers are computing facilities that house the electronic equipment used for data processing, networking, and storage. Rapid growth in computational demand emerging from various sectors of the economy is causing strong rates of increase in servers and IT-related hardware (IDC 2007). Server performance has doubled every two years since 1999, leading to increasingly higher densities of heat dissipation within data centers (Belady 2007). A substantial proportion of energy consumption in data centers is dedicated to the cooling load associated with electronic power dissipation (Tschudi et al. 2003). A recent study estimates that US data centers account for 61 billion kWh, or 1.5% of the nation's annual electricity consumption (US DOE 2007a). This corresponds to an electricity bill of approximately $4.5 billion in 2006 (EPA 2007). The environmental impact is substantial because 70% of the electricity in the US is generated in power plants that burn fossil fuel (EIA 2007). Improved data center cooling technologies have the potential to provide significant energy savings. Cost savings and environmental benefits might also accrue.

A typical data center consists of rows of tall (2 m) cabinets or racks in which the servers, data storage, and networking equipment are vertically arrayed. The cooling of data-center equipment is accomplished using computer room air conditioners (CRACs), which supply cold air to a raised-floor plenum beneath the racks. The CRAC system air handler is placed on the data center floor, while chilled water is transported from compressor-based chillers to the CRAC cooling coils. More efficient cooling systems employ low outside air temperatures to reduce chiller load. Cooling towers that use ambient air to directly cool or precool the chilled water are known as water-side or fluid-side economizers. This type of system has been claimed to cut cooling-energy costs by as much as 70% during economizer operation (ASHRAE HVAC Fundamentals Handbook 2005). Based on local weather data in San Jose, water-side economizers can be used for more than one-third of the year (PG&E 2006). An alternate data center arrangement uses air-handling units (AHUs) and an air-side economizer. Such systems directly provide outdoor air for cooling whenever the temperature of outside air is lower than the set-point for return-air temperature in the data center. In San Francisco's cool climate, outside air could contribute to some level of air-side cooling for nearly all hours of the year (Syska Hennessy 2007). The use of air-side economizers brings with it an associated concern about contamination, including moisture from humidity, that may threaten equipment reliability. Deliquescent sulfate, nitrate, and chloride salts in a humid environment (above 40% relative humidity) can cause corrosion, accumulate and become conductive, and may lead to electrical short-circuiting (Rice et al. 1981; Sinclair et al. 1990; Litvak et al. 2000).

In this paper, the energy implications of a data center using a CRAC system are compared with those of alternative cooling systems using air-side or water-side economizers for five different California climate zones. The modeling results and discussion focus on understanding the energy implications of both types of economizers and their effectiveness in different climate zones. The equipment reliability concerns associated with air-side economizers are acknowledged to be important, but addressing them is beyond the scope of the present paper.

Methods

Data center design scenarios

Energy-use simulations were performed for three different data center HVAC design scenarios (Figure 1). The baseline case considers a data center using conventional "computer room air conditioning" (CRAC) units. In this scenario, CRAC units are placed directly on the computer room floor. Air enters the top of a CRAC unit, passes across the cooling coils, and is then discharged to the underfloor plenum. Perforations in the floor tiles in front of the server racks allow the cool air to exit from the plenum into the data-center room. Fans within the computer servers draw the conditioned air upward and through the servers to remove equipment-generated heat. After exiting the backside of the server housing, the warm air rises and is transported to the intake of a CRAC unit. Most air circulation in the baseline scenario is internal to the data center. A small amount of air is supplied through a rooftop AHU to positively pressurize the room and to supply outside air for occupants. Cooling is provided by a water-cooled chiller plant. Refrigerant in the chillers is used to cool water through heat exchangers at the evaporator. The chilled water is then piped to the CRAC units on the data center floor. Waste heat from the chiller refrigerant is removed by water through heat exchangers in the condenser. Condenser water is piped from the cooling towers, which cool the water through interaction with the outside air. This baseline design is common to most mid- to large-size data centers (Tschudi et al. 2003; Rumsey 2005; Syska Hennessy 2007).

The water-side economizer (WSE) scenario assumes a CRAC unit layout similar to that of the baseline case, except that additional heat exchangers are installed between the condenser water from the cooling towers and the chilled water supplied to the CRAC units. Under appropriate weather conditions, the cooling towers can cool the condenser water enough to cool the chilled water for the CRAC units directly, without operating the chiller plant. The CRAC units and chiller plant are assumed to be the same as in the baseline scenario.

The air-side economizer (ASE) scenario requires a different type of air delivery than is typically found in a data center with conventional CRAC units. AHUs are placed outside of the data center room, commonly on the rooftop, and air is then sent to and from the computer racks through ducts. A ducted air delivery system creates greater air resistance than a conventional CRAC unit layout, though this system better prevents cold and warm air from unintentionally mixing within the data center. When the outside air temperature is equal to or below the temperature of the air supplied to cool the servers, the AHU can directly draw outside air into the data center and exhaust all of the return air after it has passed across the computer servers. The movement of 100% outside air through the system can require more fan energy than the baseline case, as the economizer design requires more ducting, which increases air resistance through the system. However, during this 100% outside-air mode, cooling is provided without operating the chiller, chilled water pumps, condenser water pumps, or cooling tower fans. Outside air is also provided instead of recirculated air whenever the outside air temperature is greater than the supply air temperature but lower than that of the return air. Under this condition the chiller must operate, but the cooling required of the chiller is less than in a case with complete recirculation.
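As a concrete illustration of the operating logic described above, the short Python sketch below selects an hourly cooling mode from outside-air conditions and set-point temperatures. It is a simplified illustration rather than the model used in this study; the function names, the example supply-air temperature, and the cooling-tower approach temperature are assumptions introduced here.

```python
def ase_mode(t_outside_f, t_supply_f, t_return_f):
    """Air-side economizer: decide how much outside air to use this hour.

    Full economizer when outside air is at or below the supply set-point;
    partial economizer (chiller runs, but at reduced load) when outside air
    falls between the supply and return temperatures; otherwise recirculate.
    """
    if t_outside_f <= t_supply_f:
        return "100% outside air, chiller plant off"
    elif t_outside_f < t_return_f:
        return "outside air with supplemental chiller cooling"
    else:
        return "full recirculation, chiller provides all cooling"


def wse_active(t_wetbulb_f, t_chw_setpoint_f, approach_f=7.0):
    """Water-side economizer: the towers alone can meet the chilled-water
    set-point only when the outdoor wet-bulb temperature is low enough.
    The 7 F tower approach is a placeholder assumption, not a value from
    the paper."""
    return t_wetbulb_f + approach_f <= t_chw_setpoint_f


# Example: a mild hour with a 52 F chilled-water set-point (WSE scenario)
print(ase_mode(t_outside_f=58, t_supply_f=60, t_return_f=75))
print(wse_active(t_wetbulb_f=50, t_chw_setpoint_f=52))
```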

Energy modeling protocol

For each design scenario, the model calculations assume a 30,000 ft2 (2800 m2) data center with an internal heat density of approximately 80 W/ft2 (0.86 kW/m2; 2.4 MW total). This size and power density are characteristic of data centers evaluated in previous studies (Shehabi et al. 2008; Greenberg et al. 2006; Tschudi et al. 2003). The size of data centers varies greatly; 30,000 ft2 is within the largest industry size classification, which is responsible for most servers in the US (IDC 2007). Power density in data centers is rapidly increasing (Uptime Institute 2000), and a power density of 80 W/ft2 is currently considered low- to mid-range (Rumsey 2008). Basic properties of the modeled data center for all three scenarios are summarized in Table 1. Energy demand is calculated as the sum of the loads generated by servers, chiller use, fan operation, transformer and uninterruptible power supply (UPS) losses, and building lighting. The chiller term encompasses the coolant compressor, chilled water pumps, condensing water pumps, humidification pumps, and cooling-tower fans. Energy demands for servers, UPS, and lighting are constant, unaffected by the different design scenarios, but are included to determine total building-energy use. The base case and WSE scenarios assume the conventional humidity restrictions recommended by ASHRAE (ASHRAE 2005). The ASE scenario assumes no humidity restriction, an adjustment required to gain ASE benefits and typical of ASE implementations (Rumsey 2008). Air-side economizers also require a different air distribution design, and the fan parameters associated with each design scenario are listed in Table 2. The properties of other pumps and fans throughout the HVAC system remain constant for all three scenarios. Values are from previous data-center energy analyses (Rumsey 2008; Rumsey 2005).
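The accounting just described can be expressed as a sum of end-use terms, with the server, UPS, and lighting loads fixed across scenarios and only the chiller and fan terms responding to climate and design. The minimal sketch below records that bookkeeping; the class and field names are placeholders introduced here, not part of the published protocol.

```python
from dataclasses import dataclass

@dataclass
class AnnualEnergy:
    """Annual building energy by end use, in kWh/y."""
    servers: float      # fixed across scenarios
    ups_losses: float   # transformer + UPS losses, fixed across scenarios
    lighting: float     # fixed across scenarios
    chiller: float      # chiller plant incl. pumps and tower fans; climate-dependent
    fans: float         # air handling; depends on design scenario and climate

    def total(self) -> float:
        """Total building energy: fixed loads plus the scenario-dependent HVAC terms."""
        return self.servers + self.ups_losses + self.lighting + self.chiller + self.fans
```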

The energy modeling approach used in this study applies a previously used protocol (Rumsey 2008; Rumsey 2005) and is based on a combination of fundamental HVAC sizing equations that apply equipment sizes and efficiencies observed through professional experience. Building energy modeling is typically performed using energy models such as DOE-2, which simultaneously model heat sources and losses within the building and through the building envelope. However, models such as DOE-2 are not designed to incorporate some of the HVAC characteristics unique to data centers. Also, data centers have floor-area-weighted power densities that are 15-100 times higher than those of typical commercial buildings (Greenberg et al. 2006). This allows accurate modeling of data-center energy use to focus exclusively on the internal heat load and the thermal properties of outdoor air entering the building. This is the approach taken in this study, as heat generated by data center occupants and heat transfer through the building envelope are negligible relative to the heat produced by servers. The building envelope may influence the cooling load in low-density data centers housed in older buildings that have minimal insulation. Evaluating this building type is worthy of exploration, but the required analysis is more complex and outside the scope of the present paper.

Both air-side and water-side economizers are designed to allow the chiller to shut down, or to reduce the chiller energy load, under appropriate weather conditions. Less overall energy is required for operation when the chiller load is reduced, but chiller efficiency is compromised. The changes in chiller efficiency used in this analysis are shown in Figure 2, representing a water-cooled centrifugal chiller with a capacity of 300 tons and a condenser water temperature of 80 °F. A chilled water temperature of 45 °F, which is standard practice for data center operation, is used in the base case and ASE scenarios. The WSE scenario uses a chilled water temperature of 52 °F, which is common when using water-side economizers. This increases the needed airflow rates but allows greater use of the water-side economizers. The curves are based on the DOE-2.1E software model and apply coefficients specified in the Nonresidential Alternative Calculation Method (ACM) Approval Manual for the 2005 Building Energy Efficiency Standards for Residential and Nonresidential Buildings (CEC 2005). Annual data center energy use is evaluated for each of the three configuration scenarios, assuming that the data center building is located in each of the five cities shown in Figure 3. Weather conditions at each city are based on hourly DOE-2.1E weather data for California climate zones (CEC 2005).
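DOE-2-style chiller models represent part-load behavior with regression curves that adjust full-load power as the part-load ratio falls, which is how the efficiency penalty shown in Figure 2 enters the calculation. The sketch below shows the general form of such a curve; the coefficients and the full-load efficiency are illustrative placeholders, not the ACM values used for Figure 2.

```python
def chiller_kw(load_tons, capacity_tons=300.0, full_load_kw_per_ton=0.55,
               plr_coeffs=(0.17, 0.58, 0.25)):
    """Estimate chiller electric demand (kW) at a given cooling load.

    Uses a DOE-2-style quadratic part-load curve:
        power_fraction = a + b*PLR + c*PLR**2
    where PLR is the part-load ratio. All numeric defaults are
    illustrative assumptions, not values from the paper or the ACM manual.
    """
    if load_tons <= 0:
        return 0.0
    plr = min(load_tons / capacity_tons, 1.0)
    a, b, c = plr_coeffs
    power_fraction = a + b * plr + c * plr ** 2
    return power_fraction * full_load_kw_per_ton * capacity_tons


# Efficiency (kW per ton delivered) worsens as part load drops:
for tons in (300, 150, 75):
    print(tons, round(chiller_kw(tons) / tons, 3), "kW/ton")
```

Evaluated this way, a chiller serving roughly half its rated load draws noticeably more kW per ton than one running near full load, which is the part-load penalty behind the San Francisco WSE result discussed below.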

Results and discussion

Results from each scenario modeled are presented in Table 3 as a "performance ratio," which equals the total building energy divided by the energy required to operate the computer servers. A lower value of the performance ratio implies better energy utilization by the HVAC system. The performance ratio for the base case is 1.55 and, as expected, is the same for all the cities analyzed, since the operation of this design is practically independent of outdoor weather conditions. The base case performance ratio is better than that of the current stock of data centers in the US (EPA 2007; Koomey 2007) because the base case represents newer data centers with water-cooled chillers, which are more efficient than the air-cooled chillers and direct expansion (DX) cooling systems found in older data centers. The performance ratios for the ASE and WSE scenarios show that air-side economizers consistently provide savings relative to the base case, though the difference in savings between the two scenarios varies. It is important to note that even small changes in the performance ratio result in significant savings, given the large amount of energy used in data centers. For example, reducing the performance ratio of the model data center in San Jose from 1.55 to 1.44 represents a savings of about 1.9 million kWh/y, which corresponds to a cost savings of more than $130,000/y (assuming $0.07/kWh).
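The San Jose example follows directly from the definition of the performance ratio: the savings equal the ratio reduction multiplied by the annual server energy. A quick back-of-envelope check, using the 30,000 ft2 floor area and the 584 kWh/ft2 annual server energy intensity reported with Figure 4 below:

```python
# Annual server energy for the modeled facility
server_kwh = 584 * 30_000          # kWh/ft2 per year * ft2 = 17.52 million kWh/y

# Savings from lowering the performance ratio (total energy / server energy)
ratio_base, ratio_ase = 1.55, 1.44
saved_kwh = (ratio_base - ratio_ase) * server_kwh   # ~1.9 million kWh/y
saved_usd = saved_kwh * 0.07                        # at $0.07/kWh

print(f"{saved_kwh/1e6:.2f} million kWh/y, ${saved_usd:,.0f}/y")
```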

Figure 4 shows the disaggregation of the cooling system's annual energy use, normalized by floor area, for each modeled data center by location and design scenario. The annual energy use dedicated to the servers, UPS, and lighting is 584, 95, and 9 kWh/ft2, respectively. These energy values are independent of the climate and HVAC design scenario and are not included in the graphs in Figure 4. Economizer use is typically controlled by a combination of outside air temperature, humidity, and enthalpy; however, the results shown in Figure 4 are for economizer use controlled by outside air temperature only. Results show that the ASE scenario provides the greatest savings in San Francisco, while Fresno provides the least ASE savings. Sacramento benefited the most from the WSE scenario, while minimal savings were realized in Los Angeles and San Francisco. The San Francisco WSE scenario, where significant gains would be expected because of the cool climate, is hindered by chiller part-load inefficiencies. The relatively higher moisture content of the San Francisco air increases the latent cooling load in the model and causes the chiller plant to reach the capacity limit of the first chiller more often, activating a second chiller. The second chiller shares the cooling load equally with the first, resulting in a transition from one chiller at a high load factor (efficient operation) to two chillers at slightly above half the load factor (less efficient operation). The results from the WSE scenario in San Francisco emphasize the need for engineers to model the hour-by-hour load, rather than just the peak load, and to size chillers such that all active chillers at any moment will be running near their most efficient operating point.
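As noted above, economizer availability can be controlled by outside air temperature alone or with additional humidity limits. The sketch below counts how many hours of a weather series qualify for direct outside-air cooling under each strategy, which is the mechanism behind the humidity sensitivity examined next with Figure 5. The weather records and set-points are hypothetical placeholders; an actual analysis would use the hourly DOE-2.1E climate-zone weather data cited above.

```python
def economizer_hours(weather, t_limit_f, rh_band=None):
    """Count hours when outside air can serve the data center directly.

    weather: iterable of (dry_bulb_f, relative_humidity_pct) hourly records.
    t_limit_f: supply-air set-point; outside air must be at or below it.
    rh_band: optional (low, high) RH limits; None means temperature-only control.
    """
    hours = 0
    for dry_bulb_f, rh_pct in weather:
        if dry_bulb_f > t_limit_f:
            continue
        if rh_band is not None:
            low, high = rh_band
            if not (low <= rh_pct <= high):
                continue
        hours += 1
    return hours


# Hypothetical marine-climate hours: cool, but often more humid than 55% RH
weather = [(55, 75), (58, 62), (60, 50), (62, 45), (57, 80), (59, 54)]
print(economizer_hours(weather, t_limit_f=60))                    # temperature-only control
print(economizer_hours(weather, t_limit_f=60, rh_band=(40, 55)))  # ASHRAE recommended RH band
```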

Figure 5 shows that removing the humidity restrictions commonly applied to data centers is necessary to gain ASE energy savings. As the relative humidity (RH) range is narrowed, energy use by the fans begins to increase sharply, surpassing the equivalent baseline energy in most of the cities. Humidity levels are often restricted in data centers to minimize potential server reliability issues. ASHRAE's guidelines for data centers, released in 2005, provide a "recommended" RH range of 40-55% and an "allowable" range of 20-80% (ASHRAE 2005). There is minimal cost in applying the more conservative ASHRAE RH restrictions in conventional data center design, such as the baseline in this study shown in Figure 5. The influence of humidity on server performance, however, is poorly documented, and the need for humidity restrictions is
