HPC Solution Implementation Service — Delivery Guide




HPC Solution Implementation Service V100R001 Delivery Guide — Huawei Confidential; do not distribute without permission.

Figure 4-10  node001 configuration

  Configuration item    Value
  Hostname              node001
  MAC Address           80:38:BC:07:63:D0
  Rack                  1
  Position              1
  Height                2U
  Category              luster-client
  ipmi0 IP address      01
  BOOTIF IP address     01
  ib0 IP address        01

This completes the node information configuration.

Deploying the compute nodes

Reboot to start deployment. Once the compute nodes' configuration has been entered, reboot them and boot from PXE; OS deployment starts automatically.

Check the deployment status. Open the "Software Images" menu item and watch the nodes' status on the "Provisioning Status" tab. Wait until all hosts finish deploying.

Compiling and installing the IB driver

Cluster nodes need the Lustre client to access the Lustre file system. On an InfiniBand fabric, data can be accessed over IPoIB or RDMA; RDMA is normally chosen for its better performance. The driver downloaded from the Mellanox website does not support accessing the Lustre file system over RDMA by default (it only supports RDMA communication between nodes), so it must be recompiled. The compile and install only needs to be done on one compute node, for example node001; the result is pushed to all other nodes later by synchronizing the image.

Configure the yum repository

Upload the OS image. Copy the CentOS image CentOS-6.6-x86_64-bin-DVD1.iso to /home on node001, then mount it:

mkdir /home/iso/centos6.6 -p
mv /home/CentOS-6.6-x86_64-bin-DVD1.iso /home/iso
mount -o loop /home/iso/CentOS-6.6-x86_64-bin-DVD1.iso /home/iso/centos6.6
mkdir /etc/yum.repos.d.bak
mv /etc/yum.repos.d/* /etc/yum.repos.d.bak/

Configure the repo file. Create /etc/yum.repos.d/CentOS-dvd.repo with the following content:

[base]
name=CentOS-Repo
baseurl=file:///home/iso/centos6.6
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

Install the required packages on node001:

yum install -y python-devel

RPM packages that fail to install because of version mismatches can be downloaded from /centos/6/updates/x86_64/Packages/

Upload the downloaded IB driver to /home on node001 and compile it there:

cd /home/
tar -zxvf MLNX_OFED_LINUX-2.4-1.0.4-rhel6.6-x86_64.tgz
cd /home/MLNX_OFED_LINUX-2.4-1.0.4-rhel6.6-x86_64
./mlnx_add_kernel_support.sh -m /home/MLNX_OFED_LINUX-2.4-1.0.4-rhel6.6-x86_64 --make-tgz

Install the InfiniBand driver

After the Mellanox driver is rebuilt, the newly generated driver package is placed under /tmp. Move it to /home and unpack it:

mkdir /home/Support_o2ib/
mv /tmp/MLNX_OFED_LINUX-*-ext.tgz /home/Support_o2ib/

Install the prerequisite packages on node001:

yum install -y lsof numactl gcc-gfortran libxml2-python bc tcl tk

Install the driver package on node001:

cd /home/Support_o2ib/
tar -zxvf MLNX_OFED_LINUX-2.4-1.0.4-rhel6.6-x86_64-ext.tgz
./MLNX_OFED_LINUX-*-ext/mlnxofedinstall

Do not reboot when this finishes; continue with the next section.
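The repo-file step described in the yum section above can be captured in a small helper so the paths are easy to review before touching /etc. This is a sketch only: the function name write_dvd_repo is our own, and the default paths simply mirror the guide (CentOS 6.6 DVD mounted at /home/iso/centos6.6).

```shell
# Sketch of the "configure the repo file" step: writes a CentOS-dvd.repo
# pointing at a locally mounted DVD. Both arguments are optional; the
# defaults follow the guide's paths. Helper name is ours, not a tool.
write_dvd_repo() {
    mnt="${1:-/home/iso/centos6.6}"                 # where the DVD is mounted
    out="${2:-/etc/yum.repos.d/CentOS-dvd.repo}"    # repo file to create
    cat > "$out" <<EOF
[base]
name=CentOS-Repo
baseurl=file://$mnt
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
EOF
}
```

On node001 this would be run only after mounting the ISO and moving the stock files in /etc/yum.repos.d aside, exactly as the commands above do.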
Installing the Lustre client

As with the IB driver, the compile and install only needs to be done on one compute node, for example node001; the result is then pushed to all other nodes by synchronizing the image.

Get the Lustre client package. Upload the downloaded IEEL package to /home on node001 and unpack it:

cd /home
tar xvf ieel-.tar

Unpack the lustre-client bundle:

cd /home
mkdir ./lustre_client
cp ieel-/lustre-client-2.5.34-bundle.tar.gz lustre_client/
cd lustre_client/
tar xvzf lustre-client-2.5.34-bundle.tar.gz

After unpacking, lustre-client-source-2.5.34-2.6.32_504.12.2.el6.x86_64.x86_64.rpm is the source package needed below.

Recompile the Lustre client. The Lustre client does not support RDMA access to the Lustre file system by default and must be recompiled. On node001, install the lustre-client-source package:

rpm -ivh lustre-client-source-*.rpm

Rebuild the Lustre client on node001:

cd /usr/src/lustre-2.5.34/
./configure --with-o2ib=/usr/src/ofa_kernel/default/
make rpms

Copy the freshly built Lustre client packages to /home:

cd /usr/src/redhat/RPMS/x86_64
cp * /home/

Install the Lustre client on node001:

cd /home
rpm -ivh lustre-client-modules-*.rpm
rpm -ivh lustre-client-2.5.34-*.rpm

Synchronizing the image to all nodes

Grab the state of node001 into an image: open the "Nodes" menu, select node001, and in the "Tasks" pane on the right click "Grab to image" (Figure 4-11). In the dialog, select "lustre-client-image", untick "Dry run", and click "Yes" (Figure 4-12). Watch the log pane at the bottom of the window and wait for the grab to complete.

Push the Lustre client to all other nodes: open the "Nodes" menu and, under "Tasks", click "Update nodes" (Figure 4-13). Untick "Dry run" and click "Yes" in the confirmation dialog (Figure 4-14). Wait for the update to finish.

Deploying the Lustre parallel file system

Overview. This chapter describes the installation of Intel Enterprise Edition for Lustre, covering the installation and configuration of the IML server (Intel Manager for Lustre server), the MDS and the OSS; the storage network is Mellanox InfiniBand.

Installing and configuring the Lustre file system. For the detailed deployment steps, refer to 《IntelLustreInfiniband安裝部署指導(dǎo)書.docx》 (the Intel Lustre InfiniBand installation and deployment guide).

Configuring LDAP

So that ordinary users of the BCM cluster can access the Lustre file system, LDAP must be configured on the MDS and OSS nodes, with the management node as the LDAP server.

Install LDAP on the MDS and OSS nodes:

[root@mds01 ~]# mount -o loop /iso/CentOS-6.6-x86_64-bin-DVD1.iso /iso/centos6.6
[root@mds01 ~]# yum install -y nss-pam-ldapd
Loaded plugins: fastestmirror, security
Setting up Install Process
Resolving Dependencies
...
Dependencies Resolved
==================================================================
 Package          Arch     Version             Repository    Size
==================================================================
Installing:
 nss-pam-ldapd    x86_64   0.7.5-18.2.el6_4    base         152 k
Installing for dependencies:
 nscd             x86_64   2.12-1.149.el6      base         223 k
 pam_ldap         x86_64   185-11.el6          base          88 k

Transaction Summary
==================================================================
Install       3 Package(s)
...
Complete!

Add the head node's IP information to the hosts file on every MDS and OSS node:

[root@mds1 ~]# cat /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
1  IML
2  mds1
3  mds2
4  oss1
5  oss2
   master.cm.cluster master localmaster.cm.cluster localmaster

Copy the LDAP configuration files from any compute node (for example node001) to the MDS and OSS nodes. On every MDS and OSS node run:

cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
cp /etc/pam_ldap.conf /etc/pam_ldap.conf.bak
cp /etc/nslcd.conf /etc/nslcd.conf.bak
scp 01:/cm/conf/etc/nsswitch.conf /etc/nsswitch.conf
scp 01:/etc/pam_ldap.conf /etc/
scp 01:/etc/nslcd.conf /etc/

Restart the LDAP service and enable it at boot:

[root@mds01 ~]# /etc/init.d/nslcd start
Starting nslcd:                         [ OK ]
[root@mds02 ~]# chkconfig nslcd on

Client configuration

Once the MDT and OST configuration is finished, the Lustre file system can be mounted on the clients for read/write access. The client mount is configured in the CMGUI.

Get the mount point. In a browser, open the Lustre web interface, go to the file system page, and click "View Client Mount Information" to see the mount point information (Figure 5-1). In the example, the mount command is:

mount -t lustre 3@o2ib0:2@o2ib0:/lfs-new /mnt/lfs-new

In the CMGUI, open the "luster-client" Node Category and click the "FS Mounts" tab (Figure 5-2). Click "Add" to add a mount entry; the dialog shown in Figure 5-3 appears. Fill it in according to the project implementation plan, for example:

  Item               Description           Value
  Device             device to mount       3@o2ib0:2@o2ib0:/lfs-new
  Filesystem type    file system type      lustre
  Mount point        mount point           /mnt/lfs-new
  Revision           leave empty
  Extra options      mount options         defaults
  Filesystem check   file system check     0

Click "Ok" to confirm, then click "Save".

JHINNO cluster management software implementation

Preparation

This section lists the software to prepare; set up the hardware according to the configuration in chapter 3 and complete the networking.

- Full CentOS installation ISOs: CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso
- JHINNO cluster scheduler: jh-unischeduler-3.0-install-03073.tar.gz
- JDK 1.7: jdk-7u51-linux-x64.gz
- EnginFrame 2013: enginframe-2013.0-r28282.jar
- JHINNO cluster management software: jhinno_ef_package_2.1.2-r40311.tar.gz

Installing and configuring the OS

The OS is CentOS 6.4 x64. The installation itself is straightforward and is not described in detail; note the following choices:

- Choose the "Software Development Workstation" profile and tick "Customize now".
- Under Base System, tick "Compatibility libraries".
- Under Databases, tick "PostgreSQL Database client" and "Server", selecting all packages.
- Tick "Development tools", selecting all packages.
- Tick "NFS Server".
- Under System Management, tick "System Management", selecting all packages.

With the above selected, install the operating system.

Disable SELinux:

[root@localhost repodata]# grep ^SELINUX= /etc/selinux/config
SELINUX=enforcing
[root@localhost repodata]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@localhost repodata]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled

Then:

- Disable the firewall.
- Configure the business network IP address and restart the network service.
- Configure the hosts file.
- Configure YUM (see the appendix for details).
- Restart the OS.

Configuring NIS

Mount the OS images:

[root@JH02 ~]# mount -o loop /home/CentOS-6.4-x86_64-bin-DVD1.iso /media/CD1
[root@JH02 ~]# mount -o loop /home/CentOS-6.4-x86_64-bin-DVD2.iso /media/CD2

Install ypserv:

[root@JH02 ~]# yum -y install ypserv

Install simplejson:

[root@JH02 ~]# yum -y install python-simplejson-2.0.9-3.1.el6.x86_64

Run authconfig-tui, select NIS, and click Next; enter the domain name and the local IP address and click OK.

Restart the NIS service:

[root@JH02 ~]# chkconfig ypserv on
[root@JH02 ~]# service ypserv restart
Stopping YP server services:            [FAILED]
Starting YP server services:            [ OK ]

Configure the NIS master; press Ctrl+D to confirm:

[root@JH02 ~]# /usr/lib64/yp/ypinit -m

Restart the ypserv and ypbind services:

[root@JH02 ~]# service ypserv restart
Stopping YP server services:            [ OK ]
Starting YP server services:            [ OK ]
[root@JH02 ~]# service ypbind restart
Shutting down NIS service:              [ OK ]
Starting NIS service:                   [ OK ]
Binding NIS service:                    [ OK ]

To add a user, run the following on the NIS server:

# adduser <username>
# passwd <username>
# cd /var/yp/
# make

Configuring the NIS client

Clients do not need the ypserv package; they simply join the domain with authconfig-tui. Enable NIS support and fill in the NIS configuration:

NIS domain: hpc
NIS server: <IP address of the NIS master>

[root@JH01 ~]# authconfig-tui

Configuring the NFS server

Create the application directory /apps and the users' home directory /apps/users:

[root@JH02 ~]# mkdir /apps
[root@JH02 ~]# mkdir /apps/users

Edit the NFS exports:

[root@JH02 ~]# vi /etc/exports
[root@JH02 ~]# cat /etc/exports
/apps *(rw,no_root_squash)

On the NFS server run:

[root@JH02 ~]# exportfs -a
[root@JH02 ~]# chkconfig nfs on
[root@JH02 ~]# service nfs restart
Shutting down NFS daemon:               [FAILED]
Shutting down NFS mountd:               [FAILED]
Shutting down NFS quotas:               [FAILED]
Shutting down NFS services:             [ OK ]
Starting NFS services:                  [ OK ]
Starting NFS quotas:                    [ OK ]
Starting NFS mountd:                    [ OK ]
Stopping RPC idmapd:                    [ OK ]
Starting RPC idmapd:                    [ OK ]
Starting NFS daemon:                    [ OK ]

On each NFS client, configure the mount of the server's /apps directory:

mkdir /apps
vi /etc/fstab
...
<NFS server IP>:/apps  /apps  nfs  defaults  0 0
...
mount -a

Installing the JDK

Create the jhadmin user:

[root@JH02 ~]# useradd jhadmin -b /apps/users/
[root@JH02 ~]# passwd jhadmin
Changing password for user jhadmin.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@JH02 ~]# cd /var/yp/
[root@JH02 yp]# make
gmake[1]: Entering directory `/var/yp'
Updating passwd.byname...
Updating passwd.byuid...
Updating group.byname...
Updating group.bygid...
Updating netid.byname...
gmake[1]: Leaving directory `/var/yp'

Copy jdk-7u51-linux-x64.gz to /apps, enter /apps, and unpack it to /opt:

tar -zxvf jdk-7u51-linux-x64.gz -C /opt/

Append the following to the end of /etc/profile:

export JAVA_HOME=/opt/jdk1.7.0_51
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Verify that the JDK is installed:

[root@JH02 apps]# source /etc/profile
[root@JH02 apps]# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Installing jh-unischeduler

Unpack the installer:

[root@JH02 apps]# tar -zxvf jh-unischeduler-3.0-install-03073.tar.gz

Enter the unpacked directory and edit install.conf, adding the license directory:

[root@JH02 jh-unischeduler-3.0-install]# vi install.conf

Run ./install.py to install unischeduler:

[root@JH02 jh-unischeduler-3.0-install]# ./install.py
14-03-26 22:34:42 INFO installer(606): ... ./linux/x86_64/jh-unischeduler-3.0-linux-x86_64-0307-1624.tar.gz ...
14-03-26 22:34:45 INFO installer(517): ... postgresql-jhscheduler ...
14-03-26 22:34:49 INFO install(160): ...

Start jhscheduler:

[root@JH02 jh-unischeduler-3.0-install]# source /apps/unischeduler/conf/profile.jhscheduler
[root@JH02 jh-unischeduler-3.0-install]# jhscheduler start
Starting daemons...
lim started
res started
sbatchd started

Check that the services started correctly:

[root@JH02 jh-unischeduler-3.0-install]# jhscheduler status
lim      pid: <2989>
res      pid: <2991>
sbatchd  pid: <2993>
mbatchd  pid: <3007>
sched    pid: <3433>

Installing EnginFrame

Upload enginframe-2013.0-r28282.jar to /apps and launch the installer with java -jar:

java -jar enginframe-2013.0-r28282.jar
Writing log to (/tmp/efinstall.2014-03-26-22.41.58.log)

1. Click Next and tick "I accept…".
2. Import the license.
3. Enter the installation directory /apps/nice and click Next; click Yes to create the directory.
4. Confirm the JRE directory and click Next; click Yes to create the directory.
5. Choose "Server and Agent".
6. Enter the EnginFrame administrator jhadmin; it must match the jh-unischeduler administrator.
7. Configure the OS user owning EnginFrame and the web application: the OS user is the EnginFrame administrator, and the context of the EnginFrame web application is set to "cloud"; keep the other defaults. Click Next.
8. Accept the defaults and click Next.
9. For run-at-boot, choose Yes.
10. Tick "Install the EnginFrame demo" and choose the operating system.
11. Set the EnginFrame user name and password: the user name is the administrator from step 6, and the user name and password must match the OS account.
12. Choose "IBM LSF or OpenLava" and select jh-unischeduler's profile file.
13. Click Yes to continue, click Install to install, then click Finish to complete the installation.

In /apps/nice/enginframe/conf/server.conf, change the 1527 port in ef.triggers.db.url=jdbc:derby://localhost:1527/EFTriggersDB to 51527, then save and exit. In /apps/nice/conf/enginframe.conf, likewise change the DERBY_PORT from 1527 to 51527, then save and exit.

Start EnginFrame:

[root@JH02 apps]# service enginframe start
EnginFrame Control Script
Starting DerbyDB [PID: 43460] ...       [ OK ]
Derby Database started                  [ OK ]
EnginFrame Server started               [ OK ]
EnginFrame Agent started

Installing jhinno

Upload jhinno_ef_package_2.1.2-r40311.tar.gz to /home and unpack it:

tar -zxvf jhinno_ef_package_2.1.2-r40311.tar.gz

Switch to the jhadmin user:

[root@JH02 home]# su - jhadmin
[jhadmin@JH02 home]$ cd jhinno_ef_package

Copy jobstarter to /apps/unischeduler/:

[jhadmin@JH02 jhinno_ef_package]$ cp -r jhscheduler/jobstarter /apps/unischeduler/

Overwrite /apps/unischeduler/conf/users.conf and queues.conf with lsb.users and lsb.queues:

[jhadmin@JH02 jhinno_ef_package]$ cp jhscheduler/conf/lsb.users /apps/unischeduler/conf/users.conf
[jhadmin@JH02 jhinno_ef_package]$ cp jhscheduler/conf/lsb.queues /apps/unischeduler/conf/queues.conf

Edit queues.conf:

[root@JH02 apps]# vi /apps/unischeduler/conf/queues.conf

changing

JOB_STARTER=/apps/jhscheduler/jobstarter/fluent_starter
JOB_STARTER=/apps/jhscheduler/jobstarter/cfx_starter

to

JOB_STARTER=/apps/unischeduler/jobstarter/fluent_starter
JOB_STARTER=/apps/unischeduler/jobstarter/cfx_starter

Switch back to root:

[jhadmin@JH02 jhinno_ef_package]$ su root
Password:
[root@JH02 jhinno_ef_package]#

Edit /root/.bashrc and comment out alias cp='cp -i', then log in again as root.

Copy the enginframe and tomcat directories from jhinno_ef_package to /apps/nice:

[root@JH02 jhinno_ef_package]# cp -r enginframe /apps/nice/
[root@JH02 jhinno_ef_package]# cp -r tomcat /apps/nice/

Grant jhadmin the required ownership:

[root@JH02 jhinno_ef_package]# chown -R jhadmin:jhadmin /apps/nice/tomcat/webapps/customer
[root@JH02 jhinno_ef_package]# chown jhadmin:jhadmin /apps/nice/enginframe/conf/authorization.xconf
[root@JH02 jhinno_ef_package]# chown -R jhadmin:jhadmin /apps/nice/enginframe/conf/jhlogon

Add the following parameter at the top of /apps/nice/tomcat/bin/setenv.ef.sh:

export JHCUSTOMER_CONFDIR=/apps/nice/tomcat/webapps/customer/conf

[root@JH02 enginframe]# cat /apps/nice/tomcat/bin/setenv.ef.sh
export JHCUSTOMER_CONFDIR=/apps/nice/tomcat/webapps/customer/conf
CATALINA_OPTS="-Dtocol.handler.pkgs=mon.utils.xml.handlers $CATALINA_OPTS"
CLASSPATH="$CLASSPATH":"$CATALINA_HOME"/lib/sdftree-handler.jar

In /apps/nice/enginframe/plugins/lsf/bin/ef.opendialog.sh, change the environment variable

JHSCHEDULER_ENV=/apps/jhscheduler/conf/profile.jhscheduler

to

JHSCHEDULER_ENV=/apps/unischeduler/conf/profile.lsf

In /apps/nice/enginframe/plugins/lsf/conf/ef.lsf.conf, change the environment variable

LSF_PROFILE="/apps/jhscheduler/conf/profile.jhscheduler"

to

LSF_PROFILE="/apps/unischeduler/conf/profile.jhscheduler"

In /apps/nice/enginframe/plugins/vnc/conf/ef.vnc.conf, set

VNC_HOSTNAME=""
VNC_HOSTNAME_INT=""
VNC_LSGRUN="lsrun -m <VNC server name>"

Run the following command:

chmod 777 /apps/nice/enginframe/plugins/vnc/etc/*

Edit enginframe/plugins/hpc/WEBAPP/application.xml:

[root@JH02 nice]# vim enginframe/plugins/hpc/WEBAPP/application.xml

changing

<ef:option id="6.3.26">6.3.26</ef:option>

to

<ef:option id="14.5.0">14.5.0</ef:option>

Note that this must be changed in two places.

Edit /apps/unischeduler/jobstarter/fluent_starter:

[root@JH02 .lsbatch]# vim /apps/unischeduler/jobstarter/fluent_starter

changing the PATH at the top of the file to the Fluent installation path, as follows:

export PATH=/apps/ansys_inc/v145/fluent/bin:$PATH

Restart the EnginFrame service:

[root@JH02 tomcat]# service enginframe stop
EnginFrame Control Script
Killing Tomcat with the PID: 44642
Wed Mar 26 23:44:34 CST 2014: Apache Derby Network Server -- (1344872) shutdown  [ OK ]
Derby Database is down                  [ OK ]
EnginFrame Agent is down
[root@JH02 tomcat]# service enginframe start
EnginFrame Control Script
Starting DerbyDB [PID: 50413] ...       [ OK ]
Derby Database started                  [ OK ]
EnginFrame Server started               [ OK ]
EnginFrame Agent started

Log in to the JHINNO high-performance cluster simulation platform in a browser at http://<host IP>:8080/ with the jhadmin account.

Installing VNC

Upload nice-dcv-2013.0-9073.iso to /home and create the /nice directory:

[root@JH02 home]# mkdir /nice

Mount the nice-dcv-2013.0-9073.iso image:

[root@JH02 home]# mount -o loop /home/nice-dcv-2013.0-9073.iso /nice

Install VNC in the graphical environment. The package is vnc-VE4_6_3-x64_linux.rpm under /nice/NICE/linux/rpms/; double-click it and click Install. After installation, import the license:

[root@JH02 home]# vnclicense -add HABA2-34RXF-4Q4PH-24N4R-AVYEB

Start the VNC server:

[root@JH02 home]# vncserver
VNC(R) Server Visualization Edition VE4.6.3 (r99394)
Built on Nov 8 2012 16:40:29
Copyright (C) 2002-2012 RealVNC Ltd.
...
Warning: 1 is taken because of /tmp/.X1-lock
Remove this file if there is no X Server running as :1
Running applications in /etc/vnc/xstartup
VNC Server signature: c4-54-b3-b5-97-bc-2a-d7
Log file is /root/.vnc/JH02:2.log
New desktop is JH02:2 (02:2)

JHINNO DCV implementation procedure

Preparation

This section lists the software to prepare; set up the hardware according to the configuration in chapter 3 and complete the networking.

- Full CentOS installation ISOs: CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso
- JHINNO DCV package: nice-dcv-2013.0-9073.iso
- Nvidia GT610 driver: NVIDIA-Linux-x86_64-331.49.run
- Win7 32-bit OS: X17-24280_Win7_English_professional_sp1-32bit.iso

Installing and configuring the OS

The OS is CentOS 6.4 x64; the installation itself is straightforward and is not described in detail. Note that the system may fail to install with the graphics card inserted: remove the card, install the OS, and reinsert the card afterwards. During installation:

- Choose the "Software Development Workstation" profile and tick "Customize now".
- Under Base System, tick "Compatibility libraries".
- Under Databases, tick "PostgreSQL Database client" and "Server", selecting all packages.
- Tick "Development tools", selecting all packages.
- Under System Management, tick "System Management", selecting all packages.
- Under Virtualization, tick "Virtualization", "Virtualization Client" and "Virtualization Platform", selecting all packages.

With the above selected, install the operating system.

Disable SELinux:

[root@localhost repodata]# grep ^SELINUX= /etc/selinux/config
SELINUX=enforcing
[root@localhost repodata]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@localhost repodata]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled

Then:

- Disable the firewall.
- Configure the business network IP address and restart the network service.
- Configure the hosts file.
- Configure YUM (see the appendix for details).

Modify the grub boot file:

[root@JH01 yum.repos.d]# vim /boot/grub/grub.conf

appending acpi=off noapic to the end of the kernel line. Power the system off, install the graphics card, and restart the system.

Installing the graphics driver

Run lspci to verify that the hardware is detected:

04:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)

Upload the driver package NVIDIA-Linux-x86_64-331.49.run to the system, then leave the graphical environment:

[root@JH01 ~]# init 3

Blacklist the third-party nouveau driver: the nouveau driver is still running and conflicts with the official NVIDIA driver. Disable nouveau by adding it to the blacklist; edit the blacklist file:

vi /etc/modprobe.d/blacklist.conf

appending after the last line:

blacklist nouveau

and save. Also add rdblacklist=nouveau after the kernel parameters in /etc/grub.conf to prevent the nouveau kernel module from loading.

Install the driver:

sh NVIDIA-Linux-x86_64-331.49.run -k $(uname -r)

Select accept and press Enter; select yes and press Enter; click yes; click yes; click OK to complete the graphics driver installation.

Run the following command:

nvidia-xconfig --add-argb-glx-visuals --allow-glx-with-composite --busid="YOUR_PCI_ID" --damage-events --disable-glx-root-clipping -a --no-logo --overlay --cioverlay --render-accel --registry-dwords="PowerMizerEnable=0x1;PerfLevelSrc=0x2222;PowerMizerDefault=0x1;PowerMizerDefaultAC=0x1" --no-use-edid --no-use-edid-freqs --use-display-device="None"

Then run:

nvidia-xconfig --enable-all-gpus

Restart the system; in the BIOS, set "Video Card Selected" to "Optional Video Card" and press F10 to save and exit.

Installing and configuring the VNC server on the Linux host

Upload nice-dcv-2013.0-9073.iso to /home and create the /nice directory:

[root@JH01 ~]# mkdir /nice

Mount nice-dcv-2013.0-9073.iso on /nice:

[root@JH01 ~]# mount -o loop /home/nice-dcv-2013.0-9073.iso /nice

Install the VNC server in the graphical environment; this must be done on the local KVM console. The package is vnc-VE4_6_3-x64_linux.rpm under /nice/NICE/linux/rpms/; double-click it and click Install. After installation, import the license:

[root@JH02 home]# vnclicense -add HABA2-34RXF-4Q4PH-24N4R-AVYEB

Start the VNC server:

[root@JH01 ~]# vncserver
VNC(R) Server Visualization Edition VE4.6.3 (r99394)
Built on Nov 8 2012 16:40:29
Copyright (C) 2002-2012 RealVNC Ltd.
...
Running applications in /etc/vnc/xstartup
VNC Server signature: ed-90-f7-32-65-68-10-0c
Log file is /root/.vnc/JH01:1.log
New desktop is JH01:1 (01:1)

Edit the VNC system configuration file, adding the X11 font path and the depth option; save and exit when done:

vim /etc/vnc/config

Set the -fp parameter so the configuration contains:

-fp "unix/:7100,built-ins,/usr/share/X11/fonts/100dpi,/usr/share/X11/fonts/75dpi,/usr/share/X11/fonts/Type1"

and add the option:

-depth 24

The config file then looks like this:

# Default X Server command-line parameters for VNC Enterprise Edition.
#
# This file is automatically generated. DO NOT EDIT.
# To override settings in this file, create or edit /etc/vnc/config.custom.

# Continue even if standard ports fail
-pn
-fp "unix/:7100,built-ins,/usr/share/X11/fonts/100dpi,/usr/share/X11/fonts/75dpi,/usr/share/X11/fonts/Type1"
-depth 24

Installing and configuring the Rendering Server on the Linux host

Mount the OS image files:

[root@JH01 ~]# mount -o loop /home/CentOS-6.4-x86_64-bin-DVD1.iso /media/CD1
[root@JH01 ~]# mount -o loop /home/CentOS-6.4-x86_64-bin-DVD2.iso /media/CD2

Enter the DCV rendering server package directory:

[root@JH01 ~]# cd /nice/NICE/linux/rpms/rhel_5_x86_64/

Install the nice-dcv-server-2013.0-9073.i686.rpm and nice-dcv-server-2013.0-9073.x86_64.rpm packages:

[root@JH01 rhel_5_x86_64]# yum -y install nice-dcv-server-2013.0-9073.x86_64.rpm
[root@JH01 rhel_5_x86_64]# yum -y install nice-dcv-server-2013.0-9073.i686.rpm

Edit the DCV configuration file:

vim /opt/nice/dcv/conf/dcv.conf

Set host in the [Remotization] section to the local IP address, e.g. host=01, and set host in the [RenderingServer] section to the KVM-internal gateway address, normally also the host IP, e.g. host=01.

Copy /opt/nice/dcv/etc/init.d/dcvrenderingserver to the system's /etc/init.d/ directory:

[root@JH01 rhel_5_x86_64]# cp /opt/nice/dcv/etc/init.d/dcvrenderingserver /etc/init.d/

Register, enable, and start the dcvrenderingserver service:

[root@JH01 rhel_5_x86_64]# chkconfig --add dcvrenderingserver
[root@JH01 rhel_5_x86_64]# chkconfig dcvrenderingserver on
[root@JH01 rhel_5_x86_64]# /etc/init.d/dcvrenderingserver start
Starting dcvrenderingserver:
NICE DCV: Granting access to 3D accelerated X server to user 'dcv'...
localuser:dcv being added to access control list
OK

Import the DCV license: upload the license.lic file to /home and copy it to /opt/nice/dcv/license/:

[root@JH01 rhel_5_x86_64]# cp /home/license.lic /opt/nice/dcv/license/

Register and enable the rlm service:

[root@JH01 rhel_5_x86_64]# cp /opt/nice/dcv/etc/init.d/rlm /etc/init.d/
[root@JH01 rhel_5_x86_64]# chkconfig --add rlm
[root@JH01 rhel_5_x86_64]# chkconfig rlm on

Edit the rlm script and delete the -o %PPID argument, then start the rlm service:

[root@JH01 rhel_5_x86_64]# service rlm start
Starting rlm: OK

Installing and configuring the KVM bridge

Create the bridge. In /etc/sysconfig/network-scripts, create an ifcfg-br0 file with its type set to Bridge:

[root@JH01 network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=01
NETMASK=
GATEWAY=
NETWORK=

Bridge the physical port onto it. Edit the eth0 configuration (this server uses eth0 for networking): remove its IP information and add "BRIDGE=br0" to attach it to br0; with two or more NICs, modify each the same way:

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0

Stop the NetworkManager service and restart the network service.

Installing and configuring the Win7 VM

Upload virtio-win-1.6.3-3.el6.noarch.rpm to /home and install it:

[root@JH01 home]# rpm -ivh virtio-win-1.6.3-3.el6.noarch.rpm

After installation, the virtio-win drivers are under /usr/share/virtio-win, including a floppy-format driver (virtio-win-1.5.2.vfd) and a CD-format driver (virtio-win-1.5.2.iso).

Upload the Win7 32-bit OS image to the Linux host and run virt-manager:

1. Click New, enter the VM name, choose "Local install media", and click Forward.
2. Load the Win7 32-bit ISO; set OS type to Windows and Version to Microsoft Windows 7; click Forward.
3. Allocate memory and CPU resources to the VM, then allocate disk capacity, ticking "Allocate entire disk now".
4. Tick "Customize configuration before install" and click Finish.
5. In the processor "Configuration", preferably click "Copy host CPU configuration" so that the instruction set the VM's CPU supports matches the physical CPU. Defining the processor "Topology" and "Pin" gives a small performance gain; in particular, XP and Windows 7 support at most 2 physical CPUs, so to give the VM more cores you must define the Topology. Click Apply.
6. In the virtual disk configuration, choose "Disk Bus": virtio and click Apply; in the virtual network interface configuration, choose "Device Model": virtio and click Apply.
7. Click "Add Hardware" at the bottom left and add a floppy device.
8. Click "Begin Installation" to start installing the Win7 system.

The installation itself is straightforward. Click "Load Driver", select "Red Hat VirtIO SCSI controller (A:\i38\Win7\viostor.inf)", and click Next to install the disk driver. Once the disk driver is installed, the disk can be partitioned; click Next and Win7 begins to install. After the installation completes, install the NIC driver: open Device Manager, click "Ethernet Controller", click "Update Driver…", and point it at the floppy directory; click Close when finished. Configure the IP address; note that the VM's keyboard mapping does not correspond to the physical keyboard.

Installing and configuring DCV Server and VNC Server in the Win7 VM

Mount nice-dcv-2013.0-9073.iso in the VM and run setup.exe:

1. Click Next; tick "I accept…"; click Next.
2. Choose the installation path; tick "I accept…" and click Next.
3. Tick "Use an External Rendering Server" and enter the Linux host IP; tick "Install VirtIO network drivers".
4. Tick "I accept…", add the license file, and click Next.
5. Click Install, then Finish; reboot the system when the installation completes.
6. Import the VNC Server license key by entering the license key.

Installing and configuring the DCV Endstation on the client

1. Extract the nice-dcv-2013.0-9073.iso file.
2. Run nice-dcv-endstation-2013.0-9073-Release.msi from nice-dcv-2013.0-9073\NICE\win32, follow the installation prompts, and restart the system.
3. Double-click setup-niceviewer.bat in nice-dcv-2013.0-9073\Portable-Endstation-Windows.
4. Double-click niceviewer.bat in the same directory to open a VNC Viewer instance.
5. Enter the Win7 VM's IP address, click Connect, click yes, and enter the VM's user name and password; you can now log in to the VM over VNC.
6. Open a CMD prompt and run dcvadmin, then dcvadmin enable to enable DCV; run dcvon to turn DCV on and dcvtest to verify the installation. If the DCV graphical test screen appears, the installation succeeded.

Solution acceptance

Cluster performance acceptance — compute capability

Linpack has become the most popular benchmark internationally for measuring the floating-point performance of high-performance computer systems. It evaluates floating-point performance by using the machine under test to solve a dense system of N linear equations by Gaussian elimination. The version used here is HPL (High Performance Linpack).
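Linpack acceptance results are normally quoted as efficiency against the cluster's theoretical peak, Rpeak = nodes × cores per node × clock (GHz) × floating-point operations per cycle. The helper below is a sketch of that arithmetic; the function name rpeak_gflops and the sample figures are illustrative only and not taken from this guide.

```shell
# Theoretical peak (Rpeak) in GFLOPS:
#   Rpeak = nodes * cores_per_node * clock_GHz * flops_per_cycle
# The measured Linpack result (Rmax) is then reported as a fraction of this.
rpeak_gflops() {
    nodes=$1; cores=$2; ghz=$3; fpc=$4
    awk -v n="$nodes" -v c="$cores" -v g="$ghz" -v f="$fpc" \
        'BEGIN { printf "%.1f\n", n * c * g * f }'
}

# Example (illustrative figures): 16 nodes, 16 cores each, 2.6 GHz,
# 8 double-precision flops per cycle.
rpeak_gflops 16 16 2.6 8   # prints 5324.8
```

A measured Rmax of, say, 80% of this Rpeak would be a typical acceptance threshold to agree with the customer before the test run.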

