
Cisco ACI 4.0: Technical Decision Makers (TDM)
November 26, 2018
Azeem Suleman, Principal Engineer TME

Roadmap Extends ACI Anywhere
Optimized Footprint | Operational Simplicity | Cloud Automation | Security
ACI 4.0 Application Centric Infrastructure: Building an Intent-Based Data Center
Smart Licensing

Infrastructure Hardware: N9332C
32 x 40/100GE QSFP28 ports, 100G line rate, MACsec on 8 ports, 1 RU
2 x 10 GE SFP+ ports
Beacon, status, and environment LEDs
Compatible with 1st and 2nd generation ACI leafs
Ideal for smaller ACI fabric deployments
5 (4+1) fans, 2 x 1100W PSUs, management ports, USB, console

N9332C System Characteristics
Internal code name: Maibock
ASIC: S6400 (BigSky)
Port density: 32 x 40/100G QSFP28, 2 x 1/10G SFP+
Port speed support: 1 / 10 / 40 / 100 Gbps
CPU: Intel Xeon D-1526, 4 cores, 1.80 GHz
System memory: 16 GB
SSD: 256 GB
Power supplies: 2
Typical power usage: 296 W (AC)
Maximum power usage: 700 W (AC)
Input voltage (AC): 100 to 240 V
Input voltage (DC) min-max: -40 to -72 V
Frequency (AC): 50 to 60 Hz

N9332C Internals
BigSky ASIC, CPU, fans, PSUs, DIMM, IO card, retimers
2 (1+1) 1100W PSUs
5 (4+1) fans connected to the fan interface board
CPU board and main board; single Cisco CloudScale S6400 ASIC on the main system board
4 retimers for the last 8 MACsec ports
1 x 16 GB DIMM
Intel Xeon CPU D-1526, 4 cores, 1.80 GHz

N93240YC-FX2
48 x 1/10/25 GE SFP28 ports
12 x 40/100 GE QSFP28 ports
5 (4+1) fans, 2 (1+1) power supplies

N93240YC-FX2 System Characteristics
Internal code name: Southlake
ASIC: LS3600FX2 (Heavenly)
Port density: 48 x 1/10/25 Gbps and 12 x 40/100 Gbps QSFP28
Port speed support: 1 / 10 / 25 / 40 / 100 Gbps
CPU: Intel Xeon D-1526, 4 cores, 1.80 GHz
System memory: 16 GB
SSD: 256 GB
Power supplies: 2 (1+1 redundancy)
Typical power usage: 298 W (AC)
Maximum power usage: 708 W (AC)
Input voltage (AC): 100 to 240 V
Input voltage (DC) min-max: -40 to -72 V
Frequency (AC): 50 to 60 Hz

N93240YC-FX2 Internals
Heavenly ASIC, CPU, fans, PSUs, DIMMs, retimers
2 (1+1) 1200W PSUs
5 (4+1) fans directly connected to the CPU board
CPU board and main board; single Cisco CloudScale LS3600FX2 ASIC on the main system board
2 retimers for 4 ports
4 DIMM slots; the system uses 1 x 16 GB DIMM
Intel Xeon CPU D-1526, 4 cores, 1.80 GHz

APIC M3: New APIC Hardware Based on UCS M5
APIC-CLUSTER-M3: APIC appliance for medium configurations (up to 1000 edge ports)
  3 x APIC-SERVER-M3: 1.7 GHz 3106 / 85W / 8C / 11MB cache / DDR4 2133 MHz, 1 TB HDD, 400 GB SSD
APIC-CLUSTER-L3: APIC appliance for large configurations (more than 1000 edge ports)
  3 x APIC-SERVER-L3: 2.1 GHz 4110 / 85W / 8C / 11MB cache / DDR4 2400 MHz, 2.4 TB HDD, 400 GB SSD
Support for Cisco VIC 1385 dual-port 40Gb QSFP+ CNA w/ RDMA
Support for Trusted Platform Module (TPM) 2.0
Transition from APIC-CLUSTER-M2/L2
Based on the new Atomix OS

Atomix OS Overview
Common OS to underlie a variety of products
CentOS 7 based
Modern, micro-services based OS
Fresh root filesystem on every boot and service start

Atomix OS Architecture
The system is built, shipped, and installed as "layers"
Application layers can share base layers
Layers are installed as thin-pool LVs (copy-on-write snapshots)
The pool is named ThinDataLV on volume group vg_ifc0
Layers are content-addressed: LV names are SHA sums of the layer content
/config/manifest specifies the layer name that corresponds to a version
The final, mounted filesystem is a writable snapshot of the canonical installed version, created each time the service or system starts
/ is not read-only, but it is cleanly recreated as a new snapshot each time
Configuration info is stored persistently in the config LV, mounted as /config

Atomix Layers (example stack)
Base v1: kernel, system libraries
Systems v1, Systems v2
Atomix v1, Grub1 v1, Product-base, APIC, Product B
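The layer-to-LV mapping can be inspected with standard LVM tooling. A minimal sketch, assuming shell access to the controller and the pool and volume group names quoted above (ThinDataLV on vg_ifc0); the LV names shown on a real system will be the SHA-sum layer names just described:

    # List logical volumes in vg_ifc0, showing which thin pool each belongs
    # to and which LV it was snapshotted from.
    lvs -o lv_name,pool_lv,origin,lv_size vg_ifc0

    # Show only the layer LVs carved out of the ThinDataLV thin pool.
    lvs --select 'pool_lv = "ThinDataLV"' -o lv_name,origin,lv_size vg_ifc0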

Atomix OS Boot Overview
Early boot: initramfs runs Dracut hooks
The pre-mount Dracut hook sets up the root filesystem:
  - Remove the old /dev/vg_ifc0/boot
  - Look at /config/manifest for the layer to use as rootfs
  - Create a new /dev/vg_ifc0/boot as a copy-on-write snapshot
The systemd atomix-boot-setup.service configures/customizes the rootfs
It runs before the APIC bootstrap service and ensures the expected mounts etc. are set up
APIC bootup then continues as before

Atomix OS Upgrade Overview
The upgrade ISO is mounted
New layers are stored under oci/ on the ISO
Any layers not yet on disk are expanded as new thin-pool LVs
/config/manifest is updated with "INSTALL" entries tying a version to an LV name, e.g.:
  INSTALL boot apic-4.7 c7a39efbc0
Data conversion is run
If data conversion is successful:
  - Grub is updated to work with the new root filesystem, if needed
  - /config/manifest gets a new HEAD entry, e.g.:
      HEAD boot apic-4.7
  - Reboot

Changes in the update rollback sequence
Previously, updates alternated between /rfs1 and /rfs2: the new filesystem was written to whichever of the two was not currently booted, and if anything went wrong the system booted back from the previous one
As of 4.0, there is only /dev/vg_ifc0/boot, and it is recreated on each boot
/config/manifest points to the LV to snapshot for the rootfs:
  atomix show bootlvs | grep boot
Updating is just installing a new LV and appending a new HEAD record to the manifest
If /config/manifest is corrupt, the system cannot create /dev/vg_ifc0/boot
Seeing the manifest is important for investigating boot failures after an update
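For reference when debugging, a /config/manifest on a system that has been upgraded once might look roughly like the sketch below; the version strings and LV name are illustrative, not taken from a real system. The last HEAD entry is the one the next boot will use, and the one-liners mirror the recovery commands shown later:

    # Hypothetical /config/manifest contents after one upgrade:
    #   INSTALL boot apic-4.0.1 3f9c2a1d84
    #   HEAD boot apic-4.0.1
    #   INSTALL boot apic-4.0.2 c7a39efbc0
    #   HEAD boot apic-4.0.2

    # Print the version the next boot will snapshot (last HEAD entry).
    awk '/^HEAD boot/ {v = $3} END {print v}' /config/manifest

    # Print the LV name recorded by the most recent install.
    awk '/^INSTALL boot/ {lv = $4} END {print lv}' /config/manifest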

Upgrade / Downgrade
4.0 switches from grub1 to grub2
The upgrade/downgrade code handles this, but know where to look for the config
(2.x-to-4.x and 3.x-to-4.x only; does not apply to M5)

Partitioning layout change
Upgrades to 4.x resize the /data2 partition to use 100G as the layer thin pool
A downgrade will detect this and restore the original /data2 size
It will fail/cancel if /data2 cannot be unmounted
(2.x-to-4.x and 3.x-to-4.x only; does not apply to M5)

Installer log locations
/firmware/log/$timestamp/*installer.log
It may be necessary to track execution across multiple log files, because a single upgrade/downgrade calls the install programs of different versions
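Since one upgrade or downgrade can hand off between the installer binaries of different releases, it is often easiest to read the per-timestamp logs together. A minimal sketch, assuming the log layout above; the search pattern is only an example of what one might look for:

    # List the timestamped log directories, one per upgrade/downgrade attempt.
    ls -ltr /firmware/log/

    # Search every installer log for errors, keeping the file name so the
    # hand-off between different versions' install programs stays visible.
    grep -Hin "error\|fail" /firmware/log/*/*installer.log | less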

TPM 2.0
New industry standard
Multiple keys support
Increased storage capacity for secrets
Algorithm agility

TPM 2.0 Changes & Troubleshooting
The LUKS-encrypted volume layout (rfs, securedata) remains the same
LUKS keys are now stored on the TPM for M5 servers with TPM 2.0; there is no key in secure bootflash
Failure to mount /securedata will cause DME core dumps
Investigate using: journalctl -u atomix-boot-setup.service
Find cores under /dmecores

TPM 1.2 support in APIC 4.0
Supported for APIC-SERVER-M2 and L1/L2
Key storage in bootflash is unchanged for APIC 4.0, but atomix-boot-setup.service is what does the unlocking now

LVM Troubleshooting & Recovery
Creating a new boot LV if Atomix fails to do so
Symptom: failure to boot, dracut prompt
Find the origin of the old boot LV:
  awk '/INSTALL boot/ {x = $4} END {print x}' /config/manifest
Create a new snapshot from that origin:
  name=$(awk '/INSTALL boot/ {x = $4} END {print x}' /config/manifest)
  lvcreate -ay -kn -n boot --snapshot vg_ifc0/$name

Platform Compatibility Support Matrix
APIC-SERVER-M2: 2.x, 3.x, 4.x
APIC-SERVER-L1: 2.x, 3.x, 4.x
APIC-SERVER-L2: 2.x, 3.x, 4.x
APIC-SERVER-L3 / M3: 4.x only

Upgrade / Downgrade Compatibility Matrix
(to/from versions 2.2, 3.2.2, and 4.x)

Upgrade / Downgrade Compatibility Matrix, APIC-SERVER-L3 / M3
From 2.2: N/A for all targets (2.x cannot be installed on L3 / M3)
From 3.2.2: N/A for all targets (3.x cannot be installed on L3 / M3)
From 4.x: downgrade to 2.2 or 3.2.2 is not supported on L3 / M3

Mini ACI Fabric & Virtual APIC (vAPIC)
ACI 4.0.1 introduces the Mini ACI fabric with virtual APICs (vAPIC):
  Reduces the physical footprint of the APIC cluster: 1 physical APIC + 2 virtual APICs (vAPIC)
  Supports small ACI fabric deployments: up to 200 edge ports
One physical APIC is mandatory to discover the switches and the vAPICs
The physical APIC must be the first controller (controller ID 1) added to the cluster
The physical APIC discovers the leaf/spine switches initially; the vAPICs are deployed and discovered after that
(Topology: one physical APIC plus two virtual APICs hosted on a vSwitch/DVS on ESX)

Virtual APIC (vAPIC) Discovery
During the first boot of the vAPIC, it connects to the physical APIC as part of the bootstrap logic to generate its certificate. The vAPIC uses the passphrase provided by the user from the physical APIC as input to get its certificate signed by the physical APIC. The physical APIC ID is always 1, and its IP address can be derived from the TEP pool.
Once the certificate is generated, the vAPIC stores the certs as /securedata/ssl/server.crt (vAPIC cert), /securedata/ssl/server.key (vAPIC key), and /securedata/cacerts/cabundle.crt (root CA).
gen_cert also sends the vAPIC ID/chassisId/serial number to the pAPIC, and the pAPIC sends back its chassis ID. On the vAPIC, the pAPIC chassisId is written to the file /data/avdb/vapic; when bootmgr starts, it reads the pAPIC chassisId and composes a Discovery message sent to the local AD to discover the pAPIC.
(Components involved: nginx, certgen/BMA, and AD on each controller; they sign certs and exchange UUID/serial numbers to discover APIC1 and APIC2.)

Virtual APIC (vAPIC) Discovery (cont'd)
On the pAPIC, when nginx receives the vAPIC ID/chassisId/serial number, it sends a method with that information to the pAPIC bootmgr. The pAPIC bootmgr then composes a Discovery message for the vAPIC to the local AD. Upon user approval, the vAPIC becomes part of the cluster: the vAPIC is discovered.
The CA should be updated on the leaf and spine switches so that the vAPIC can establish the IFM connection. The vAPIC should then be able to communicate with the leaf and spine switches and push policy.
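Once the vAPIC has been approved and joins the cluster, it is worth confirming that all three controllers are visible and fit before continuing. A minimal check, assuming CLI access to the physical APIC and standard APIC diagnostic tooling (the GUI equivalent is System > Controllers):

    # Show the appliance vector / cluster view as seen by this APIC;
    # all three controllers (1 physical + 2 virtual) should be listed.
    acidiag avread

    # Basic certificate and controller sanity check.
    acidiag verifyapic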

vAPIC Virtual Machine Requirements
vAPIC VM hardware specs:
  vCPUs: 8
  Memory: 32 GB
  Two hard disks:
    HDD 1: 300 GB (local HDD/SSD)
    HDD 2: 100 GB (local SSD)
  Two network adapters:
    Network adapter 1 (VMXNET 3): out-of-band management
    Network adapter 2 (VMXNET 3): ACI infra VLAN trunking
The SSD used for HDD 2 needs a write speed of at least 50 MB/s. This is verified during the VM installation; if the write speed does not meet the requirement, the vAPIC deployment will not continue.
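Before deploying, the disk backing HDD 2 can be sanity-checked against the 50 MB/s requirement. A rough sketch using dd from a Linux host or VM that has the candidate SSD mounted; the mount path is hypothetical, and the installer's own check remains authoritative:

    # Write 512 MB with direct I/O and a final fsync, then read the
    # throughput figure dd prints (it should be >= 50 MB/s).
    dd if=/dev/zero of=/mnt/ssd-datastore/vapic-writetest bs=1M count=512 \
       oflag=direct conv=fsync
    rm -f /mnt/ssd-datastore/vapic-writetest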

vAPIC Virtual Machine Deployment: Information Needed
The following information about the ACI fabric needs to be provided during the vAPIC virtual machine deployment:
  ACI fabric name
  ACI fabric ID
  APIC cluster size: 3
  Pod ID
  TEP pool (get this from the physical APIC)
  ACI infra VLAN ID
  The passphrase generated by the physical APIC. During the initial boot-up of the vAPIC VM, the passphrase is used to create a signed certificate for the vAPIC to join the APIC cluster. The physical APIC generates a new passphrase every hour, so obtain the latest one when deploying the vAPIC VM.
The following vAPIC-specific configuration is also needed:
  vAPIC controller name
  vAPIC IPv4 OOB IP address / prefix / gateway

vAPIC Virtual Machine Deployment (cont'd): Obtain the Passphrase on the Physical APIC
As part of the vAPIC VM configuration, a passphrase generated by the physical APIC needs to be provided. During the initial boot-up of the vAPIC VM, the passphrase is used to create a signed certificate for the vAPIC to join the APIC cluster.
In the Visore object browser, search for "pkiFabricSelfCAEp" and copy the passphrase from the field "currCertReqPassphrase".
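The same object can be read over the APIC REST API instead of Visore. A minimal sketch with curl, assuming an admin login and an APIC reachable at the placeholder address 10.0.0.1; the class and attribute names are the ones quoted above, and the grep is just one convenient way to print the field:

    # Authenticate and capture the session cookie (placeholder credentials).
    curl -sk https://10.0.0.1/api/aaaLogin.json -c cookie.txt \
         -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'

    # Read the fabric CA endpoint object and pull out the current passphrase.
    curl -sk -b cookie.txt \
         https://10.0.0.1/api/node/class/pkiFabricSelfCAEp.json \
      | grep -o '"currCertReqPassphrase": *"[^"]*"'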

vAPIC Network Configuration Requirements: ACI Fabric
On the ACI fabric:
  The ACI fabric must already be up and running with a physical APIC.
  The ESX hosts for the vAPICs need to be connected to front-panel ports of leaf switches. Redundant connectivity via PC/vPC is recommended.
  The ACI infra VLAN needs to be deployed on these leaf ports, i.e. their associated AEP needs to have the infra VLAN enabled.
The infra VLAN can be enabled on the leaf ports via the UI, or the Attachable Entity Profile (AEP) can be configured with the infra VLAN enabled via the API.
Then associate the AEP to the leaf ports using the API (in this example, nodes 101 and 102, interface eth1/25, connected to the ESX host), as sketched below.
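A rough sketch of what the API step might look like, reusing the session cookie from the earlier login. The AEP name and the use of an infraProvAcc child to enable the infra VLAN reflect common ACI practice but are assumptions here, not taken from the slides; the interface-policy objects that bind the AEP to node 101/102 eth1/25 are omitted:

    # Create (or update) an AEP with the infra VLAN enabled; the infraProvAcc
    # child is what the "enable infrastructure VLAN" option corresponds to.
    curl -sk -b cookie.txt -X POST \
         https://10.0.0.1/api/mo/uni/infra.xml \
         -d '<infraAttEntityP name="vapic-aep">
               <infraProvAcc/>
             </infraAttEntityP>'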

vAPIC Network Requirements & Configuration: VMware Virtual Network
On VMware vCenter or the ESXi host:
  A standard vSwitch or DVS needs to be created, with the ESX host's links to the ACI leaf switches as its uplinks.
  The vSwitch/DVS uplink needs to trunk the ACI infra VLAN (in addition to any data VLANs it needs to carry).
  A port-group needs to be created on the vSwitch/DVS configured for VLAN trunking of the ACI infra VLAN. Assign the 2nd vNIC of the vAPIC VM to this port-group.
Note: if the DVS is part of an ACI-integrated VMM domain, the port-group can be created by the APIC through the automation in ACI VMM integration.
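For the standard-vSwitch case, the port-group can be created from the ESXi shell. A minimal sketch with esxcli, using placeholder names (vSwitch0 and a port-group called vapic-infra); VLAN ID 4095 makes a standard-vSwitch port-group pass tagged frames through to the guest, which is one way to satisfy the infra-VLAN trunking requirement:

    # Create a port-group on the standard vSwitch whose uplinks face the leafs.
    esxcli network vswitch standard portgroup add \
        --portgroup-name=vapic-infra --vswitch-name=vSwitch0

    # VLAN 4095 on a standard vSwitch means "pass VLAN tags to the guest",
    # so the vAPIC's 2nd vNIC receives the infra VLAN tagged.
    esxcli network vswitch standard portgroup set \
        --portgroup-name=vapic-infra --vlan-id=4095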

vAPIC Network Requirements & Configuration: Sample DVS Configuration in the vCenter GUI
Host vmnics are in the uplink.
The VM port-group config: VLAN trunking for the ACI infra VLAN.

vAPIC Network Requirements & Configuration: Using VMM Integration to Configure the DVS
In the case of vCenter VMM integration, the APIC can push the desired configuration to the vCenter DVS: check the box next to "Configure Infra Port Groups", then click "Submit".
Using the API to configure the vCenter DVS through the VMM domain: configure vmmDom to create the DVS switch and infra port-group in vCenter, along with the leaf nodes and ports to which the ESXi host is connected. Select the VMM domain if the DVS is in an integrated VMM domain; otherwise, leave it empty.

vAPIC Network Requirements & Configuration: Using the vAPIC Wizard to Configure the Network
The ACI GUI provides a wizard for the network provisioning required for the vAPIC. In the case of vCenter VMM integration, the wizard will provision the needed configuration on the VDS as well.

vAPIC Virtual Machine Deployment Options
The vAPIC uses the same ISO image as the physical APIC.
There are two options to deploy the vAPIC VM:
  Deploy on vCenter using the OVA file provided by Cisco.
  Deploy directly on the ESXi host using the APIC ISO image. (This option is useful when the ESXi host isn't managed by vCenter.)
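For the OVA path, the deployment can also be scripted rather than clicked through the vCenter wizard. A rough sketch with VMware's ovftool, where the OVA file name, datastore, network names, and vCenter address are all placeholders; the vAPIC-specific values (fabric name, TEP pool, passphrase, and so on) are still supplied separately and are not shown here:

    # Inspect the OVA first to see its actual network names and OVF properties.
    ovftool vapic.ova

    # Deploy onto an SSD-backed datastore, mapping the first adapter to the
    # OOB management port-group and the second to the infra-VLAN port-group.
    ovftool --acceptAllEulas --name=vapic-2 \
            --datastore=ssd-datastore-01 \
            --net:"OOB Network"="oob-mgmt-pg" \
            --net:"Infra Network"="vapic-infra" \
            vapic.ova \
            'vi://administrator@vsphere.local@vcenter.example.com/DC1/host/Cluster1/'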

vAPIC Virtual Machine Deployment: VM Deployment on vCenter Using the OVA File
Edit and choose an SSD datastore for the VM's SSD disk; you need to go to the Advanced mode of the VM storage (hard disk) configuration.
Choose the OOB management port-group for the OOB network and the ACI infra VLAN port-group for the infra network.
(The remaining deployment steps follow the standard vCenter OVA wizard screens.)

vAPIC Upgrade / Downgrade
The same physical APIC ISO image is used to upgrade the entire APIC cluster; there is no need to directly upgrade the vAPICs.
Upgrade the physical APIC with the normal upgrade procedure. It will decrypt the secure image and send the image to the vAPIC for upgrade.
Downgrade to a release lower than ACI 4.0 is not supported.

vAPIC Tech Support / ID Recovery / Decommission / Clean Reboot
Tech support and config import work just like on the physical APIC.
ID recovery is not supported.
Decommission works like on the physical APIC.
Clean reboot: the vAPIC VM configuration needs to be updated with the current passphrase of the physical APIC before powering up the VM.

vAPIC Scalability in ACI 4.0 (maximum scale)
Multicast groups (L2 and L3): 200
BGP + OSPF sessions: 25
VMM domains: 10
Service graphs: 20
L4-L7 devices: 6 physical, 20 virtual
Pods: 1
GOLF VRF / route scale: N/A
FEX ports: 144
Tenants: 25
Endpoints: 20k
BDs: 1000
EPGs: 1000
uSeg EPGs: 100
VRFs: 25
Leafs: 4
Spines: 2
Contracts: 2000

Limitations of vAPIC in the ACI 4.0 Release
vAPIC VM limitations:
  DRS is not supported
  vMotion is not supported
  VM HA is not supported
  Must use a local SSD on the ESXi host for the vAPIC VM SSD
  Must use a local HDD/SSD on the ESXi host for the vAPIC VM HDD
vAPIC fabric function limitations:
  No support for Multi-Pod / Multi-Site / vPod / Remote Leaf
  vAPIC cannot be a standby for a pAPIC
  No support for ID recovery

Host-Based Routing Advertisement
Why is host routing from border leaves needed?
In Multi-Pod and Multi-Site use cases, customers may configure different L4-L7 device clusters per pod or site. Egress traffic from a pod or site will prefer the local L3Out, but ingress traffic can arrive at the L3Outs of either pod or site. This can lead to asymmetric traffic flows, where ingress and egress traffic take different paths, resulting in firewalls dropping the flow.
(Diagram: two pods connected over an IPN, each with a WAN L3Out, both advertising the same /24 BD subnet.)

GOLF already supports host route advertisement. Why not use GOLF for host routes?
Customers using GOLF: GOLF is primarily used by large SPs or large enterprise customers (roughly 15-20% of customers).
Benefits of GOLF:
  Auto-programming of VRFs on the GOLF routers
  One BGP EVPN session for all fabric VRFs
  VXLAN data-plane handoff to the GOLF router
  EVPN-route to L3VPN-route translation
  Host routing
  Works like a charm once you get it up
Issues with GOLF:
  Complex to deploy in real life, especially with OpFlex
  The ASR9k team only recommends manual GOLF, not OpFlex
  Missing many basic features: multicast, VRF leaking, TrustSec, etc.

Host-Route Advertisement Overview
To ensure traffic symmetry, as of ACI 4.0 it is possible to advertise host routes (/32 and /128) from the border leaves. The host route is advertised only if the host is connected to the local pod or local site.

If the EP moves away from the local pod or site, or once the EP is removed from the EP database, the route advertisement is withdrawn.
A remote leaf will advertise host routes for endpoints connected to the remote leaf pair and the pod where the remote leaf is associated.
Supported routing protocols: BGP, OSPF, EIGRP.
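Host-route advertisement is enabled per bridge domain ("Advertise Host Routes" on the BD). A rough sketch of the corresponding API call, reusing the earlier session cookie; the tenant and BD names are placeholders, and the hostBasedRouting attribute on fvBD is an assumption of how the GUI option maps to the object model, not something stated in the slides:

    # Enable "Advertise Host Routes" on an existing bridge domain BD1 in tenant T1.
    curl -sk -b cookie.txt -X POST \
         https://10.0.0.1/api/mo/uni/tn-T1/BD-BD1.xml \
         -d '<fvBD name="BD1" hostBasedRouting="yes"/>'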

Host-Based Routing: Hardware and Scale
Supported on all CloudScale and later switches (EX/FX/FX2/FX3); not supported on 1st-generation hardware.
Tested border-leaf host scale is 30k host routes (sum of /32 and /128).

Host Route Behavior
(Diagram: endpoints .1, .2, .3 in BD1 and .1, .2 in BD2 behind non-border leaves; the border leaves advertise the /24 BD subnets plus /32 host routes, with COOP oracles on the spines and COOP citizens on the leaves.)
Endpoint information is stored o
