VCP Club: Teaser Questions & Ice Breaker. Just for fun.
Iwan ‘e1’ Rahabok, e1@vmware.com | virtual-red-dot.blogspot.com
Introduction
• There are only 10+ questions.
• For each, think of the answer. A lot of dummy/funny/wrong answers are provided. The correct answer may or may not be there.
• At the end, ask yourself:
• If the questions were easy and you knew the answers, you are ready for the VCP Club session.
• If the questions were hard, then you should consider attending the vSphere Design Workshop.
• On a serious note: I attended the workshop and got a major surprise on something I thought was 101.
What Slot Size? How is Slot Size determined in HA?
• vSphere 4.1 no longer uses slot size.
• Let me call my manager.
• Slot size is based on actual usage, or on what is configured/allocated (say you configure the VM with 2 vCPU).
• In vSphere 4.1 we now have Storage I/O Control, so disk is now taken into account in slot size. See the picture below.
• Hmm… good question.
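For the curious, here is a sketch of the actual mechanics. In vSphere 4.x the slot size is derived from the largest CPU and memory reservations among powered-on VMs (with a default, 256 MHz in 4.x, when no VM reserves CPU), not from vCPU count or observed usage. The VM numbers below are made up for illustration:

```python
# Minimal sketch of vSphere 4.x HA slot-size math (illustrative only;
# defaults and rounding details differ across versions).
DEFAULT_CPU_MHZ = 256  # 4.x default when no VM has a CPU reservation

def slot_size(vms):
    """Slot size = largest CPU/memory reservation among powered-on VMs."""
    cpu = max([vm["cpu_res_mhz"] for vm in vms] + [DEFAULT_CPU_MHZ])
    mem = max(vm["mem_res_mb"] + vm["overhead_mb"] for vm in vms)
    return cpu, mem

vms = [
    {"cpu_res_mhz": 0,    "mem_res_mb": 0,    "overhead_mb": 100},
    {"cpu_res_mhz": 2000, "mem_res_mb": 4096, "overhead_mb": 150},
]
print(slot_size(vms))  # -> (2000, 4246): the biggest reservations win
```

So option C is the trap: neither allocation nor actual usage drives it, only reservations (plus memory overhead).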
How do you explain this? (Previous chart: 3 Total Slots, but 14 Used Slots)
• Hmm… 3 total slots, but 14 are used. Strange…
• A bug. Must be, man.
• Sorry, I was sleeping during the ICM class.
• My trainer’s fault. He never explained it properly.
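One hedged explanation (the numbers below are hypothetical, chosen only to reproduce the screenshot): a single VM with a large CPU reservation inflates the slot size, which shrinks the total slot count, while every powered-on VM still consumes at least one slot. Used can then legitimately exceed total:

```python
# Hypothetical numbers that reproduce "3 total slots, 14 used".
cluster_cpu_ghz = 30   # combined CPU capacity of the cluster
slot_size_ghz   = 9    # one VM has a 9 GHz CPU reservation
powered_on_vms  = 14   # each powered-on VM consumes >= 1 slot

total_slots = cluster_cpu_ghz // slot_size_ghz
print(total_slots, powered_on_vms)  # -> 3 14: used exceeds total
```

When used exceeds total like this, admission control (if enabled) will typically start refusing further power-ons.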
What does the previous chart tell us?
• Nothing, man. See no evil, hear no evil.
• Oh, oh. What’s that darn spike?
• I’m not clear what CPU Ready means. I thought the CPU is always ready!
• This looks like the ECG diagram from when I did my health check.
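A quick refresher: CPU Ready is the time a vCPU sat ready to run but could not get a physical CPU. vCenter’s real-time charts report it as a millisecond summation per sample, so turning it into a percentage is one line of arithmetic. A sketch, assuming the 20-second real-time sample interval:

```python
# Convert a vCenter "CPU Ready" summation (ms) to a percentage.
# Assumes the real-time chart's 20-second sample interval.
def ready_percent(ready_ms, interval_s=20):
    return ready_ms / (interval_s * 1000) * 100

print(ready_percent(2000))  # 2000 ms over 20 s -> 10.0 (% of the interval)
```

A spike on the chart therefore means the vCPU spent that fraction of the interval queued, not running.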
How do you get 2 GE performance in NFS?
Your ESX has 2 ports. Each is 1 GE, dedicated to NFS traffic and not shared with anything else. It is connected to 1 NFS array. The array has 2 GE ports per SP, so 4 GE in total from the array. The array has 10 datastores.
• It just gets it. ESX is smart, right?
• I got 2 GE when I configured NIC teaming properly.
• NFS performance is bad, so I don’t use it.
• By praying really hard.
• By getting NetApp/EMC to do it. That’s their job, right?
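A hedged hint at why option B alone is not enough: ESX mounts each NFSv3 datastore over a single TCP session, and IP-hash teaming pins each source/destination IP pair to one uplink. Spreading the 10 datastores across several array IP addresses is what lets both uplinks carry traffic. A toy version of the uplink choice (the real vSwitch hash differs, and the IPs are made up):

```python
# Simplified IP-hash uplink selection: one uplink per src/dst IP pair.
# (The real vSwitch hash differs; this just shows the pinning behaviour.)
import ipaddress

def uplink(src_ip, dst_ip, n_uplinks=2):
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_uplinks

src = "10.0.0.10"                       # the ESX VMkernel port
for dst in ["10.0.0.21", "10.0.0.22"]:  # two array target IPs
    print(dst, "-> uplink", uplink(src, dst))
```

With one array IP, every datastore hashes to the same uplink and you stay at 1 GE; with several, different datastores can land on different links.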
VMware HA Cluster
You have the above setup: a cluster of 4 ESXi hosts spread across 2 blades in 2 racks. For some reason, the blade in Chassis 1 lost connection to the default gateway. That’s all it lost. All other connections are intact.
From the previous slide: what happens next?
• We are dead. The entire blade in Chassis 1 will go down.
• No problem, my friend. Life goes on.
• This problem can never happen. Not in my environment.
• Oh, oh, split brain will happen.
• ESX will start the isolation response, like the one below.
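The scenario hinges on one detail: by default an ESX 4.x host declares itself isolated when it cannot ping its isolation address, and that address is the default gateway unless you override it with the das.isolationaddress advanced option. Losing only the gateway can therefore trigger the isolation response on otherwise healthy hosts. A sketch of the decision; ping() and the printed response are illustrative, while the option name is real:

```python
# Sketch of the HA isolation check. das.isolationaddress is a real
# advanced option; ping() and the response text are illustrative.
def is_isolated(ping, isolation_addresses):
    """Host considers itself isolated only if ALL addresses are unreachable."""
    return not any(ping(addr) for addr in isolation_addresses)

addresses = ["192.168.1.1"]          # default: the gateway, and only the gateway
gateway_down = lambda addr: False    # scenario: gateway unreachable
if is_isolated(gateway_down, addresses):
    print("trigger isolation response (e.g. shut down / power off VMs)")
```

Adding a second isolation address on a network that did not fail (storage, for instance) would keep the Chassis 1 hosts from declaring themselves isolated here.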
When you vMotion a 16 GB VM…
You have 10 VMs and 2 ESX hosts. You want to vMotion 1 VM. It has 16 GB of RAM. It used all 16 GB but has gone idle. It has no TPS. You have a dedicated vMotion LAN: 2x 1 GE with IP-hash teaming. You use vSphere 4.1.
• All 16 GB will travel on 1 port. The other cable is idle.
• Whoever answers A is a fake VCP.
• Only active pages are copied. Not all 16 GB are copied.
• 16 GB, but on both wires.
• I don’t use 16 GB. vSphere cannot handle big RAM, as boot takes too long.
• Darn… now I’m confused.
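On the teaming part, at least, the maths is unforgiving: a vMotion in 4.1 is one TCP stream between one pair of VMkernel IPs, and IP-hash maps a fixed IP pair to exactly one uplink. The same toy hash as in the NFS sketch shows the pinning (IPs made up):

```python
# Same simplified hash as in the NFS sketch: a fixed src/dst pair
# always maps to the same uplink, so one vMotion rides one 1 GE link.
import ipaddress

def uplink(src_ip, dst_ip, n_uplinks=2):
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_uplinks

# One vMotion = one IP pair = one uplink, every time.
print(uplink("10.0.1.11", "10.0.1.12"))  # same index on every call
```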
vSwitch is a Layer 2 switch
• Hey, what nonsense. It is a Layer 3 switch too.
• It’s Layer 3, because traffic from different VMs on the same vSwitch gets short-circuited. That’s why we need a virtual firewall.
• No, it’s not short-circuited, unless they are on the same segment.
• Well, it depends if a port group is used. Same segment but different port group: no short-circuit.
• Wait, if you forget to specify a VLAN tag in the port group, they get short-circuited.
• Enough, enough! What do you mean by Layer 2 anyway? This is not a layered cake.
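For reference, “Layer 2” here means the vSwitch forwards on destination MAC plus VLAN and never routes between subnets. A toy model of the forwarding decision (MAC addresses and table made up):

```python
# Toy model of vSwitch Layer-2 forwarding: deliver locally only when the
# destination MAC is known on this host AND the VLAN matches.
mac_table = {"00:50:56:aa:00:01": ("vm1", 10),
             "00:50:56:aa:00:02": ("vm2", 10),
             "00:50:56:aa:00:03": ("vm3", 20)}

def forward(dst_mac, vlan):
    port, port_vlan = mac_table.get(dst_mac, (None, None))
    if port and port_vlan == vlan:
        return f"deliver locally to {port}"  # the 'short-circuit'
    return "flood/send to uplink"            # an external router decides

print(forward("00:50:56:aa:00:02", 10))  # same VLAN -> local delivery
print(forward("00:50:56:aa:00:03", 10))  # different VLAN -> uplink
```

Frames between two VMs in the same VLAN on the same host never touch the physical wire, which is the “short-circuit” the virtual-firewall argument is about; different VLANs go out the uplink to a router.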
Can we overcome the 2 TB vmdk limit?
• With Physical Compatibility Mode, you can exceed 2 TB.
• It does not matter what mode; you can’t. The standard 10-byte CDB of SCSI limits it.
• Answer B is correct, but if you use the para-virtualised SCSI driver in the VM, you can overcome it.
• The guy who answered B is drunk. Wait a minute! You can use extents!
• I know, I know! Mount it directly via the software iSCSI initiator inside the VM. Then use a special driver from Intel to load-balance it, as VMware Tools does not provide NIC teaming.
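Where the 2 TB figure in option B comes from: a 10-byte CDB such as READ(10) carries a 32-bit logical block address, and 2^32 blocks of 512 bytes is 2 TiB. The arithmetic:

```python
# Why the classic limit is ~2 TB: READ(10)/WRITE(10) CDBs carry a
# 32-bit LBA, and SCSI blocks are 512 bytes.
max_blocks = 2 ** 32
block_size = 512
print(max_blocks * block_size / 2**40, "TiB")  # -> 2.0 TiB
```

(The vSphere 4.x per-extent VMFS limit is usually quoted as 2 TB minus 512 bytes, i.e. one block short of that ceiling.)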
Storage Multi-Pathing
• Round Robin is still active-passive. At any given time, IO only goes via 1 path.
• The guy who answers A is a Hyper-V administrator.
• A is incorrect when used with an Active/Active array, as all paths are active at the same time.
• You can load a multipathing plug-in from NetApp or HDS and replace the baby one from VMware.
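What Round Robin actually does, hedged: the NMP Round Robin policy rotates across all working paths, but each individual IO still travels exactly one path, and by default it only switches paths after a fixed number of commands (1,000 is the commonly quoted 4.x default). A sketch of that rotation (the class and path names are made up):

```python
# Sketch of Round Robin path selection: rotate to the next path
# after `iops` commands (1000 is the commonly quoted 4.x default).
class RoundRobinPSP:
    def __init__(self, paths, iops=1000):
        self.paths, self.iops = paths, iops
        self.idx, self.sent = 0, 0

    def select_path(self):
        if self.sent >= self.iops:  # time to move to the next path
            self.idx = (self.idx + 1) % len(self.paths)
            self.sent = 0
        self.sent += 1
        return self.paths[self.idx]

psp = RoundRobinPSP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"], iops=2)
print([psp.select_path() for _ in range(5)])  # alternates every 2 IOs
```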
Enhanced vMotion Compatibility
• Don’t use it, man. Your CPU will be clocked down to the lowest common denominator.
• Use it. It helps you future-proof.
• It is rather limited to 1 generation, so you can’t skip a generation (say from Xeon 5300 to 5500).
• It is also limited to the class of CPU, so you can’t vMotion from Xeon 5000 to Xeon 7000.
• It does not really work, as it’s just a mask. A badly written app can still use the instruction set.
• Let me toss a coin… This is how it works in production anyway.
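The “just a mask” option deserves unpacking: EVC lowers the CPUID feature bits every host advertises to a common baseline; it never changes clock speed. Conceptually it is a bitwise AND, sketched here with made-up flag values:

```python
# EVC in one line: the cluster baseline is the intersection (bitwise AND)
# of the hosts' CPUID feature bits. Flag values here are made up.
SSE4_1, SSE4_2, AES_NI = 0b001, 0b010, 0b100

xeon_5400 = SSE4_1                    # older host
xeon_5600 = SSE4_1 | SSE4_2 | AES_NI  # newer host

baseline = xeon_5400 & xeon_5600
print(bool(baseline & AES_NI))  # False: guests never see AES-NI
```

That is also why an app that skips the CPUID check and issues the instruction anyway may still execute it: the mask hides the capability bit, it does not trap the instruction.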
Does DRS move powered-off VMs?
• Yes. If you put the underlying ESX host into maintenance mode, DRS will move all VMs (powered off or on).
• Hey, what are you smoking? DRS is about load balancing based on actual usage. A VM that is off does not consume any CPU/RAM, so it will never be moved.
MSCS and HA/DRS
• MS Clustering is not supported in 4.1. Coming in Update 1.
• Of course it is, you bozo. You must turn on a VM-VM anti-affinity rule so the 2 MSCS VMs are always kept apart.
• The guy who answered B is the bozo. VMware HA does not obey VM-VM affinity rules. In vSphere 4.1 you can set 2 VM-Host groups: put MSCS VM 1 in Host group 1 and MSCS VM 2 in Host group 2. So you need at least 4 nodes in the HA cluster.
• You must disable DRS for the MSCS VMs.
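Option C’s VM-Host group trick in one picture: each MSCS node is only ever placed (or restarted by HA) on hosts in its own group, so the two nodes cannot end up together. A toy placement check (host and VM names made up):

```python
# Toy check of the 4.1 VM-Host group idea: each MSCS node may only
# restart on hosts in its own group, so the nodes stay apart.
host_groups = {"group1": {"esx1", "esx2"}, "group2": {"esx3", "esx4"}}
vm_to_group = {"mscs_node1": "group1", "mscs_node2": "group2"}

def allowed_hosts(vm):
    return host_groups[vm_to_group[vm]]

# Even if esx1 fails, node1 can only land on esx2, never beside node2.
print(allowed_hosts("mscs_node1"))  # {'esx1', 'esx2'}
print(allowed_hosts("mscs_node2"))  # {'esx3', 'esx4'}
```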
How did you go? You may find the following doc useful as a follow-up: http://communities.vmware.com/docs/DOC-13850