1. Evolution of Cloud Computing Mitesh Soni
http://clean-clouds.com
2. Bell's Law of Computer Classes Roughly every decade, a new, lower-priced computer class forms based on a new programming platform, network, and interface, resulting in new usage and the establishment of a new industry. It also takes up to a decade to understand how the class formed, evolved, and is likely to continue. Once formed, a lower-priced class may evolve in performance to take over and disrupt an existing class.
3. Evolution
4. Typical IT Environment
5. Typical IT Environment
6. Business Drivers Cost Saving
To reduce upfront investment in infrastructure
Elasticity
Need to quickly adjust to changes in infrastructure requirements
Reduce time to market
Requisition -> Approval Workflows -> Acquisition -> Installation and Configuration -> Maintenance
Experimental / Innovative Projects / Proof of concepts
7. Capacity Utilization
8. Virtualization Create multiple virtual environments on a single physical resource
Decouple OS from hardware
Create an abstraction between the OS and the hardware
Optimization of resource utilization
A virtual machine (VM) is a software implementation of a computing environment in which an operating system (OS) or program can be installed and run.
The virtual machine typically emulates a physical computing environment, but requests for CPU, memory, hard disk, network and other hardware resources are managed by a virtualization layer which translates these requests to the underlying physical hardware.
VMs are created within a virtualization layer, such as a hypervisor or a virtualization platform that runs on top of a client or server operating system. This operating system is known as the host OS. The virtualization layer can be used to create many individual, isolated VM environments.
Increased Flexibility (better mobility, backup and disaster recovery capability, and the ability to copy and move virtual machines, since a VM is just a set of files)
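To make the idea of a virtualization layer concrete, here is a minimal sketch, not taken from the slides, using the libvirt Python bindings against a local QEMU/KVM host. The domain XML, disk path, and VM name are illustrative assumptions; the point is that the guest's CPU, memory, and disk are declared to the virtualization layer, which maps those requests onto the physical host.

# Minimal sketch: defining and starting a VM through a virtualization layer
# using the libvirt Python bindings. Assumes libvirt-python is installed and
# a local QEMU/KVM hypervisor is reachable at qemu:///system; the XML, name,
# and disk path below are illustrative only.
import libvirt

# Domain XML describing the virtual hardware; CPU, memory, and disk requests
# here are translated to the underlying physical host by the hypervisor.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the host's hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the VM (metadata plus a set of files)
dom.create()                            # boot the guest OS on top of the hypervisor
print([d.name() for d in conn.listAllDomains()])
conn.close()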
9. Virtualization In computing, a hypervisor, also called a virtual machine monitor (VMM), is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host computer.
Type 1 (native or bare-metal) hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems; a guest operating system thus runs at a level above the hypervisor. This model represents the classic implementation of virtual machine architectures; the original hypervisor was CP/CMS, developed at IBM in the 1960s and the ancestor of IBM's z/VM. Modern equivalents include Citrix XenServer, VMware ESXi, and Microsoft Hyper-V.
Type 2 (hosted) hypervisors run within a conventional operating system environment. With the hypervisor layer as a distinct second software level, guest operating systems run at the third level above the hardware. KVM and VirtualBox are examples of Type 2 hypervisors.
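As a small, hedged illustration of the guest's view of these layers (not taken from the slides), the Python sketch below checks, from inside a Linux guest, which hypervisor it is running on. The paths and tool names (/sys/hypervisor/type, systemd-detect-virt) are Linux-specific assumptions and will not be present on every guest.

# Minimal sketch: report which hypervisor a Linux guest is running on.
from pathlib import Path
import shutil
import subprocess

def detect_hypervisor() -> str:
    # Xen (and some other hypervisors) expose /sys/hypervisor/type to the guest.
    hv_type = Path("/sys/hypervisor/type")
    if hv_type.exists():
        return hv_type.read_text().strip()
    # systemd-based guests ship systemd-detect-virt, which prints names such as
    # 'kvm', 'vmware', 'microsoft', 'oracle' (VirtualBox), 'xen', or 'none'.
    if shutil.which("systemd-detect-virt"):
        out = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
        return out.stdout.strip() or "none"
    return "unknown"

if __name__ == "__main__":
    print("Detected virtualization:", detect_hypervisor())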
10. Cloud Journey Server consolidation is the management of an organization's total server complement to eliminate multiple, individual servers and maximize available resources by running several different applications on the same server. Database architects or system managers typically perform this role. A server is a computer dedicated to the management of data or software applications; it typically runs a basic operating system, with the rest of its capacity used to support multiple users accessing the same software simultaneously.
There are four things to consider when looking at server consolidation: hardware, redundancy, operating system, and maximizing efficiency. The purpose of server consolidation is to decrease the number of individual servers and maximize available resources. Make an inventory of all your servers, including operating system, installed software and versions, primary function, and user group. If possible, review the total traffic load, peak times, and overall user demand.
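The sizing side of consolidation can be sketched in code. The following Python snippet (not from the slides) packs measured server demand onto target hosts with a simple first-fit-decreasing heuristic; the inventory entries, host capacity, and headroom factor are illustrative assumptions, not recommendations.

# Minimal sketch: estimate how many consolidated hosts an inventory needs.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    avg_cpu_ghz: float   # average CPU actually used
    avg_mem_gb: float    # average memory actually used

HOST_CPU_GHZ = 32.0      # capacity of one consolidation target host (assumed)
HOST_MEM_GB = 128.0
HEADROOM = 0.75          # keep 25% spare for peak demand

inventory = [
    Server("web-01", 2.0, 4.0),
    Server("web-02", 1.5, 4.0),
    Server("db-01", 6.0, 32.0),
    Server("build-01", 4.0, 16.0),
]

def consolidate(servers):
    """Pack servers onto hosts, largest CPU demand first (first-fit decreasing)."""
    hosts = []  # each host: [cpu_used, mem_used, [server names]]
    for s in sorted(servers, key=lambda s: s.avg_cpu_ghz, reverse=True):
        for h in hosts:
            if (h[0] + s.avg_cpu_ghz <= HOST_CPU_GHZ * HEADROOM
                    and h[1] + s.avg_mem_gb <= HOST_MEM_GB * HEADROOM):
                h[0] += s.avg_cpu_ghz
                h[1] += s.avg_mem_gb
                h[2].append(s.name)
                break
        else:
            hosts.append([s.avg_cpu_ghz, s.avg_mem_gb, [s.name]])
    return hosts

for i, (cpu, mem, names) in enumerate(consolidate(inventory), 1):
    print(f"host-{i}: {names} ({cpu:.1f} GHz, {mem:.0f} GB)")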
11. Cloud Deployment Model