Tuesday, June 4, 2019
The History Of Virtualization Information Technology Essay
Introduction

Virtualization is one of the hottest innovations in the Information Technology field, with proven benefits that prompt organizations to strategize for rapid planning and implementation of virtualization. As with any new technology, managers must be careful to analyze how that technology would best fit in their organization. In this document, we will offer an overview of virtualization to help shed light on this quickly evolving technology.

History of Virtualization

Virtualization is Brand New Again

Although virtualization seems to be a hot new cutting-edge technology, IBM originally used it on their mainframes in the 1960s. The IBM 360/67 running the CP/CMS system used virtualization as an approach to time sharing. Each user would run their own virtual 360 machine. Storage was partitioned into virtual disks called P-Disks for each user. Mainframe virtualization remained popular through the 1970s.

During the 1980s and 1990s, virtualization largely disappeared. During the 1980s, there were a couple of products made for Intel PCs. Simultask and Merge/386, both developed by Locus Computing Corporation, would run MS-DOS as a guest operating system. In 1988, Insignia Solutions released SoftPC, which ran DOS on Sun and Macintosh platforms.

The late 1990s would usher in the new wave of virtualization. In 1997, Connectix released Virtual PC for the Macintosh. Connectix would later release a version for Windows and would subsequently be bought by Microsoft in 2003. In 1999, VMware introduced its entry into virtualization.

In the last decade, every major player in servers has integrated virtualization into its offerings.
In addition to VMware and Microsoft, Sun, Veritas, and HP would all acquire virtualization technology.

How Does Virtualization Work?

In the enterprise IT world, servers are necessary to do many jobs. Traditionally, each server does one job, and sometimes many servers are given the same job. The reason behind this is to keep hardware and software problems on one machine from causing problems for other programs. There are several problems with this setup, however. The first problem is that it doesn't take advantage of modern server computers' processing power. [11] Most servers use only a small percentage of their overall processing capabilities. The other problem is that the servers begin to take up a lot of physical space as the enterprise network grows larger and more complex. Data centers might become overcrowded with racks of servers consuming a lot of power and generating heat. Server virtualization tries to fix both of these problems in one fell swoop. [16]

Server virtualization uses specially designed software with which an administrator can convert one physical server into multiple virtual machines. Each virtual server acts as a unique physical device that is capable of running its own operating system. Until recent technological developments, the only way to create a virtual server was to design special software to trick a server's CPU into providing processing power for several virtual machines. Today, however, processor manufacturers such as Intel and AMD offer processors with the capability of supporting virtual servers already built in. In the virtualized environment, the hardware doesn't create the virtual servers; network administrators or engineers still need to create them using the right software. [11]

In the world of information technology, server virtualization is still a hot topic.
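The idea of carving one physical host into several guests, each with its own operating system, can be sketched with a toy data model. The class and attribute names below are made up for illustration; they are not any vendor's actual API.

```python
# Toy model of server virtualization: one physical host carved into guests.
# All names here are illustrative, not any vendor's actual API.

class VirtualMachine:
    def __init__(self, name, os, cpus, ram_gb):
        self.name = name
        self.os = os          # each guest can run its own operating system
        self.cpus = cpus
        self.ram_gb = ram_gb

class PhysicalServer:
    def __init__(self, cpus, ram_gb):
        self.cpus = cpus
        self.ram_gb = ram_gb
        self.guests = []

    def add_guest(self, vm):
        # A real hypervisor manages resources far more cleverly; this
        # sketch simply refuses to overcommit the host's RAM outright.
        used = sum(g.ram_gb for g in self.guests)
        if used + vm.ram_gb > self.ram_gb:
            raise ValueError("not enough RAM on the host")
        self.guests.append(vm)

host = PhysicalServer(cpus=16, ram_gb=64)
host.add_guest(VirtualMachine("web01", "Linux", cpus=4, ram_gb=16))
host.add_guest(VirtualMachine("db01", "Windows", cpus=4, ram_gb=32))
print(len(host.guests))  # two guests with different OSes share one host
```

The point of the sketch is only that the guests are independent records hosted by one machine; the approaches described next differ in how that hosting is implemented.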
Still considered a new technology, several companies offer different approaches to server virtualization. There are three ways to create virtual servers: full virtualization, para-virtualization, and OS-level virtualization. All three variations share a few common traits. The physical server is always called the host. The virtual servers are called guests. The virtual servers all behave as if they were physical machines. However, each of the different methods uses a different approach to allocating the physical server's resources to the virtual servers' needs. [11]

Full Virtualization

The full virtualization method uses software called a hypervisor. The hypervisor works directly with the physical server's CPU and disk space. It serves as the platform for the virtual servers' operating systems. This keeps each server completely autonomous and unaware of the other servers running on the same physical machine. If necessary, the virtual servers can run different operating system software, such as Linux and/or Windows.

The hypervisor also monitors the physical server's resources. It relays resources from the physical machine to the appropriate virtual server as the virtual servers run their applications. Finally, because hypervisors have their own processing needs, the physical server must reserve some processing power and resources to run the hypervisor application. If not done properly, this can affect overall performance and slow down applications. [11]

Para-Virtualization

Unlike the full virtualization method, the para-virtualization approach allows the guest servers to be aware of one another. Because each operating system in the virtual servers is conscious of the demands being placed on the physical server by the other guests, the para-virtualization hypervisor doesn't require as much processing power to oversee the guest operating systems.
In this way the entire system works together as a unified whole. [11]

OS-Level Virtualization

The OS-level virtualization approach doesn't use a hypervisor at all; the virtualization capability is part of the host OS instead. The host OS performs all of the functions of a fully virtualized hypervisor. Because the OS-level approach operates without a hypervisor, it limits all of the virtual servers to one operating system, whereas the other two approaches allow different OSes on the virtual servers. The OS-level approach is known as a homogeneous environment because all of the guest operating systems must be the same. [11]

With three different approaches to virtualization, the question remains as to which method is best. This is where a complete understanding of enterprise and network requirements is imperative. If the enterprise's physical servers all run the same OS, then the OS-level approach might be the best solution. It tends to be faster and more efficient than the others. However, if the physical servers are running several different operating systems, para-virtualization or full virtualization might be better approaches.

Virtualization Standards

Despite the ever-increasing adoption of virtualization, very few standards have become prevalent in this technology. As the migration to virtualization grows, so does the need for open industry standards. This is why the work on virtualization standards is viewed by several industry observers as a giant step in the right direction. The Distributed Management Task Force (DMTF) actively promotes standards for virtualization management to help industry suppliers implement compliant, interoperable virtualization management solutions.

The strongest standard to be created for this technology was the Standardization of Management in a Virtualized Environment. It was accomplished by a team that built on standards already in place.
This standard lowers the IT learning curve and complexity for vendors implementing this support in their management solutions. Its ease of use makes this standard successful. The new standard identifies supported virtualization management capabilities, including the ability to:

discover/inventory virtual computer systems
manage the lifecycle of virtual computer systems
create/modify/delete virtual resources
monitor virtual systems for health and performance

Virtualization standards are not suffering as a result of poor development but rather because of the common IT challenge involved in pleasing all users. Until virtualization is standardized, network professionals must continue to meet these challenges within a dynamic data center. For example, before the relationship between Cisco and VMware was established, Cisco's Data Center 3.0 was best described as thin. 150 million dollars later, Cisco was able to establish a successful integration that allows VFrame to load VMware ESX Server onto bare-metal hardware (something that previously could only be done with Windows and Linux) and configure the network and storage connections that ESX required.

In addition, Microsoft made pledges only in the Web services arena, where it faces tougher open-standards competition. The company's Open Specification Promise allows every individual and organization in the world to make use of the Virtual Hard Disk Image format forever, Microsoft said in a statement. VHD allows the packaging of an application with that application's Windows operating system. Several such combinations, each in its own virtual machine, can run on a single piece of hardware.

The future standard of virtualization is the Open Virtual Machine Format (OVF). OVF doesn't aim to replace the pre-existing formats, but instead ties them together in a standards-based XML package that contains all the necessary installation and configuration parameters.
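As a rough illustration of that packaging idea, the following builds a heavily simplified OVF-style XML descriptor. The element names are only loosely modeled on the DMTF OVF envelope and are not the exact schema; treat them as assumptions made for the sketch.

```python
import xml.etree.ElementTree as ET

# Build a heavily simplified OVF-style descriptor. Real OVF envelopes are
# defined by the DMTF schema; the element and attribute names below are
# simplified stand-ins for illustration only.
envelope = ET.Element("Envelope")

refs = ET.SubElement(envelope, "References")
ET.SubElement(refs, "File", id="disk1", href="webserver.vhd")

system = ET.SubElement(envelope, "VirtualSystem", id="webserver")
hw = ET.SubElement(system, "VirtualHardwareSection")
ET.SubElement(hw, "Item", kind="cpu", quantity="2")
ET.SubElement(hw, "Item", kind="memory", quantity="4096")  # MB

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)

# Any platform that understands the package can read the parameters back.
parsed = ET.fromstring(xml_text)
cpu = parsed.find("./VirtualSystem/VirtualHardwareSection/Item[@kind='cpu']")
print(cpu.get("quantity"))  # "2"
```

The value of a standard package is exactly this round trip: one tool writes the descriptor, and a different vendor's tool can parse the same installation and configuration parameters back out.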
In principle, this will allow any virtualization platform that implements the standard to run the virtual machines. OVF will provide some safeguards as well. The format will permit integrity checking of the VMs to ensure they have not been tampered with after the package was produced.

Virtualization in the Enterprise: Microsoft's Approach

"Virtualization is an approach to deploying computing resources that isolates different layers (hardware, software, data, networks, storage) from each other. Typically today, an operating system is installed directly onto a computer's hardware. Applications are installed directly onto the operating system. The interface is presented through a display connected directly to the local machine. Altering one layer often affects the others, making changes difficult to implement.

By using software to isolate these layers from each other, virtualization makes it easier to implement changes. The result is simplified management, more efficient use of IT resources, and the flexibility to provide the right computing resources, when and where they are needed."

Bob Muglia, Senior Vice President, Server and Tools Business, Microsoft Corporation

The typical discussion of virtualization focuses on server hardware virtualization (which will be discussed later in this article). However, there is more to virtualization than just server virtualization. This section presents Microsoft's virtualization strategy. By looking at Microsoft's virtualization strategy, we can see other areas, besides server virtualization, where virtualization can be applied in the enterprise infrastructure.

Server Virtualization: Windows Server 2008 Hyper-V and Microsoft Virtual Server 2005 R2

In server virtualization, one physical server is made to appear as multiple servers. Microsoft has two products for virtual servers. Microsoft Virtual Server 2005 R2 was made to run on Windows Server 2003.
The current product is Windows Server 2008 Hyper-V, which will only run on 64-bit versions of Windows Server 2008. Both products are considered hypervisors, a term coined by IBM in 1972. A hypervisor is the platform that enables multiple operating systems to run on a single physical computer. Microsoft Virtual Server is considered a Type 2 hypervisor. A Type 2 hypervisor runs within the host computer's operating system. Hyper-V is considered a Type 1 hypervisor, also called a bare-metal hypervisor. Type 1 hypervisors run directly on the physical hardware (bare metal) of the host computer.

A virtual machine, whether we are talking about Microsoft, VMware, Citrix, or Parallels, basically consists of two files: a configuration file and a virtual hard drive file. This is true for desktop virtualization as well. For Hyper-V, there is a .vmc file for the virtual machine configuration and a .vhd file for the virtual hard drive. The virtual hard drive holds the OS and data for the virtual server.

Business continuity can be enhanced by using virtual servers. Microsoft's System Center Virtual Machine Manager allows an administrator to move a virtual machine to another physical host without the end users realizing it. With this feature, maintenance can be carried out without bringing the servers down. Failover clustering between servers can also be enabled. This means that should a virtual server fail, another virtual server could take over, providing a disaster recovery solution.

Testing and development are also enhanced through the use of Hyper-V. Virtual server test systems that duplicate the production systems are used to test code. In UCF's Office of Undergraduate Studies, a virtual Windows 2003 server is used to test new web sites and PHP code.
The virtual server and its physical production counterpart have the exact same software installed, to allow programmers and designers to check their web applications before releasing them to the public.

By consolidating multiple servers to run on fewer physical servers, cost savings may be found in lower cooling and electricity needs, lower hardware needs, and less physical space to house the data center. Server consolidation is also a key technology for green computing initiatives. Computer resources are also optimized; for example, CPUs will see less idle time. Server virtualization also maximizes licensing. For example, purchasing one Microsoft Server Enterprise license will allow you to run four virtual servers under the same license.

Desktop Virtualization: Microsoft Virtual Desktop Infrastructure (VDI) and Microsoft Enterprise Desktop Virtualization (MED-V)

Desktop virtualization is very similar to server virtualization. A client operating system, such as Windows 7, is used to run a guest operating system, such as Windows XP. This is usually done to support applications or hardware not supported by the current operating system (this is why Microsoft included Windows XP Mode in versions of Windows 7). Microsoft's Virtual PC is the foundation for this desktop virtualization. Virtual PC allows a desktop computer to run a guest operating system (OS), which is an independent instance of an OS, on top of the host OS. Virtual PC emulates a standard PC hardware environment and is independent of the host's hardware or setup.

Microsoft Enterprise Desktop Virtualization (MED-V) is a managed, client-hosted desktop virtualization solution. MED-V builds upon Virtual PC and adds features to deploy, manage, and control the virtual images. The images can also be remotely updated. The virtual machines run on the client computer.
Also, applications that have been installed on the virtual computer can be listed on the host machine's Start menu or as a desktop shortcut, giving the end user a seamless experience. MED-V can be very useful for supporting legacy applications that may not be able to run on the latest deployed operating system.

The virtual images are portable, which makes them useful in a couple of scenarios. Employees who use their personal computers for work can now use a corporately managed virtual desktop. This solves a common problem where the personal computer might be running a home version of the operating system that does not allow it to connect to a corporate network. This also means that the enterprise only makes changes to the virtual computer and makes no changes to the personal computer's OS.

The other scenario where portability plays a factor is that the virtual image could be saved to a removable device, such as a USB flash drive. The virtual image could then be run from the USB drive on any computer that has an installation of Virtual PC. Although this is listed as a benefit by Tulloch, I also see some problems with this scenario. USB flash drives sometimes get lost, and losing a flash drive in this scenario is like losing a whole computer, so caution should be exercised so that sensitive data is not kept on the flash drive. Secondly, based on personal experience, even with a fast USB flash drive, the performance of a virtual computer running from the USB flash drive is poor compared to running the same image from the hard drive.

Virtual Desktop Infrastructure (VDI) is server-based desktop virtualization. In MED-V, the virtual image is on the client machine and runs on the client hardware. In VDI, the virtual images are on a Windows Server 2008 with Hyper-V server and run on the server. The user's data and applications, therefore, reside on the server.
This solution is essentially a combination of Hyper-V and Terminal Services (discussed later in this section).

There are several benefits to this approach. Employees can work from any desktop, whether in the office or at home. Also, the client requirements are very low. Using VDI, the virtual images can be deployed not only to standard desktop PCs, but also to thin clients and netbooks. Security is also enhanced because all of the data is housed on servers in the data center. Finally, administration is easier and more efficient due to the centralized storage of the images.

Application Virtualization: Microsoft Application Virtualization (App-V)

Application virtualization allows applications to be streamed and cached to the desktop computer. The applications do not actually install themselves into the desktop operating system. For example, no changes are actually made to the Windows registry. This allows for some unusual virtual tricks, like being able to run two versions of Microsoft Office on one computer. Normally, this would be impossible.

App-V allows administrators to package applications in a self-contained environment. This package contains a virtual environment and everything that the application needs to run. The client computer is able to execute this package using the App-V client software. Because the application is self-contained, it makes no changes to the client, including no changes to the registry. Applications can be deployed or published through the App-V Management Server. App-V packages can also be deployed through Microsoft's System Center Configuration Manager or as standalone .msi files placed on network shares or removable media.

App-V has several benefits for the enterprise. There is centralized management of the entire application life cycle. There is faster application deployment due to less time spent performing regression testing. Since App-V applications are self-contained, there are no software compatibility issues.
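The self-contained packaging idea can be sketched as follows. This is a toy structure with made-up field names, not the real App-V package format: the packaged application reads a merged view of the host's settings overlaid with its own private settings, and never writes to the host.

```python
import copy

# Toy sketch of a self-contained application package: the app carries its
# own settings, so running it never touches the host's real registry.
# The package layout and field names are made up for illustration.

host_registry = {"HKLM/Software/Office": "14.0"}  # pretend host state

package = {
    "app": "Office",
    "version": "12.0",
    "virtual_env": {
        # the package's private view of its "registry" settings
        "HKLM/Software/Office": "12.0",
    },
}

def run_packaged_app(package, registry):
    # The app sees the host state overlaid with its own private settings.
    # The real registry is copied, never modified.
    view = copy.deepcopy(registry)
    view.update(package["virtual_env"])
    return view["HKLM/Software/Office"]

seen = run_packaged_app(package, host_registry)
print(seen)           # the packaged app sees its own version: 12.0
print(host_registry)  # the host's registry entry is untouched
```

This is how two versions of the same suite can coexist: the host keeps version 14.0 installed normally while the packaged copy runs 12.0 inside its private view.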
You can also provide on-demand application deployment. Troubleshooting is also made easier by using App-V. When an application is installed on a client, it creates a cache on the local hard drive. If an App-V application fails, it can be reinstalled by deleting the cache file.

Presentation Virtualization: Windows Server 2008 Terminal Services

Terminal Services, which has been around for many years, has been folded into Microsoft's virtualization offerings. A terminal server allows multiple users to connect. Each user receives a desktop view from the server in which they run applications on the server. Any programs run within this desktop view actually execute on the terminal server; the client only receives the screen view from the server. The strategy employed here is that since the applications will only use resources on the server, money can be spent on strong server hardware and saved on lighter-strength clients. Also, since an application resides only on the server, it is easier to maintain the software: it only needs to be updated on the server and not on all of the clients. And since the application runs on the server, the data can be stored on the server as well, enhancing security. Another security feature is that every keystroke and mouse stroke is encrypted. The solution is also scalable and can be expanded to use multiple servers in a farm. Terminal Services applications can also be optimized for both high- and low-bandwidth scenarios. This is helpful for remote users accessing corporate applications over less-than-optimal connections.

User-State Virtualization: Roaming User Profiles, Folder Redirection, Offline Files

This is another set of technologies that has been around since Windows 95 but has now been folded into the virtualization strategy. A user profile consists of registry entries and folders which define the user's environment. The desktop background is a common setting that you will find as part of the user profile.
Other items included in the user profile are application settings, Internet Explorer favorites, and the documents, music, and picture folders.

Roaming user profiles are profiles saved to a server that follow a user to any computer that the user logs in to. For example, a user with a roaming profile logs on to a computer on the factory floor and changes the desktop image to a picture of fluffy kittens. When he logs on to his office computer, the fluffy kittens are on his office computer's desktop as well.

When using roaming profiles, one of the limitations is that the profile must be synchronized from the server to the workstation each time the user logs on. When the user logs off, the profile is then copied back up to the server. If folders such as the documents folder are included, the downloading and uploading can take some time. An improved solution is to use redirected folders. Folders such as documents and pictures can be redirected to a server location. This is transparent to the user, who will still access his documents folder as if it were part of his local profile. This also helps with data backup, since it is easier to back up a single server than document folders located on multiple client computers.

A limitation with roaming user profiles occurs when the server, or network access to the server, is down. Offline Files attempts to address that limitation by providing access to network files even if the server location is inaccessible. When used with Roaming User Profiles and Folder Redirection, files saved in redirected folders are automatically made available for offline use. Files marked for offline use are stored on the local client in a client-side cache. Files are synchronized between the client-side cache and the server. If the connection to the server is lost, the Offline Files feature takes over.
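That fallback behavior can be sketched roughly as follows. The paths and function names are hypothetical; the real Offline Files logic lives inside Windows, not in user code.

```python
# Rough sketch of the Offline Files idea: reads go to the server copy,
# and fall back to a client-side cache when the server is unreachable.
# Paths and names are hypothetical.

cache = {}  # client-side cache: path -> contents

def read_file(path, server, server_up=True):
    if server_up and path in server:
        cache[path] = server[path]   # refresh the cache on every server read
        return server[path]
    if path in cache:
        return cache[path]           # server unreachable: serve the cached copy
    raise FileNotFoundError(path)

server = {r"\\fileserver\users\alice\report.docx": "Q3 draft"}

# Normal operation: read from the server (and cache the file).
print(read_file(r"\\fileserver\users\alice\report.docx", server))

# Server goes down: the user still gets the cached copy.
print(read_file(r"\\fileserver\users\alice\report.docx", server,
                server_up=False))
```

Both reads return the same contents, which is exactly the user experience the feature aims for when the server drops out.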
The user may not even realize that there has been any problem with the server.

Together, Roaming User Profiles, Folder Redirection, and Offline Files are also an excellent disaster recovery tool. When a desktop computer fails, the biggest loss is the user's data. With these three technologies in place, all the user would need to do is log into another standard corporate-issued computer and resume working. There is no downtime spent trying to recover or restore the user's data, since it was all safely stored on a server.

Review of Virtualization in the Enterprise

Virtualization can enhance the way an enterprise runs the data center. Server virtualization can optimize hardware utilization. Desktop virtualization can provide a standard client for your end users. Application virtualization allows central administration of applications and fewer chances of application incompatibilities. Presentation virtualization allows central management of applications while letting low-end clients, such as thin clients and netbooks, run software beyond their hardware limitations. User-state virtualization gives users a computer environment that follows them no matter what corporate computer they use.

Benefits and Advantages of Virtualization

Virtualization has evolved into a very important platform and a step forward in computing, used by countless companies both large and small. This is due to virtualization's capability to efficiently simplify IT operations and allow IT organizations to respond faster to changing business demands. Although virtualization started out as a technology used mostly in testing and development environments, in recent years it has moved toward the mainstream on production servers. While there are many advantages to this technology, the following are the top five.

Virtualization Is Cost Efficient

Virtualization allows a company or organization to save money on hardware, space, and energy.
By using existing servers and/or disks to add more performance without adding additional capacity, virtualization translates directly into savings on hardware requirements. When it is possible to deploy three or more servers on one physical machine, it is no longer necessary to purchase three or more separate machines, which may in fact have been used only occasionally. In addition to one-time expenses, virtualization can help save money in the long run as well, because it can drastically reduce energy consumption. When there are fewer physical machines, less energy is needed to power (and cool) them.

Virtualization Is Green

Green IT is not just a fashion trend. Eco-friendly technologies are in high demand, and virtualization solutions are certainly among them. As already mentioned, server virtualization and storage virtualization lead to decreased energy consumption; this automatically includes them in the list of green technologies.

Virtualization Eases Administration and Migration

When there are fewer physical machines, their administration also becomes easier. The administration of virtualized and non-virtualized servers and disks is practically the same. However, there are cases when virtualization poses some administration challenges and might require some training on how to handle the virtualization application.

Virtualization Makes an Enterprise More Efficient

Increased efficiency is one more advantage of virtualization. Virtualization helps to utilize the existing infrastructure in a better way. Typically, an enterprise uses only a small portion of its computing power; it is not uncommon to see server load in the single digits. Keeping underutilized machines is expensive and inefficient, and virtualization helps to deal with this problem as well.
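A quick back-of-the-envelope sketch makes the efficiency argument concrete. The utilization figures below are made up for illustration:

```python
# Back-of-the-envelope consolidation arithmetic with made-up numbers:
# ten physical servers, each loaded in the single digits.
servers = 10
avg_load = 0.08          # 8% average utilization per physical server

# Consolidate all ten workloads as virtual machines onto one physical
# host of the same capacity.
hosts_after = 1
combined_load = servers * avg_load / hosts_after

print(f"utilization before: {avg_load:.0%} per server")   # 8%
print(f"utilization after:  {combined_load:.0%}")         # 80%
print(f"machines eliminated: {servers - hosts_after}")    # 9
```

Nine machines' worth of power, cooling, and floor space disappear while the remaining host finally earns its keep.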
When several servers are deployed onto one physical machine, capacity utilization can increase to 90 percent or more.

Improved System Reliability and Security

Virtualization helps prevent system crashes due to memory corruption caused by software such as device drivers. The VT-d for Directed I/O architecture provides methods to better control system devices by defining the architecture for DMA and interrupt remapping, ensuring improved isolation of I/O resources for greater reliability, security, and availability.

Dynamic Load Balancing and Disaster Recovery

As server workloads vary, virtualization provides the ability for virtual machines that are over-utilizing the resources of a server to be moved to underutilized servers. This dynamic load balancing creates efficient utilization of server resources. In addition, disaster recovery is a critical component for IT, as system crashes can create huge economic losses. Virtualization technology enables a virtual image on a machine to be instantly re-imaged on another server if a machine failure occurs.

Limitations and/or Disadvantages of Virtualization

While one could conclude that virtualization is the perfect technology for any enterprise, it does have several limitations and disadvantages. It is very important for a network administrator to research server virtualization and his or her own network's architecture and needs before attempting to engineer a solution. Understanding the network's architecture and needs allows for the adoption of a realistic approach to virtualization and for better judgment of whether it is a suitable solution in a given scenario. Some of the most notable limitations and disadvantages are having a single point of failure, hardware and performance demands, and migration.

Single Point of Failure

One of the biggest disadvantages of virtualization is that it creates a single point of failure.
When the physical machine on which all the virtualized solutions run fails, or if the virtualization solution itself fails, everything crashes. Imagine, for example, you're running several important servers on one physical host and its RAID controller fails, wiping out everything. What do you do? How can you prevent that?

The disaster caused by physical failure can, however, be avoided with one of several responsible virtualized-environment options. The first of these options is clustering. Clustering allows several physical machines to collectively host one or more virtual servers. Clusters generally provide two distinct roles: to provide continuous data access, even if a failure of a system or network device occurs, and to load-balance a high volume of clients across several physical hosts. [14] In clustering, clients don't connect to a physical computer but instead connect to a logical virtual server running on top of one or more physical computers. Another solution is to back up the virtual machines with a continuous data protection solution. Continuous data protection makes it possible to restore all virtual machines quickly to another host if the physical server ever goes down. If the virtual infrastructure is well planned, physical failures won't be a frequent problem. However, this solution does require an investment in redundant hardware, which more or less eliminates some of the advantages of virtualization. [12]

Hardware and Performance Demands

While server virtualization may save money because less hardware is required, allowing a decrease in the physical number of machines in an enterprise, it does not mean that newer and faster computers are unnecessary. These solutions require powerful machines. If the physical server doesn't have enough RAM or CPU power, performance will be disrupted. Virtualization essentially divides the server's processing power among the virtual servers.
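That division of processing power can be sketched with some hypothetical numbers:

```python
# Hypothetical sketch of a host's CPU capacity divided among its guests.
host_cpu_ghz = 3.0 * 8                 # 8 cores at 3.0 GHz of total capacity

vm_demand_ghz = [8.0, 10.0, 9.0]       # peak demand of each virtual server

total_demand = sum(vm_demand_ghz)
print(f"host capacity: {host_cpu_ghz} GHz, "
      f"total VM demand: {total_demand} GHz")

if total_demand > host_cpu_ghz:
    # Demand exceeds capacity: every guest slows down proportionally.
    slowdown = host_cpu_ghz / total_demand
    print(f"oversubscribed: each VM gets roughly {slowdown:.0%} "
          f"of what it asks for")
else:
    print("headroom remains; performance is unaffected")
```

With these numbers the guests collectively want 27 GHz from a 24 GHz host, so everything runs below full speed. That is the scenario the sizing advice below warns about.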
When the server's processing power can't meet the application demands, everything slows down. [11] Tasks that shouldn't take very long can stretch into hours, or the server may even crash. Network administrators should take a close look at CPU usage before dividing a physical server into multiple virtual machines. [11]

Migration

In current virtualization methodology, it is only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processors. For example, if a network uses one server that runs an Intel processor and another that uses an AMD processor, it is not possible to transfer a virtual server from one physical machine to the other. [11]

One might ask why this is important to note as a limitation. If a physical server needs to be fixed, upgraded, or just maintained, transferring its virtual servers to other machines can decrease the amount of downtime required during the maintenance. If porting the virtual server to another physical machine weren't an option, then all of the applications on that virtual machine would be unavailable during the maintenance downtime. [11]

Virtualization Market Size and Growth

Market research reports indicate that the total desktop and server virtualization market value grew by 43% from $1.9 billion in 2008 to $2.7 billion in 2009. Researchers estimate that by 2013, approximately