POST YOUR TOPICS HERE

Hi friends, this blog welcomes you all to post your own tricks and tips here. To do so, just send a mail to sendmytricks@ymail.com.

Your post will be published along with your name and location. Simply send a mail to the above-mentioned id.

Format for the mail:

Subject : MY TRICKS

1. Your name [will be displayed if no display name is given]
2. Display name [this name will be displayed along with your post]
3. Your location [e.g., Chennai, India]
4. Post topic.
5. Details.

Pictures are also allowed; please send them as links.

IMPORTANT NOTE : Please do not spam this mail id. You can also send your ideas/problems to the same id.

Thursday, March 13, 2008

Server Virtualization...

In today’s complex IT environments, server virtualization simply makes sense. Redundant server hardware can rapidly fill enterprise datacenters to capacity; each new purchase drives up power and cooling costs even as it saps the bottom line. Dividing physical servers into virtual servers is one way to restore sanity and keep IT expenditures under control.

With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines), each of which basically fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server’s compute potential — and provide a rapid response to shifting datacenter demands.

The concept of virtualization is not new. Mainframe computers have been running multiple instances of an operating system at the same time, each independent of the others, since as far back as the 1970s. It's only recently, however, that software and hardware advances have made virtualization possible on industry-standard, commodity servers.

In fact, today’s datacenter managers have a dizzying array of virtualization solutions to choose from. Some are proprietary, others are open source. For the most part, each will be based on one of three fundamental technologies; which one will produce the best results depends on the specific workloads to be virtualized and their operational priorities.

WHAT IS VIRTUALIZATION AND WHY USE IT
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources to maximize the investment in hardware. Moore's law has accurately predicted the exponential growth of computing power, while the hardware requirements for most computing tasks have stayed largely the same, so it is now feasible to turn a very inexpensive 1U dual-socket, dual-core commodity server into eight or even sixteen virtual servers, each running its own operating system.

Virtualization is thus a way of achieving higher server density. It does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 2-socket, 4-core server is more powerful than a $30,000 8-socket, 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts. That eliminates much of the hardware acquisition and maintenance cost, which can translate into significant savings for any company or organization.
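A back-of-the-envelope sketch of that density argument in Python (the per-server price and consolidation ratio below are just the illustrative figures from this section, not real quotes):

```python
import math

workloads = 16                # logical servers you need to run
cost_per_server = 3000        # USD for a commodity 2-socket, 4-core box (illustrative)
vms_per_host = 8              # conservative consolidation ratio from the text

unvirtualized = workloads * cost_per_server
virtualized = math.ceil(workloads / vms_per_host) * cost_per_server

print(f"One box per workload: ${unvirtualized:,}")
print(f"Consolidated at {vms_per_host} VMs per host: ${virtualized:,}")
```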

Combined with a sound, well-designed server consolidation strategy, virtual server technologies can yield massive benefits for an organization.

Because server-based services are presented virtually, a virtual server can draw on the available resources dynamically. If a performance gain can be achieved by moving a virtual server from the physical processor it is currently using to one that is less heavily utilized, the platform makes that change transparently, giving the executing instructions a less contended processing pool.

Deployment and testing become far more agile, so implementation times for new solutions can be dramatically reduced. Because each server is represented by a set of encapsulated files, the ability to reverse a recent change, or even reject it on the fly, can be of enormous benefit to development projects.

WHEN TO USE VIRTUALIZATION
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. It should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times 3 GHz) and chopping it up into sixteen 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.
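As a quick sanity check on that arithmetic, here is a minimal Python sketch using the same illustrative numbers:

```python
# CPU slicing math for the example host above (illustrative numbers only).
cores = 4
ghz_per_core = 3.0
total_ghz = cores * ghz_per_core            # 12 GHz of aggregate capacity

vms = 16
fair_share = total_ghz / vms                # 0.75 GHz if every VM is busy

idle_vms = 8
active_share = total_ghz / (vms - idle_vms) # 1.5 GHz per VM when half are idle

print(f"Fair share per VM: {fair_share:.2f} GHz")
print(f"Share per active VM with {idle_vms} idle: {active_share:.2f} GHz")
```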

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would raise peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
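A minimal sketch of that rule of thumb, assuming you have rough peak-utilization estimates for the servers you plan to consolidate (the figures below are hypothetical):

```python
# Check a consolidation plan against the "under 50% CPU at peak" rule of thumb.
# Each value is a candidate server's estimated peak load, expressed as a
# fraction of the consolidation host's capacity (hypothetical estimates).
peak_estimates = [0.05, 0.04, 0.06, 0.03, 0.05, 0.07, 0.04, 0.06]

worst_case = sum(peak_estimates)   # pessimistic: assumes every peak coincides
print(f"Worst-case combined peak: {worst_case:.0%}")
if worst_case > 0.50:
    print("Over the 50% rule of thumb: spread these workloads across more hosts.")
else:
    print("Within the 50% rule of thumb for this host.")
```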

While CPU overhead in most of the virtualization solutions available today is minimal, I/O (input/output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, but both are in beta at this point, so there haven't been any major independent benchmarks to verify this.
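If you want a rough feel for that storage overhead, a crude sequential-write probe like the sketch below can be run once on bare metal and once inside a guest against comparable storage; it is only a sanity check, not a proper benchmark, and the file path and sizes are arbitrary:

```python
import os
import time

def sequential_write_mb_per_s(path="io_probe.bin", total_mb=256, block_kb=1024):
    """Time a simple sequential write and return throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually reaches the disk
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

print(f"Sequential write: {sequential_write_mb_per_s():.1f} MB/s")
```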

HOW TO AVOID “ALL EGGS IN ONE BASKET” SYNDROME
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that no service resides on only a single physical server. Let's take for example the following server types:
-> HTTP
-> FTP
-> DNS
-> DHCP
-> RADIUS
-> LDAP
-> File Services using Fibre Channel or iSCSI storage
-> Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.
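As a minimal sketch of that placement rule (service and host names below are made up for illustration), the check flags any service whose redundant virtual machines have ended up on a single physical host:

```python
# Hypothetical placement map: service -> physical hosts running a VM for it.
placement = {
    "dns":  {"host-a", "host-b"},
    "dhcp": {"host-a", "host-b"},
    "http": {"host-a"},             # only one copy: one hardware failure takes it down
    "ldap": {"host-b", "host-c"},
}

for service, hosts in placement.items():
    if len(hosts) < 2:
        print(f"WARNING: {service} runs on only {len(hosts)} physical host(s): {sorted(hosts)}")
```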

For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but to the complexity of clustering, which tends to need time to transition. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. For this to work, something has to constantly synchronize memory from one physical server to the other, so that failover can happen in milliseconds while all services remain functional.
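A minimal sketch of that kind of live migration using the libvirt Python bindings (the connection URIs and guest name are hypothetical, and the exact flags and tooling depend on your hypervisor):

```python
import libvirt

# Connect to the source and destination hypervisors (hypothetical URIs).
src = libvirt.open("qemu+ssh://primary-host/system")
dst = libvirt.open("qemu+ssh://secondary-host/system")

dom = src.lookupByName("exchange-vm")   # hypothetical guest name

# Live-migrate the guest: memory is copied while the guest keeps running,
# so the final switchover pause is very short.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```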

REASONS FOR USING SERVER VIRTUALIZATION
For anyone who owns a business, the ultimate goal is to stay competitive while keeping technology expenses at a healthy level.

Personally, I think server virtualization helps your company stay competitive. If yours is a software company, you will need a number of servers for code simulation and testing, version control, and your database. Each server can easily cost around USD 5-20k, depending on its specifications.

However, if you analyze your server performance and utilization in detail, you may be shocked to realize that each server is using only 10-20% of its total CPU and memory. The remaining 80-90% of your server resources sit idle most of the time.

Wouldn't it be nice if you could fully utilize your server resources and probably need only 2 out of every 5 servers running in your company? What do you get in return? Here are the five main reasons you should make use of server virtualization:

-> Saves money - This is the most important point in any business case, as it keeps your products competitive and avoids buying servers you do not actually need. Of course, this will not save you anything on your Microsoft Server OS licenses, but it does save on server hardware in the long run.

-> Saves space in your data center - Since you do not have to maintain as many servers, you can use the space in your office for other purposes, such as additional office space for more staff as your company grows. Furthermore, server racks are expensive pieces of equipment for housing all your servers; with fewer of them you save space and money.


-> Saves on server maintenance - You need dedicated professionals, such as system engineers or IT engineers, to take care of your servers: Windows security patching, hardware monitoring, and performance monitoring for hardware failures. If you cut down the number of servers you actually need, you save manpower in this area, and those people can focus on other areas that improve your business competitiveness further.

-> Saves the environment - Fewer servers means less power consumption. With fewer servers you need less power cabling and less high-powered air conditioning to maintain the server room temperature; in the end it is more environmentally friendly, and you play your part in limiting global warming.

-> Improves your product build time/time to market - During implementation and testing, your own code can crash and leave the test environment broken. With a virtual server you can create a snapshot of the 'working' version before a new release or test run, and if anything goes wrong you can simply roll back to that previous 'working' version in no time (a short sketch follows below). This can noticeably improve your products' time to market.
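Here is a minimal sketch of that snapshot-and-rollback workflow using the libvirt Python bindings (the connection URI, guest name, and snapshot name are hypothetical; commercial virtualization products expose the same idea through their own tools):

```python
import libvirt

conn = libvirt.open("qemu:///system")         # hypothetical local hypervisor
dom = conn.lookupByName("test-environment")   # hypothetical guest name

# Snapshot the known-good state before a risky release or test run.
snapshot_xml = "<domainsnapshot><name>known-good</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml, 0)

# ... run the new build and tests inside the guest ...

# If the test environment breaks, roll back to the known-good snapshot.
snap = dom.snapshotLookupByName("known-good", 0)
dom.revertToSnapshot(snap, 0)

conn.close()
```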

APPROACHES TO SERVER VIRTUALIZATION
-> Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modifications. It also allows the administrator to create guests that use different operating systems. The guest has no knowledge of the host's operating system because it is not aware that it's not running on real hardware. It does, however, require real computing resources from the host -- so it uses a hypervisor to coordinate instructions to the CPU. The hypervisor is called a virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.

-> The paravirtual machine (PVM) model is also based on the host/guest paradigm -- and it uses a virtual machine monitor too. In the paravirtual machine model, however, the VMM actually modifies the guest operating system's code. This modification is called porting. Porting supports the VMM so it can use privileged system calls sparingly. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and UML both use the paravirtual machine model.


-> Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm. In the OS level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This distributed architecture eliminates system calls between layers, which reduces CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors so that a failure or security breach in one partition isn't able to affect any of the other partitions. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization.

-> Storage virtualization is commonly used in a storage area network (SAN). The management of storage devices can be tedious and time-consuming. Storage virtualization helps the storage administrator perform the tasks of backup, archiving, and recovery more easily, and in less time, by disguising the actual complexity of the SAN.

-> Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. Each channel is independently secured. Every subscriber has shared access to all the resources on the network from a single computer. Network management can be a tedious and time-consuming business for a human administrator. Network virtualization is intended to improve productivity, efficiency, and job satisfaction of the administrator by performing many of these tasks automatically, thereby disguising the true complexity of the network. Files, images, programs, and folders can be centrally managed from a single physical site. Storage media such as hard drives and tape drives can be easily added or reassigned. Storage space can be shared or reallocated among the servers. Network virtualization is intended to optimize network speed, reliability, flexibility, scalability, and security. Network virtualization is said to be especially effective in networks that experience sudden, large, and unforeseen surges in usage.

SINGLE POINT OF FAILURE
As is immediately obvious, when the whole of your business runs on one or two systems, a hardware, software, or network failure that results in downtime has a much greater impact on the enterprise. In distributed topologies, a single failed system out of several is certainly going to hurt, but it will only impact the segment of the business it serves.

To enjoy the benefits of server consolidation and minimize the shock of planned and unplanned downtime, organizations can deploy a high availability solution to protect hard and soft assets. Compared to tape backups, vaulting, and hot site backups, recovery is almost immediate where high availability clustering is deployed, a consideration that is very important when 24x7 access to applications is necessary or when Web-based, market-facing access to applications is supported. Sometimes you can use one of your decommissioned servers, and the data center it resides in, as your high availability backup server and disaster recovery site. (This is a good kind of recycling.)
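As a minimal sketch of the idea behind such a high availability pair (the host name, port, and promotion command are hypothetical; real HA products also handle quorum, fencing, and data replication):

```python
import socket
import subprocess
import time

PRIMARY = ("primary-host", 443)   # hypothetical service endpoint to watch
CHECK_INTERVAL = 5                # seconds between heartbeats
MAX_FAILURES = 3                  # consecutive misses before failing over

def primary_is_alive(timeout=2.0):
    """Heartbeat: can we open a TCP connection to the primary's service port?"""
    try:
        with socket.create_connection(PRIMARY, timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if primary_is_alive() else failures + 1
    if failures >= MAX_FAILURES:
        # Promote the standby copy of the service (hypothetical script).
        subprocess.run(["./promote-standby.sh"], check=False)
        break
    time.sleep(CHECK_INTERVAL)
```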

A high availability configuration also allows a consolidated computing environment to be gradually established without interrupting business by switching system users from the primary production system to the backup. Application availability is maintained throughout the reengineering process, with the exception of an interval of roughly 20 to 40 minutes that can be scheduled over a weekend or holiday. Even more value can be derived from the high availability approach because it can be used in the consolidation process as the data transfer agent, replicating data from multiple distributed servers back to the consolidation point. By contrast, the tapes traditionally used to perform this critical step can fail during the restore process because of normal wear, accidental damage, or environmental issues.

SEEKING BALANCE
Finally, workload management is a key facet to maintaining acceptable response times in a consolidated computing environment. When the work of eight servers is performed by one or two, for example, acceptable response times can be tough to deliver. And if the server is accessible to large groups of users over the Web, demand can be unpredictable.

Automatic load balancing features are available in some high availability solutions. While load balancing is not very complicated where users have read-only access, read/write servers are trickier because of contention issues. High availability tools that keep the primary and backup servers positively synchronized are well suited to bypassing these problems.
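A minimal sketch of that read/write split (the server names are made up; a real load balancer would also handle health checks, retries, and replication lag):

```python
import itertools

# Hypothetical topology: one writable primary, several read-only replicas.
PRIMARY = "db-primary"
REPLICAS = itertools.cycle(["db-replica-1", "db-replica-2", "db-replica-3"])

def route(query: str) -> str:
    """Send writes to the primary; round-robin reads across the replicas."""
    is_read = query.lstrip().lower().startswith("select")
    return next(REPLICAS) if is_read else PRIMARY

print(route("SELECT name FROM users"))        # goes to a replica
print(route("UPDATE users SET name = 'x'"))   # goes to the primary
```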

A high availability solution that is part of a server virtualization and consolidation effort will require some additional investment, but the benefits of using high availability clustering can be easily justified by the value of providing a simplified transition path and a markedly shorter recovery time should a failure occur.

Bill Hammond directs product-marketing efforts for information availability software at Vision Solutions. Hammond joined Vision Solutions in 2003 with over 15 years of experience in product marketing, product management and product development roles in the technology industry.
