POST YOUR TOPICS HERE

Hi friends, This blog welcomes you all to post your own new tricks and tips here. To do so, just send a mail to sendmytricks@ymail.com

Your post will be published along with your name and location. For this, send a mail to the above-mentioned id.

Format for sending the mail:

Subject : MY TRICKS

1. Your name [will be displayed if no display name is given]
2. Display name [this name will be displayed along with your post]
3. Your location [for example, Chennai, India]
4. Post topic
5. Details

Pictures are also allowed. To include them, send the pictures as links.

IMPORTANT NOTE : Please do not spam this mail id. You can send your ideas/problems to this id as well.

Wednesday, March 26, 2008

Spyware : You’re being watched

Spyware on your PC leaves you vulnerable. Here are six simple steps to remove and prevent spyware. Everyone loves a spy story. A bit of suspense, a bit of drama and a whole lot of romance. However, in the Internet world, spying is dirty, treacherous and can cost you a great deal of money. We are talking of spyware.

Spyware is malicious software that sends out your personal information to third parties without your approval. This means that if your computer is infected by spyware, your user name, passwords and credit card information could be used by anyone, anywhere in the world. Not a pretty thought, is it?
Removing spyware is difficult. It uses stealth to install itself and avoid detection. Most people don’t even realise that their computers are infected by spyware. It may also affect the performance of your machine. But there are things you can do to prevent and remove spyware from your computer.

1. Do not click on pop-up links: While surfing on the Net, do not click on those innocuous links that pop up. They may result in your computer being infected by spyware. Never click Yes or OK. Click on the X button on the top right-hand corner of the window. Install a pop-up blocker if necessary.
2. Be careful with what you download: Software that claims to be free can contain spyware. Never assume that the software doesn’t contain spyware. Always be sure. Downloading music and movies from file-sharing programmes is another way you can get spyware.
3. Do not respond to spam: Mail messages that promote anti-spyware software or any other thing are likely to be fraudulent. Responding to them may result in spyware being installed on your PC.
4. Update your Windows software regularly: Microsoft Windows Update regularly offers security updates to fix vulnerabilities in its programs. Updating your Windows software regularly can go a long way in helping you prevent spyware entry into your PC.
5. Install a firewall: Firewalls can detect if any software is being installed on your computer without your knowledge, while you’re on the Internet. They monitor all the data that goes in and out, acting as a shield that protects your PC. Windows XP comes with an in-built firewall that you just need to turn on.
6. Use a spyware remover tool: Anti-spyware programmes are the most effective way to remove spyware. These programmes detect if your PC contains spyware and remove it. Most of them are free as well. But keep in mind that some of the so-called free spyware remover tools are spyware themselves. We recommend you use only the leading anti-spyware tools backed by comprehensive lab testing.
So don’t get spied on. Stay safe.

Monday, March 24, 2008

Internet Explorer 8 Beta from Microsoft Has Arrived


Microsoft's next-generation web browser, Internet Explorer 8, has arrived. In a surprising move, after the demo of IE8 and its new features at today's session of the MIX08 conference, the startling announcement was made: "It's available for download now." The new browser showcases many new features and improvements, like Facebook and eBay integration, standards compliance, and the ability to work with AJAX web pages. What's most notable about IE8, though, is that it is more than the sum of its parts. If anything, this launch shows that Microsoft is not taking Firefox's steady gains in browser market share lightly.

IE8 New Features Shown At MIX08

As of the demo at MIX08, IE8's features include the following:

Standards Compliance: There were hints on the IE Blog that IE8 would be a remarkable offering, as the team released tidbits about the browser's capabilities. For example, the announcement that IE8 passed the Acid2 test (a test for standards compliance) marked a milestone in its development. The standards mode was originally going to be turned off by default, letting web developers opt in to IE8's new standards-compliant mode by including a "meta" tag. Later, Microsoft came to its senses and made standards-compliant mode the default. Meanwhile, Firefox also claims to have passed the Acid2 test, but an open bug on bugzilla.mozilla.org seems to say otherwise. One commenter on the thread notes, "So, we essentially do pass the test. However, in some situations, it might still fail, that's why this bug is open."

Facebook Integration: Yes, seriously! In an unexpected, Flock-like move, Microsoft capitalized on its partnership with the popular social networking site Facebook to let IE8 users get status updates from Facebook right from their browser toolbar.

eBay Integration: Like Facebook, this feature uses IE8's new technology, called "WebSlices", which introduces a new way to get updates from other sites via the browser itself, without having to visit the web site. With WebSlices, IE8 beta users can subscribe to portions of a page that update dynamically, in order to receive updates from that page as its content changes. eBay will offer WebSlices too, letting you track your auctions from the browser toolbar. Basically, WebSlices look like Favorites on your Links toolbar, but they have a little arrow next to them; clicking on this arrow shows you a small window of live web content.
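
For the curious, WebSlices are marked up with a simple microformat-style pattern - an element carrying an "hslice" class with an "entry-title" child, as far as the published format goes (treat those class names as my assumption rather than gospel). Here is a rough Python sketch of how a script might spot WebSlice containers in a page's HTML; it is not an IE8 API, just an illustration of the idea.

import re
import urllib.request

def find_webslices(url):
    # Fetch the page and look for elements that declare the "hslice" class,
    # a crude indicator that the page exposes subscribable WebSlices.
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    return re.findall(r'class="[^"]*\bhslice\b[^"]*"', html)

slices = find_webslices("http://example.com/")  # replace with a page that offers WebSlices
print("Found", len(slices), "WebSlice container(s)")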

Live Maps Integration: Another WebSlice was integration with Live Maps. It appeared that you could even highlight text on a page, like an address, and then right-click and choose Live Maps from the context menu to get a WebSlice preview of that location on a map in a small pop-up window. How convenient!

Integration with Me.dium: Me.dium integration will be supported in IE8 via WebSlices. Me.dium will help web surfers discover and view WebSlices directly from the sidebar. The Me.dium sidebar will alert users to the presence of WebSlices on any page, and even lets users read each WebSlice without leaving the sidebar. In addition, Me.dium will make real-time recommendations for other WebSlices on other relevant web pages and provide direct links to them, based on the real-time activity of other Me.dium users.

Working with AJAX Pages: IE8 will offer better functionality when it comes to AJAX web pages. The example showed a page you could zoom in on using AJAX. Previously, hitting the IE "Back" button would take you back to the last page you were on. Now, "Back" will zoom you out.

We can now find out what other features IE8 has to offer, since the beta is now publicly available for download from Microsoft.

Saturday, March 22, 2008

Is your pendrive not working???

-> It could be that corrupted data has damaged the pen drive. In that case, recovery may not be possible.

-> Check that the USB connector is not deformed in any way; if it is, you may have to go for a new drive.

-> Attach the pen drive, then switch off your system abruptly by cutting the main power supply. Restart it with the pen drive still plugged into the USB port and allow the disk check to run. It checks the pen drive and can rectify a corrupted memory pool.

-> Check whether your system hardware is basically fine, and also check whether the drive gets detected on any other system. Sometimes antivirus software will not allow an infected drive to be detected. Hope this info helps you.

-> If the pen drive is not getting detected anywhere, you have an issue with the drive's hardware, so return it to the manufacturer.

-> If it does not work on any USB port of your system, the issue is with your system's USB controller; an add-on PCI USB 2.0 card should solve the problem.

Friday, March 21, 2008

Best ever virus that ruled the world...

The Storm botnet is a remotely controlled network of "zombie" computers that have been linked by the Storm Worm, a Trojan horse spread through e-mail spam. Some have estimated that by September 2007 the Storm botnet was running on anywhere from 1 million to 50 million computer systems. Other sources have placed its size at around 250,000 to 1 million compromised systems. More conservatively, one network security analyst claims to have developed software that has crawled the botnet and estimates that it controls 160,000 infected computers. The Storm botnet was first identified around January 2007, with the Storm Worm at one point accounting for 8% of all malware on Microsoft Windows computers.

The Storm botnet has been used in a variety of criminal activities. Its controllers, and the authors of the Storm Worm, have not yet been identified. The Storm botnet has displayed defensive behaviors that indicated that its controllers were actively protecting the botnet against attempts at tracking and disabling it. The botnet has specifically attacked the online operations of some security vendors and researchers who attempted to investigate the botnet. Security expert Joe Stewart revealed that in late 2007, the operators of the botnet began to further decentralize their operations, in possible plans to sell portions of the Storm botnet to other operators. Some reports as of late 2007 indicated the Storm botnet to be in decline, but many security experts reported that they expect the botnet to remain a major security risk online, and the United States Federal Bureau of Investigation considers the botnet a major risk to increased bank fraud, identity theft, and other cybercrimes.

The botnet reportedly is powerful enough as of September 2007 to force entire countries off the Internet, and is estimated to be capable of executing more instructions per second than some of the world's top supercomputers. However, it is not a completely accurate comparison, according to security analyst James Turner, who said that comparing a botnet to a supercomputer is like comparing an army of snipers to a nuclear weapon. Bradley Anstis, of the United Kingdom security firm Marshal, said, "The more worrying thing is bandwidth. Just calculate four million times a standard ADSL connection. That's a lot of bandwidth. It's quite worrying. Having resources like that at their disposal—distributed around the world with a high presence and in a lot of countries—means they can deliver very effective distributed attacks against hosts."
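
Just to put Anstis's point in perspective, here is the back-of-the-envelope arithmetic in Python. The 256 kbit/s upstream figure is my own assumption for a "standard ADSL connection" of that era; plug in whatever number you prefer.

bots = 4_000_000          # "four million" compromised machines, per the quote above
upstream_kbps = 256       # assumed upstream per ADSL line (adjust to taste)
total_gbps = bots * upstream_kbps / 1_000_000
print("Aggregate upstream: about", round(total_gbps), "Gbit/s")   # ~1,024 Gbit/s, i.e. roughly 1 Tbit/s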

Computer security expert Joe Stewart detailed the process by which compromised machines join the botnet: attempts to join the botnet are made by launching a series of EXE files on the computer system in question, in stages. Usually, they are named in a sequence from game0.exe through game5.exe, or similar. It will then continue launching executables in turn. They typically perform the following:
1. game0.exe - Backdoor/downloader
2. game1.exe - SMTP relay
3. game2.exe - E-mail address stealer
4. game3.exe - E-mail virus spreader
5. game4.exe - Distributed denial of service (DDoS) attack tool
6. game5.exe - Updated copy of Storm Worm dropper
At each stage the compromised system will connect into the botnet; fast flux DNS makes tracking this process exceptionally difficult. This code is run from %windir%\system32\wincom32.sys on a Windows system, via a kernel rootkit, and all connections back to the botnet are sent through a modified version of the eDonkey/Overnet communications protocol.
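
Since the dropper's file name and location are known, a quick read-only check is easy to script. This is only a sketch based on the path mentioned above; a real or suspected infection should be handled with proper anti-malware tools, not a five-line script.

import os

# Path of the known Storm Worm component, as described above.
suspect = os.path.expandvars(r"%windir%\system32\wincom32.sys")

if os.path.exists(suspect):
    print("Warning:", suspect, "exists - a known Storm Worm artifact.")
else:
    print("wincom32.sys not found in system32.")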

Amazing facts...

Google got its name from the mathematical figure googol, which denotes the number 'one followed by a hundred zeros'.

Yahoo! derived its name from the word Yahoo coined by Jonathan Swift in Gulliver's Travels. A Yahoo is a person who is repulsive in appearance and action and is barely human!

Researchers consider that the first search engine was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal, Canada.

Marc Andreessen co-founded Netscape. In 1993, he had already developed Mosaic, the first Web browser with a GUI.

It was once considered a letter in the English language. The Chinese call it a little mouse, Danes and Swedes call it 'elephant's trunk', Germans a spider monkey, and Italians a snail. Israelis pronounce it 'strudel' and the Czechs say 'rollmops'... What is it? The @ sign.

The Deep Web, the part of the Web not currently catalogued by search engines, is said to contain 500 times more public information than the surface WWW.

The first search engine for Gopher files was called Veronica, created by the University of Nevada System Computing Services group.

Tim Berners-Lee predicted in 2002 that the Semantic Web would "foster global collaborations among people with diverse cultural perspectives", but the project never seems to have really taken off.

In February 2004, Sweden led the world in Internet penetration, with 76.9 percent of people connected to the Internet. The world average is 11.1 percent.

The top visited websites in February 2004, including affiliated sites, were Yahoo!, MSN, the Warner Network, eBay, Google, Lycos and About.com.

The search engine "Lycos" is named for Lycosidae, the Latin name for the wolf spider family.
The US International Broadcasting Bureau created a proxy service to allow Chinese, Iranians and other 'oppressed' people to circumvent their national firewalls, relaying forbidden pages from behind silicon curtains.

Lurking is reading through mailing lists or newsgroups to get a feel for the topic before posting one's own messages.

SRS stands for Shared Registry Server, the central system for all accredited registrars to access, register and control domain names.

WAIS stands for 'Wide Area Information Servers' - a commercial software package that allows the indexing of huge quantities of information, then makes those indices searchable across the Internet.

An anonymiser is a privacy service that allows a user to visit Web sites without allowing anyone to gather information about which sites they visit.

Archie is an information system offering an electronic directory service for locating information residing on anonymous FTP sites.

On the Internet, a 'bastion host' is the only host computer that a company allows to be addressed directly from the public network.

'Carnivore' is the Internet surveillance system developed by the US Federal Bureau of Investigation (FBI) to monitor the electronic transmissions of criminal suspects.

Did you know that the original URL of Yahoo! was http://akebono.stanford.edu/ ?

Developed at the University of Nevada, Veronica is a constantly updated database of the names of almost every menu item on thousands of Gopher servers.

The Electrohippies Collective is an international group of 'hacktivists' based in Oxfordshire, England.

UIML (User Interface Markup Language) is a descriptive language that lets you create a Web page that can be sent to any kind of interface device.

In Internet terminology, a demo is a non-interactive multimedia presentation, the computer world's equivalent of a music video.

Did you know that the name of the famous search engine AltaVista came into existence when someone accidentally read the word 'Vista' on an unclean whiteboard as 'Alta Vista'?

Boeing was the first company to discover the Y2K problem, way back in 1993.

Did you know that domain registration was free until an announcement by the National Science Foundation on 14th September, 1995, changed that?

The Internet was initially called the 'Galactic network' in memos written by MIT's J C R Licklider in 1962.

Shokyu Ishiko, a doctorate in agriculture and chief priest of Daioh Temple in Kyoto has created an online virtual temple which will perform memorial services for lost information.

A 55 kg laddu was made for Lord Venkateswara at Tirumala as a Y2K prayer offering.

The morning after Internet Explorer 4 was released, certain mischievous Microsoft workers left a 10 by 12 foot letter 'e' and a balloon with the message, "We love you", on Netscape's front lawn.

If you were a resident of Tonga, a monarchy in the southwest Pacific, you could own domains as cool as 'mail.to' and 'head.to'.

The American Registry for Internet Numbers (ARIN) began the administration of Internet IP addresses in North and South America in March 1998.

The testbed for the Internet's new addressing system, IPv6, is called the 6bone.

The first Internet worm was created by Robert T. Morris, Jr., and attacked more than 6,000 Internet hosts.

According to The Economist magazine, the first truly electronic bank on the Internet, called First Virtual Holdings, was opened by Lee Stein in 1994.

The French Culture Ministry has banned the word 'e-mail' in all government ministries, documents, publications and Web sites, because 'e-mail' is an English word. They prefer to use the term 'courriel'.

The German police sell used patrol cars over the Internet, because earlier auctions fetched low prices and only a few people ever showed up.

Rob Glaser's company, Progressive Networks, launched the RealAudio system on April 10, 1995.

'Browser-safe colours' refers to the 216 colours that are rendered the same way in both the PC and Mac operating systems.

Though the World Wide Web was born in 1989 at CERN in Switzerland, CERN is mainly involved in research in particle physics.

The first computer company to register for a domain name was Digital Equipment Corporation.

The 'Dilbert Zone' Web site was the first syndicated comic strip site available on the Internet.

Butler Jeeves of the Internet site AskJeeves.com made its debut as a large helium balloon in the Macy's Thanksgiving Day parade in 2000.

Sun Microsystems sponsors NetDay, an effort to wire American public schools to the Internet, with help from the US government.

In Beijing, the Internet community has coined the word 'Chortal' as a shortened version of 'Chinese portal'.

Telnet is one of the oldest forms of Internet connections. Today, it is used primarily to access online databases.

Domain names can really sell at high prices! The most expensive domain name was 'business.com', which was bought by eCompanies for $7.5 million in 1999.

The first ever ISP was CompuServe. It still exists, under AOL Time Warner.

On average, each person receives 26.4 e-mails a day.

Ray Tomlinson, a scientist from Cambridge, introduced electronic mail in 1972. He used the @ sign to distinguish between the sender's name and the network name in the e-mail address.

Transmission Control Protocol/Internet Protocol (TCP/IP) was designed in 1973.

The Apple iTunes music store was introduced in the spring of 2003. It allows people to download songs for an affordable 99 cents each.

Satyam Online became the first private ISP to offer Internet connections in India, in December 1998.

The number of UK Internet users increases by an estimated 75 percent each year.

The Internet is the third-most used advertising medium in the world, closely catching up with traditional local newspapers and Yellow Pages.

It took 13 years for television to reach 50 million users; it took the Internet less than 4 years.

As of now, there are over 260 million people with Internet access worldwide.

1 out of 6 people used the Internet in North America and Europe, as per a 1999 survey.

The average computer user blinks 7 times a minute.

In 1946, the Merriam Webster Dictionary defined computer as 'a person who tabulates numbers; accountant; actuary; bookkeeper.'

An estimated 2.5 billion hours were wasted online last year as people waited for pages to download, according to a study sponsored by Nortel Networks.

AOL says spam is the number one complaint of its customers, and that it has to block over one billion unsolicited e-mails every day.

In 2002, the average Internet user received 3.7 spam messages per day. The total rose to 6.2 spam messages per day in 2002. By 2007, it is expected to reach 830 messages per day.

A technology industry research firm called Basex says that unsolicited e-mail cost $20 billion in lost time and expenses worldwide in 2000.

In 2003, an Atlanta-based ISP called EarthLink won a lawsuit worth $16.4 million (US) against a spammer in Buffalo, NY, and a $25 million (US) lawsuit against a spammer in Tennessee.

Tuesday, March 18, 2008

Microsoft Application Compatibility Toolkit 5.0

The Microsoft Application Compatibility Toolkit (ACT) 5.0 is a lifecycle management tool that assists in identifying and managing your overall application portfolio, reducing the cost and time involved in resolving application compatibility issues, and helping you quickly deploy Windows Vista and Windows Updates.
With it, you can:

* Analyze your portfolio of applications, Web sites, and computers.
* Evaluate operating system deployments, the impact of operating system updates, and your compatibility with Web sites.
* Centrally manage compatibility evaluators and configuration settings.
* Rationalize and organize applications, Web sites, and computers.
* Prioritize application compatibility efforts with filtered reporting.
* Add and manage issues and solutions for your enterprise-computing environment.
* Deploy automated mitigations to known compatibility issues.
* Send and receive compatibility information from the Microsoft Compatibility Exchange.


Microsoft Application Compatibility Toolkit 5.0 Features
Inventory and Collect your Data
ACT 5.0 provides a way to gather inventory data, through the use of distributed compatibility evaluators and the developer and tester tools. Data can be collected around operating system changes of various magnitude, from large events (such as an operating system upgrade), to medium events (such as a browser upgrade), to smaller events (such as a Windows Update release). Having the ability to collect compatibility data into a single centralized store has significant advantages in reducing organizational risk during platform changes.


Distributed Compatibility Evaluators
The Application Compatibility Toolkit (ACT) 5.0 and Application Compatibility Toolkit Data Collector (ACT-DC) use compatibility evaluators to collect and process your application information. Each evaluator performs a set of functions, providing a specific type of information to ACT.

* Inventory Collector: Examines your organization's computers to identify the installed applications and system information.
* User Account Control Compatibility Evaluator (UACCE): Enables you to identify potential compatibility issues that are due to permission restrictions enforced by the User Account Control (UAC), formerly known as Limited User Accounts (LUA). Through compatibility logging, UACCE provides information about both potential application permission issues and ways to fix the problems so that you can deploy a new operating system.
* Update Compatibility Evaluator (UCE): Provides insight and guidance about the potential effects of a Windows operating system security update on your installed applications. The UCE dynamically gathers application dependencies and is deployable to both your servers and client computers in either a production or test environment. The compatibility evaluator collects information about the modules loaded, the files opened, and the registry entries accessed by the applications currently running on the computers and writes that information to XML files uploaded to the ACT database.
* Internet Explorer Compatibility Evaluator (IECE): Enables you to identify potential Web application and Web site issues that occur due to the release of a new operating system. IECE works by enabling compatibility logging in Internet Explorer, parsing logged issues, and creating a log file for uploading to the ACT Log Processing Service.
* Windows Vista Compatibility Evaluator: Enables you to identify issues that relate to the Graphical Identification and Authentication (GINA) DLLs, to services running in Session 0 in a production environment, and to any application components deprecated in the Windows Vista operating system.


Development Tools
ACT 5.0 provides new tools for developers to test setup packages, Web sites and Web applications with Internet Explorer 7, and applications running as standard users in Windows Vista. The following section provides information about the development tools.
Setup Analysis Tool (SAT): Automates running application installations while monitoring the actions taken by each application's installer. The Setup Analysis Tool detects the following potential issues:

* Installation of kernel mode drivers
* Installation of 16-bit components
* Installation of Graphical Identification and Authentication (GINA) DLLs
* Modification of files or registry keys that are under Windows Resource Protection in Windows Vista


Internet Explorer Test Tool: Collects your Web-based issues from Internet Explorer 7, uploads the data to the ACT Log Processing Service, and shows your results in real time.
Standard User Analyzer (SUA): Determines the possible issues for applications running as a Standard User (SU) in Windows Vista.


Analyze Your Data
After collecting your compatibility data, ACT 5.0 provides features and tools to help you organize, rationalize, and prioritize the data.

* Organize your data: Create custom compatibility reports; assign custom categories and subcategories to your applications based on geographies, departments, internal line-of-business applications, or any custom application tags; and analyze your compatibility data using three types of quick reports, including the Operating System Deployment reports, the Update Impact Analyzer Application reports, and the Internet Explorer 7 reports.
* Rationalize your data: Locate and share your compatibility information, issues, and solutions with industry peers using the Microsoft Compatibility Exchange and the ACT Community; filter your data to eliminate non-relevant applications, applications with specific issues, applications with no known issues, and applications with no compatibility information; run standardized reports for specific operating systems, risk ratings of applications, computers, and custom reports; and manage issues and solutions for each application in your company.
* Prioritize your data: Assign priorities to your applications; track the status of your application testing by identifying each application's position in the deployment process; and run standardized reports to understand your current deployment status based on your prioritizations.

How to Disable New Programs Alert

This is only applicable if you are using the new XP Start menu, not the Classic Start menu.


Getting rid of this alert is really easy:

-> Right-click the taskbar and click Properties.

-> Go to the Start Menu tab and click Customize.

-> Click the Advanced tab.

-> De-select "Highlight newly installed programs".

Thursday, March 13, 2008

Server Virtualization...

In today’s complex IT environments, server virtualization simply makes sense. Redundant server hardware can rapidly fill enterprise datacenters to capacity; each new purchase drives up power and cooling costs even as it saps the bottom line. Dividing physical servers into virtual servers is one way to restore sanity and keep IT expenditures under control.

With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines), each of which basically fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server’s compute potential — and provide a rapid response to shifting datacenter demands.

The concept of virtualization is not new. As far back as the 1970s, mainframe computers have been running multiple instances of an operating system at the same time, each independent of the others. It’s only recently, however, that software and hardware advances have made virtualization possible on industry-standard, commodity servers.

In fact, today’s datacenter managers have a dizzying array of virtualization solutions to choose from. Some are proprietary, others are open source. For the most part, each will be based on one of three fundamental technologies; which one will produce the best results depends on the specific workloads to be virtualized and their operational priorities.

WHAT IS VIRTUALIZATION AND WHY USE IT
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources to maximize the investment in hardware. Since Moore's law has accurately predicted the exponential growth of computing power and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers that run 16 virtual operating systems. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 2-socket 4-core server is more powerful than a $30,000 8-socket 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts. This slashes the majority of hardware acquisition and maintenance costs that can result in significant savings for any company or organization.

Combined with a sound and well-designed server consolidation strategy, organizations can yield massive benefits by adopting Virtual Server technologies.

Presenting server-based services virtually lets the virtual server use the available resources dynamically. If a performance gain can be achieved by moving a virtual server from the physical processor it is currently using to one that is less utilized, the platform will do so transparently, giving the executing instructions a less utilized processing pool.

Deployment and testing procedures become far more agile, so implementation times for new solutions can be dramatically reduced. Because servers are represented as a set of encapsulated files, the ability to reverse or reject a recent change dynamically can be of enormous benefit to development projects.

WHEN TO USE VIRTUALIZATION
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet performance requirements of a single application because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into 16 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.

While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
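
Here is the arithmetic from the two paragraphs above, spelled out in a few lines of Python; the figures are the article's illustrative numbers, not measurements.

cores, clock_ghz = 4, 3.0
total_mhz = cores * clock_ghz * 1000        # "12 GHz" of aggregate capacity
vms = 16
print(total_mhz / vms)                      # 750.0 MHz per virtual server if all 16 are busy

busy_vms = 8                                # half the VMs idle or off-peak...
print(total_mhz / busy_vms)                 # ...leaves ~1500 MHz for each busy one

per_workload_peak = 5                       # a typical in-house server peaks at 1-5% CPU
print(8 * per_workload_peak, "% rough consolidated peak, under the 50% ceiling")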

While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, but they're both in beta at this point, so there haven't been any major independent benchmarks to verify this.

HOW TO AVOID “ALL EGGS IN ONE BASKET” SYNDROME
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a single service isn't only residing on a single server. Let's take for example the following server types:
-> HTTP
-> FTP
-> DNS
-> DHCP
-> RADIUS
-> LDAP
-> File Services using Fiber Channel or iSCSI storage
-> Active Directory services

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.
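
A tiny Python sketch of that placement idea - purely illustrative, not any vendor's management tool - just to show that spreading primaries and secondaries across physical hosts is a simple bookkeeping exercise:

from itertools import cycle

services = ["HTTP", "FTP", "DNS", "DHCP", "RADIUS", "LDAP", "File services", "Active Directory"]
hosts = ["physical-1", "physical-2", "physical-3"]

placement = {}
ring = cycle(hosts)
for service in services:
    primary = next(ring)
    secondary = next(h for h in hosts if h != primary)   # always a *different* physical box
    placement[service] = (primary, secondary)

for service, (primary, secondary) in placement.items():
    print(service, "-> primary:", primary, "| secondary:", secondary)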

For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather the complexity of clustering which tends to require time for transitioning. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. In order for this to work, something has to constantly synchronize memory from one physical server to the other so that a failover could be done in milliseconds while all services can remain functional.

REASON FOR USING SERVER VIRTUALIZATION
For anyone who owns a business, the ultimate goal is to stay competitive while keeping technology expenses at a healthy level.

Personally, I do think that server virtualization will help your company stay competitive. If you are a software-based company, you will need a number of servers for code simulation and testing, code version control, as well as your database server. Each server can easily cost you around USD 5-20k, depending on your server specifications.

However, if you analyse your server performance and utilization in detail, you may be shocked to realize that each of your servers may be using only 10-20% of its total CPU and memory. The remaining 80-90% of your server resources remains idle in most cases.

Wouldn't it be nice if you could fully utilize your server resources and probably need only 2 out of every 5 servers currently running in your company? What do you get in return? Here are the 5 main reasons you should make use of server virtualization:

-> Saves money - This is the most important point in any business case, as it keeps your products competitive and saves money on servers you do not actually need. Of course, this will not save you anything on your Microsoft Server OS licences, but it does save on server hardware in the long run.

-> Saves space in your data center - As you do not have to maintain so many servers, you can use the space in your office for other purposes, such as additional office space for more staff as your company grows. Furthermore, server racks are expensive equipment for housing all your servers; save rack space and you save money.


-> Saves on server maintenance - You will need specialized professionals, such as system engineers or IT engineers, to take care of your servers: Windows security patching, hardware monitoring, and performance monitoring for hardware failures. If you cut down the number of servers you actually need, you save manpower in this area, and those people can focus on other areas that improve your business's competitiveness further.

-> Saves the environment - Fewer servers lead to less power consumption. With fewer servers, you need fewer power cables and less high-power air conditioning to maintain server room temperatures; in the end, it is more environmentally friendly, and you play your part in protecting the environment from global warming.

-> Improves your product build time/time to market - During implementation and testing, you may crash your own code and bring the testing environment down. With a virtual server, you can create a snapshot of the 'working' version prior to a new release or test run. If anything happens to your code, you can simply roll back to the previous 'working' version in no time. That will definitely improve your products' time to market.

APPROACHES TO SERVER VIRTUALIZATION
-> Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modifications. It also allows the administrator to create guests that use different operating systems. The guest has no knowledge of the host's operating system, because it is not aware that it's not running on real hardware. It does, however, require real computing resources from the host -- so it uses a hypervisor to coordinate instructions to the CPU. The hypervisor is called a virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.

-> The paravirtual machine (PVM) model is also based on the host/guest paradigm -- and it uses a virtual machine monitor too. In the paravirtual machine model, however, the VMM actually modifies the guest operating system's code. This modification is called porting. Porting supports the VMM so it can utilize privileged system calls sparingly. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and UML both use the paravirtual machine model.


-> Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm. In the OS level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This distributed architecture eliminates system calls between layers, which reduces CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors so that a failure or security breach in one partition isn't able to affect any of the other partitions. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization.

-> Storage virtualization is commonly used in a storage area network (SAN). The management of storage devices can be tedious and time-consuming. Storage virtualization helps the storage administrator perform the tasks of backup, archiving, and recovery more easily, and in less time, by disguising the actual complexity of the SAN.

-> Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. Each channel is independently secured. Every subscriber has shared access to all the resources on the network from a single computer. Network management can be a tedious and time-consuming business for a human administrator. Network virtualization is intended to improve productivity, efficiency, and job satisfaction of the administrator by performing many of these tasks automatically, thereby disguising the true complexity of the network. Files, images, programs, and folders can be centrally managed from a single physical site. Storage media such as hard drives and tape drives can be easily added or reassigned. Storage space can be shared or reallocated among the servers. Network virtualization is intended to optimize network speed, reliability, flexibility, scalability, and security. Network virtualization is said to be especially effective in networks that experience sudden, large, and unforeseen surges in usage.

SINGLE POINT OF FAILURE
As is immediately obvious, when the whole of your business runs on one or two systems, a hardware, software, or network failure that results in downtime has a much greater impact on the enterprise. In distributed topologies, a single failed system out of several is certainly going to hurt, but it will only impact the segment of the business it serves.

To enjoy the benefits of server consolidation and minimize the shock of planned and unplanned downtime, organizations can deploy a high availability solution to protect hard and soft assets. Compared to tape backups, vaulting, and hot site backups, recovery is almost immediate in instances where high availability clustering is deployed, a consideration that is very important in situations where 24x7 access to applications is necessary or when Web-based, market-facing access to applications is supported. Sometimes you can use one of your decommissioned servers, and the data center it resides in, as your high availability backup server and disaster recovery site. (This is a good kind of recycling.)

A high availability configuration also allows a consolidated computing environment to be gradually established without interrupting business by switching system users from the primary production system to the backup. Application availability is maintained throughout the reengineering process, with the exception of an interval of roughly 20 to 40 minutes that can be scheduled over a weekend or holiday. Even more value can be derived from the high availability approach because it can be used in the consolidation process as the data transfer agent, replicating data from multiple distributed servers back to the consolidation point. By contrast, tapes that are traditionally used to perform this critical step can fail during the restore process because of normal wear, accidental damage, or environmental issues.

SEEKING BALANCE
Finally, workload management is a key facet to maintaining acceptable response times in a consolidated computing environment. When the work of eight servers is performed by one or two, for example, acceptable response times can be tough to deliver. And if the server is accessible to large groups of users over the Web, demand can be unpredictable.

Automatic load balancing features are available in some high availability solutions. While load balancing is not very complicated in instances where users have read only access, read/write servers are trickier because of contention issues. High availability tools can be well suited to accommodate positive synchronization between primary and backup servers and bypass these problems.

A high availability solution that is part of a server virtualization and consolidation effort will require some additional investment, but the benefits of using high availability clustering can be easily justified by the value of providing a simplified transition path and a markedly shorter recovery time should a failure occur.

Bill Hammond directs product-marketing efforts for information availability software at Vision Solutions. Hammond joined Vision Solutions in 2003 with over 15 years of experience in product marketing, product management and product development roles in the technology industry.

Friday, March 7, 2008

Change to hot patching

The Microsoft® Windows® operating systems and Microsoft business solutions constitute an environment of choice for many large enterprise customers. For such critical systems, these customers demand high availability and assume the industry standard of "five nines" (99.999%) reliability.

It is important to avoid service interruptions when installing updates on the operating system of such critical systems. Hotpatching provides a mechanism to:
• Update system files without rebooting.
• Update system files without stopping services and processes.

Microsoft reboot reduction initiative

Hotpatching is part of the Microsoft reboot reduction initiative, which seeks to help minimize the need for a full system reboot after installing updates. Reducing reboots is important because IT departments in many organizations implement a time-consuming test cycle every time an update is installed and the system is rebooted. This results in loss of productivity and revenue to the organization until their system is fully verified and operational. Hotpatching allows customers to deploy important updates and patches in a timely, transparent manner without requiring a full system shutdown and restart. This reduces their rollout time.

The following examples demonstrate possible savings from reboot reduction:
• Of the 22 updates that shipped for Windows Server 2003 RTM between April 2005 and August 2005, 15 of them required a reboot. Eight of these could have been hotpatched. This would have reduced the number of reboots by 53%.
• Of the 14 updates that shipped for Windows Server 2003 Service Pack 1 (SP1) prior to August 2005, ten of them required a reboot. Four of these could have been hotpatched. This would have reduced the number of reboots by 40%.
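
The percentages follow directly from those counts; a two-line sanity check in Python:

print(round(8 / 15 * 100), "% fewer reboots for Windows Server 2003 RTM")   # ~53%
print(round(4 / 10 * 100), "% fewer reboots for Windows Server 2003 SP1")   # 40%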

Hotpatch Package Structure

A hotpatch package contains hotpatch and coldpatch binary files for the operating system update.
• The hotpatch binary file contains only the function necessary to address the critical operating system flaw.
• The coldpatch contains the old binary file with the fixed function appended to it and an instruction to jump from the flawed function to the fixed function. Redirecting to the new function ensures that the defective processes in memory are fixed by patching the old function.
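
Windows hotpatching performs this redirection at the machine-code level inside a running process, which obviously cannot be reproduced in a few lines here. As a loose analogy only, this Python sketch shows the same idea: callers are redirected from a flawed function to a fixed one while the "service" keeps running, with no restart.

import sys

def parse_request(data):
    # Original (flawed) implementation: blows up on input with no comma.
    return data.split(",")[1]

def parse_request_fixed(data):
    # Corrected implementation that guards the failure case.
    parts = data.split(",")
    return parts[1] if len(parts) > 1 else ""

# "Hotpatch": rebind the name in the live module so every future lookup of
# parse_request resolves to the fixed code.
module = sys.modules[__name__]
module.parse_request = parse_request_fixed

print(repr(parse_request("only-one-field")))   # now returns '' instead of raising IndexError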

Limitations and Compatibility Issues

There are limitations to hotpatching technology, and because of these limitations, not all updates can be hotpatched. Hotpatching does not support components or applications that contain:
• Additions or changes to exports
• Data structure changes
• Dependency changes
• Global state changes
• Infinite loops and persistent stack frames
• Inline assembly code

Package Installation

You can download hotpatch packages from the Microsoft Download Center (http://go.microsoft.com/fwlink/?LinkID=19831). The Software Update Installation Wizard will step you through the hotpatch installation process.

Hotpatch installation for non-Microsoft deployment tools (such as IBM Tivoli and Altiris Patch Management) is supported by enabling hotpatching from the command line. To enable hotpatching from the command line, type:

WindowsServer2003-KB######-x86-LLL.exe /hotpatch:enable

If the hotpatch installation is successful, a reboot will not be required. A log file is created as a part of the installation process, and it will show if the hotpatch installation was successful.

Wednesday, March 5, 2008

Basic process in compression - SQL Server 2008

-> Assume you have a column of data on a single page of rows that contain values like
1. Run
2. Running
3. Runner
4. Runoff
5. Runover
-> There's quite a bit of redundant data 'prefixing' each of the rows in this column.
-> The prefix value 'Run' is stored as 1, and each column value ends up with a pointer to that prefix - resulting in values like 1, 1ning, 1ner, 1off, 1over.
-> The prefix value 1's definition is stored in the CI (compression information) structure on the page.
-> By this compression process, roughly 30% of the space is saved compared with storing the original data as such.
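
A toy Python sketch of the prefix idea described above - this only illustrates the concept, not SQL Server's actual on-page format, and the exact savings depend on how the pointers are accounted for.

import os

values = ["Run", "Running", "Runner", "Runoff", "Runover"]

prefix = os.path.commonprefix(values)                  # "Run"
ci = {1: prefix}                                       # prefix stored once, conceptually in the CI structure
compressed = ["1" + v[len(prefix):] for v in values]   # ['1', '1ning', '1ner', '1off', '1over']

original_size = sum(len(v) for v in values)
compressed_size = len(prefix) + sum(len(v) for v in compressed)
print(compressed)
print(round(100 * (1 - compressed_size / original_size)), "% saved on this toy page")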

Backup Compression - SQL Server 2008

In the SQL Server 2005 version, compression was available only via third-party backup software such as
1. SQL LiteSpeed
2. SQL Unzip

SQL Server 2008 has built-in compression for backups.