Virtualization, Monitoring and Resource Control Best Practices

By: Bill Kleyman

Technologies around virtualization aren’t going anywhere. In fact, there’s a very good chance that you already have at least some form of virtualization deployed, and we’re seeing organizations expand their virtualization deployments into new frontiers. Gartner recently pointed out that while server virtualization remains the most common infrastructure platform for x86 server OS workloads in on-premises data centers, its analysts believe that new computing styles and approaches, including OS container-based virtualization and cloud computing, will have an increasingly significant impact on this market.

However, “What was considered as the best approach to greater infrastructure agility only a few years ago, is becoming challenged by an array of newer infrastructure choices,” said Michael Warrilow, research director at Gartner. New types of hardware configurations, software-defined systems, and greater levels of convergence are all creating new considerations around virtualization deployment.

To that end, it’s important to understand that the way we monitor and manage virtualized resources is evolving as well. With that in mind, organizations are looking for tools that will help them build better monitoring and resource-utilization practices.

To begin, it’s important to know what’s actually using your resources and how you can prevent issues like spikes.

What kind of events cause spikes in resource usage and how can new tools resolve this?

There are many events that can cause a resource spike. Some stem from problems within the environment itself: a runaway programming loop can peg a CPU, or a network error can saturate links.
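
As a minimal illustration of how a monitoring agent might catch that kind of runaway-loop spike, here is a Python sketch built on the psutil library. The 90% threshold and five-sample window are arbitrary example values, not recommendations from any particular tool:

```python
import psutil  # third-party library for system metrics (pip install psutil)

CPU_THRESHOLD = 90.0   # percent; arbitrary example value
SUSTAINED_SAMPLES = 5  # consecutive hot samples before we call it a spike

def watch_for_cpu_spike():
    """Alert when CPU stays pegged across several consecutive samples."""
    hot_samples = 0
    while True:
        usage = psutil.cpu_percent(interval=1)  # average over 1 second
        hot_samples = hot_samples + 1 if usage >= CPU_THRESHOLD else 0
        if hot_samples >= SUSTAINED_SAMPLES:
            print(f"ALERT: CPU pegged at {usage:.0f}% for "
                  f"{SUSTAINED_SAMPLES} consecutive samples")
            hot_samples = 0  # reset so we don't alert every second

if __name__ == "__main__":
    watch_for_cpu_spike()
```

The sustained-sample window matters: alerting on a single one-second reading would bury you in noise, while a short rolling window still catches a genuinely pegged CPU quickly.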

However, there are also legitimate business events that spike resources. Take Amazon’s site, for example: it relies heavily on virtualization, and during peak holiday or sales seasons its servers see a massive hit. The same goes for travel sites in peak season. To accommodate this, companies worried about overworked VMs use something called workflow automation. That is, if a host is pegged with resource requests and the currently running VMs can no longer handle the load, automation software kicks in and spins up additional VMs on separate hosts to help carry it.
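
To make the workflow-automation idea concrete, here is a minimal Python sketch of the scale-out decision loop. The Orchestrator class is a hypothetical stand-in for whatever API your virtualization platform exposes (vSphere, Hyper-V, OpenStack, and so on), and the thresholds are illustrative assumptions:

```python
from dataclasses import dataclass

SCALE_OUT_CPU = 85.0   # percent; illustrative scale-out trigger
MAX_VMS_PER_HOST = 20  # illustrative per-host capacity limit

@dataclass
class Host:
    name: str
    cpu_percent: float
    vm_count: int

class Orchestrator:
    """Hypothetical stand-in for a real platform API. A real implementation
    would call the platform's SDK; here we fake the inventory so the sketch
    runs on its own."""
    def __init__(self, hosts: list[Host]):
        self._hosts = hosts

    def list_hosts(self) -> list[Host]:
        return list(self._hosts)

    def clone_vm(self, template: str, target_host: str) -> None:
        print(f"(fake) cloning '{template}' onto {target_host}")

def rebalance(orch: Orchestrator, template: str) -> None:
    """If any host is pegged, spin up a clone on the least-loaded host."""
    hosts = sorted(orch.list_hosts(), key=lambda h: h.cpu_percent)
    coolest, hottest = hosts[0], hosts[-1]
    if (hottest.cpu_percent >= SCALE_OUT_CPU
            and coolest.vm_count < MAX_VMS_PER_HOST):
        # Scale out: add capacity on a separate, less-loaded host.
        orch.clone_vm(template, target_host=coolest.name)

if __name__ == "__main__":
    orch = Orchestrator([Host("esx-01", 93.0, 12), Host("esx-02", 40.0, 6)])
    rebalance(orch, template="web-frontend")
```

In production, the same loop would typically also scale back in once the spike passes, so you aren’t paying for idle VMs.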

For resource monitoring tools, what features should we look for?

When looking at purchasing any sort of resource monitoring software, make sure it can answer the following questions:

  • How many VMs do I have, and which ones are over- or under-provisioned? This can also mean containers, virtual services, network functions virtualization (NFV), and so on. Your tools must be able to monitor every aspect of your ecosystem.
  • Where are the performance bottlenecks in my virtualized environment? This can’t just be real-time. Some of the leading monitoring tools include predictive analytics that help you forecast challenges. This helps with downtime, provisioning, and keeping the environment agile.
  • How are my VMs configured? And are there any issues? Misconfigurations lead to outages and performance problems. A good tool will let you see the configuration big picture, even in a heavily distributed environment.
  • How many app servers will fit in my current environment and when will I need more resources? Applications and data are the lifeblood of your environment. You’ll need to be able to support user access, data requirements, and resource needs of those applications. Your monitoring tools can help forecast application requirements and how to meet user demand.
  • What departments are using which resources? Your organization is your customer, and its divisions are segments of that customer base. This means understanding how resources are used by department, building, and even location. Some organizations require that IT sites operate “independently,” perhaps for business or compliance reasons. Even so, you can keep an eye on the whole ecosystem and create powerful multi-tenant environments.
  • How is my server utilization being tracked over time? Predictive analytics look at your historical usage and forecast your near-term requirements. You’re not just looking for errors here; you’re also trying to improve your overall environment. (A rough forecasting sketch follows this list.)
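
Several of the bullets above boil down to trend forecasting over utilization history. As a rough, vendor-neutral illustration, here is a small Python sketch that fits a linear trend to daily CPU-utilization samples and estimates when a host will cross a capacity ceiling. The sample history and the 80% ceiling are invented for the example:

```python
def linear_trend(samples: list[float]) -> tuple[float, float]:
    """Least-squares fit y = slope*x + intercept over day indices 0..n-1."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def days_until(samples: list[float], ceiling: float) -> float | None:
    """Estimate days until the trend line crosses `ceiling` (None if flat/falling)."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None  # utilization is flat or declining
    crossing = (ceiling - intercept) / slope
    return max(0.0, crossing - (len(samples) - 1))

# Invented daily CPU-utilization history (percent) for one host:
history = [52, 55, 54, 58, 61, 63, 66, 65, 69, 72]
print(f"Days until 80% ceiling: {days_until(history, 80.0):.1f}")
```

Real monitoring suites use far more sophisticated models (seasonality, workload classes, confidence bands), but even a straight-line fit makes the point: you want the tool to warn you before you run out of headroom, not after.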

Within your own organization, virtualization will play a critical role in your ability to consolidate, stay competitive, and extend into the cloud. Most of all, it helps create a business model that can evolve very quickly. Don’t get caught in VM-sprawl situations, and don’t lose focus on your most critical resources. New tools allow you to forecast usage, understand user interaction, and even integrate business processes for powerful provisioning and de-provisioning capabilities. Leverage these kinds of systems so that your business and virtual ecosystem can be truly agile in today’s digital economy.

About The Author

Bill Kleyman

Vice President of Strategy and Innovation at MTM Technologies

Bill is an enthusiastic technologist with experience in datacenter design, management, and deployment. His architecture work includes large virtualization and cloud deployments as well as business network design and implementation. Bill enjoys writing, blogging, and educating colleagues about all things technology. During the day, Bill is the CTO at MTM Technologies, where he interacts with enterprise organizations and helps align IT strategies with direct business goals. Bill’s whitepapers, articles, video blogs and podcasts have been published and referenced on InformationWeek, NetworkComputing, TechTarget, DarkReading, Data Center Knowledge, CBS Interactive, Slashdot, and many others. Most recently, Bill was ranked #16 in the Onalytica study of the top 100 most influential individuals in the global cloud landscape.
