Avoiding Five Key Mistakes When Deploying Cloud and Virtualization

By: Bill Kleyman

Photo credit: Pexels

Today’s data center has evolved into a distributed, highly efficient environment with multiple nodes. At the forefront of this change is virtualization. At this point, almost every environment will have had at least some experience with a virtualization platform. Still, in recent years, conversations around virtualization have evolved. We are no longer discussing only server virtualization; now there are application and desktop infrastructures to consider as well, and even containerization.

Engineers must carefully plan out their environments and have a clear idea of what they are trying to deliver. This means understanding the business drivers behind the actual solution. Since some virtualization technologies (like containerization) are relatively new, there are core considerations to address prior to deployment. These five mistakes are easily avoided, but when ignored they can bring an entire project to a halt.

  • Pick the right hardware. The hardware environment must suit the needs of the platform. This means “future-proofing” your infrastructure to ensure the right amount of resources is always available. Too often organizations will work with a blade or rack-mount solution only to find out that they sized improperly or chose the wrong platform altogether. Take the time to understand the existing and future needs of your organization. From there, the decision to choose the right hardware structure can be made.
  • Size your storage, very carefully. Many organizations are now using shared or pooled storage for their virtual infrastructure. Too often a team will simply assume that their storage environment is capable of handling the virtual platform. However, in many deployments, this simply isn’t the case. With the resurgence of deployments like VDI, there is a greater impact on the disks. This means increased IOPS requirements and the possible need for more shelves. To avoid this issue, size the environment prior to deploying the storage ecosystem. For example, if boot storms or scale are a concern, consider deploying flash arrays or SSDs capable of offloading this data from spinning disks onto solid-state technology.
  • Plan out your network (LAN, WAN, WLAN). Your network infrastructure plays a big role in both the design and the performance of a virtual environment. This means using switches which have enough ports and bandwidth capabilities to handle a virtual platform. Furthermore, it’s critical to work with networking systems which can keep up with virtualization integration demands and density requirements. Will the data require 10GbE, 40GbE, or more? Are there QoS considerations? Many times an organization will spend its budget on servers, virtualization, and other components but save the switches for last. Make sure bandwidth requirements are met prior to deploying any virtual environment.
  • Design and define your workloads. This means carefully controlling all of your VMs and their respective resources. A major issue within organizations (both large and small) is VM sprawl and resource waste. This has now expanded into virtual desktop sprawl and even application sprawl. There must be a direct control mechanism to prevent unwarranted provisioning of unnecessary VMs. To avoid any kind of sprawl, it’s important to set up alerts within the environment and to ensure that only certain users have the ability to spin up new VMs. By taking the time to create the right VM initially, administrators won’t have to go back and create secondary VMs which then take up resources.
  • Size the business and the environment. Never forget that resources are finite and can be expensive. Organizations looking to move to a virtual state must take the time to plan and size their infrastructure. This means understanding everything from bandwidth requirements to the number of VMs to be deployed. Too often, an infrastructure is built without a true vision of how resources will be allocated. This can and does result in runaway costs for the IT department. Resources should be treated carefully, since pooled environments share CPU, RAM, storage, and so on. Careful planning around what needs to be deployed, what resources it will require, and how those resources will be distributed can save time and money during and after the deployment phases.
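To make the storage-sizing point concrete, here is a back-of-envelope sketch of translating VDI front-end IOPS into backend disk IOPS. Every figure in it (desktop count, per-desktop IOPS, boot-storm multiplier, RAID penalty, per-spindle rating) is an illustrative assumption, not vendor guidance; real sizing should come from measured workload data.

```python
# Back-of-envelope VDI storage sizing sketch.
# All constants below are hypothetical planning assumptions.

DESKTOPS = 500
STEADY_IOPS_PER_DESKTOP = 10   # common knowledge-worker assumption
BOOT_STORM_MULTIPLIER = 5      # boot/login storms spike IOPS per desktop
WRITE_RATIO = 0.7              # VDI tends to be write-heavy in steady state
RAID5_WRITE_PENALTY = 4        # each logical write costs 4 backend I/Os

def backend_iops(desktops: int, per_desktop: float,
                 write_ratio: float, write_penalty: int) -> float:
    """Translate front-end IOPS into backend disk IOPS."""
    frontend = desktops * per_desktop
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio * write_penalty
    return reads + writes

steady = backend_iops(DESKTOPS, STEADY_IOPS_PER_DESKTOP,
                      WRITE_RATIO, RAID5_WRITE_PENALTY)
# Boot storms skew heavily toward reads, so assume a 0.2 write ratio.
storm = backend_iops(DESKTOPS, STEADY_IOPS_PER_DESKTOP * BOOT_STORM_MULTIPLIER,
                     0.2, RAID5_WRITE_PENALTY)

IOPS_PER_10K_DISK = 140  # rough rating for one 10K RPM spindle
print(f"Steady-state backend IOPS: {steady:,.0f}")
print(f"Boot-storm backend IOPS:   {storm:,.0f}")
print(f"10K spindles needed for the storm: {storm / IOPS_PER_10K_DISK:.0f}")
```

Under these assumptions the boot storm demands hundreds of spinning disks, which is exactly the scenario where a flash array or SSD tier absorbs the burst far more economically than adding shelves.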
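The same discipline applies to sizing the overall environment. The sketch below estimates host count from a VM profile, with CPU and RAM checked separately and an N+1 spare added for availability. The VM profile, overcommit ratios, and host specifications are hypothetical placeholders; size against your own inventory data.

```python
# Rough host-count sizing for a pooled virtual environment.
# All inputs are hypothetical placeholders for illustration.
import math

vm_count = 200
vm_vcpus, vm_ram_gb, vm_disk_gb = 4, 16, 100   # per-VM profile

host_cores, host_ram_gb = 32, 512              # per-host specification
cpu_overcommit = 4.0   # vCPU:pCore -- a common general-workload ratio
ram_overcommit = 1.0   # keep RAM near 1:1 to avoid host swapping
ha_spare_hosts = 1     # N+1 so one host failure can be absorbed

# Size CPU and RAM independently; the tighter constraint wins.
hosts_for_cpu = math.ceil(vm_count * vm_vcpus / (host_cores * cpu_overcommit))
hosts_for_ram = math.ceil(vm_count * vm_ram_gb / (host_ram_gb * ram_overcommit))
hosts = max(hosts_for_cpu, hosts_for_ram) + ha_spare_hosts

print(f"CPU-bound host count: {hosts_for_cpu}")
print(f"RAM-bound host count: {hosts_for_ram}")
print(f"Total hosts (with N+1): {hosts}")
print(f"Raw VM capacity needed: {vm_count * vm_disk_gb / 1024:.1f} TB")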

The reality with a virtual infrastructure is that every environment is unique. Organizations will have their own set of business drivers which will dictate the course of the deployment. Still, engineers who are new to the virtual platform should proceed with caution and plan out their deployment as much as possible. This will mean involving various teams and the executive staff to ensure the IT vision is aligned.



About The Author

Bill Kleyman

Vice President of Strategy and Innovation at MTM Technologies

Bill is an enthusiastic technologist with experience in datacenter design, management, and deployment. His architecture work includes large virtualization and cloud deployments as well as business network design and implementation. Bill enjoys writing, blogging, and educating colleagues around everything that is technology. During the day, Bill is the CTO at MTM Technologies, where he interacts with enterprise organizations and helps align IT strategies with direct business goals. Bill’s whitepapers, articles, video blogs and podcasts have been published and referenced on InformationWeek, NetworkComputing, TechTarget, DarkReading, Data Center Knowledge, CBS Interactive, Slashdot, and many others. Most recently, Bill was ranked #16 in the Onalytica study which reviewed the top 100 most influential individuals in the cloud landscape, globally.
