What’s all the hype about hyper-convergence? Ask two IT guys to define it, let alone spell it, and you’ll get four different answers. At VertitechIT, we define it as an infrastructure that eliminates silos (hardware, software, and people!) in the data center and uses a single platform to provision and manage disparate data center services such as compute, storage, and networking. Through software-defined methodologies, the operational aspects of the infrastructure are defined in a software layer, so they can be amended without modifying the hardware underneath. Still got questions? This list may help.
Q & A
You’ve Got Questions, We’ve Got Answers
What are the technologies that drive the hyper-converged solution?
Dark fiber, DWDM, compute nodes with flash storage, software-defined storage, and software-defined networking solutions.
What is the networking and fiber architecture?
Extremely low-latency connections in a three-site ring deliver one logical data center across two or three sites.
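As a rough illustration of why latency constrains site separation, the back-of-the-envelope math can be sketched as follows. The 5 ms round-trip budget and ~200 km/ms fiber propagation speed are illustrative assumptions, not project specifications:

```python
# Rough fiber-latency budget for a stretched, multi-site design.
# Assumptions (illustrative, not vendor requirements):
#   - light in fiber travels roughly 200 km per millisecond
#   - the platform tolerates at most a 5 ms round-trip time between sites

FIBER_KM_PER_MS = 200.0   # ~2/3 the speed of light in vacuum
MAX_RTT_MS = 5.0

def max_site_separation_km(rtt_budget_ms: float = MAX_RTT_MS) -> float:
    """One-way fiber-path distance that fits inside the RTT budget."""
    one_way_ms = rtt_budget_ms / 2
    return one_way_ms * FIBER_KM_PER_MS

print(max_site_separation_km())  # 500.0 (km of fiber path, one way)
```

Tighter latency budgets shrink the ring accordingly, which is why dark fiber and DWDM, rather than routed WAN links, underpin the architecture.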
How are loads/workloads balanced across data centers?
We use VMware to handle data distribution and load balancing across the data centers, applying HA and vSAN storage policies together with fault domains.
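The placement logic behind such policies can be sketched in miniature. This is an illustrative model in the spirit of vSAN’s “failures to tolerate” (FTT) policies; the function, site names, and round-robin scheme are our own assumptions, not VMware APIs:

```python
# Sketch of policy-driven replica placement across fault domains.
# A policy of FTT=n requires n+1 copies, each in a distinct fault domain,
# so the loss of any n domains still leaves a live copy.

def place_replicas(obj_id: str, fault_domains: list[str], ftt: int) -> list[str]:
    """Place ftt+1 copies of an object, each in a distinct fault domain."""
    copies = ftt + 1
    if copies > len(fault_domains):
        raise ValueError("not enough fault domains to satisfy the policy")
    # Rotate the domain list per object so load spreads across domains.
    start = hash(obj_id) % len(fault_domains)
    rotated = fault_domains[start:] + fault_domains[:start]
    return rotated[:copies]

sites = ["site-a", "site-b", "site-c"]
placement = place_replicas("vmdk-001", sites, ftt=1)
# Two copies land in two different sites: losing one site leaves a live copy.
```

In the real platform, HA restarts the VMs while the storage policy guarantees a surviving data copy; the two mechanisms together are what balance and protect workloads across sites.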
How does this affect product licensing (specifically databases and especially Oracle)?
It will vary by contract, but there are strategies that can be employed to limit the per-socket licensing scope required by many vendors. Combining VMware DRS affinity groups with log analyzers is one such strategy.
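The effect of confining licensed workloads to a subset of hosts can be shown with simple arithmetic. The host and socket counts below are illustrative assumptions, and actual licensing terms must always be verified against your contract:

```python
# Back-of-the-envelope per-socket licensing comparison.
# All counts are illustrative; vendor terms vary by contract.

def licensed_sockets(hosts: int, sockets_per_host: int) -> int:
    """Sockets in scope if every listed host can run the licensed software."""
    return hosts * sockets_per_host

cluster_hosts = 12   # entire HCI cluster
pinned_hosts = 2     # DRS affinity confines the database VMs to these hosts
sockets = 2          # sockets per host

whole_cluster = licensed_sockets(cluster_hosts, sockets)  # 24 sockets in scope
with_affinity = licensed_sockets(pinned_hosts, sockets)   #  4 sockets in scope
```

The log analyzers provide the audit trail demonstrating that the VMs never ran outside the pinned host group.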
What do we do with legacy non-x86 applications?
A cost analysis by application will determine the feasibility of and need for migrating the system to HCI. Generally, a hardware refresh and/or software renewal provides the inflection point that drives the transition.
How does this platform’s cost compare to alternative architectures and public cloud?
HCI delivers a more robust platform, with cross-data-center redundancy and failover, at lower capital and operating cost than other on-premises options. Public cloud does not yet have a good way to deliver the same outcomes without dramatic refactoring of applications, which is not possible with most healthcare and many other enterprise applications.
What are the timelines for the project?
Analysis, implementation, and migration can be completed within 36 months, generally within the confines of the existing budget.
How is the data made redundant and available?
Software-defined storage can deliver redundancy across sites through policy. Different vendors use different terms for their methods, but the logical outcome is an N+1 site design with continuous operations.
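The N+1 property can be checked mechanically: with every data object copied to at least two sites, any single site can fail without loss of availability. The placement map and site names below are illustrative assumptions:

```python
# Sketch: verifying that an N+1 site design survives any single site failure.
# Each volume is mapped to the set of sites holding a copy (illustrative data).

placement = {
    "vol-01": {"site-a", "site-b"},
    "vol-02": {"site-b", "site-c"},
    "vol-03": {"site-a", "site-c"},
}

def available_after_failure(placement: dict[str, set[str]], failed_site: str) -> bool:
    """True if every volume still has a surviving copy after the site fails."""
    return all(sites - {failed_site} for sites in placement.values())

survives_any_site_loss = all(
    available_after_failure(placement, s) for s in ("site-a", "site-b", "site-c")
)
```

In practice the vendor’s policy engine performs this placement automatically; the point is that continuous operation is a consequence of policy, not of manual replication jobs.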
How does this affect backup, high availability, and disaster recovery?
HA and DR are emergent properties of the platform within the geography of the solution. Backups are still required, and we recommend offsite targets to protect against corruption, loss, and significant regional events.
How does this affect org structure, staffing, and skills?
This convergence of infrastructure generally encourages a similar convergence of staff and skills: cross-functional architecture and operations teams are becoming the new standard.
How do application delivery and VDI fit into this?
VDI generally sits outside this infrastructure for licensing and operational reasons, and a similar case can be made for keeping terminal services functions separate as well. Terminal services workloads will run nicely on top of this platform, but they often do not need the inherent availability and replication it delivers and may find a more economical runtime outside of it.