The New Stack Podcast

Show 7: Hyperscale Only Takes You So Far

Episode Notes

I would use the term “trump card,” except that I don’t want to jinx anything. VMware holds a very high card in its portfolio of infrastructure resources: its installed base among enterprises. Its existing server virtualization platform is said to hold as much as 80 percent of the worldwide market in terms of current sales. And as we’ve noted here before, some market analysts actually rate VMware as the leader in containerization, simply because its competitors in that space (you may have heard of Docker, for instance) have too few assets to be worth examining.

For a great many enterprises — as one survey reveals, nearly 9 in 10 — the container revolution stops at the boundaries of the virtual machine. In these environments, containers don’t seep into the infrastructure at all. So hyperscale — the type of environment one would need, arguably, to truly deploy applications as microservices — never actually happens there.

If it were up to VMware Chief Technology Strategy Officer Guido Appenzeller, and if architecture were the only factor under consideration, he would be running containers every day, as he stated in an interview for this edition of The New Stack: Context.

“Everybody’s running containers inside of VMs — pretty much everybody I talk to,” said Appenzeller. “I can’t recall a single instance where people are actually running it on bare metal.”

VMware would prefer not to stand in the way. However, it has clearly engineered a pair of options for expanding containerization (if not to hyperscale, then at least beyond a single VM) whose paths are channeled through one of its own principal products: the NSX network virtualization platform.

“I think the changes we will see in infrastructure, and the full automation of infrastructure, will be accelerated because of containerization technology,” admitted VMware VP Paul Fazzone, who heads NSX product management. “If you think about how quickly and easily a developer can spin up tens or hundreds of containers in an environment, and stitch them together into an application, that will push the envelope on how performant and how responsive the infrastructure will need to be.”

Fazzone went on to say he believes that human beings should not stand in the way of progress here — that as long as barriers to implementing containerization and automation tools do exist, developers will find ways around them.

So it isn’t as if VMware is trying to create some artificial barrier to entry, the way a certain other market share leader in a different market segment successfully did, up until the middle of the last decade. Rather, it’s taking advantage of the natural barricade that already exists, one supported by enterprises’ uncertainty around security, compliance, and interoperability. And it’s cutting more than one path through that barricade. In an environment whose stage is set by analysts and advisors, where Docker may not even be a player, which path — if either — will enterprises choose?