All The Hot Infrastructure Tech at OpenStack Summit Berlin
Though the weather was a bit nippy in Berlin this week, the OpenStack Summit held here sparked plenty of enthusiasm for the new and emerging technologies around the OpenStack open source private cloud software.
Perhaps some of the energy came from the OpenStack Foundation’s decision, announced at the show, to expand its scope to cover the other open source infrastructure software that has grown up around OpenStack these past few years, even as OpenStack itself continues to take on new users and address new use cases. Here are some of the projects that created buzz within Berlin’s boxy conference center:
Airship
The summit saw the arrival of the version 1.0 release candidate of the open source Airship cloud provisioning tool, which AT&T originally developed (and subsequently released as open source) to provision its own OpenStack-based public and private clouds.
“We wanted to build a declarative platform that would quickly stand up new environments, as well as manage the complete lifecycle of all of our applications,” said Alan Meadows, AT&T lead system architect, during a presentation. “Users should be able to operate Airship without having to write additional orchestration or wrappers around it to work in a hands-free way. It is able to achieve this with a singular deployment workflow API.”
Here’s a tool an organization could use to make its own infrastructure programmable, using declarative YAML configuration documents to describe the desired environment. Because they run on Kubernetes and Helm, Airship deployments can be easily scaled, upgraded and girded for high availability.
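To give a rough sense of the declarative approach, here is a minimal Python sketch: it composes a hypothetical site document as YAML and submits it to a deployment workflow endpoint. The schema, field names and URL are illustrative assumptions, not Airship’s actual manifest format or API.

```python
# Minimal sketch of a declarative deployment document, submitted to a single
# workflow API. The schema, fields and endpoint below are hypothetical,
# for illustration only -- not Airship's real manifest format.
import yaml
import requests

site_manifest = {
    "schema": "example/SiteDefinition/v1",           # hypothetical schema name
    "metadata": {"name": "edge-site-01"},
    "spec": {
        "kubernetes_version": "1.12",
        "control_plane_nodes": 3,
        "worker_nodes": 10,
        "charts": ["openstack-keystone", "openstack-nova"],  # Helm charts to apply
    },
}

# Serialize the desired state and hand it to the (placeholder) workflow endpoint;
# the platform's job is then to reconcile the running environment against it.
document = yaml.safe_dump(site_manifest)
response = requests.post(
    "https://airship.example.com/api/v1.0/deployments",   # placeholder URL
    data=document,
    headers={"Content-Type": "application/x-yaml"},
)
response.raise_for_status()
```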
AT&T itself now uses Airship to manage over 20 different environments, all of which can now be quickly replicated. It will be the tool the company uses to roll out its 5G network next year.
Fifth Generation Wireless (#5G) is the first generation of mobile wireless services developed on, and running in, the cloud. @ATT is rolling out its 5G network later this year, which will run on @OpenStack and be deployed by the @airshipproject — @AmyWheelus #OpenStackSummit pic.twitter.com/9ZIsauT4Li
— The New Stack (@thenewstack) November 14, 2018
The full 1.0 production release of Airship, an OpenStack pilot project, is expected to be launched at the next OpenStack event, the Open Infrastructure Summit in Denver next May.
Zuul
We first covered the open source Zuul CI/CD platform at the OpenStack Summit in Vancouver last May, and it continues to gain traction outside of its original mission of managing the complex codebase of OpenStack itself. Zuul offers an advantage over traditional continuous integration (CI) systems such as Jenkins in that it can support the coordinated development of code across multiple repositories.
BMW, for instance, has adopted Zuul to keep track of the software it is developing for its automobiles, noted BMW Software Engineer Tobias Henkel in a presentation. Each automobile now has multiple software programs to drive everything from anti-lock brakes to the in-dashboard entertainment system. In many cases, operations need to be coordinated across systems within the automobile, as well as with controls back at a data center.
All this means that development within BMW must be done by multiple teams working in parallel, teams that must coordinate with one another while not getting in each other’s way, Henkel said.
Zuul’s automated gating has been “a game changer” for BMW, he explained. Testing changes on a codebase with a traditional CI platform would require “serial gating,” and multiple re-bases, limiting the number of tests that could be done in a day, as each team tests its changes while the others wait before merging their own changes. Zuul’s automated gating “parallelizes this process so that each change is tested with the speculative future state, along with all the patches that are queued up in front of it. If all go green, they are merged,” he explained.
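The gating model Henkel describes can be illustrated with a short, simplified Python sketch (not Zuul’s actual code): each queued change is tested against the state that would exist if everything ahead of it had already merged, and those tests run in parallel.

```python
# Toy model of speculative gating: every queued change is tested against the
# speculative future state (base plus all changes ahead of it), in parallel.
from concurrent.futures import ThreadPoolExecutor

def run_tests(codebase: frozenset) -> bool:
    """Stand-in for a real CI job; any combination without a bad change passes."""
    return "breaks-build" not in codebase

def gate(base: frozenset, queue: list) -> list:
    # Build the speculative state for each change in queue order.
    speculative_states = []
    merged = set(base)
    for change in queue:
        merged.add(change)
        speculative_states.append(frozenset(merged))

    # Test all speculative states at once instead of one change at a time.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_tests, speculative_states))

    # A change merges only if it and every change ahead of it passed.
    mergeable = []
    for change, passed in zip(queue, results):
        if not passed:
            break
        mergeable.append(change)
    return mergeable

print(gate(frozenset({"main"}), ["change-A", "change-B", "change-C"]))
# -> ['change-A', 'change-B', 'change-C']
```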
Zuul also drives the build system for the open source virtual networking software Tungsten Fabric (formerly OpenContrail), according to another presentation by CodiLime, which maintains Tungsten’s packages. The code for Tungsten Fabric is spread across about 30 repositories, and the final package is delivered as a set of containers. CodiLime uses Zuul strictly as a build tool, one that can run a single pipeline to build multiple packages for different platforms (CentOS, Windows, etc.) as well as different configurations aimed at Kubernetes, OpenStack, and the public clouds. The pipeline can be triggered on a schedule (for daily builds), on demand, or whenever new code is committed.
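The fan-out CodiLime describes could look roughly like the Python sketch below: one pipeline run produces artifacts for every platform and target combination in parallel. The platform and target names come from the article; the build step itself is a placeholder.

```python
# Rough sketch of a single pipeline fanning out builds across platforms and
# target configurations in parallel. build_package is a stand-in for the real
# compile-and-package job.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

PLATFORMS = ["centos", "windows"]
TARGETS = ["kubernetes", "openstack", "public-cloud"]

def build_package(platform: str, target: str) -> str:
    # Placeholder: compile sources from the ~30 repositories, assemble containers.
    return f"tungsten-fabric-{platform}-{target}.tar.gz"

def run_pipeline() -> list:
    # One pipeline run, many artifacts, built concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda combo: build_package(*combo),
                             product(PLATFORMS, TARGETS)))

if __name__ == "__main__":
    # Triggered nightly, on demand, or on every new commit.
    for artifact in run_pipeline():
        print(artifact)
```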
Kata Containers
Inducted a year ago as an OpenStack pilot project, Kata Containers has drawn steady interest ever since for its unique construct, one that offers containers that are as performant as ordinary containers yet as secure as traditional virtual machines. This year at the summit, the talks about Kata were some of the most heavily attended.
The technology uses hardware-backed virtualization to provide the boundary between containers, which sets the stage for multi-tenancy so that service providers or large organizations can securely segment their users’ container sets from each other.
Much of the work on Kata Containers was carried out by Hyper HQ to build out Hyper.sh, a Docker-compatible serverless container hosting service. At the summit, Hyper HQ co-founder and Chief Technology Officer Xu Wang offered some benchmarks that characterized the performance of Kata Containers, both against the Open Container Initiative’s runC — a lightweight container runtime used by the industry as a standard specification for interoperability — and against gVisor, a container runtime sandbox created by Google to offer stronger security controls around containers.
Like gVisor, Kata Containers offers a smaller attack surface than the typical container. Unlike gVisor, however, which supports only a subset of Linux system calls, Kata Containers does not limit the actions a container can take with its operating system kernel. And when it comes to performance, Wang showed Kata Containers can rival the CPU, I/O, and network responsiveness of a stock runC runtime, whereas gVisor struggled to maintain parity and suffered particularly from slow network performance.
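For a sense of how a workload gets pointed at Kata, here is a small Python sketch using the Docker SDK. It assumes Kata Containers is already installed and registered with Docker under the runtime name kata-runtime; that local setup, and the image chosen, are assumptions for illustration rather than anything from the presentation.

```python
# Minimal sketch: run an ordinary container image under the Kata runtime.
# Assumes Kata Containers is installed and registered with Docker as "kata-runtime".
import docker

client = docker.from_env()

# The image is unchanged; only the runtime selection differs, so the workload
# gets its own lightweight VM boundary rather than sharing the host kernel directly.
container = client.containers.run(
    "nginx:alpine",
    runtime="kata-runtime",   # assumed runtime name from the local Docker config
    detach=True,
)
print(container.short_id)
```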
Ironic
All the technologies described above were created apart from OpenStack itself (even if, in many cases, they were created to aid in the deployment and management of the OpenStack stack). But even as development of the core OpenStack components starts to cool off, projects to extend the software for other uses continue to proliferate.
One surprising use of OpenStack of late has been to provision bare metal servers, namely through the appropriately-named Ironic OpenStack project. In his keynote, OpenStack Foundation Executive Director Jonathan Bryce noted that use of Ironic within the OpenStack community has grown from 9 percent of users in 2016 to 24 percent in 2018.
Ironic offers an API and a set of plug-ins for managing bare metal machines, using protocols such as PXE and IPMI to control them. For cloud service providers, this capability has turned out to be immensely useful, because it allows them to more easily offer bare-metal servers to their customers, noted Alain Fiocco, chief technology officer of European infrastructure provider OVH, in a press conference. OVH offers bare metal resources through its own custom API, but moving to Ironic will give end-users a standardized way to procure bare metal servers with a cloud-pricing model. It also could help OVH more easily provision these servers for internal operations, he said.
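As a rough illustration of what that standardized interface looks like from a client’s point of view, here is a short Python sketch using the OpenStack SDK’s bare metal (Ironic) API. The cloud name and node name are placeholders, and this is not OVH’s actual integration.

```python
# Sketch: list the bare metal nodes Ironic manages, then ask it to deploy one.
# Cloud and node names are placeholders; credentials come from clouds.yaml.
import openstack

conn = openstack.connect(cloud="example-cloud")

# Enumerate registered machines with their power and provisioning state.
for node in conn.baremetal.nodes():
    print(node.name, node.power_state, node.provision_state)

# Request deployment of one node; Ironic handles PXE booting the image and
# IPMI power control behind the scenes.
node = conn.baremetal.find_node("example-node-01")
if node is not None:
    conn.baremetal.set_node_provision_state(node, "active")
```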
And its use is not confined to just service providers. “Ironic has always been a very operator-driven project,” noted Julia Kreger, a Red Hat principal software engineer who is the team lead for Ironic. About 13 percent of code contributions come from operators and administrators, who want to provide in-house hosting.
Project Cyborg
Another emerging use-case for OpenStack has been for running large machine-learning jobs, which require large numbers of processors to do intensive modeling. In order to wrest better performance for this large-scale work, researchers are turning to alternative processor architectures, most notably GPUs (for vector processing) and FPGAs, which can be reconfigured on the fly for particular tasks.
Preparing jobs for such hardware accelerators, as they are also known, can be tedious work. Cyborg (previously called Nomad) provides an easy way to harness these processors through OpenStack’s Nova compute resource interface. In effect, OpenStack provides FPGAs as a bare-metal-speed, command line-driven “device-as-a-service,” noted Melissa Evers-Hood, director of edge and cloud orchestration stacks at Intel, which has invested in Cyborg.
For the opening day of keynotes, OpenStack Foundation Chief Operating Officer Mark Collier offered a demo of Cyborg driving FPGAs for real-time sentiment analysis of a video of people talking. Through a Nova command line, the user can find available FPGAs, upload a bitstream to them and execute the model inference work. The demo analyzed a conversation between former U.S. President Barack Obama and former White House Chief of Staff John Podesta.
In addition to ML tasks, OpenStack-harnessed FPGAs could pave the way for more efficient video transcription and network function virtualization.
StarlingX
Another emerging market that OpenStack is starting to address is edge computing, where some computational resources are placed at the edge of a network rather than back in the data center. The idea is that an edge network can be more responsive to end-users, and can collect and curate data from, and control, Internet-of-Things end nodes. Such potential gains may be offset, however, by the additional complexity that typically comes with a distributed, multi-tiered architecture.
Introduced last month, StarlingX aims to cut this complexity. Based on an opinionated OpenStack distribution that has been slimmed down and outfitted with new capabilities, the package is designed to run in these potentially smaller environments. The motivation behind StarlingX is to provide fully autonomous computing at the edge, a platform that can run with minimal oversight. It was designed to run on as little as two cores, offering a complete control plane as well as the ability to host VM-based applications.
“This does everything from initial configuration and installation of a complete cloud, to adding APIs for hardware and software inventory, for fault and alarm handling management, the detection and recovery of hardware and process failures and zero-impact orchestration of patching and upgrades,” said Intel OpenStack developer Dean Troyer.
The OpenStack Foundation paid for the reporter’s travel and lodging to attend this conference.
The OpenStack Foundation and Red Hat are sponsors of The New Stack.
Feature art: Street art from Dead Chicken Alley, Berlin. All photos by Joab Jackson.