TripleO Liberty Recap

Author: slagle
Source: Planet OpenStack

It was a great OpenStack summit in beautiful Vancouver!

We had a lot of really good discussions about TripleO. I’d like to recap some of the things that were covered. Before I get into that though I wanted to update everyone on the state of TripleO in general.

Over the Kilo cycle, we’ve added a bunch of features to TripleO. See my last blog post for more details of what was added. TripleO has been able to deploy OpenStack clouds for a while, but the community has really been focusing on making the tooling flexible enough to deploy across a wide variety of production environments. We’ve made a lot of great progress recently. At the summit, a couple of clear messages shone through to me concerning TripleO.

First, more and more people are using Heat to deploy complex cloud applications. For TripleO, this sits really well. There’s no reason that Heat can’t deploy something as complex as OpenStack itself, and that’s being proved every day. If you back it with the solid puppet modules from the OpenStack Puppet community, the solution is even more compelling.

Secondly, Ironic is growing faster than ever. There’s been massive adoption around using Ironic for deploying to and managing baremetal instances in your cloud infrastructure. Additional features like ready state and introspection are great to see as well.

It follows then that being based on Heat and Ironic is good for TripleO. A lot of other OpenStack projects are in use as well, but Heat and Ironic are really the ones that are validating the TripleO architecture decisions.

I think the future of TripleO is very sound as we continue to build on using the solid foundation of OpenStack itself. OpenStack isn’t actually hard to install, but it is hard to deploy and manage as an application. Fortunately, the services and projects that the OpenStack community has been building are more than up to solving that complex challenge for any application.

With that said, I’d like to cover some of what was discussed at the summit…

tripleo and instack integration

In RDO, various contributors have done a bunch of work to make TripleO more consumable overall. A lot of this work is based on the instack tooling, which offers a different TripleO workflow than devtest. There is also upstream work that isn’t being consumed or supported like it once was (most of the bash elements, various pieces from devtest/tripleo-incubator, and Ubuntu support to a degree). Given that, we talked about more or less refreshing what we’re doing and focusing directly on the instack tooling. As part of this, we’d move these projects under the TripleO umbrella, eventually use them to drive CI, and then deprecate the unsupported and unmaintained pieces going forward. These projects are already mostly not RDO specific, and where a desire exists to add support for different tooling or distros, that support can be added based on interest.

tripleo-heat-templates

We had a session to discuss the direction of the tripleo-heat-templates repository. We decided to continue building on parameter_defaults for the different nested stacks in TripleO that are enabled via the Heat resource registry, rather than requiring those parameters to be bubbled up into the top level templates. This has some implications for Tuskar, which doesn’t currently handle setting parameter_defaults. However, those values are easy enough to pass to Heat by sending additional environment files to the Heat API after the templates have been exported from Tuskar. In the future, Tuskar may add the ability to handle parameter_defaults in some fashion.
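To sketch what that looks like (the file name and parameter names here are just illustrative, not the actual template interface), an extra environment file passed to Heat might contain:

```yaml
# extra-config.yaml -- an additional environment file sent to the Heat
# API after the plan templates are exported from Tuskar. The parameter
# names below are illustrative, not the real nested-stack parameters.
parameter_defaults:
  # Applied to any nested stack that declares these parameters,
  # without requiring them to be bubbled up to the top level template.
  NeutronTunnelTypes: gre
  CephStorageCount: 3
```

You’d then include it with an extra `-e extra-config.yaml` when creating the stack, alongside the environment exported from Tuskar.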

In cases where it’s useful to represent parameters in the top level templates, we decided that it makes sense to allow these parameters to be present even if they aren’t implemented or handled in the element based templates. The plan is to deprecate the element based templates as we move forward with building out the puppet based ones instead.

We also agreed that the different optional environments that you can choose to include in your overcloud should be committed directly to tripleo-heat-templates. This is currently already in progress in the environments/ directory in the repository for things like Ceph, different Neutron configurations, HA, etc.

Since we’re going to have several different environments, we discussed some basic best practices to hopefully avoid some complexity. First, a specific ordering or chaining of environments shouldn’t be required; each should be usable independently on its own. We should also avoid parameter name collisions across the parameter_defaults in each of the environment templates.
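Putting those pieces together, a hypothetical optional environment committed under environments/ might look like this (the file, resource, and parameter names are my own illustration, not the actual contents of the repository):

```yaml
# environments/ceph.yaml -- illustrative sketch of an optional
# environment, modeled on the environments/ directory in
# tripleo-heat-templates. Exact names here are assumptions.
resource_registry:
  # Swap the no-op Ceph storage nested stack for a real implementation.
  OS::TripleO::CephStorage: ../puppet/ceph-storage.yaml
parameter_defaults:
  # Parameters are prefixed with "Ceph" so they can't collide with
  # names set by other optional environments.
  CephStorageCount: 3
  CephDefaultPoolSize: 3
```

Because the environment carries its own registry entries and namespaced defaults, it can be passed to Heat on its own, in any order relative to the other optional environments.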

containers

There was a lot of talk about containers at this summit. TripleO is in the process of adding support for deploying a container based Overcloud. The containers being used are from the Kolla project in OpenStack, which is working towards containerizing all of the OpenStack services. The implementation is based on docker-compose and is orchestrated via Heat SoftwareDeployments, just like the puppet implementation. The work will focus on the compute node initially, and then expand to include the controller nodes as well.
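For a rough idea of the shape of this (the image names and settings below are illustrative, not the actual Kolla/TripleO implementation), a docker-compose description of a couple of compute node services might look like:

```yaml
# Illustrative docker-compose sketch for a containerized compute node.
# Image names are assumptions; the Kolla project publishes the real
# images, and Heat SoftwareDeployments would drive docker-compose.
nova-compute:
  image: kollaglue/centos-rdo-nova-compute
  privileged: true        # needed for managing instances on the host
  net: host               # share the host network namespace
  volumes:
    - /var/lib/nova:/var/lib/nova
libvirt:
  image: kollaglue/centos-rdo-nova-libvirt
  privileged: true
  net: host
```

The key point is that Heat stays in charge of orchestration, with docker-compose handling the per-node container lifecycle, mirroring how the puppet implementation is driven today.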

diskimage-builder functional testing

There was a session about some of the new diskimage-builder functional testing patches that use test elements. We also discussed ways to possibly test boot images built from diskimage-builder. From a tripleo-ci perspective, we could add some cross distro coverage by building an Ubuntu image to use as the user image in one of the Fedora jobs, and then booting that on the deployed Overcloud. Such a job could end up as the only tripleo-ci gating job for diskimage-builder, since running one TripleO job provides more or less the same coverage as running all of them would.

network architecture

During one of the TripleO sessions, we reviewed and discussed the proposed network architecture improvements for TripleO. This architecture will allow TripleO to define multiple networks and bind individual services to them, in order to isolate internal/external API traffic, data, storage, and storage management traffic. There was broad agreement that this sort of isolation is needed for TripleO to deploy a production cloud.
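As a sketch of the idea (the parameter and network names here are assumptions for illustration, not the agreed-upon interface), an environment could map services to their networks along these lines:

```yaml
# Illustrative network isolation sketch -- a map from service to the
# network its endpoint binds to. All names here are hypothetical.
parameter_defaults:
  ServiceNetMap:
    NovaApiNetwork: internal_api     # internal API traffic
    HorizonNetwork: external         # externally facing traffic
    GlanceApiNetwork: storage        # image/storage data
    CephClusterNetwork: storage_mgmt # storage replication traffic
```

Each service would then render its bind address from the network it’s assigned to, keeping API, data, and storage traffic on separate segments.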

Ok, from a summary perspective, I think that’s enough for a high level overview :-). There are some more details captured in the session etherpads. If I can help explain anything further, please ask, or swing by #tripleo!
