DevOps, Continuous Integration, and Continuous Delivery

Author: Steve Gordon
Source: Planet OpenStack

As we all turn our eyes towards Tokyo for the next OpenStack Summit, the time has come to make your voice heard about which talks you would like to attend while you are there. Remember, even if you are not attending the live event, many sessions are recorded and can be viewed later, so cast your votes and influence the content!

Let me suggest a couple of talks under the theme of DevOps, Continuous Integration, and Continuous Delivery. Remember to vote for your favorites by midnight Pacific Standard Time on July 30th, and we will see you in Tokyo!

Continuous Integration is an important topic, as the amount of effort invested by the OpenStack CI team shows. OpenStack deployments all over the globe cover a wide range of use cases (NFV, hosting, extra services, advanced data storage, etc.). Most of them come with their own technical specifics, including particular hardware, uncommon configurations, network devices, and so on.

This makes these OpenStack installations unique and hard to test. If we want them to fit properly into the CI process, we need new methodologies and tooling.

Rapid innovation, changing business landscapes, and new IT demands force businesses to make changes quickly. The DevOps approach is a way to increase business agility through collaboration, communication, and integration across different teams in the IT organization.

In this talk we’ll give you an overview of a platform called Software Factory that we develop and use at Red Hat. It is an open source platform inspired by OpenStack’s development workflow that embeds, among other tools, Gerrit, Zuul, and Jenkins. The platform can be easily installed on an OpenStack cloud thanks to Heat, and can rely on OpenStack to perform CI/CD of your applications.

One of the best success stories to come out of OpenStack is the Infrastructure project. It encompasses all of the systems used in the day-to-day operation of the OpenStack project as a whole. More and more other projects and companies are seeing the value of the OpenStack git workflow model and are now running their own versions of OpenStack continuous integration (CI) infrastructure. In this session, you’ll learn the benefits of running your own CI project, how to accomplish it, and best practices for staying abreast of upstream changes.

The need to provide better quality while keeping up with the growing number of projects and features led Red Hat to adapt its processes. One approach we took, and which we are progressively spreading, was moving from a three-team process (Product Management, Engineering, and QA) to feature teams, each embedding all the actors of the delivery process.

We deliver a very large number of components that need to be engineered together to deliver their full value, and which require delicate assembly as they work together as a distributed system. How can we do this in a time box without giving up on quality?

Learn how to get a Vagrant environment running as quickly as possible, so that you can start iterating on your project right away.

I’ll show you an upstream project called Oh-My-Vagrant that does the work and adds all the tweaks needed to glue different Vagrant providers together seamlessly.

This talk will include live demos of building Docker containers, orchestrating them with Kubernetes, adding in some Puppet, all glued together with Vagrant and Oh-My-Vagrant. Getting familiar with these technologies will help when you’re automating OpenStack clusters.

In the age of service, core builds become a product in the software supply chain. Core builds shift from a highly customized stack that meets ISV software requirements to an image that provides a set of features. IT organizations, in turn, shift to become product-driven organizations.

This talk will dive into the organizational and tooling changes necessary to provide a core build in the age of service and service contracts.

http://crunchtools.com/files/2015/07/Core-Builds-in-the-Age-of-Service.pdf

http://crunchtools.com/core-builds-service/

We will start with a brief introduction to the OpenStack services we will use to build our app. We’ll cover all of the different ways you can control an OpenStack cloud: a web user interface, the command line interface, a software development kit (SDK), and the application programming interface (API).
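To make the layering concrete, here is a minimal sketch of the lowest of those layers, the raw Identity (Keystone) v3 API that the web UI, CLI, and SDKs all sit on top of. The endpoint URL, user name, password, and project name below are placeholders for illustration, not real values; the request is built but deliberately not sent.

```python
import json
import urllib.request

def keystone_auth_body(username, password, project, domain="Default"):
    """Build the JSON body for a Keystone v3 password-authentication request."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to a project so it can be used with other services.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

def token_request(auth_url, body):
    """Prepare (but do not send) the POST request that fetches a token."""
    return urllib.request.Request(
        auth_url + "/v3/auth/tokens",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder credentials and endpoint -- substitute your own cloud's values.
body = keystone_auth_body("demo", "secret", "demo-project")
req = token_request("http://controller:5000", body)
print(req.full_url)  # http://controller:5000/v3/auth/tokens
```

Every higher-level tool, from the `openstack` CLI to the SDKs, ultimately performs this same token exchange before calling the other service APIs.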

After this brief introduction to the tools we are going to use in our hands-on lab, we’ll get our hands dirty and build an application that makes use of an OpenStack cloud.

This application will utilize a number of OpenStack services via an SDK to get its work done. The app will demonstrate how OpenStack services can be used as a base for a working application.
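The multi-service pattern such an app follows can be sketched as below. The helper works against any object exposing `compute` and `network` attributes in the style of the openstacksdk `Connection` object (`conn.compute.servers()`, `conn.network.networks()`), so we can exercise it here with a stand-in object instead of a real cloud connection; the server and network names are made up for the example.

```python
from types import SimpleNamespace

def summarize_cloud(conn):
    """Return (server names, network names) from an SDK-style connection.

    On a real cloud this would be called with the result of
    openstack.connect(); here any object with the same shape works.
    """
    servers = [s.name for s in conn.compute.servers()]
    networks = [n.name for n in conn.network.networks()]
    return servers, networks

# Stand-in mimicking the shape of an openstacksdk connection, so the
# sketch runs without a cloud:
fake = SimpleNamespace(
    compute=SimpleNamespace(servers=lambda: [SimpleNamespace(name="web-1")]),
    network=SimpleNamespace(networks=lambda: [SimpleNamespace(name="private")]),
)
print(summarize_cloud(fake))  # (['web-1'], ['private'])
```

The point of the lab is exactly this shape: one connection object, many services hanging off it, each reached through the same authenticated session.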
