Audrey Project Tracking Wiki

Please see aeolusproject.org for more information about the Audrey project and its related pieces.

High level view

(high-level architecture diagram not reproduced here)

Phases

We’re breaking down the implementation of Audrey into a number of phases. Each phase is meant to capture as much information as we know at any given time. It’s pretty clear that each successive phase becomes less well-defined. The hope is that we will learn more as each phase is implemented and be able to fill in the details of the later phases.

Phase I: Audrey Standalone

Goal

Get a single instance running in the cloud.

Description

Phase I consists of getting a single Template-Assembly-Deployable combination running in the EC2 cloud. There are several pieces to making this happen. Many of the pieces are simply dependencies that are outside the scope of Audrey. The Audrey-specific pieces are:

  1. Create config engine scripts to manage a service
    • The specific config engine we’re discussing for Phase I is puppet.
    • The puppet scripts we’re focusing on for Phase I are for configuring sshd
  2. Create an RPM for the config engine scripts
    • This RPM will be specified in the Template as a required package
  3. Write a program, audrey-instance-config, that runs puppet
    • This program will be installed in the image at bake time
  4. Create an RPM for the audrey-instance-config program
    • This RPM will be specified in the Template as a required package
  5. Update conductor-audrey.rb script
    • The conductor-audrey.rb script is the portion of code that will eventually become part of conductor, generating user data information for instances
    • This script needs to generate a YAML file that can be read by the audrey-instance-config program and interpreted by puppet to configure an instance (a sketch of this flow follows the list)
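
To make the hand-off concrete, here is a minimal sketch of what audrey-instance-config could look like. The YAML layout and the file path are assumptions for illustration only; the real format is whatever Phase I settles on. The only firm points taken from the list above are that the input is a YAML file produced by conductor-audrey.rb and that the program drives puppet.

    #!/usr/bin/env ruby
    # audrey-instance-config (sketch) -- reads the YAML written by
    # conductor-audrey.rb and applies the named puppet classes.
    #
    # Hypothetical input (/etc/audrey/config.yaml):
    #   classes:
    #     sshd:
    #       parameters:
    #         sshd_port: 22
    require 'yaml'

    config = YAML.load_file(ARGV[0] || '/etc/audrey/config.yaml')

    (config['classes'] || {}).each do |klass, data|
      params = (data || {}).fetch('parameters', {})
      # Expose each parameter as a top-scope puppet variable, then
      # include the class and hand the manifest to `puppet apply -e`.
      assignments = params.map { |name, value| "$#{name} = '#{value}'" }.join("\n")
      manifest    = "#{assignments}\ninclude #{klass}\n"
      system('puppet', 'apply', '-e', manifest) or abort "puppet apply failed for #{klass}"
    end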

Tasks

Feature #343

Expected outcome

  • A single instance is running in EC2
  • All the RPMs specified in the Template are installed in the instance
  • The audrey-instance-config program has run puppet and configured sshd

Phase II: Simple config server

Goal

Create a simple configuration server that can drive the configurations of several instances spun up as part of a deployable.

Description

In order to really show what Audrey is meant to do, we need to be able to configure several instances that may have dependencies on each other. The key function here is to be able to cross-pollinate the instance configurations in order to resolve inter-instance relationships and dependencies.

An example is the ability to spin up an application server and a database server as part of the same deployable, then have the application server’s configuration learn of the existence of the database server and modify its configuration accordingly.

The flow will look something like:

  1. The jobs for both instances are created by the audrey.rb script (tomorrow, this will be conductor)
    • This job generation also creates information to be POSTed to the config server to pre-seed the instance configurations
  2. When the instances spin up, they call Amazon for “user data”. The user data consists of:
    • The instance’s UUID
    • The ConfigServer’s IP address
  3. The instance phones home to the ConfigServer for the instance configuration data associated with the UUID
    1. The config server receives the GET request, and jots down the requester’s IP address
    2. The config server applies the IP address of the requester to any instance’s configuration waiting for that data
      • this will be denoted by a “required” parameter in an instance’s configuration in the deployable that expects that template-type’s IP address
    3. The config server examines the configuration data associated with the UUID
      • if the configuration data is “complete”, it can be sent back to the instance (HTTP 200)
      • if the configuration data is “incomplete”, it cannot be sent back to the instance (HTTP 404)
        • in this case, the instance enters a retry state where it will ask for the configuration data on a recurring basis until the request is successfully fulfilled (i.e., rinse and repeat; a sketch of this loop follows the list)
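
The instance-side half of this flow is small enough to sketch. Only the UUID and the config server address come from the EC2 user data in step 2; the endpoint path, the retry interval, and the local file path are assumptions for illustration.

    #!/usr/bin/env ruby
    # Instance "phone home" loop (sketch). Retries on HTTP 404 until the
    # config server says the configuration data is complete (HTTP 200).
    require 'net/http'
    require 'uri'

    uuid, config_server = ARGV          # both delivered via EC2 user data
    uri = URI("http://#{config_server}/configs/#{uuid}")   # hypothetical path

    loop do
      response = Net::HTTP.get_response(uri)
      case response.code
      when '200'
        File.write('/etc/audrey/config.yaml', response.body)  # hand off to audrey-instance-config
        break
      when '404'
        sleep 30   # configuration still incomplete -- rinse and repeat
      else
        abort "unexpected response from config server: #{response.code}"
      end
    end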

In this example, when the database server phones home to the config server, the config server will populate the app server’s configuration with the database server’s IP address (thus “completing” the app server’s configuration data). The config server can immediately ship the configuration data for the database server. When the app server phones home, the config server will inspect the app server’s configuration data to ensure it’s complete (magic hand wavy? maybe a little). If the database server hasn’t phoned home yet, the configuration data will still be incomplete, in which case the app server will enter a waiting state and try again. If the database server has phoned home already, the configuration data will be complete and can be returned to the app server.
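
For comparison, here is a sketch of the config-server side of the same exchange. Sinatra is used only as a convenient stand-in; the description above does not name a framework, a storage mechanism, or a payload format, so all three are assumptions.

    # Simple config server (sketch; Sinatra assumed here only for brevity).
    require 'sinatra'
    require 'yaml'

    # uuid => { 'params'   => { 'sshd_port' => 22, ... },
    #           'required' => { 'db_ip' => '<uuid whose address fills it>' } }
    CONFIGS = {}

    # Step 1: conductor-audrey.rb (later, conductor) pre-seeds each
    # instance's configuration by POSTing a YAML document per UUID.
    post '/configs/:uuid' do |uuid|
      CONFIGS[uuid] = YAML.load(request.body.read)
      204
    end

    # Step 3: an instance phones home for its configuration data.
    get '/configs/:uuid' do |uuid|
      cfg = CONFIGS[uuid] or halt 404

      # 3.1/3.2: jot down the requester's IP and apply it to any other
      # configuration that is still waiting for this instance's address.
      CONFIGS.each_value do |other|
        (other['required'] || {}).delete_if do |param, wanted_uuid|
          other['params'][param] = request.ip if wanted_uuid == uuid
          wanted_uuid == uuid
        end
      end

      # 3.3: complete -> HTTP 200 with the data; incomplete -> HTTP 404,
      # and the instance retries later.
      if (cfg['required'] || {}).empty?
        content_type 'text/x-yaml'
        cfg['params'].to_yaml
      else
        halt 404
      end
    end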

Phase III: Matahari

Goal

Use matahari to expose information about a guest.

Description

Phase III requires that matahari be installed on a guest and know how to expose data from the guest. We are not tackling how to teach matahari in any automated way about what should be exposed from the guest (however, if we’re able to tackle this concept during Phase II, bonus!).

Expected outcome

  • The EC2 instance now has matahari (and dependencies) installed
  • Matahari knows how to expose the “sshd-port” configuration parameter
    • How do we validate this?

Phase IV: QMF

Goal

Use QMF to send messages generated by matahari. (Is this really Phase II-b?)

Description

Phase IV extends Phase II by moving the data a guest exposes through matahari one step closer to a “config server” (there should probably be a reference here to a grandiose picture describing the entire infrastructure).

Expected outcome

  • The “sshd-port” configuration parameter can be dropped on the bus
    • How do we validate this?

Phase V: Puppet module metadata

Goal

Decide how to actually deliver puppet module metadata

Description

Decide where the puppet module metadata should actually live. Today (during the implementation of Phase I), this metadata lives in the *-puppet.rpm packages (i.e., sshd-puppet.rpm contains the sshd-puppet module metadata).

The underlying issue at hand for Phase V is that the intent is to have users create Template files via a UI of some kind (probably in Cloud Engine). In order to build a Template, a user needs to understand which services can be part of a Template. The descriptions of the services made available by puppet modules are part of each puppet module’s metadata, so the user has to be able to use that metadata to make selections about which services should comprise a Template. If the puppet module metadata is captured in the {service}-puppet.rpm packages, then the web application presenting the Template-building UI has to unpack the {service}-puppet.rpm packages, extract the puppet module metadata for those modules, and present this information back to the user.

Also, this raises the question, “Does the UI really have to parse each and every RPM in the Template package manifest?” If the puppet module metadata is captured in the {service}-puppet.rpm packages, the answer could be yes (unless we can expect to control the naming convention of each puppet module package).

It starts to make sense to present the puppet module metadata outside of the puppet module package. Or, it may make sense to provide the puppet modules outside of RPMs (e.g., on a mountable remote filesystem), in which case the puppet module metadata can live alongside the modules.
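
As a point of comparison, the “metadata lives with the modules” option is cheap for the UI to consume. The sketch below assumes a hypothetical mount point and a per-module metadata.yaml file; deciding the real location and format is exactly the point of this phase.

    # Sketch: collect puppet module metadata from a mounted module tree so
    # the Template-building UI can list available services without
    # unpacking any {service}-puppet.rpm. Mount point and file name are
    # assumptions, not a decided layout.
    require 'yaml'

    MODULE_ROOT = '/mnt/puppet-modules'

    def available_services(root = MODULE_ROOT)
      Dir.glob(File.join(root, '*', 'metadata.yaml')).map do |path|
        meta = YAML.load_file(path)
        { 'module'      => File.basename(File.dirname(path)),
          'description' => meta['description'],
          'parameters'  => meta.fetch('parameters', {}) }
      end
    end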

Expected outcome

Have a way to provide the puppet module metadata of included puppet modules to the UI so that users can create template files without having to parse each and every RPM in the Template package manifest.