Using Juju LXC For Local Development

Development can be sped up by using local Juju deployments with LXC.

Setting up Juju LXC

You’ll need a ~/.juju/environments.yaml file with a “local” entry like:

default: local
environments:
  local:
    type: local
    default-series: precise
    lxc-clone: true
    lxc-clone-aufs: true
    admin-secret: secret
    apt-http-proxy: http://10.0.3.1:8000

Install squid-deb-proxy. Then create /etc/squid-deb-proxy/mirror-dstdomain.acl.d/20-juju-local with the following contents:

ppa.launchpad.net
private-ppa.launchpad.net

Restart the squid-deb-proxy service. All apt downloads will now be cached by the proxy server running on your local machine.
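For example, on an Ubuntu host:

sudo service squid-deb-proxy restart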

Next, you will need swift credentials set up. This ensures the services still have access to object storage (Swift), virtual machine image storage (Glance), and virtual machine instantiation (Nova). An example .hpcloud-rc file should look like:

# FOR SWIFT
export JUJU_ENV=local
export OS_USERNAME="<your email address>"
export OS_TENANT_NAME="<your tenant name from horizon>"
export OS_PASSWORD=<something special>
export OS_AUTH_URL="https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0"
export OS_REGION_NAME=region-a.geo-1

# FOR GLANCE (same credentials as above by default)
export GLANCE_OS_USERNAME="$OS_USERNAME"
export GLANCE_OS_AUTH_URL="$OS_AUTH_URL"
export GLANCE_OS_REGION_NAME="$OS_REGION_NAME"
export GLANCE_OS_TENANT_NAME="$OS_TENANT_NAME"
export GLANCE_OS_PASSWORD="$OS_PASSWORD"

# FOR OAUTH TOKENS
export CI_LAUNCHPAD_PPA_OWNER=<lp login-id>
export CI_LAUNCHPAD_USER=<lp login-id>
export CI_OAUTH_CONSUMER_KEY="ci-airline"
export CI_OAUTH_TOKEN=<Please see Note below+++>
export CI_OAUTH_TOKEN_SECRET=<Please see Note below+++>

Note

+++ Use the OAuth values generated during OAuth setup to fill in these variables.
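Remember to source the rc file in every shell where you run juju or the deployment scripts (the file can live anywhere; ~/.hpcloud-rc is assumed here):

source ~/.hpcloud-rc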

If your Swift server only supports the 1.0 auth protocol (TempAuth does not support 2.0), you additionally need to set some $ST_* variables for python-swiftclient, and include the tenant/project name in the user name:

export OS_USERNAME="<project:username>"
export OS_TENANT_NAME="<project>"
export OS_PASSWORD="<password>"
export OS_AUTH_URL="http://your.swift.server:8080/auth/v1.0"
export OS_REGION_NAME=

# env for python-swiftclient
export ST_AUTH=$OS_AUTH_URL
export ST_USER=$OS_USERNAME
export ST_KEY=$OS_PASSWORD

# GLANCE_*, CI_* as above
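A quick way to verify the credentials is to query the account; the swift CLI from python-swiftclient reads the ST_*/OS_* variables from the environment:

swift stat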

If you use the Python virtualenv produced by testing/venv.py, you must additionally set export EPHEMERAL_CLOUD_NET_ID=.. in that rc file; neutron is broken in the venv, so it cannot determine the net ID by itself.
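The net ID can be looked up outside the venv, where the neutron client still works; which network to pick depends on your cloud setup:

neutron net-list
# Copy the ID of the appropriate network into the rc file:
export EPHEMERAL_CLOUD_NET_ID=<net id from the listing>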

You’ll now have the settings in place. To iterate rapidly, it is advisable to pre-install all the dependencies in Juju’s template image so you don’t wait for them on every re-deploy. You can do that with the following simple manual hack:

juju bootstrap
juju deploy cs:ubuntu # Wait for juju status to show it deployed
juju destroy-environment --force -y local

# You'll now have an lxc container named juju-trusty-template.
# Modify it with these commands:
sudo lxc-start -d --name juju-trusty-template
sudo lxc-attach --name juju-trusty-template -- add-apt-repository -y ppa:canonical-ci-engineering/ci-airline-phase-0
sudo lxc-attach --name juju-trusty-template -- apt-get update
sudo lxc-attach --name juju-trusty-template -- apt-get install -y rabbitmq-server python-amqplib python-pip python-jinja2 mercurial git-core subversion bzr gettext python-django-south python-lazr.enum python-tastypie python-swiftclient postgresql-9.3 postgresql-contrib-9.3 python-psutil dput python-dput lazr.enum python-tz python-gnupg qemu-utils python-glanceclient python-requests python-novaclient python-psycopg2 pwgen postgresql-client gunicorn python-support pgtune postgresql-9.3-debversion postgresql-plpython-9.3 python-dnspython python3-pyramid

sudo lxc-stop --name juju-trusty-template

Alternatively, for a precise deployment:

# You'll now have an lxc container named juju-precise-template.
# Modify it with these commands:
sudo lxc-start -d --name juju-precise-template
sudo lxc-attach --name juju-precise-template -- add-apt-repository -y ppa:canonical-ci-engineering/ci-airline-phase-0
sudo lxc-attach --name juju-precise-template -- add-apt-repository -y cloud-archive:icehouse
sudo lxc-attach --name juju-precise-template -- apt-get update
sudo lxc-attach --name juju-precise-template -- apt-get install -y rabbitmq-server python-amqplib python-pip python-jinja2 mercurial git-core subversion bzr gettext python-django-south python-lazr.enum python-tastypie python-swiftclient postgresql-9.1 postgresql-contrib-9.1 python-psutil dput python-dput lazr.enum python-tz python-gnupg qemu-utils python-glanceclient python-requests python-novaclient python-psycopg2 pwgen postgresql-client gunicorn python-support pgtune postgresql-9.1-debversion postgresql-plpython-9.1 python-dnspython

sudo lxc-stop --name juju-precise-template
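Either way, you can confirm the modified template container exists and is stopped before re-bootstrapping:

sudo lxc-ls --fancy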

Host Configuration

Some additional changes are necessary on the LXC host system for the imagebuilder to work properly. With some of the changes we have planned, many of these should not be needed for much longer. Be aware that making these changes may affect other LXC containers you run on your host system.

First, ensure that the nbd module is loaded on the host. Module loading does not work inside LXC, but a module loaded on the host is available inside the containers.
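For example, to load nbd now and have it loaded on every boot (listing it in /etc/modules is one common way to persist this):

sudo modprobe nbd
echo nbd | sudo tee -a /etc/modules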

Add the following lines to /var/lib/lxc/juju-precise-template/config (or the juju-trusty-template equivalent):

# Allow mounting filesystems under LXC
lxc.aa_profile = lxc-container-default-with-mounting
# Allow full access to the block device with major number 43, which
# should be nbd (see /proc/devices)
lxc.cgroup.devices.allow = b 43:* rwm
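Before relying on the “43” above, verify the major number nbd actually has on your host:

grep nbd /proc/devices
# Expect output like "43 nbd"; if the number differs, adjust the
# lxc.cgroup.devices.allow line accordingly.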

Modify the LXC default apparmor rules to allow bind mounting filesystems under LXC. In /etc/apparmor.d/lxc/lxc-default, add the following line before the “}”:

mount options=(rw, bind, ro),

Then run:

sudo /etc/init.d/apparmor reload

Working with the code

Code modifications can be made using the following iteration:

1) bzr branch lp:uci-engine
2) cd uci-engine
3) <do changes>
4) juju destroy-environment --force -y local; juju bootstrap
5) rm -rf tmp/
6) ./juju-deployer/deploy.py # Or './juju-deployer/deploy.py branch-source-builder' to deploy only the Branch Source Builder service
7) <check the services>
8) Repeat steps 3-7 for iterative development (a wrapper for steps 4-6 is sketched below)
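Steps 4-6 are mechanical, so they lend themselves to a small wrapper script. A minimal sketch (the name redeploy.sh is a made-up convention, not part of the tree); run it from the root of your uci-engine branch:

#!/bin/sh
# redeploy.sh -- tear down, re-bootstrap, and redeploy (steps 4-6 above).
set -e
juju destroy-environment --force -y local
juju bootstrap
rm -rf tmp/
./juju-deployer/deploy.py "$@"  # pass a service name to deploy just that service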

Upgrade

Development can be sped up further using the --upgrade option of deploy.py. The generic upgrade steps are given in the Upgrade section; the following steps are specific to local development:

1) bzr branch lp:uci-engine
2) cd uci-engine
3) <do changes>
4) juju destroy-environment --force -y local; juju bootstrap
5) rm -rf tmp/
6) ./juju-deployer/deploy.py --build-only --working-dir ./tmp
7) ./juju-deployer/deploy.py   # use ./juju-deployer/deploy.py branch-source-builder to deploy only the bsb service
8) <do changes in charms for the deployed services locally>
9) ./juju-deployer/deploy.py --upgrade all # this will upgrade the modified charms and config
10) Repeat steps 8-9 for fixing and testing the charms that were already deployed
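If --upgrade also accepts a single service name (an assumption by analogy with step 7; only '--upgrade all' is documented above), the cycle can be shortened further by upgrading just the service you changed:

# Assumption: --upgrade takes a service name as well as 'all'
./juju-deployer/deploy.py --upgrade branch-source-builder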