Bitbucket recently launched the beta of Pipelines, a continuous delivery system for Bitbucket repositories.

Using a familiar convention, Pipelines reads a config file from the root of the repo. When you push changes, it spins up a Docker container and runs the configured commands in a bash shell.

Because so much build tooling is driven from the shell these days, it's an easy way to turn your local build steps into an automated build.

Pipelines isn't the only CD tool for small projects, but it's hard to beat the convenience if you're already on Bitbucket. It's unashamedly an MVP and doesn't do email or chat alerts yet, but it does put build status on Pull Requests - so it was enough for our project's needs.

The project

We set up Pipelines for a UI library built with Node tools - everything we need in a build can be installed with npm and run with Grunt (e.g. Jasmine/Karma, ESLint, Amazon S3 deployment). Development is done on a local server, but all of the deployed assets are static.

We had been using Bamboo to run tests, but maintaining the Linux agent was a bit of a pain, as it wasn't used for anything else.

In other words: the project was a perfect candidate for Pipelines.

Goals

The initial goal was simply to replace our Bamboo build, which ran tests on merges to master. That was very easy to get working, so we expanded the goals:

  1. run tests on all pushed branches, before merge
  2. deploy per-branch, pre-merge versions of the docs/test pages
  3. clean up the per-branch docs after merge

Tests

Getting tests running just required setting up the project's install and build commands in bitbucket-pipelines.yml in the root of the repo:

image: node:5.4.1
pipelines:
  default:
    - step:
        script:
          - npm install -g bower grunt grunt-cli
          - bower install --allow-root
          - npm install
          - grunt build-dev # builds and runs tests

As soon as you push a branch, a build spins up and you get results in the Pipelines tab and Pull Request summary. Done!
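
A default pipeline like ours runs for every pushed branch, which is exactly what goal 1 needs. If you later want different steps for particular branches, Pipelines also supports branch-specific sections (a branch with its own pipeline skips default) - a minimal sketch along the same lines:

image: node:5.4.1
pipelines:
  default:        # any branch without a more specific pipeline
    - step:
        script:
          - grunt build-dev
  branches:
    master:       # pushes to master only
      - step:
          script:
            - grunt build-dev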

Deployment

The main issues we had with deployment related to the S3 bucket's configuration rather than Pipelines per se: the bucket was locked to the company network, and we had to allow the Pipelines agent through.

Otherwise the steps were pretty logical:

  1. Create your S3 bucket and get the keys, region and bucket name - I got this info from my helpful devops team, but you can also find it in the AWS Console.
  2. Add the keys as secured user-defined environment variables for Pipelines in the specific repository you're deploying (Settings → Pipelines → Environment variables).
  3. Add grunt-aws-s3 to your Gruntfile (see the wiring sketch below).
  4. Add and configure the Grunt task, noting that it uses the default Pipelines variable BITBUCKET_BRANCH to deploy into per-branch subdirectories.

Our Grunt task looks like this:

aws_s3: {
  test: {
    options: {
      region: 'REGION',
      accessKeyId: process.env.AWS_ACCESS_KEY_ID_TEST,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_TEST,
      bucket: 'BUCKETNAME'
    },
    files: [
      // clear the branch's directory first...
      {
        dest: process.env.BITBUCKET_BRANCH,
        action: 'delete'
      },
      // ...then upload a fresh copy of the docs
      {
        expand: true,
        cwd: 'docs/',
        src: ['**'],
        dest: process.env.BITBUCKET_BRANCH
      }
    ]
  }
}

Note that we clean the directory and upload a fresh copy each time, so the tests can't accidentally rely on a file that has since been deleted.

We use two different buckets for this project, so the options are declared inline. If you only use one bucket, you can just declare a single shared options object.
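
To wire this into the build, the plugin needs to be loaded and a task registered in the Gruntfile. A minimal sketch - the deploy-test alias is our own illustration, but loadNpmTasks and registerTask are standard Grunt APIs:

module.exports = function (grunt) {
  grunt.initConfig({
    aws_s3: {
      // ... the test target shown above ...
    }
  });

  // make grunt-aws-s3's tasks available
  grunt.loadNpmTasks('grunt-aws-s3');

  // hypothetical alias: build, then deploy the docs for this branch
  grunt.registerTask('deploy-test', ['build-dev', 'aws_s3:test']);
};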

So on our test server, we end up with this structure:

http://example.com/master/
http://example.com/branch1/
http://example.com/branch2/

There's also an index file in the server's root just to keep things tidy.

Cleanup

We don't have anything automated for cleanup yet, as Pipelines doesn't include triggers for merging a branch (it would be great if you could vote for issue 12842).

Given our deployed artefacts are tiny, we don't need to clean them up particularly urgently. A manual clean once in a while will be plenty, so the workaround we're using is a delete task in Grunt:

aws_s3: {
  deletetest: {
    options: {
      region: 'REGION',
      accessKeyId: process.env.AWS_ACCESS_KEY_ID_TEST,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_TEST,
      bucket: 'BUCKETNAME'
    },
    files: [{
      dest: '/',
      action: 'delete'
    }]
  }
}
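
It's run by hand from a local checkout whenever we want to tidy up:

grunt aws_s3:deletetest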

I wouldn't recommend this take-off-and-nuke-it-from-orbit approach for production deployments, but it saves time on our test server, where it's not worth picking off individual branches.

A more targeted task could read the target branch as an argument:

grunt aws_s3:deletetest --deletebranch='branchname'

This would let you delete deployed branches one at a time (falling back to master when no option is passed):

...
var deletebranch = grunt.option('deletebranch') || 'master';
...
aws_s3: {
  deletetest: {
    options: {
      region: 'REGION',
      accessKeyId: process.env.AWS_ACCESS_KEY_ID_TEST,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_TEST,
      bucket: 'BUCKETNAME'
    },
    files: [{
      dest: deletebranch,
      action: 'delete'
    }]
  }
}

(I'll be honest, that code snippet wasn't tested thoroughly, so be careful if you run with it ;))

Gotchas

Bower vs sudo

By default Bower refuses to run as root - and fair enough, it doesn't need to. Our build doesn't use sudo, but Pipelines runs its commands as root, which triggers the error.

One solution would be to prep a Docker image with the packages pre-installed, but the quick fix is to add the allow-root flag: bower install --allow-root
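
Alternatively the flag can live in the project's .bowerrc, keeping the command itself clean - a minimal sketch, assuming your Bower version supports the allow_root option:

{
  "allow_root": true
}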

Private Git and Bower access

We did have some issues pulling a private Bower package into our builds, as Pipelines does not yet support SSH keys, nor does it expose the Pipelines process as a user you could grant read access in the repo configuration.

Reading a private key from an environment variable didn't work, as the variables are single-line and keys are multi-line.

My coworker Matt Sutton found a workaround: set the private key as a secured environment variable with manual line breaks (\n), then create an ssh-agent:

image: node:5.4.1
pipelines:
  default:
    - step:
        script:
          # /root/.ssh may not exist in a fresh container
          - mkdir -p /root/.ssh
          - echo -e "$VARIABLE" >> /root/.ssh/key_name
          - chmod 600 /root/.ssh/key_name
          - eval `ssh-agent`
          - ssh-add /root/.ssh/key_name
          # depending on the host, you may also need something like:
          # ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts

The key to this trick is inserting \n in the variable and using echo -e when creating the key file within Docker. Although neat, it's obviously a bit of a hack, so we're watching issue 12795 with interest.
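
For illustration, the variable's stored value ends up as a single line with literal \n separators - something like this (key material elided):

-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKC...\n-----END RSA PRIVATE KEY-----\n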

Last thoughts

Can Pipelines replace Bamboo? For small projects, absolutely. For larger projects, it depends - for example, our main product uses .NET, which isn't supported in Pipelines yet (although some people seem to have unofficial builds working), and its builds do much more heavy lifting than our simple test suite. And if you need your builds on-premises, Pipelines obviously isn't an option.

Long term there's also the matter of what Pipelines will cost. While no pricing has been announced, the beta is tracking minutes used, so it seems likely that will be the core pricing metric. Speculation is a bit pointless, though; it's a matter of wait and see.

Still, in terms of dev time it's been pretty cheap so far. Part of the appeal is that once the S3 bucket is up and running, the maintenance can be done by the teams building the library - very little devops time required.

Overall, Pipelines is easy to get along with. It's a welcome addition to Bitbucket, and if you already work on the command line, you'll have builds running in no time.