Continuous Integration

ACCU 2014 Conference Notes

I had the chance to go to ACCU 2014 the other week (the full conference schedule is here) and I have to say it was one of the best conferences I’ve had the pleasure to attend. And while it did confirm my feeling that C++ is getting to the point of saturation and ridiculous excess (C++11 was needed, and as a result so was C++14, but C++17… just use a more suitable language if these are the features you need), the talks I went to were, on the whole, great.

So I thought I may as well post up the notes I made from each talk – and while they might be a bit of a brain dump of the time, if there’s something that sparks your interest, I’m sure the full presentations will be posted up at some point soon.

 

Git Archaeology
Charles Bailey, Bloomberg LP

Interesting talk looking at some of the more esoteric ways of presenting and searching for information within an existing Git repository. Unfortunately, time was short so the speaker had to rush through the last part, which was, for me, the most relevant and interesting part.

Talk notes

 

Designing C++ Headers Best Practices
Alan Griffiths

For most experienced C++ developers the content of this talk was probably quite basic, as you’d have covered a large portion of it already through sheer trial and error over the years. But clarifying some good practices and interesting side effects was enjoyable.

Talk notes

 

Version Control – Patterns and Practices
Chris Oldwood, chrisoldwood.blogspot.com

High-level overview of the various patterns we use when working with version control (focused mostly on distributed version control), some useful examples and some interesting discussion about trust issues…

Talk notes

 

Performance Choices
Dietmar Kuhl, Bloomberg LP

Proof that if you want to know about optimisation and performance, stick to asking people in the games industry.

I’m not posting these notes.

 

Crafting More Effective Technical Presentations
Dirk Haun, http://www.themobilepresenter.com

Really good presentation on how to craft good presentations – some interesting discussion of the make-up of the human brain, why certain techniques work and why the vast majority of technical talks (or just talks in general, to be honest) do what they do.

Talk notes

 

The Evolution of Good Code
Arjan van Leeuwen, Opera Software

Great talk, not telling us what good code is, but examining a few in-vogue books from the last decade to see where they sit on various contentious topics. Note that when the notes say “no-one argued for/against” it’s just referring to the books being discussed!

Talk notes

 

Software Quality Dashboard for Agile Teams
Alexander Bogush

Brilliant talk about the various metrics Alexander uses to measure the quality of a code base. If you’re sick of agile, don’t let the title put you off; this is valuable reading regardless of your development methodology.

Talk notes

 

Automated Test Hell (or There and Back Again)
Wojciech Seliga, JIRA product manager

Another great talk, this time discussing how the JIRA team took a (very) legacy project with a (very) large and fragile test suite and turned it into something much more suited to quick iteration and testing. When someone says some of their tests took 8 days to complete, you have to wonder how they didn’t just throw it all away!

Talk notes

 

Why Agile Doesn’t Scale – and what you can do about it
Dan North, http://dannorth.net

Interesting talk, arguing that Agile is simply not designed to be, nor was it ever imagined as, a scalable development methodology (where scale is defined as bigger teams and wider problems). It excellently covered why and how agile adoption fails, and how this can be avoided to the point where agile principles can be used on much larger and less flexible teams.

Talk notes

 

Biggest Mistakes in C++11
Nicolai M Josuttis, IT-communications.com

Entertaining talk in which Nicolai, a member of the C++ Standard Committee library working group, covers various features of C++11 that simply didn’t work or shouldn’t have been included in the standard. When it gets to the point that Standard Committee members in the audience are arguing about how something should or does work, you know they’ve taken it too far.

Talk notes

 

Everything You Ever Wanted to Know About Move Semantics (and then some…)
Howard Hinnant, Ripple Labs

Detailed talk on move semantics, which is never as complicated as it’s made out to be (even by the speaker). There are some dumb issues, which seem to be increasing as the C++ standards committee doesn’t seem to grasp the scale of the changes being made, but nevertheless it was a good overview of what is a key improvement to C++.

Talk notes

Ruby, Jenkins and Mac OS


I’ve been using Jenkins as my CI server for a while, and though user permissions have always been a bit of an issue (I’ll explain why in another blog post covering my Unity build process), running command line tools has never been too much of a problem once it got going.

At least not until I tried to run Ruby 1.9.3 via RVM on our Mac Jenkins server.

I’d developed the Ruby scripts outside Jenkins so I knew they worked, but when I came to run them through Jenkins (using nothing more than the ‘Execute Shell’ build step) it started behaving strangely. Running the script caused the step to fail instantly claiming it couldn’t find any of my installed Gems.

A quick ‘gem query --local’ on the command line showed that all the gems I needed were there.

As an experiment I added a build step that installed the Trollop gem (a fantastic gem, you should check it out!) to see if that made any difference, in effect forcing the install in the service that would run the Ruby script. I was surprised when the install worked, but it installed it for Ruby 1.8 rather than Ruby 1.9.

Adding ‘ruby --version’ as a build step showed that for some reason the Jenkins server was using Ruby 1.8.7 rather than 1.9.3.

It turns out that RVM is not available when run from a non-interactive shell, which is a bit inconvenient when you need it as part of an automated process.

Searching around I came across this Stack Overflow answer suggesting I make changes to my .bash_profile, but those changes were already present, so that didn’t get me any closer to a solution.

Other suggestions involved rather convoluted steps just to get the thing working, something I neither had the time for nor thought should be required.

Luckily Jenkins provides an RVM Plugin which allows a build step to run inside an RVM-managed environment, meaning it respects the RVM settings I’ve set up outside of Jenkins…

Now running ‘rvm list’ in a build step shows that we have multiple versions of Ruby available, with 1.9.3 set as the default.

And all of a sudden my Ruby scripts work like a charm!

Is It Green Yet? Improving Our CI Process

I originally posted this to AltDevBlogADay on Friday 15th July 2011.

A Continuous Integration server is one of the most useful and powerful tools a development team can have. Constantly checking the state of the code, building assets which might otherwise take hours and generating stats on build quality are all really useful things to have running in the background, hour after hour and day after day.

But if it’s not done with care, a CI process, while still providing some useful information, will stop being an important part of a development team’s tool set.

The main problems usually stem from a single CI step taking too long. For example, it might take hours to build all the game assets, or 40 minutes to build a single configuration. You might have additional build steps (like copying files to a network drive) which can take quite a while if you’re dealing with gigabytes of data.

As soon as a CI step takes too long, you lose the main benefit – the fast turnaround of information.

For example, when we first start a project, our simple CI process will consist of the following steps:

  • Detect modification
  • Build code
  • Build game assets
  • Copy to network drive
  • E-mail developers (who gets mailed depends on success or failure)

This is fine, as a new project is tiny and the whole process doesn’t even take 5 minutes. And we need to build the whole thing constantly because we’re adding so much content that the artists and designers need to be on the bleeding edge of what the programmers are creating.

But after a month (or probably less) this stops being suitable. A whole build might start taking 20 minutes, then 30, then an hour, then two, and if we leave it as is the programmers stop getting the benefit of continuous turnaround and the designers spend ages waiting for a new build.

So what can we do about it?

The first thing is to look at what the CI process is doing, and exactly what we want to get out of it.

  • Continuous Build – We need the process to constantly compile the source, all configs, all platforms. This is so we can detect any compile errors quickly without having to build everything manually.
  • ‘Designer’ Builds – Creating an executable the designers, artists, animators etc. can get with the latest code changes. Ideally one they can request as required and one that is built as quickly as possible.
  • Full builds – A complete build of the game including executable and all in-game assets along with anything else needed to run the game.
  • QA Builds – QA could use the full build if needed, but this is an additional step which packages the build as it would be submitted, allowing a better QA pass (DVD emulation, submission content etc.).

From my point of view, those are the four main things I want to get out of a CI process that having a single build step won’t give us. You might have other requirements and I’d certainly be interested in hearing what those are.

So what can we do to try and improve the initial process and still get what we want out of our CI machine?

Continuous Build
The first step is easy. We want a Continuous Build process with nothing to integrate, nothing to deploy and nothing to copy. This can be much quicker if we alter our repository modification checks to only monitor source code folders and not the entire repository.

For example, if our repository contains scripts, configuration files or (shudder) game assets and executables, we shouldn’t kick off a build if these change, as the CB results won’t be any different from last time.

We might also want to reduce the configs we’re interested in building (usually down to a debug and a master build; the profiling builds might be skipped for speed reasons and because they’re rarely used). If we have a decent machine we might get the individual platforms (X360, PS3) to build in parallel, as there will be no conflict between the temp files they generate – or even stick them on separate machines if we have the capacity.

The process only ever needs to notify on failure as no one is going to be using this build, it’s a sanity check pure and simple.

So already we have a much faster turnaround time between check-in and ‘all clear’. In the past I’ve managed to reduce this from 60 minutes to 5.

‘Designer’ Builds
Initially it might be tempting to use the results of the ‘Continuous Build’, as the aim of this step is to provide the designers and artists with a new executable to take advantage of any new features (very) recently added.

This might be the right idea at the start, when the CB process is taking less than 5 minutes, but that doesn’t last, so we need a faster, more iterative, process to make sure our non-programmers are not hanging around waiting for the latest builds.

Most of the time, designers will use a ‘release’ build (‘release’ being a bit of a misnomer – it’s not releasable in any way, but it has just enough debug information to make it useful while still running at a ‘releasable’ frame rate). So we only need to concentrate on a single configuration, which means we can drastically cut down the time between a modification being detected and a new usable build being generated.

As this is the fastest CI step we have and can often have the most people dependent on it, it’s the first one to run when a modification is detected.

In our case we don’t e-mail people when a new designer build is available. It’s being built many times an hour, and people would just end up sending the mails straight to the trash (I know I’ve done that on particularly spammy CI set-ups). Simply allowing them to check the build status and update when it’s green works well enough.

Full builds
Developers generate a lot of content for games. Even small games can balloon in size depending on the scope and quality of the final product. As a result, we need a full integration build for a number of reasons:

  • It would take every member of the team far too long to rebuild all the assets themselves
  • When a build gets out of sync with the assets, developers need a quick way to get everything back on track
  • When testing the build it needs to have been built on an independent build machine

A full build could be brute force (just build everything every time) or smart (concurrently building executables and assets on multiple machines). It really depends on how long a full build would take. Less than an hour and I personally stick with a brute force approach, but any longer and a more intelligent build step is needed.

Full builds always e-mail the entire team, since they happen rarely. This allows people to get latest as builds become available (usually at the start of the day) without having to check the status of the build.

QA Builds
The QA build is a special build. It doesn’t rebuild any assets or executables and is automatically kicked off when the daily build has finished successfully. This step packages the build up as it would be presented as part of a final submission along with any submission assets that would be required.

But why not just use the full build as the QA build to save time?

Simply put, when testing a build it’s vital that we test under the same conditions that our final submission will be tested under. Making sure we run under DVD emulation and use the same assets the manufacturer will use is an important part of the process. Having our CI machine generate these builds for us makes sure we’re doing this from the very start.

Requesting Builds
In every case we give all members of the development team the ability to request any build. If the ‘Designer Build’ is too far out of sync, they might need a full build to get them back on their feet. An asset change might alter the executable but not trigger a ‘Designer Build’, so a designer might need to trigger one manually.

In our case using CCTray allows us to do this very easily, so a new build can be requested by anyone (including QA) at any point of the day without any input from a programmer, allowing them to concentrate on making the game rather than just enabling others.

Technically anyone in the company could request and get a new build (very useful for getting demo builds together without getting the team involved) but I’ve not seen that happen yet.

What I Haven’t Covered
One major thing I’ve not covered here is self-testing builds. These can range from simple unit tests running after every build to a scripted run-through of the game after every full integration. The scope of this will very much depend on the size of the game and the time you have available. Since this is a big topic in its own right, I’ve left it for another time.

Conclusion
So by simply reviewing what we actually want to get out of the Continuous Integration process, we’re able to streamline it to be much faster and much more useful. Complete integrations still happen (and happen when needed), but the common information needed by the team is generated quickly, allowing teams to get a short turnaround from the CI machine throughout the day.

I’d be very interested to know what other uses people are getting from their CI processes and how they are still making sure the speed and quick turnaround is happening all the time.

Title image by highgroove.  Used with permission.

Continuous Integration – Taking It Further

Our current Continuous Integration process works quite nicely and has greatly benefited our team, but these systems can always be extended and improved upon.  The following post will simply cover some of the ideas I have for the future and how it might improve the whole process.

Visual Unit Testing

Visual Unit Testing is the process of creating small, scripted tests that run the game and allow you to check that certain states have been reached. For example, a test which runs the game, scripts the input so one character shoots another and then checks whether the enemy is dead or the score has increased would be a very basic test. It could also be used to perform long soak tests with more intelligent input than the kind of tests that are used at the moment.
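As a purely illustrative sketch of what one of these scripted tests might look like (GameHarness, ScriptedInput and their methods are hypothetical stand-ins, not real BlitzTech APIs), something along these lines could run as part of the nightly build:

    // Hypothetical test harness around the running game; every name here is
    // illustrative rather than a real engine API.
    #include <UnitTest++.h>
    #include "GameHarness.h"

    TEST(ShootingAnEnemyKillsItAndAwardsScore)
    {
        GameHarness game("TestArena");              // boot the game into a known test level
        const int startingScore = game.PlayerScore();

        ScriptedInput input;
        input.AimAt(game.Enemy(0));                 // script the controller input...
        input.PressFire();
        game.RunFrames(input, 60);                  // ...and run the game for a second of frames

        CHECK(game.Enemy(0).IsDead());              // then check the states we expect to have reached
        CHECK(game.PlayerScore() > startingScore);
    }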

Allowing the process to take screenshots and post them to a web server with the test results would allow a huge number of systems to be tested in the environment they will actually be used in. Extending this to allow QA Technicians to create rafts of tests would also remove the need to continually test the more mundane aspects of games, such as whether the menu flow behaves as it should, whether the leaderboards work as intended and so on.

Obviously running this process would require a large amount of time, especially if run on multiple platforms.  But it could easily be hooked into the nightly build process meaning hours of testing would be carried out when no-one was in the office.

Some Arcade titles originally supported simple scripting using Lua to perform automatic controller input, but this wasn’t taken far enough, and the BlitzTech engine now has its own state machine system which would most likely be used instead.

Automated Code Reviews

Teams at Blitz carry out code reviews at different times and to different levels, but one thing that is usually done is checking that the code conforms to the Blitz coding standards (with people moving around teams as they do, a consistent standard is pretty much essential). But checking something against a set of rules is something an automated system is much better at than our human co-workers.

Adding a code metrics step to the Cruise Control build, so the code is analysed after every check-in, could catch when standards compliance drops below a certain threshold. If it did, the build would simply fail (and this step could be removed if we had a designer/artist build machine, as discussed in a previous post) and the programmers would have to re-examine the code they checked in.

While this could seem quite draconian, it would free up code reviews to concentrate on the bigger picture, rather than getting stuck in the rut of commenting on the use of ‘m_’ or class name prefixes.

Extending The Nightly Build

As I mentioned in Extending Cruise Control, we collect information on how often the build is broken and the ratio of who breaks the build the most. We also generate a massive amount of information by using Subversion, such as check-in rates, lines changed per check-in and so on.

The nightly build could easily be extended to incorporate all this information and, along with links to the Visual Unit Testing web service, could produce reports on the activity and stability of the game for that day. This would be a very useful tool for the programming managers, especially as some people already use this information, but usually on an ad-hoc basis where the tools are manually run and the information manually collated.

Automatic CI Support

A large portion (at the moment pretty much all) of my current work involves extending the Arcade technology I developed and integrating it into the BlitzTech engine. It has the ability to automatically generate the required configuration files to set up a Cruise Control server, but the current plug-in system (things like automated e-mail generation, stats collection etc.) is pretty much set up to do only what it needs to do, without much flexibility.

I fully intend to make all these plug-ins come as standard, allowing developers to cherry-pick the elements they want and hook them into their CI process with very little work. While people may ask why I don’t use NAnt, since it’s already hooked into CruiseControl.net, I can simply do more, and do it faster, with Ruby.

So I have a few ideas on where I want to take our Continuous Integration process in the next few months (years?), but there’s nothing like lofty goals to keep you going. Visual Unit Testing is obviously the biggest, and will require the largest amount of time to develop, but it will generate the biggest payback once it’s being used to its fullest potential.

So what kind of things would you like to see in a CI process?  Or is there something your process is doing that you think is pretty special?

Continuous Integration – Self Testing Builds

So far we have a process which guarantees that the game builds and that the assets can be generated.  We have processes for doing this automatically and for making sure people are as up to date as possible.  But there is still one question which none of this answers…

When I run the game, will it actually work?

Games are a mass of systems, all interacting together to produce something that is fun and interesting to play. It’s QA’s job (on the whole, but not exclusively) to make sure that the game plays well and there are no glaring bugs, but if we’re automating so much, can’t we automate some of the testing too?

Unit Testing

A Self Testing Build in this case is really just a fancy term for Unit Testing, something a lot of people have heard of but just as many haven’t.

The basic premise is that at the end of every single compile, small pieces of code are run that test various systems in the game. These run as post-build steps, so any change to the code is instantly checked against the tests that already exist. If a code change makes one of the tests fail then the whole build fails. This means Cruise Control fails, the nightly build fails, but most importantly everyone knows it’s broken and they should avoid updating until it is fixed.
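Wiring this up can be as simple as making the test executable’s exit code the thing the post-build step checks. A minimal sketch, assuming UnitTest++ (the header path varies between versions):

    #include <UnitTest++.h>

    int main()
    {
        // RunAllTests() returns the number of failed tests, so any failure gives a
        // non-zero exit code, fails the post-build step and turns the build red.
        return UnitTest::RunAllTests();
    }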

The following are a couple of very simple tests that are run every time the code changes, to make sure the vectors behave as expected. This might look simple (one of the main reasons people don’t unit test is because it will ‘obviously work’), but with vector code being platform centric, a change for one platform can affect all the others. The tests are written using UnitTest++, which is the library we use to write all our tests.
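Something along these lines (a minimal sketch; the Vector3 class, its header and the exact tolerances are illustrative rather than the real engine types):

    #include <UnitTest++.h>
    #include "Vector3.h"   // hypothetical maths header; the real engine type will differ

    TEST(VectorAdditionIsComponentWise)
    {
        const Vector3 a(1.0f, 2.0f, 3.0f);
        const Vector3 b(4.0f, 5.0f, 6.0f);
        const Vector3 sum = a + b;

        CHECK_CLOSE(5.0f, sum.x, 0.0001f);
        CHECK_CLOSE(7.0f, sum.y, 0.0001f);
        CHECK_CLOSE(9.0f, sum.z, 0.0001f);
    }

    TEST(UnitAxisHasLengthOne)
    {
        const Vector3 up(0.0f, 1.0f, 0.0f);
        CHECK_CLOSE(1.0f, up.Length(), 0.0001f);
    }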

Feedback to the developers tells them something has broken, the test results are displayed in Cruise Control and the person responsible instantly sets about fixing the build.

Game Based Unit Testing

It’s often difficult to think about games from the point of view of Unit Tests, but doing so often results in code being written in a more encapsulated and less coupled way, which in itself makes the code easier to write and maintain. As an example, the following could easily be tested in a small suite of Unit Tests…

  • Does an AI character ‘see’ the player when they are in front of them?
  • Is the player ignored if they are outside the enemies cone of vision?
  • What state do they enter when they hear a noise?

By testing the underlying logic of the system we can be confident that when something breaks we will have a better idea of where the problem lies, rather than having to look at every system at every level.
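An illustrative sketch of the first two bullets above (AICharacter, Player and their methods are hypothetical stand-ins for whatever the game actually uses):

    #include <UnitTest++.h>
    #include "AICharacter.h"   // hypothetical game headers
    #include "Player.h"

    TEST(EnemySeesPlayerDirectlyInFront)
    {
        AICharacter enemy(Vector3(0.0f, 0.0f, 0.0f), Vector3(0.0f, 0.0f, 1.0f));   // position, facing
        Player player(Vector3(0.0f, 0.0f, 5.0f));                                  // a few metres straight ahead

        CHECK(enemy.CanSee(player));
    }

    TEST(EnemyIgnoresPlayerOutsideVisionCone)
    {
        AICharacter enemy(Vector3(0.0f, 0.0f, 0.0f), Vector3(0.0f, 0.0f, 1.0f));
        Player player(Vector3(0.0f, 0.0f, -5.0f));                                 // directly behind the enemy

        CHECK(!enemy.CanSee(player));
    }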

Running Tests On Different Platforms

Unit Tests are only useful if they are run often and on the platforms the code is destined to run on. This can cause problems when trying to run them on development kits, as the code usually needs to be loaded onto the machine before it can be run and the results fed back to the host. While this might only take 10 seconds, doing it after every build would be a massive drain on a programmer’s time. So while this isn’t a problem on the PC platform, we generally expect the console tests to be run as part of the nightly build, when there’s all the time needed to run them and feed back the results.

But even if the tests aren’t being run on every platform all of the time, we still have a system running on the PC, and some testing is still better than no testing at all.

So this is how the Blitz Arcade system adds a level of testing to the Continuous Integration process. It’s one of the hardest parts of the process, because Unit Testing takes a while to get into, and it takes time to understand how code can be tested when at first it looks like a series of black box objects.

In the final part, I’ll take a look at what I would like the process to do in the future, and the kind of things that would be cool to have in a fully integrated Continuous Integration process.

Continuous Integration – What About The Rest?

Games are not just code. Art, design, music and QA all get in the way, making the whole CI process harder, but actually making the game fun! It’s just as important that the CI process benefits these guys, otherwise there are still going to be hurdles to getting a solid game released and making the process as smooth as possible.

The Nightly Build

Because we use Cruise Control, we know the build compiles, and because we never leave on a failed build we know we can run a full ‘rebuild’ of the game at night. This guarantees that we do a full rebuild of the game at least once a day, and removes any niggles that crop up from Cruise Control only running ‘build’ steps. And since we’re doing it when no one is around, we have the time to build all the game assets, including animations, textures, models, music and anything else we need. Since this can take hours, it doesn’t matter if we start the build at 11pm, as by 9am the next day everything is up and ready to go.

So we finally have a fully integrated build that is being generated for the start of every working day.  It means that QA have a full build to test at the start of the day, but also the designers, artists and musicians have a fully up-to-date project in which to add their assets and tweak the game play.

But this can be difficult to set up at first. Since you are automating the entire process, everything needs to be command line driven. This isn’t always easy with off-the-shelf tools, or even bespoke software, but the time it can save is definitely worth the effort. Luckily the BlitzTech SDK is fully configurable using a custom scripting language, so we can not only build the assets but also run additional processing steps such as compression or pre-processing of assets.

Generating More Regular Builds

But we still have a slight problem. What happens if, 10 minutes into the day, a programmer implements a new feature that the designers want to play with right there and then? Do they have to wait until tomorrow morning just to get hold of it? What if the assets being generated by the artists change in a way that is no longer compatible with the build they have?

Since we have a Cruise Control machine constantly building a new version of the game, we can easily add additional steps to the end of the CC process.  We have a step at the end of a successful build that copies the executables to the network and (similar to the failed build mails) sends a mail to the designers and artists telling them a new build is available.

By providing them with a simple tool that allows them to get the ‘latest build’ from wherever they want, the designers have access to the latest game features pretty quickly (this is also another reason to trim down the amount of time it takes to finish a Cruise Control build).

So the programmers are no longer disrupted by requests for new builds, and the artists on the team are no longer using half-built, half-complete builds during the day. It means the full game is being used at all times and is never in an unknown state.

Some teams at Blitz add additional information to their Cruise Control builds, and this is something I hope to incorporate into the Arcade process at some point. Each CC build displays the latest Subversion revision on screen at all times. By requesting that the programmers specify the SVN revision in bug reports or feature check-ins, designers instantly know if the build they have does what they need it to do.

Generating The Game Assets

It would also be cool if the Cruise Control build came with up-to-date assets, so QA could just grab the latest build and go from there. Obviously time is a serious concern here, but since our entire build process is command line driven, we can add steps to build the assets at the end of a successful build, before copying it to the network. In our process this is an optional step: at the start of a project, when it takes minutes to generate the assets, it’s pretty useful, but near the end of a project it can add hours onto the process and has to be removed.

One way to avoid this, and make the process more suitable for the artists, is to have a separate build machine which only generates the release build (usually the one used by the artists and designers) and builds only the common asset packages. This is sometimes done on the bigger projects at Blitz, as it can take slightly too long for artists to wait for the programmers’ Cruise Control machine to finish.

Generating Submission Builds

One more thing that comes out of this process is our submission builds. Since we can generate a nightly build automatically (usually by adding a single script as an automated task), we can generate all our builds by running the same script. This means that our submission builds hook into the same process as everything else, which avoids the most important builds being built in an untested environment and makes the process even more secure.

 

So we now have a process that allows us to automatically generate the whole game at any point of the day, and at least once a night. The designers, artists and, very importantly, QA can hook into the CI system, giving them an up-to-date build at any point, and no one is ever playing a build of the game that is half-built or doesn’t have everything that is currently available.

In the next part I want to cover the process of self-testing builds and how this can make a build not only compile, but actually make sure it is doing what it is supposed to be doing.

Continuous Integration – Extending Cruise Control

In the last part I covered the use of CruiseControl.net in the CI process, which is pretty much one of the most important parts of the whole system.  Out of the box, CC.net is pretty useful, but it can be easily extended and this part is going to cover how we use CC.net and what we have done to get a bit more out of it.

Turnaround Time

The biggest problem with Cruise Control is the time it can take to report a new (or, more importantly, broken) build. At the start of a project the turnaround can be quick, but later, when we have masses of code, more platforms and more build configurations, it can quite easily take hours to complete a full build. Since no-one should be going home on a non-green build, this either means they stay in the office until late, or they don’t check in after 2pm. Neither is a suitable solution.

The quickest win is to reduce the number of configurations we build. The ‘profile’ configuration, which is rarely used by anyone, can often be removed, along with full debug builds (since these become unplayable near the end of a project anyway). We can also cut down on the number of platforms. A game might be destined for PS3, but it will probably start life on a PC. Once everyone has finally moved onto the target platform, a whole platform plus its configurations can be removed.

Finally, we never do a ‘rebuild’ on Cruise Control. While a ‘build’ sometimes needs a kick from behind due to linker errors or similar cropping up (which a full rebuild generally fixes), this generates a massive saving of time, and a full rebuild is always done at night anyway.

One technique that has been used by other teams at Blitz is to have a separate CC machine which simply builds the release build of the title on the target platform. Not much use for the programmers, but an excellent time saver for designers and artists, who don’t care if the debug PS3 build is broken.

In one case we reduced the build time for a project from around 2 hours on average to about 20 minutes.  A massive boost to productivity.

Extending Cruise Control Reporting

The Cruise Control Tray tool is useful, but it’s actually quite easy to miss the fail message, and people do not always check their system tray to see if CC is red. The easiest way we have extended this is for the CC machine to automatically send e-mails to all programmers on a team when the build is broken. This is apparently very easy to do with NAnt, but our extensions are written in Ruby, simply because I had a lot of previously written Ruby libraries that were suitable for the job.

This can easily be extended to send a message to our Yammer service (which we use for inter-team communication already) and that is something that I hope to do in the future.

Tracking Build Statistics

It’s very easy to say “the build is always broken”, but it’s much more useful to have hard evidence for the percentage of broken versus fixed builds. Again, Cruise Control gives you the ability to track this by exposing the build status through environment variables, which can be used to record the state of each build. Along with easy access to the SVN log, we know who broke the build, when and how (was it a check-in or a forced build?). By hooking these into a simple PHP graph generation tool, we can produce statistics on the broken vs. working ratio and a count of broken builds.
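As a rough sketch of the idea (our real extensions are Ruby scripts and the graphing is PHP; this is just an illustration of the environment variables involved, and the log path is made up), a small tool run as a CC.net task could append one line per build:

    // Illustrative only. CCNetProject, CCNetIntegrationStatus and
    // CCNetBuildCondition are environment variables CruiseControl.net sets for
    // tasks it launches; the CSV path here is invented for the example.
    #include <cstdlib>
    #include <fstream>
    #include <string>

    static std::string EnvOr(const char* name, const char* fallback)
    {
        const char* value = std::getenv(name);
        return value ? std::string(value) : std::string(fallback);
    }

    int main()
    {
        const std::string project   = EnvOr("CCNetProject", "unknown");
        const std::string status    = EnvOr("CCNetIntegrationStatus", "Unknown"); // Success, Failure, ...
        const std::string condition = EnvOr("CCNetBuildCondition", "Unknown");    // IfModificationExists or ForceBuild

        // Append one line per build; a separate script turns this log into the graphs.
        std::ofstream log("build_stats.csv", std::ios::app);
        log << project << "," << status << "," << condition << "\n";
        return 0;
    }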

I would never make the ‘who breaks the build the most’ information public, but it is an excellent tool for finding problems in the development process, or for spotting when someone’s way of working simply isn’t… working.

 

Obviously Cruise Control has its limitations. Multiple projects, dependencies and configurations can soon add up, and no amount of tweaks and omissions can stop this. Once the feedback loop is redundant because it simply takes too long, Cruise Control loses its impact and other methods are needed to bring it back on track.

For example, in the worst case some projects can take over 4 hours to complete a set of dependent builds. If you are right at the end of that chain (which unfortunately I am!), the amount of information being given to you is dramatically limited. But we have made changes which mean that while this delay still takes place, other information from the CC machine is fed back much more quickly, so it becomes useful once again.

So that’s generally how we use Cruise Control. If anyone reading this does anything differently then let me know, as I’m always looking for ways to improve how we use the tool and make it more beneficial to the team.

In the next part I’ll look at the process of nightly builds and how that can help the rest of the game, since nothing so far has covered the fact that games are more than just code!