Git Workflow Presentation – Intel Buzz Workshop

I travelled to Stockholm this week to deliver a presentation about our Git workflow on Sonic Dash. Originally I was asked to do a talk about the process of converting Sonic Dash to Intel-based chipsets, but since that would take all of 3 minutes, I worked it into a talk about how we work in general.

The link is possibly tenuous, but at least they had some bearing on each other!

Anyway, here’s the presentation.

ACCU 2014 Conference Notes

I had the chance to go to ACCU 2014 the other week (full conference schedule is here) and I have to say it was one of the best conferences I’ve had the pleasure to attend. And while it did confirm my idea that C++ is getting to the point of saturation and ridiculous excess (C++11 was needed, and as a result so was C++14, but C++17… Just use a more suitable language if these are the features you need), the talks I went to were, on the whole, great.

So I thought I may as well post up the notes I made from each talk – and while they might be a bit of a brain dump of the time, if there’s something that sparks your interest, I’m sure the full presentations will be posted up at some point soon.


Git Archaeology
Charles Bailey, Bloomberg LP

Interesting talk looking at some of the more esoteric ways of presenting and searching for information within an existing Git repository. Unfortunately, time was short so the speaker had to rush through the last part, which was, for me, the most relevant and interesting part.

Talk notes


Designing C++ Headers: Best Practices
Alan Griffiths

For most experienced C++ developers the content of this talk is probably quite basic, as you’d have a large portion of this covered already through sheer trial and error over the years. But clarifying some good practices and interesting side effects was enjoyable.

Talk notes


Version Control – Patterns and Practices
Chris Oldwood

High-level overview of the various patterns we use with version control (focused mostly on DVCS), with some useful examples and some interesting discussion about trust issues…

Talk notes


Performance Choices
Dietmar Kuhl, Bloomberg LP

Proof that if you want to know about optimisation and performance, stick to asking people in the games industry.

I’m not posting these notes.


Crafting More Effective Technical Presentations
Dirk Haun

Really good presentation on how to craft good presentations – some interesting discussions on the make-up of the human brain, why certain techniques work, and why the vast majority of technical talks (or just talks in general, to be honest) do what they do.

Talk notes


The Evolution of Good Code
Arjan van Leeuwen, Opera Software

Great talk, not telling us what good code is, but examining a few in-vogue books from the last decade to see where they sit on various contentious topics. Note that when the notes say “no-one argued for/against” it’s just referencing the books being discussed!

Talk notes


Software Quality Dashboard for Agile Teams
Alexander Bogush

Brilliant talk about the various metrics Alexander uses to measure the quality of their code base. If you’re sick of agile, don’t let the title put you off, this is valuable reading regardless of your development methodology.

Talk notes


Automated Test Hell (or There and Back Again)
Wojciech Seliga, JIRA product manager

Another great talk, this time discussing how the JIRA team took a (very) legacy project with a (very) large and fragile test structure and turned it into something much more suitable for quick iteration and testing. When someone says some of their tests took 8 days to complete, you have to wonder how they didn’t just throw it all away!

Talk notes


Why Agile Doesn’t Scale – and what you can do about it
Dan North

Interesting talk, arguing that Agile is simply not designed to, nor was it ever imagined to, be a scalable development methodology (where scale is defined as bigger teams and wider problems). It excellently covered why and how agile adoption fails, and how this can be avoided to the point where agile principles can be used on much larger and less flexible teams.

Talk notes


Biggest Mistakes in C++11
Nicolai M Josuttis

Entertaining talk where Nicolai, a member of the C++ Standard Committee library working group, covers various features of C++11 that simply didn’t work or shouldn’t have been included in the standard. When it gets to the point that Standard Committee members in the audience are arguing about how something should or does work, you know they’ve taken it too far.

Talk notes


Everything You Ever Wanted to Know About Move Semantics (and then some…)
Howard Hinnant, Ripple Labs

Detailed talk on move semantics, which is never as complicated as it’s made out to be (even by the speaker). There are some dumb issues, which seem to be increasing as the C++ standards committee doesn’t seem to understand the scale of the changes being made, but nevertheless it was a good overview of what is a key improvement to C++.

Talk notes

Git Off My Lawn – Large and Unmergeable Assets

I posted up the Git talk myself and Andrew Fray did at Develop 2013 and mentioned I’d have a few follow-up posts going into more detail where I thought it was probably needed (since you often can’t get much from a slide deck and no-one recorded the talk).

One of the most asked questions was how we handled large and (usually) unmergeable files (mostly in regard to art assets, but it could be other things like Excel spreadsheets for localisation etc.). This was hinted at on slides 35–37, though such a large topic needs more than 3 slides to do it justice!

To start talking about this, it’s worth raising one of Git’s (or indeed any DVCS’s) major drawbacks, and that’s how it stores assets that cannot be merged. Instead of storing history as a collection of deltas (as it does with mergeable files) Git simply stores every version as a whole file, which, if you have 100 versions of a 100MB file, means your repository could be 10GB in size just for that file alone (it’s not that clear-cut, but it explains the issue clearly enough).
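You can see this effect for yourself with a throwaway experiment – the script below is purely illustrative (the file name, sizes and commit details are made up), but it shows each committed version of a binary file landing in the object store as a whole new blob:

```shell
# Commit three 1MB files of random (incompressible) data and watch the
# object store grow by roughly the full file size each time.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
for i in 1 2 3; do
  head -c 1048576 /dev/urandom > big.bin   # a new 1MB "binary asset"
  git add big.bin
  git -c user.name=demo -c user.email=demo@example.com commit -qm "version $i"
done
git count-objects -vH   # total object size ends up around 3MB for one file
```

With mergeable text files Git’s delta compression would keep this much smaller; with binaries you pay for every version.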

While this is a drawback of DVCS in general it’s not necessarily a bad thing.

It’s how all SCM systems handle files that can’t be merged (some SCMs do the same with mergeable files too – imagine how large their repositories are), but the problem comes with Git’s requirement that a clone pulls down the whole repository and its history rather than just the most recent version of all files. Suddenly you have massive repositories on everyone’s local drive, and pulling that across any connection can be a killer.

As an example, the following image shows how a single server/client relationship might work, where each client pulls down the most recent file, while the history of that file is stored on the server alone.

But in a DVCS, the whole repository is cloned on all clients, resulting in a lot of data being transferred and stored on every clone and pull.

Looking at Sonic Dash, we have some pretty large files (though nowhere near as large as they could be), most of them PSDs, though we have smaller files like the Excel spreadsheets that we use for localisation. Since none of these files are mergeable and most of them are large, we couldn’t store them in Git without drastically altering our workflow. So we needed a process that allowed them to be part of the general development flow without bringing with them all the problems mentioned above.

Looking at the tools available, and at what was in use at the moment, it made sense to use Perforce as an intermediary. This would allow us to version our larger files without destroying the size of our Git repository, but it did bring up some interesting workflow questions:

  • How do we use the files in Perforce without over-complicating our one-button build process?
  • With Git, we could have dozens of active branches, how do they map to single versioned assets in Perforce?
  • How do we deal with conflicts if multiple Git repositories need different versions of the Perforce files?


By solving the first point we start to solve the following points. We made it a rule that only the Git repository is required to build the game, i.e. if you want to get latest, develop and build, you only need to use the Git repository; P4 is never a requirement. As a result, the majority of the team never even opened P4V during development.

This means the P4 repository is designed to only hold source assets, and we have a structured or automated process that converts assets from the P4 repository into the relevant asset in the Git repository.

As an example, we store our localisation files as binary Excel files, as that’s what’s expected by our localisation team, but since that’s not a mergeable format we store them in P4. We could write (or probably buy) an Excel parser for Unity, but again that wouldn’t help since we’d constantly run into merge conflicts when combining branches. So we have a one-click script (written in Ruby if you’re interested) that converts the Excel sheet into per-platform, per-language XML files that are stored in the Git repository.

These files are much more Git-friendly, since the exported XML is deterministic and easily merged. If any complex conflicts come up, whoever is resolving them can do so however they see fit and then just re-export the Excel file to get the latest version.

It also means that should a branch need a modified version of the converted asset, they can either modify it within the Git repository or roll back to the version they want in P4 and export the file again. The version in P4 is always classed as the master version, so any conflicts when combining branches can be resolved by exporting the file from P4 again to make sure you’re up to date.

Along with this we do have some additional branch requirements that help assets that might not be in Perforce (such as generating Texture Atlases from source textures) but that’s another topic I won’t go into yet.

Git Off My Lawn – Develop 2013

Recently myself and Andrew Fray (@tenpn) presented “Git Off My Lawn” at Develop 2013 in Brighton. The talk was designed to discuss how we took a Perforce centric development process and moved over to one centred around Git and a Git based branching work flow.

As with most presentations the slides tell half (or less) of the story, so while I’m posting up the slides here I’ll spend the next couple of weeks (or more likely months) expanding on a particular part in more detail.

In the meantime, if you have any questions, just fire them at me.

Git Fetch --prune and Branch Name Case

I posted up the following to the git community mailing list the other day

When using git fetch --prune, git will remove any branches from
remotes/origin/ that have inconsistent case in folder names.

This issue has been verified in versions, and

I've described the reproduction steps here as I carried them out, and
listed the platforms I used to replicate it.  The issue will most
likely occur on a different combination of platforms also.

- On Mac, create a new repository and push a master branch to a central server
- On Mac, create a branch called feature/lower_case_branch and push
this to the central server (note that 'feature' is all lower case)
- On Windows, clone the repository but stay on master, do not checkout
the feature/lower_case_branch branch
- On Windows, branch from master a branch called
Feature/upper_case_branch (note the uppercase F) and push this to
the central server
- On Mac, run git fetch and see that
remotes/origin/Feature/upper_case_branch is updated

A couple of things to note here:
1) In the git fetch output it lists the branch with an upper case 'F'
  * [new branch]      Feature/upper_case_branch ->
2) When I run git branch --all it is actually listed with a lower case 'f'

Now the problem happens when I run git fetch --prune, I get the following output
  * [new branch]      Feature/upper_case_branch ->
  x [deleted]         (none)     -> origin/feature/upper_case_branch

Note the new branch uses 'F' and the deleted branch uses 'f'.

The results of this bug seem to be
* Every time I call git fetch it thinks Feature/upper_case_branch is a
new branch (if I call 'git fetch' multiple times I always get the
[new branch] output)
* Whenever I run with --prune, git will *always* remove the branch
with a different folder name (from a case sensitive perspective) than
the one originally created on the current machine.
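For anyone who wants to poke at this themselves, the reproduction steps above can be condensed into a self-contained script, with two local clones standing in for the Mac and Windows machines (whether the prune actually misbehaves will depend on the case sensitivity of your filesystem; all paths here are throwaway):

```shell
# Two local clones play the "Mac" and "Windows" machines; central.git
# plays the central server.
set -e
tmp=$(mktemp -d)

git init --bare "$tmp/central.git"
git -C "$tmp/central.git" symbolic-ref HEAD refs/heads/master

# "Mac": push master and feature/lower_case_branch (lowercase f)
git clone -q "$tmp/central.git" "$tmp/mac" 2>/dev/null
git -C "$tmp/mac" symbolic-ref HEAD refs/heads/master
git -C "$tmp/mac" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m initial
git -C "$tmp/mac" push -q origin master
git -C "$tmp/mac" checkout -q -b feature/lower_case_branch
git -C "$tmp/mac" push -q origin feature/lower_case_branch

# "Windows": branch from master with an uppercase F and push
git clone -q "$tmp/central.git" "$tmp/windows"
git -C "$tmp/windows" checkout -q -b Feature/upper_case_branch master
git -C "$tmp/windows" push -q origin Feature/upper_case_branch

# Back on the "Mac": fetch, then fetch --prune, and compare the output
git -C "$tmp/mac" fetch
git -C "$tmp/mac" fetch --prune
```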

I’ve yet to receive a response as to whether this is an actual bug (it certainly looks like it) or expected behaviour, but it caused quite a bit of running around trying to find a solution (I originally thought it was a SourceTree bug).  Since our branches are extremely transient, we use --prune a lot, so not being able to use it would have caused quite a few issues.

Luckily it can be worked around by calling ‘git fetch --prune’ followed directly by ‘git fetch’, and depending on what tool you’re using, adding this as a custom step is usually pretty easy.
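If you’re working from the command line, one way to package the workaround is a git alias that runs both steps in one go (the alias name here is my own invention):

```shell
# Prune, then immediately re-fetch so the case-confused branch comes back.
git config --global alias.pfetch '!git fetch --prune && git fetch'
```

After that, `git pfetch` replaces the two manual commands.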

Here’s the link to the list thread if you want to follow it.

Setting up Git on ReviewBoard

I’ve recently made the switch to Git and started using it on a couple of projects at work.  One of the important things I needed was to get automatic generation of reviews working on ReviewBoard for our Git server, and I was in luck because it’s pretty simple to do.

I’m posting this here both for a reminder to me should I need to do it again and in case anyone trips over on a couple of steps that are not highlighted as clearly in the documentation.


The Git Server

ReviewBoard works best if you have a primary Git server (we’re using Gitolite at the moment) which most people clone from and push their changes to, so using this with any GitHub projects you have won’t be a problem.  It’s against the content of this server that the reviews will be built.  I went along the path of having a local clone of the repository on the ReviewBoard server (more on that later), so for now it’s simply a case of cloning your repository onto the ReviewBoard server machine, somewhere the user running the ReviewBoard server can access it.


The ReviewBoard Repository

Once you have a local clone, you can start setting up your repository in ReviewBoard.

Hosting Service: Custom

Repository Type: Git (obviously!)

Path: This needs to be the absolute path of the Git repository on the ReviewBoard server machine, not your local machine.  In my case it was simply ‘/home/administrator/git-repositories/repo_name/.git’.  Since we have a number of Git repositories on ReviewBoard they all get put in the same git-repositories folder so it’s easy to set them up.

Mirror Path: This is the URL of the Git repository you cloned from.  To find this, simply run the following git command and copy the address from the Fetch URL line.

git remote show origin

My Mirror Path (because we’re using SSH over Gitolite) is something like git@git-server:repo_name.

Save your repository and that’s done.


Doing Your First Review

Now you can start on your first review to see if everything is set up correctly…  One thing to note is that a review will only be generated based on what you have committed to your local repository.  So if you have unstaged local modifications they won’t be picked up.

So, modify your code and commit.

When using Post Review (you are using Post Review, right?) creating a review is easy – simply call the following from the root of your Git repository (you can make it even easier by adding this to the right-click context menu in Windows)

post-review --guess-summary --guess-description --server http://rb-server -o

If all has gone well, the review should pop up in the browser of your choice ready to be published.


Doing Your Next Review?

This will work fine until you push what you’ve been committing.  When you next commit and try to generate a review, you’ll start to get rather cryptic errors…

The problem is that the repository sitting on the ReviewBoard server is still in the same state it was when you first cloned it, as the content you pushed hasn’t been pulled, and the ReviewBoard server doesn’t check whether anything on the git server has changed.  So we need to make sure the RB server is keeping its local copies up to date.

It’s a shame this isn’t built into the Review Process to be honest, but I can understand why, so we simply need to do the work for it.

All I’ve done is create a simple Ruby script which spins through the repositories in ‘/home/administrator/git-repositories’ and polls whether anything needs to be updated.  If it does, it does a pull; if it doesn’t, it moves onto the next one.
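The Ruby script itself isn’t included here, but the idea can be sketched in shell – treat this as an illustration of the approach rather than the actual script (the repositories path matches the one above):

```shell
# Walk every clone under the repositories folder, refresh its
# remote-tracking branches, and fast-forward any clone that's behind.
update_repos() {
  for repo in "$1"/*/; do
    # refresh remote-tracking branches; skip anything that isn't a clone
    git -C "$repo" remote update >/dev/null 2>&1 || continue
    # pull only when the upstream has commits this clone doesn't
    if [ -n "$(git -C "$repo" rev-list HEAD..@{u} 2>/dev/null)" ]; then
      git -C "$repo" pull -q --ff-only
    fi
  done
}

update_repos /home/administrator/git-repositories
```

Run it from cron (or whatever scheduler fits your setup) and the RB server’s clones stay fresh without anyone touching them.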

So as an example, just manually update the repository on the RB server and try to post another review.  This time it’ll work flawlessly, and you just need to set up a way of updating the repositories that fits in with whatever system you’re using.


Creating Reviews Using Post Review

In the above examples we used the following command to create a review, which pulled out all recent commits and generated a single review from them

post-review --guess-summary --guess-description --server http://rb-server -o

But there are other ways to generate reviews.

The following will create a review containing only the last commit you made

post-review --guess-summary --guess-description --server http://rb-server -o --parent=HEAD^

This one allows you to create a review using the last [n] commits you made

post-review --guess-summary --guess-description --server http://rb-server -o --parent=HEAD~[n]