There’s a moment when there’s no turning back: you’ve done as much preparation and contingency planning as possible, and the rest is down to whether a celebrity footballer does something that goes ‘mega viral’.
Soccer Aid is a biennial TV programme that raises money for UNICEF. It’s a fantastic live TV event in which celebrity footballers and celebrity non-footballers compete. In return for a great afternoon of telly, viewers are asked to spare a moment to make the world a better place for children by donating to UNICEF UK.
So what exactly does this mean if you happen to be the digital team @UNICEF_UK?
Preparations start up to a year in advance and are usually coordinated by a Soccer Aid Digital Producer contracted for the project. There are three core digital streams: marketing, social engagement and tech. I’m going to focus on tech to keep this as brief as I can for something that covers around 12 months of work!
There was one time (before me, I’m glad to say, though I have been in that situation during my career) when the worst happened: the UNICEF UK website went down during Soccer Aid.
So now we do a whole host of scaling up, streamlining and performance testing to make sure there’s a reliable web presence and donation funnel in the lead-up to, just after and, most critically, during the TV programme.
This involves optimising the main website and creating a flat (non-CMS) microsite for the highest peaks. We fine-tune the website’s application layer and database processes, increase the number of servers and use a CDN to host images and video.
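The flat microsite idea can be sketched very simply: snapshot the dynamic CMS pages as static HTML files that a plain web server (plus CDN) can serve without touching the CMS or database. The URLs, page list and output folder below are hypothetical, purely to illustrate the technique — the real setup was different.

```python
"""Sketch: pre-rendering CMS pages into a flat (non-CMS) microsite.

CMS_BASE, PAGES and OUT are illustrative assumptions, not the real
UNICEF UK configuration.
"""
import pathlib
import urllib.request

CMS_BASE = "https://www.example.org"      # hypothetical CMS origin
PAGES = ["/", "/donate", "/soccer-aid"]   # hypothetical page list
OUT = pathlib.Path("microsite")           # hypothetical output folder


def target_path(page: str) -> pathlib.Path:
    """Map a URL path to a flat index.html on disk."""
    return OUT / (page.strip("/") or "index") / "index.html"


def snapshot(page: str) -> None:
    """Fetch one CMS page and write it out as static HTML."""
    html = urllib.request.urlopen(CMS_BASE + page).read()
    path = target_path(page)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(html)


# Usage (run once at export time, before the peak):
#     for page in PAGES:
#         snapshot(page)
```

Serving pre-rendered files like this means a traffic spike hits only the static layer, which scales far more cheaply than the CMS itself.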
Last year we also channelled most of the online donations traffic to a BT MyDonate funnel, pushing the heaviest lifting outside our environment. This decoupling meant we could still serve some content if the donations funnel went down, or, vice versa, still gather donations if the website went down.
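One practical consequence of that decoupling is that each system should be monitored independently, so a failure in one is never mistaken for a failure in the other. A minimal sketch, with entirely hypothetical health-check endpoints:

```python
"""Sketch: independent health checks for a decoupled site + donation funnel.

The endpoint URLs are hypothetical; the point is only that each
system is probed on its own, so one going down doesn't mark the
other as unavailable.
"""
import urllib.error
import urllib.request

ENDPOINTS = {
    "website":   "https://www.example.org/health",     # hypothetical
    "donations": "https://donate.example.org/health",  # hypothetical
}


def check(url: str, timeout: float = 5) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def status_report(checker=check) -> dict:
    """Probe each endpoint independently (checker is injectable for tests)."""
    return {name: checker(url) for name, url in ENDPOINTS.items()}
```

A report like `{"website": False, "donations": True}` tells you content is struggling but money can still come in, which is exactly the scenario the decoupling was designed for.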
Once we had built our approach we carried out our first performance test, which identified more tweaks to make. Two further tests followed during our preparations. They not only surfaced issues to fix but also gave us a sense of what contingencies to plan for.
To help with this we had one or two team members casually browsing the website during the tests to observe the experience under heavy traffic. This meant we could begin to think about what to say to users if it happened for real.
If you’ve never done it before, it’s important to note that performance testing works best on your live site, so it’ll probably mean a few late nights to avoid affecting your regular users during the day!
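The shape of such a test can be sketched in a few lines: fire concurrent requests at the site, time each one, and look at the percentiles. The concurrency and request counts below are illustrative only — a real test would use a proper load-testing service at far higher volume.

```python
"""Sketch: a very small load test, to be run off-hours against a live site.

Worker and request counts are illustrative assumptions; real tests
used dedicated tooling at much higher volume.
"""
import concurrent.futures
import time
import urllib.request


def timed_get(url: str) -> float:
    """Return the response time of one GET request, in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    return time.perf_counter() - start


def run_load(url: str, workers: int = 20, requests: int = 200) -> list:
    """Fire `requests` GETs across `workers` threads and collect timings."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda _: timed_get(url), range(requests)))


def percentile(timings: list, pct: float) -> float:
    """Simple nearest-rank percentile of the collected timings."""
    ordered = sorted(timings)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]
```

Watching the 95th percentile rather than the average is what reveals the experience of the unluckiest users — the ones your casual browsers on the team are standing in for.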
Coupled with this preparation we also had a very detailed contingency plan. It mapped out the various possible scenarios and the actions we would take, including who would take key decisions. This was co-created with suppliers who were actively monitoring and on call through the peak moments.
The night itself was a long one. We had one of those moments: a celebrity injury which swelled (couldn’t resist the pun!) the conversation.
Fortunately the tech all went well; hitting our ‘max tweets’ threshold three times is another story…
Quick note: this is a re-post. I originally created this blog post for the Web Managers Group.