There is always a huge spike in bhf.org.uk traffic on the day that standard places are released for the British Heart Foundation London to Brighton Bike Ride. We first switched to online applications in 2009, and each year since we have learnt something new and tried different things to improve the process for our supporters.
In 2011 we tried using a virtual queue system. Unfortunately not everything went to plan, and a large number of people were left waiting for several hours. Applications were getting through – just more slowly than expected, because the queue was being over-strict (like a jobsworth bouncer). After a few hours of this we decided to switch the queue off completely, as we felt it was better to risk the site falling over than to prolong the frustration for everyone (ourselves included). By that point the traffic had lessened too – meaning we were more confident that the site would cope (it did).
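For context, a virtual queue of this kind typically admits waiting visitors at a fixed rate; set that rate too low and applications only trickle through, which is roughly the "over-strict bouncer" behaviour described above. A minimal sketch of the admission logic (all names and numbers here are illustrative, not the actual system we used):

```javascript
// Minimal sketch of fixed-rate queue admission, the kind of logic a
// virtual waiting room uses. Names and rates are illustrative only.
function makeAdmitter(perTick) {
  const queue = [];
  return {
    join(visitor) { queue.push(visitor); },
    // Called on a timer: admit at most `perTick` waiting visitors.
    tick() { return queue.splice(0, perTick); },
    waiting() { return queue.length; }
  };
}

const admitter = makeAdmitter(2);            // deliberately strict: 2 per tick
['a', 'b', 'c', 'd', 'e'].forEach(v => admitter.join(v));
console.log(admitter.tick());                // admits ['a', 'b']
console.log(admitter.waiting());             // 3 visitors still queuing
```

If demand far outstrips the admission rate, the wait grows without bound – which is why tuning (or, in our case, switching off) that rate mattered so much on the day.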
Here are a few entirely personal thoughts about what we learned:
Communications on the day:
- Use social media and be transparent about the problems – people genuinely had more patience when they knew we were equally frustrated and working as hard as we could. My favourite thing was seeing a response which recognised the dilemma of balancing technology spend with charitable spend.
- Instant messaging was good for keeping all the teams in touch with each other – but it needs to be topped up with teleconference calls when quick decisions are needed.
- Be clear about roles for the day and stick to them, even under pressure – including who has the final say on various elements.
- There was lots of testing – but the wider stakeholder groups should have been involved earlier to ensure all scenarios were considered. Ideally, involve a professional testing agency if you can afford it – it may seem expensive at first, but it will pay off in the long run if everything goes smoothly.
- Put tracking tags on everything – we didn't tag the queue holding page, in order to keep page weight low and lighten the load on the server, but the information is far more valuable than you might think and is worth the extra page weight.
- Make sure all your relevant suppliers (web hosts, development team, credit card merchant etc) are on standby and have been engaged in designing the contingency plan, as well as the main delivery plan.
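On the tracking-tag point above: a tag doesn't have to cost much page weight. A single-pixel image beacon can be a few hundred bytes of markup. The sketch below builds such a beacon URL – the endpoint and parameter names are hypothetical, purely to show the shape of the idea:

```javascript
// Hypothetical lightweight tracking beacon for a holding page.
// The '/t.gif' endpoint and its parameters are illustrative only.
function beaconUrl(page, timestamp) {
  // The timestamp doubles as a cache-buster so every hit is recorded.
  return '/t.gif?page=' + encodeURIComponent(page) + '&ts=' + timestamp;
}

// In the browser you would fire it with:
//   new Image().src = beaconUrl('queue-holding', Date.now());
console.log(beaconUrl('queue-holding', 1234567890));
```

Even a bare-bones beacon like this would have told us how many people were on the holding page and for how long – exactly the data we wished we had afterwards.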
After the ‘storm’:
- Establish a cross team comms and evaluation group which looks at everything objectively – throwing around blame risks things being swept under the carpet and reduces the potential for learning.
- Collect as much data as possible, as quickly as possible – some of it might expire and not be available forever, especially given how quickly people forget.