European Court of Justice rules on the protection of live internet broadcasts

Posted on March 27th, 2015

The Court of Justice of the European Union (the CJEU) yesterday issued a judgment in relation to rights in live internet broadcasts (C More Entertainment AB v Linus Sandberg C‑279/13).

Impact of the Ruling

The ruling clarifies that:

  1. Across all EU Member States, broadcasters have exclusive rights to control “on-demand” transmissions of their broadcasts.
  2. There is nothing at an EU Directive level that prevents Member States from protecting broadcasters’ rights in “live-stream” broadcasts, and different levels of protection may apply to live-stream broadcasts in different Member States.



C More Entertainment is a pay TV station that broadcasts live ice hockey matches on its website. The live-stream sits behind a paywall and users pay a per-match viewing fee. The defendant, Mr Sandberg, placed links on his own website that allowed users to circumvent the paywall and watch the hockey matches live (as broadcast) from C More’s website for free.

C More contacted Mr Sandberg and asked him to remove the links. Mr Sandberg refused. C More then put in place technical measures to prevent access to that broadcast via the links and took action in the Swedish courts against Mr Sandberg, stating that the placing of the links constituted an infringement of C More’s rights.

In 2010, a Swedish District Court found Mr Sandberg guilty of copyright infringement. He was fined and ordered to pay damages and interest to C More. The case went all the way to the Swedish Supreme Court, which asked the CJEU to decide whether Swedish national copyright law was consistent with EU Directives.

The Issues

Copyright laws across the EU have been partially harmonised through various European Directives, but differences remain.

Under the Copyright in the Information Society Directive 2001/29 (the “Directive”) Member States must give:

  1. Copyright owners an exclusive right to authorise or prohibit any “communication to the public” of their copyright works. “Communication” includes live-streaming; and
  2. Broadcasters an exclusive right to authorise or prohibit the availability of their “on demand” broadcasts.


Swedish copyright law gives broadcasters wider rights than those prescribed by the Directive, protecting forms of transmission other than “on-demand” broadcasts. The CJEU therefore had to consider whether Sweden was permitted to give these wider rights to broadcasters, or whether the Directive prevented Member States from doing so.

Relying on another Directive (the EU Rental and Lending Directive 2006/115), the CJEU stated that Member States should indeed be able to give broadcasters the right to authorise or prohibit any communication (not just “on-demand” communications) to the public of a broadcast transmission. The CJEU stressed however, that such protection could only be awarded provided that it did not undermine the protection of copyright. This ensures that the overarching rights granted to the copyright holder reign supreme.


Previous rulings from the CJEU (in Svensson, Bestwater and TV CatchUp) have explored what amounts to a communication “to the public” in the context of hyperlinks, embedded content and retransmissions of broadcasts. The CJEU did not consider this further in its decision in C More, as the related questions were withdrawn. That is a shame, as a clear ruling from the CJEU that directly addresses linking to content behind a paywall would have been helpful.

The ruling highlights the fact that copyright laws across the EU are partly (not wholly) harmonised and that a principal objective of the Directive is to harmonise copyright and related rights, but only as far as is necessary for the smooth functioning of the internal market. National differences that do not adversely affect the internal market will not be open to challenge. This is an interesting issue at a time when the European Commission is driving through proposals to reform copyright laws across the EU to achieve a Digital Single Market. Readers of this blog may be interested in a Fieldfisher paper on the future of copyright in the EU with views and opinions from the industry, as well as reaction to the EU’s Digital Single Market agenda.



Developments in Digital Currency: will the Government or the Bank of England step-in?

Posted on March 25th, 2015


After a brief explanation of what digital currency is, this blog will summarise some of the key risks and benefits currently associated with digital currency. It is a combination of these factors that has prompted the Government, and possibly the Bank of England, to intervene in the digital currency market, with the aim of encouraging its growth and protecting users against some of the inherent dangers.

What is digital currency?

A digital currency is an internet based payment scheme that incorporates a decentralised payment system and a related currency. Digital currency exhibits properties similar to physical currencies but allows for instantaneous transactions and borderless transfer of ownership.

Perhaps the best known example of a digital currency is Bitcoin, which came to prominence in late 2013 when the price of a Bitcoin briefly soared to $1230, from around $10 only 18 months previously. Bitcoin is a peer-to-peer system that allows users to transact directly. These transactions are verified and recorded in a public ledger (known as the block chain). Entries are verified and recorded into the block chain by users (miners) who offer their computing power to verify transactions (in exchange for payment in Bitcoins). 
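The hash-linking at the heart of a block chain can be sketched in a few lines of Python. This is an illustrative toy, not Bitcoin’s actual protocol (there is no proof-of-work and no peer-to-peer network, and the `block_hash`, `append_block` and `verify_chain` helpers are invented for this sketch), but it shows why tampering with a recorded transaction is detectable: each block’s hash depends on the block before it.

```python
import hashlib
import json

def block_hash(prev_hash, transactions):
    """Hash the previous block's hash together with this block's transactions."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transactions):
    """Add a block that links back to the current head of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev_hash, "txs": transactions,
                  "hash": block_hash(prev_hash, transactions)})

def verify_chain(chain):
    """Re-derive every hash; any tampered block breaks the link."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash or block["hash"] != block_hash(prev_hash, block["txs"]):
            return False
        prev_hash = block["hash"]
    return True

ledger = []
append_block(ledger, ["Alice pays Bob 1 BTC"])
append_block(ledger, ["Bob pays Carol 0.5 BTC"])
print(verify_chain(ledger))                        # True
ledger[0]["txs"] = ["Alice pays Mallory 1 BTC"]    # rewrite history
print(verify_chain(ledger))                        # False
```

Rewriting an old transaction changes that block’s hash, so every later block’s “prev” link no longer matches; in Bitcoin, proof-of-work makes recomputing all of those links prohibitively expensive.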

What are its potential benefits?

In a digital currency transaction the parties may benefit from fast (or even instantaneous) verification and settlement of the payment. The speed of a digital currency transaction is not affected by the geographical location of the payer or payee. The global reach of digital currency is another of its advantages: it allows parties to conduct borderless international transactions with greater simplicity, as foreign exchange issues fall away. This of course means that there are no foreign-exchange cost implications, and the other costs of a digital currency transaction are also normally low. With a digital currency payment there aren’t usually any account-holding fees, and transaction fees are either not applied or are comparatively low. Finally, the technology used by digital currencies such as Bitcoin to verify transactions has (so far) proved remarkably reliable and secure.

What are the risks?

There are currently numerous risks relating to digital currency. Given its lack of transparency and lack of regulation, use of digital currency is susceptible to fraud, and digital wallets are open to hacking. To compound these issues, there are currently no compensation mechanisms in place in the event that a counterparty does not meet its obligations (including in cases of fraud or bankruptcy).

Beyond the criminal undertone and the lack of protection, the most serious drawback of digital currency is its potentially highly volatile exchange rate. Rapid fluctuations in value will inevitably put businesses and consumers off transacting in digital currency.

What are the Government and the Bank of England doing?

Back in November 2014 the Government sent out a call for information on the use of digital currency, the stated aim being to make Britain a global centre of financial innovation. To that end, the Government wanted to better understand digital currencies as a payment method (rather than a speculative investment); this information has now been gathered and assessed.

The Government has recently confirmed it intends to apply anti-money laundering regulation to digital currency exchanges, with the aim of supporting innovation and preventing criminal use. A full consultation is proposed in the next Parliament and it will include trying to ensure that law enforcement bodies have the correct skills, tools and legislation to be able to tackle criminal activity. The Government also wants to develop a best practice standards framework, by working with the BSI (British Standards Institution). Finally, the Government is launching a new research initiative to address research opportunities and the challenges facing digital currency technology; funding in this area is increasing by £10 million.

Meanwhile the Bank of England, at the end of February, announced it had begun considering the issue of introducing its own digital currency. The Bank said: “While existing private digital currencies have economic flaws which make them volatile, the distributed ledger technology that their payment systems rely on may have considerable promise. This raises the question of whether central banks should themselves make use of such technology to issue digital currencies.”

What to expect next?

For most, the transition to regular use of digital currency is unlikely to be a quick one; there is currently only around £60m of Bitcoin circulating in the UK. However, the technology that underpins the security of Bitcoin (along with its low transaction costs and global reach) is what has attracted the Government and the Bank of England to consider the digital currency industry further.

Given the obvious potential benefits associated with the use of digital currency, a bank-backed digital currency, allied with the Government’s promise of regulation and legislation, could provide the legitimacy and confidence that may currently be lacking and be the catalyst that accelerates the use of digital currency into the mainstream. Ultimately, it seems inevitable that digital currency will become increasingly important to businesses and consumers in the coming years.



US and European moves to foster pro-active cybersecurity threat collaboration

Posted on March 12th, 2015

On our sister “privacy and information law blog” we’ve just posted a piece on the recently publicised proposals to share cybersecurity threat information within the United States.  In the same piece we also draw analogies with a similar initiative under the EU Cybersecurity Directive aimed at boosting security protections for critical infrastructure and enhancing information sharing around incidents that may impact that infrastructure within the EU.

Both of these mechanisms reflect a fully-formed ambition to see greater cybersecurity across the private sector. Whilst the approaches taken vary, both the EU and US wish to drive similar outcomes. Actors in the market are being asked to “up” their game. Cyber-crimes and cyber-threats are impacting companies financially, operationally and, at times, are having a detrimental impact on individuals and their privacy.

Technology enthusiasts will be interested!

Mark Webber, Partner, Palo Alto


Forget London Bridge: the Tower of SIAM is falling down

Posted on March 12th, 2015

There is a new tune being whistled on the streets of Westminster, and it is sounding the death knell for the “Tower” model of running IT services in Whitehall.

First highlighted by Government digital chief Mike Bracken in September of last year, a recent post by Alex Holmes (Deputy Director of OCTO) on the Government technology blog has made it emphatically clear that, moving forward, the Tower model is “not condoned and not in line with Government policy”. This may come as a surprise to many in the industry; however, the Government had stressed that the Tower model was never designed to be a long-term technology strategy.

In essence, under a Tower model large-scale IT contracts are broken down into component services, with each individual service provided by a different supplier. The resulting separate contracts are then re-integrated into a coherent whole by either an in-house team or a third-party service integrator. In the public sector, this process is usually co-ordinated by a Services Integration and Management (“SIAM”) Contractor, who will manage the delivery of the services by the Tower contractors and act as the customer’s agent in relation to any commercial issues that occur across the various Towers.

In his post, Holmes criticises this type of arrangement for creating “a situation where the customer buys a number of incompatible parts and then asks a SIAM provider to put them together and make it work“. He states that the Tower model arose in the public sector due to a procurement-led solution in response to the termination of large legacy contracts, but that it has ultimately resulted in an unsatisfactory and incohesive piecemeal outsourcing of Government IT. He stresses that the Tower model has failed due to a lack of consideration of which services are required by the user and how they fit together and he emphasises that additional difficulties occur as a result of the outsourcing of service accountability, architecture and management to a third party.

To rectify these issues, the Government is keen to utilise multi-sourcing and to break down its requirements into a number of different contracts to be let to smaller, specialist suppliers rather than big suppliers. To achieve this end, the Government has also developed its own framework, the G-Cloud, through which it aims to reflect user needs, rather than relying on a “one size fits all” methodology. The Home Office has embraced this approach in its move away from two large outsourcing contracts (IPIDS with Atos and IT2000 with Fujitsu).

So far, multi-sourcing programmes used in the Cabinet Office, DCMS and Crown Commercial Service have shown 40% savings and have been described by Holmes as “transform[ing] how people work, quickly“. Reflecting the trend towards “on demand” utility computing, the G-Cloud also offers a framework for Government to purchase services as and when it needs them, rather than tying Government departments into long-term, inflexible contracts. This system should serve to reduce costs and inefficiencies by ensuring that outsourcing contracts can accurately reflect the needs of the user and allows the flexibility of entering into new contracts to match evolving needs.

Finally, the Government hopes that by moving away from mega-outsourcing deals with big suppliers, it will encourage more SMEs to bid for contracts. This should serve to increase competition and further reduce Government IT spend. However, if the recent response posted by TechUK is anything to go by, it is clear that many big IT suppliers are likely to put up a fight in response to these changes.


Should you have favourites? Recent net neutrality developments

Posted on March 11th, 2015

After a couple of years of intense public interest, the start of 2015 has seen progress for the concept of net neutrality on both sides of the pond. Almost as soon as the Federal Communications Commission voted to pass new rules for the United States, the EU started to think harder about the issue of net neutrality (and may be heading in a slightly different direction as a result).

Whether this is a good or bad thing depends on what principles you hold dear in relation to the Internet. Rarely has there been such fervent and passionate debate.

Whether all information should be treated equally and what “equal” means

Putting aside the arguments for and against, the concept of net neutrality is simple. It is predicated on a principle that those who facilitate internet access should enable access to all content, applications and services regardless of the source, and without favouring or blocking particular products or websites. Some argue that this is the Internet’s “guiding principle” and advocate that an open Internet is essential to protect free speech and a balanced access to ideas (without the influence of power and corporate clout). Others see this concept as archaic (the non-discrimination concept being applied originally in a 1934 Act aimed at facilitating radio access).

Some of the arguments for and against (minus the politics) are laid out in this great infographic from Chris McElroy.

In the US

In May 2014, FCC Chairman Tom Wheeler released a plan that would have permitted large US internet service providers (such as Verizon or Comcast) to actively discriminate between online traffic on the Internet. He had floated the idea of creating pay-to-play “fast lanes”, a way for premium services to prioritise access to certain chosen content. An account favouring the “open Internet” arguments against prioritisation can be found here. Of course there are also those who have an alternative view of the need for net neutrality, and some of those arguments are well presented by Joshua Steimle in his Forbes piece “Am I The Only Techie Against Net Neutrality?“.

After much campaigning the option of “fast lanes” on the Internet died. On February 26th, the Federal Communications Commission voted in favour of new rules on how the Internet should be governed. This much publicised step, which was the focus of considerable lobbying, was seen as a victory for advocates of net neutrality.

The headline changes within these new rules are as follows:

  1. Blocking is banned: broadband providers may not block access to lawful content, applications, services or non-harmful devices
  2. Throttling will be prohibited: Broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices
  3. “Paid prioritisation” is prohibited – so it will not be possible for service providers to strike deals with content firms to prefer certain traffic moving over networks


All this will be achieved by reclassifying broadband access under the telecoms rules. Reclassifying broadband internet access services (including cellular) under Title II of the US Communications Act (as amended by the Telecommunications Act of 1996) means broadband is treated as a “telecommunications service”, effectively regulated as a public utility and subject to enhanced regulatory controls.

President Obama was vocally supportive of the net neutrality proposals as they went to the vote, and the rules will take effect 60 days after publication in the Federal Register. Opponents fear new taxes and fees for users resulting from a need for increased operational procedures. Those in favour argue many protections against such steps already exist and believe the debate to be one of fundamental freedoms. The fight may not be over, with Republican Presidential hopefuls such as Jeb Bush just today coming out against the plans and certain broadband service providers promising to fight the new rules in the US courts. Many agree that there may well be more lawsuits and law changes before the US battle for net neutrality is settled.

In Europe

The other side of the Atlantic may also see similar net neutrality developments: today, across the entire EU, only the Netherlands has a net neutrality law in force. Net neutrality is just one of the issues due to be addressed in the so-called “Telecoms Package”, which was re-awakened this month in a number of announcements around the draft telecoms regulation.

In the European Union, for this Telecoms Package to become law, a few procedural steps (in a process called a “trialogue”) need to take place. In this process, the current presidency (held by Latvia) will have to negotiate the terms of the proposed regulation with the European Parliament and the European Commission (represented by the new Digital Single Market Commissioner Günther Oettinger) on behalf of the Council of the European Union. In order to be adopted, the legal act resulting from these discussions must be voted upon by both the European Parliament and the Council. As a regulation, it would automatically become law in each EU Member State.

The European Parliament adopted its position (first-reading amendments) in April 2014 and, on March 4th this year, the Latvian Presidency was given a mandate to advance proposals on both roaming and the “open internet” (or net neutrality) in trialogue.

According to the March press release, the Presidency’s mandate to negotiate the new regulation covers:

  1. EU-wide rules on open internet, safeguarding end-users’ rights and ensuring non-discriminatory treatment in the provision of internet access services
  2. changes to the current roaming regulation (known as Roaming III), representing an intermediate step towards “phasing out roaming fees” [within Europe].


If passed, the new regulation (on the Telecoms Single Market) could apply from 30 June 2016.

In relation to net neutrality, the draft regulation is to enshrine the principle of end-users’ right to access and distribute content of their choice on the Internet. It also sets out to ensure that companies that provide Internet access treat traffic in a “non-discriminatory manner”. The press release explains that the proposed regulation “sets common rules on traffic management, so that the internet can continue to function, grow and innovate without becoming congested. Blocking or slowing down specific content or applications will be prohibited, with only a limited number of exceptions and only for as long as it is necessary. For instance, customers may request their operator to block spam. Blocking could also be necessary to prevent cyber attacks through rapidly spreading malware.”

Again, as our very own net neutrality debate emerges in Europe, another storm is brewing. The original proposals appear to have been watered down, and activists and commentators were quick to point to flaws in the principles when it comes to a truly open internet. In fact, on the same day, a number of European Parliament MEPs wrote an open letter calling for better definition of the European net neutrality rules, which they believe to be of “vital importance not only for the Telecoms Single Market, but also for the Digital Single Market“.

Why the controversy? As reported here, “[n]ot only does the proposal enable the creation of slow and fast lanes by allowing paid prioritization and discriminatory practices such as “zero-rating” schemes, but the proposal also introduces loopholes that could authorise the blocking by Internet Service Providers of legal content, contradicting the EU Charter of Fundamental Rights.” Unlike the US, depending on the results of the trialogue, the EU may end up with rules which contradict a purist view of net neutrality and permit certain traffic management practices. If there are to be provisions explicitly permitting specialised services “other than internet access services” to be prioritised where high-quality access is needed, what may these services include? Who or what may benefit from prioritisation?

As with any future regulation, if you’re not trying to influence the outcome, it can be a mistake to get lost in the detail and the “what ifs?” too early. That said, the current vague and broad references to “traffic management” within the European proposals may test the net neutrality concept of enabling access to all content. The battle lines are drawn, and in both the US and Europe we may not yet have a full view of where the net neutrality debate will land.

Mark Webber – Partner, Palo Alto California


An Introduction to SIAM

Posted on March 4th, 2015

Improving user experience is driving the need for IT to play a critical role in delivering business outcomes. As a result, and to ensure end-to-end delivery across the business value chain, businesses need to look at ways to ensure that business needs and IT work together. In order to do this, organisations need to work, act and think smarter. There are a number of ways to do this, including establishing a discrete Service Integration and Management (SIAM) function to integrate IT services.

SIAM is not new and there is no “one size fits all” approach. Indeed, for a number of years the UK Government has been looking at, or tried to establish, SIAM functions. Some have been successful (e.g., FCO) and some have not (e.g., MOJ). There has been less emphasis on the use of SIAM by private organisations, but I think this will change over time as businesses realise that they don’t have (amongst other things) the internal skill set or resources to provide the IT needed to deliver business outcomes in a cost-effective way.

Over the next few months, I will be publishing a series of blogs on SIAM which will cover the following (and other) common questions:

1. What is SIAM, what is its role, and how does it differ from the traditional outsourcing model?

2. What are the advantages and disadvantages of SIAM?

3. What are the main risks and how can these risks be mitigated?

4. How can I ensure that I implement an effective and successful SIAM function?

This first blog will address the first of these four questions.

What is SIAM?

In its shortest and simplest terms, SIAM is a way to manage multiple ICT suppliers/services and integrate them to provide a single business-facing ICT organisation.

If, as you do, you Google “service integration and management” (unfortunately, Googling “SIAM” brings up lots of information on the Society of Industrial and Applied Mathematics!), it will throw up several definitions, including:

“Service integration and management lets an organisation manage the service providers in a consistent and efficient way, making sure that performance across a portfolio of multi-sourced goods and services meets user needs”;

“an approach to managing multiple suppliers of information technology services and integrating them to provide a single business-facing IT organisation” (Wikipedia); and

“a tower based IT service delivery model that is being rolled out across Government and within some private sector organisations”.

You’ll also be presented with lots of colourful diagrams.

Although these definitions and diagrams are helpful, it is up to the business to ensure that it has a clear definition of what SIAM means to it. In addition, the business will still need to have its own Intelligent Client Function (ICF) as there will be a number of functions that will have to be retained (e.g., enterprise architecture, security/regulatory compliance and overall governance of the SIAM).

The Role of SIAM v traditional outsourcing

Essentially, the SIAM’s role is to maximise the performance of end-to-end services to the business in the most cost effective way – ensuring that the services and suppliers work and collaborate together, and providing a robust service desk.

Compared to the traditional “single supplier” outsourcing model, SIAM gives businesses the flexibility to multi-source their IT services without being reliant on one organisation or a prime contractor that doesn’t truly offer value for money, offers little flexibility and doesn’t enable the business to take advantage of new technologies.

As well as service management, and implementing and managing ITIL (the two elements of SIAM), the SIAM will typically also be responsible for continuous improvement, innovation, transition, managing change, and responding to the needs of the business.

If you are thinking of establishing a SIAM function and would like to discuss the legal and commercial issues in more detail (ahead of my future blogs on this topic), please contact me.



Progress update on the draft EU Cybersecurity Directive

Posted on February 27th, 2015

We have just posted a piece on our Privacy and Information Law Blog concerning the draft EU Cybersecurity Directive.  It contains an update of the progress so far and the proposed amendments that are currently being debated by the EU institutions.

Happy reading



White Spaces to change colour

Posted on February 24th, 2015

Following its earlier consultation reported on in this blog, Ofcom has now published its decision to allow licence-exempt access to the unused parts of the radio spectrum in the 470 MHz to 790 MHz frequency band – that currently used by Digital Terrestrial Television (“DTT“) and Programme Making and Special Events (“PMSE“).

It is intended that access will be controlled by designated white space databases, which will store information on the location of DTT and PMSE users to avoid harmful interference with these pre-existing users. The technology has already been trialled as part of a pilot, which demonstrated use cases including land-ferry broadband; digital signage; live video feeds (of animals in London Zoo); and flood detection. To date, no harmful interference has been reported.

Each device must be either a ‘Master Device’, which communicates with the database designating white space, or a ‘Slave Device’, which transmits under the control of a Master Device. Spectrum sharing will therefore be dynamic, making the most efficient use of the spectrum available in the area.

A European harmonised standard has been prepared for white space devices (EN 301 598), and devices compliant with that standard will also comply with the UK regime.

The draft Wireless Telegraphy (White Space Devices) (Exemption) Regulations 2015 proposed by Ofcom set out a general exemption permitting the establishment, installation and use of white space devices, provided that the device: transmits on frequencies designated as white space within the 470 MHz to 790 MHz frequency band; is not used airborne and does not interfere with any wireless telegraphy; and does not allow the user to alter the technical/operational settings in a way which would affect its device parameters or its operation within the operational parameters. Note that there are additional particular requirements specified for Master and Slave Devices.

The device parameters include information such as (i) whether it is a Master or Slave Device; (ii) its unique identifier; (iii) type of device; and (iv) geolocation data. The operational parameters include (i) boundaries within which transmissions are made; (ii) spectral density; (iii) limits on channel usage; and (iv) time and geographic area within which parameters are valid.
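As an illustration of how a white space database lookup might fit these parameters together, the sketch below pairs them in a toy query function. All field names and values here (`channels_mhz`, `max_eirp_dbm`, the grid-based location check) are invented for this example; the real designated databases use their own schemas and far more sophisticated interference modelling.

```python
# Hypothetical white space database (WSDB) lookup: a Master Device reports
# its device parameters and receives operational parameters in return.
def query_wsdb(database, device):
    """Return operational parameters if the device's reported location is
    clear of protected DTT/PMSE users; otherwise deny access."""
    loc = device["geolocation"]
    for protected in database["protected_users"]:
        if protected["area"] == loc:              # crude co-location check
            return {"access": "denied", "reason": "protected user nearby"}
    return {
        "access": "granted",
        "channels_mhz": [(470, 478)],             # designated white space
        "max_eirp_dbm": 20,                       # spectral density limit
        "valid_minutes": 120,                     # how long the grant is valid
    }

wsdb = {"protected_users": [{"type": "PMSE", "area": "grid-42"}]}

master = {"device_id": "WSD-0001", "type": "master", "geolocation": "grid-17"}
print(query_wsdb(wsdb, master)["access"])                # granted

slave_near_pmse = {"device_id": "WSD-0002", "type": "slave", "geolocation": "grid-42"}
print(query_wsdb(wsdb, slave_near_pmse)["access"])       # denied
```

The key design point survives the simplification: the device never chooses its own channel or power; it transmits only on the terms the database grants, and only while those terms remain valid.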

Ofcom intends that the new technology will be available before the end of this year.



UK Government to invest £120m in the tech sector

Posted on February 17th, 2015

On 16 February the Government’s Technology Strategy Board, Innovate UK, published its digital economy strategy, showing how £120 million of support for business innovation will be provided over the next 4 years.

According to the report, the global digital services market will be worth as much as the entire UK economy by 2020, and the digital economy strategy aims to keep the UK at the forefront of digital innovation.

£15 million per year is earmarked for innovative business projects and £15 million per year will be divided between the Digital Catapult centre, the Open Data Institute and Tech City UK.

The strategy has 5 key objectives:

  1. encouraging digital innovators to develop their ideas and establish businesses
  2. championing approaches focused on users of digital technology
  3. equipping innovators with the right technical and business expertise
  4. growing infrastructure, platforms and ecosystems
  5. ensuring the digital economy is sustainable



Ten steps to successful Business Process Outsourcing

Posted on February 16th, 2015

First published in Professional Outsourcing magazine

My firm closes £5-10 billion of outsourcing work every year. Spanning a vast range of transactions from central government shared services to Fortune 500 offshoring, each deal poses its own unique problems.

Tackling the issues requires a team with a unique mix of skills. I have glowing admiration for the many truly professional sales and procurement, project leads and operational experts, security folk and HR advisers I have worked with over the years. Collaborating with them has taught me a lot. And in my role as a lawyer on these deals, I have been fortunate to see every part of the process from strategy through procurement, from change to exit.

The challenge is to manage complexity and risk in a way which results in efficiency and simplicity.

Here are my top ten thoughts on how to make this possible.

1.  Focus on the business drivers

It’s an obvious place to start, but the outsourcing process is complex in nature and that can sometimes obscure the reasons to do the deal. It is vital to identify early on the likely benefits of the project and then strongly focus on key drivers. This focus must not stop at the business case stage. Required benefits must be measured throughout RFI and RFP evaluation and through to preferred bidder and the contract terms. It is just as important to continue measuring performance against key drivers over the lifetime of the outsourcing.

2.  It’s all about the people…

If people are important to ITO, they are the lifeblood of BPO. From each individual in the delivery team to senior management team governance, it is important to understand the recruitment and selection of people and the performance required of them. You may also need to consider how people are incentivised and how to guard against staff attrition.

Because BPO is about people-managed processes (perhaps based around an ERP system or platform), care needs to be taken over recruitment criteria, training and ramp-up. Take a look at the Group 4 debacle over the London Olympics security guards contract if you want to reflect on the effect of ramping up too quickly and the implications for on-boarding people.

3.  There’s more to baking a cake than measuring the ingredients

There was an interesting if messy court case a few years back between Vertex and Powergen in which the judge said that while Vertex may have been meeting service levels, it was not providing the required standard of service. The case may seem a little odd, but there is a point. Service levels are a statistical sampling of a service and not a description of the holistic service output required. I might answer in three rings, escalate in 10 minutes and close the call in 30 minutes. But what does this tell you about quality of delivery?

It is possible to design more quality-driven measures of success, such as whether a process is completed error-free; the richness of functionality produced in an Agile sprint; or the time and cost to on-board a call centre operative. However, SLAs are not the entire answer.
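The gap between statistical service levels and holistic quality can be shown in a few lines. This is a hypothetical illustration with invented call data and thresholds, not a real measurement scheme:

```python
# Hypothetical call-centre data: (rings to answer, minutes to close, resolved correctly?)
calls = [
    (2, 25, False),   # answered fast, closed fast -- but the issue wasn't fixed
    (3, 28, False),
    (3, 20, True),
    (2, 29, False),
]

# Conventional SLA check: answer within 3 rings, close within 30 minutes.
sla_met = all(rings <= 3 and close <= 30 for rings, close, _ in calls)

# Quality-driven measure: first-time resolution rate.
first_time_fix = sum(ok for *_, ok in calls) / len(calls)

print(sla_met)         # True  -- every SLA target hit
print(first_time_fix)  # 0.25  -- yet most callers didn't get their problem solved
```

Every statistical target is met while three of four callers leave with their problem unsolved, which is precisely the Vertex/Powergen point.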

It is key to closely document the processes being outsourced so it is clear what is to be achieved. Equally, service descriptions should be backed up by standards, policies and procedures to ensure corporate standards are met. The contract itself can also include fall-back protections such as warranties on quality of personnel and work, corrective plans or indemnities for loss of data.

The point is to look at how service delivery is documented holistically and across the contractual documentation and processes.

4.  It’s your reputation on the line

Frequently, BPO staff will be hired and deployed exclusively for one customer. Often it is attractive to connect the BPO activity to the customer’s brand. When the services are externally focussed, such as customer service desks, order fulfilment or payment processing, the brand is even more present. BPO is often highly visible and can expose the brand of the customer and service provider in a more prominent way than other forms of outsourcing.

How well will your reputation be protected if there is a service failure? It may sometimes be that the lid can’t be kept on some problems. For example, security breaches may need reporting to the regulators (and, more and more frequently, to business customers under contract). For public bodies, there are limits to how much they are able to deal with confidentially once ministers or councillors step in or the press uses Freedom of Information to find a story.

Customer and supplier may need to think through the communications implications internally, to the press and to consumers of the customer’s services. They will also need to tightly control management of any crisis or issues.

Because reputation management is important in BPO, communications and remediation plans, escalation and stakeholder management may need to be developed further than in a typical ITO.

5.  Price and value are different propositions

Most contracts rightly focus on pinning down pricing. It is a valuable exercise to ensure fixed and activity-based charging components are well understood, as well as how changes or termination payments will be costed.
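The interplay of fixed and activity-based components is simple arithmetic, sketched below with invented figures purely for illustration:

```python
# Hypothetical charging model (all figures invented for the sketch).
fixed_monthly_fee = 50_000.00   # fixed component: covers the core service team
price_per_transaction = 1.25    # activity-based component

def monthly_charge(transactions: int) -> float:
    """Total monthly charge for a given transaction volume."""
    return fixed_monthly_fee + price_per_transaction * transactions

print(monthly_charge(40_000))   # 100000.0
print(monthly_charge(0))        # 50000.0 -- the fixed fee is payable even at zero volume
```

Even this trivial model makes the contractual questions visible: what happens to the fixed fee on partial termination, and how volume bands or changes reprice the activity component.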

However, each element of the work must produce a business result and the production of value is very different from the recovery of cost and a margin. In many cases, value delivered and not just effort expended must be captured.

For some elements of BPO, we have been developing pricing approaches to do just that. For example, in application development and management, ensuring work is right first time; or requiring sufficient functionality to trigger payment for a sprint.

As well as measuring value, some deals may focus on cost reduction. Application rationalisation, service simplification and year-on-year savings may all be deployed.

We frequently deploy standard benchmarking and continuous improvement techniques. While these are always good to include, mechanisms developed to achieve specific results for specific services are more desirable.

6.  Innovate or die

There are usually two reasons to outsource: Do it better, and do it cheaper.

Whether or not the deal will deliver on these aims depends on how developed the business model is. Many suppliers have become adept at innovation and cost-reduction mechanics, but it is not always easy to enshrine these in the deal. (Cue the dreaded 20-page “Innovation service description….”)

Innovation is plainly important, since the way customer and service provider do business is sure to evolve over the outsourcing lifecycle. So as a minimum, the parties should discuss potential innovation and examine the potential for change. This keeps the customer at the forefront of developments and stops good service providers from falling behind the curve on perceived value.

We have seen and developed innovation programmes which are clear and specific to the deal. These have been especially important in the $100 million-plus deals, where investment is high and improved service or decreased cost is key. Typical mechanisms include:

(i) Funding for agreed innovations

(ii) Technology refresh programmes

(iii) Service rationalisation plans

7.  Compliance is not optional

With many BPOs, the customer is outsourcing services which carry with them a significant compliance burden – from simple payroll requirements and the obvious data privacy implications to more complex issues such as support for Sarbanes-Oxley, financial services regulatory requirements or statutory HR processes. These compliance issues need careful thought to ensure the relevant standards are met – and to deal with the implications if they are breached.

8.  The exit is signposted

Every BPO comes to an end, and so every BPO contract needs to ensure that the customer is able to move its services back in-house or to another supplier. Yet outsourcing contracts can be thin on detail or too theoretical about handover of services.

Given it is one of the sections of the contract which will definitely be dusted down at some stage, it is best to ensure exit activity is detailed and the cost mechanisms clear.

Exit is surprisingly difficult in practice, so mechanisms to ensure the customer has vital information to hand, or accessible, at any point in time are important. For example, salvaging in-flight projects or continuing to maintain service levels can be tough without access to data, people and systems. Knowledge transfer and access to data, software and systems are also essential.

9.  Fit for the future

Organisations continue to tactically outsource individual service towers to service providers while grappling with how to manage suppliers across the organisation. There are a number of developing models which align suppliers across service towers. Their suitability depends on the depth of the intelligent client/retained organisation, maturity of model and speed of contract refresh. Some current models which might help future proof contracts include:

(i) Ensuring supplier co-operation provisions are in place, and considering whether multi-supplier governance will be needed for jointly solving issues

(ii)  Catering for Operating Level Agreements and service interfaces which allow clean hand-off of process between suppliers, and for suppliers to resolve issues between themselves before escalating to the customer

(iii) Allowing assignment of the contract by the customer should the service tower outsourced be consolidated and managed by another provider

(iv) Allowing for a service integration model to be developed, including reporting and interfacing with a service integrator.

10.  There is an x in team

There may be no “I” in team, but great outsourcing teams call on the best people across the organisation. They are multi-disciplinary, diverse in thinking and able in communication. Great outsourcing teams have a certain “x” factor which leads to success.

Customer and supplier will need good people collaborating to avoid the key pitfalls in outsourcing. There is no denying that the customer and supplier have different aims and are looking for different outcomes. But the two teams need to find a space in which they can collaborate, share frank views and build success in the delivery of services. Because outsourcing is ultimately a collaborative process in seeking joint solutions, it is the “x” in team which makes the most significant difference to the long term success of BPO programmes.