Chapter 5 - The play and a decision to act

In chapters one to four I’ve covered the basics of mapping, common economic patterns and doctrine. However, these Wardley maps of business don’t tell you what to do any more than a geographical map tells an Admiral how to win a battle. The maps are simply a guide and you have to decide what move you’re going to make, where you’re going to attack and how you navigate your ship through the choppy waters of commercial competition. In other words, you have to apply thought, decide to act and then act. In this chapter we’re going to cover my journey through this part of the strategy cycle — see figure 46.

Figure 46: The play and a decision to act

Identifying opportunity

There exist two different forms of why in business — the why of purpose (i.e. win the game) and the why of movement (i.e. move this piece over that). The why of movement is what I’m going to concentrate on here but in order to examine this we must first understand the landscape, orientate ourselves around it and then determine where to attack.

Prior to 2005, I had sat in many meetings where options were presented to me and my executive team and then we made a choice based upon financial arguments, gut feel and concepts of core. We had never used a landscape to help determine where we could attack. This was a first for us and very much a learning exercise. I’ve taken that earliest map from 2005 and highlighted on it the four areas that we considered had potential. There were many others but for the sake of introduction, I thought I’d keep it simple. These four wheres are shown in figure 47.

Figure 47: Four different wheres

Where 1 — we had an existing online photo service that was in decline but which we could concentrate on. There existed many other competitors in this space, many of which were either well financed (e.g. Ofoto) or ahead of us in terms of offering (e.g. Flickr). There were also unmet needs that we had found. As a company we had acquired many capabilities and skills, not necessarily in the online photo business as the group developed many different types of systems. We also had an internal conflict with our parent company’s online photo service which we built and operated. Whilst our photo service was open to the public, the parent company’s service was focused on its camera owners and we had to play a careful game here as our own service was sometimes considered a competitor. We had two external users (our public customers and our parent company) and though not explored in the map above, they had conflicting needs. By meeting the needs of our public consumers on the public site we could diminish the value seen by our parent company in its own version. For example, making it easier for public consumers to upload images from mobile phones did not sit well with a parent company trying to sell cameras.

Where 2 — we had anticipated that a code execution platform would become a utility (what today is called serverless). Remember, this was 2005 and long before systems such as AWS Lambda had appeared. We had ample skills in developing coding platforms but most importantly, we had also learned what not to do through various painful all-encompassing “Death Star” projects. There would be inertia to this change among product vendors, which would benefit us in our land grab. To complicate matters, many existing product customers would also have inertia and hence we would have to focus on startups, though this required marketing to reach them. There was also a potential trade-off here as any platform would ultimately be built on some form of utility infrastructure similar to our own Borg system (a private utility compute environment that we operated providing virtual machines on-demand, based on Xen) and this would reduce our capital investment. Our company had mandates from the parent to remain profitable each and every month and to keep headcount fixed, hence I had no room to expand and any investment made would have to come out of existing monthly profit despite the reserves built up in the bank. A platform play offered the potential to reduce the cost of our other systems and increase the speed of development of our other revenue generating projects, hence freeing up more valuable time until the point where the platform itself was self-sustaining.

Where 3 — we had anticipated that a utility infrastructure would appear. We had experience of doing this but we lacked any significant investment capability. I was also mindful that in some circles of the parent company we were considered a development shop on the end of a demand pipeline, and the parent was heavily engaged with an external hosting company. In this case, the parent company’s needs (many of which could be described as political) were potentially in conflict with our business needs. Unfortunately, I had painted us into this corner with my previous efforts to simply “survive”. If we made this move then in essence many of these problems were no different from those in the platform space, except the agility benefits of a platform were considered to be higher. The biggest potential challenge to us would come not from existing product vendors (e.g. server manufacturers) or rental vendors (e.g. hosting companies) but from the likes of Google entering the space. This we expected to happen in the near future and we certainly lacked the financial muscle to compete if it did. It seemed more prudent to prepare to exploit any future move they made. That said, it was an attractive option and worth considering. One fly in the ointment was the concerns that had been raised by various members of the team on issues of security and potential misuse of our systems by others. It seemed we would have our own inertia to combat due to our own past success with using products (i.e. servers), despite the existence of Borg. Fighting multiple forms of inertia and the parent company whilst competing against a likely service from Google seemed a bad deal.

Where 4 — we could instead build something novel and new based upon any utility environments (either infrastructure or platform) that appeared. We understood that using utility systems would reduce our cost of investment i.e. the gamble in the space. However, any novel thing would still be a gamble and we’d be up against many other companies. Fortunately, we were very adept at agile development and we had many crazy ideas we could pursue, generated by the regular hack days we ran. It might be a gamble in the dark but not one we should dismiss out of hand. It had the benefit of “just wait and see”: we could continue building and wait for the market to launch services we could exploit. Alas, I’m not the sort of person who wants to sit back and watch others create the field before I exploit it.

Looking at the map, we had four clear “wheres” we could attack. We could discuss the map, the pros and cons of each move in a manner which wasn’t just “does this have an ROI and is it core?” Instead we were using the landscape to help us anticipate opportunity and points of attack. I suddenly felt our strategy was becoming more meaningful than just gut feel and copying memes from others. We were thinking about position and movement. I was starting to feel a bit like that wise executive I had met in the lift in the Arts hotel in Barcelona when he was testing that junior (i.e. me) all those years ago. It felt good but I wanted more. How do I decide?

The dangers of past success

One significant problem around making a choice usually stems from past success and the comfort it brings. We had an existing photo service along with other lines of business which generated a decent revenue. We were comfortably profitable and life was pretty easy. Would it not be better for me to just continue doing what we were doing? Why rock the boat? I’d be taking a risk changing the course we were on. However, I had recently watched another company fail to manage change and was acutely aware of the dangers of not taking a risk. That company was Kodak.

Being an online photo service, I had a ringside seat to the fundamental shift happening in the image market between 2000 and 2005. The photo had been seen as something with value to customers due to the cost in time and money of producing it — the visit to the photo lab, the cost of processing and the wait for it to be delivered via the post. Film was at the centre of this and the only thing more annoying than waiting for it to be processed was not having enough film to take that next shot on holiday. Many times in the past, I had to make choices over which picture I took due to a limited number of shots left. However, the image and the film were really just components in delivering my overall need, which was sharing my experiences. The image was also evolving from analog film to a new digital world in which I could take pictures and delete the ones I didn’t like. I might have a limit in terms of memory card but I could always download to a computer and share with others. There was no film processing required.

I’ve created a map for that changing landscape in figure 48 and as I go through more of my experience with the Kodak story I’ll make references to that map. The old world was one of analog film (Point 1 below). Sharing a moment was about sitting on the sofa with friends and family and passing the photo album. The film itself needed some mechanism of fulfilment, such as the photo lab. However, the camera industry was rapidly becoming a commodity with good enough disposable cameras. The analog world of images was also changing to one which was more digital (Point 2). Digital still cameras (DSCs) had developed from cameras (Point 3) and were becoming more common. I could share an image by simply emailing it to others. Kodak had led the charge into this brave new world with early research in the mid 1970s but somehow it also seemed to be losing ground to others such as Sony and Canon.

Figure 48: How images were changing

The growth of digital images and the spread of the internet had enabled the formation of online photo services. These provided simple ways of printing out your images along with easier means for sharing with others. There was a very noticeable shift occurring from printing to sharing. You could create social networks to share images about hobbies or instead share with a close circle of friends. One of the early pioneers in this space was Ofoto, which had been acquired by Kodak in 2001. The messaging of Kodak had also changed around that time; it became more about sharing experiences and moments. However, Kodak wasn’t the only competitor in the space and unlike many others, Kodak seemed to have a problem in that it made significant revenue from film processing. I’ve shown this problem in figure 49 with the rise of online photo services (Point 4) and the inertia created by fulfilment (Point 5).

Figure 49: The rise of online photo services

Whilst it had a strong position in digital still cameras and online photo services, Kodak didn’t seem to be maximising this. Others were quickly catching up and overtaking. I can only assume that the inertia created by its past success with film was significant and I suspect there was opposition to the change within the organisation. I’ll guess the usual sort of lines, “digital is just small fry”, “photos are the real business” and “this will cannibalise our business”, were trotted out. To an outside observer it certainly seemed that Kodak was in conflict with itself. The first signs of this were already apparent in the late 90s with the release of the Advantix camera system, a curious blend of digital camera which still produced film for processing. A somewhat odd attempt to have the digital world but still keep the analog — “It’s the new but just like the old!”

Despite its new messaging, there were conflicting signals coming out of Kodak: whilst one part of the organisation seemed to be pushing digital, another part seemed to be resisting it. Finally, in 2003, Kodak introduced the Easyshare printer dock 6000, which enabled consumers to produce Kodak photo prints at home from digital images. When I first heard of this, it felt as though Kodak had finally overcome its inertia through a compromise between the fulfilment and the digital business (Point 6 in figure 50 below). The future was one of a self-contained Kodak system from digital still camera to online service to photo printer. But there was a problem here. “Camera phones” had emerged, combining the two value chains of the mobile phone and the digital still camera. Already, on our online site we had witnessed the rapid growth of images taken with camera phones (Point 7).

Figure 50: The solution and its doom

These “camera phones” were still uncommon but they seemed to herald a future where people would take pictures with their phones and share online. Today, few people call them camera phones, we just call them mobile phones. It’s assumed that every mobile phone is a camera.

Back then however, it was clear there was no mass market future for print, only a niche compared to an enormous market of shared digital images. It seemed as though Kodak had overcome its inertia through a compromise which meant investing in exactly where the future market wasn’t going to be. By early 2005, from our perspective, the future of the entire industry from fulfilment to photo printers to cameras to film to digital still cameras (Point 8) was starting to look grim.

Figure 51: The end of the analogue world

For us, the future of pictures looked more like figure 52 and printed photos were barely worth mentioning unless you intended to specialise in a profitable niche.

Figure 52: A future picture

In any choice I was going to make, I had to be careful of inertia and past success. Simply standing where we were might be the comfortable option but it didn’t mean we would have a rosy future. Our fraught issues around our parent’s photo service could grow if we embraced a camera phone future as this would put us in direct conflict with its core DSC business. However, Kodak was a clear example of what could go wrong if you didn’t move fast enough into the future, allowed inertia to slow you down or compromised by placing bets in the wrong place. Maybe there was another future we could find, but how far into the future should we peek?

The near, the far and the crazy

Back in the late 90s, I had taken a deep interest in 3D printing. It was the main reason I originally joined the near-bankrupt online photo service in early 2000: I envisaged a future where images of physical things would be shared and I wanted to learn about the space of sharing images. When we were acquired by one of the world’s largest printer manufacturers, I was overjoyed. I assumed that they too would share my passion. I gave numerous presentations on the topic, both externally and internally within the parent company, and to my disappointment it was always the external crowd that got more excited. In 2004, I gave a presentation at Euro Foo on the future of 3D printers. The subject was a pretty hot topic at the time and one of the audience members I was fortunate enough to meet was Bre Pettis, who was demonstrating his felt-tip pen printer, the DrawBot. Why fortunate? Bre founded MakerBot and subsequently rocked the world of 3D printing.

Whilst 3D printing was a passion, I also had an interest in printed electronics, especially the work of Sirringhaus and Kate Stone. I started to use these concepts to describe a future world of how manufacturing would change. The basics are provided in figure 53 but we will go through each step of this map. I’m going to assume you’re becoming more familiar with maps and so we will just dive in.

Figure 53: The near, the far and the crazy

First let us start with the user need for some device (Point 1). I’ll leave it as generic because I want to cover manufacturing itself and not the specific use of one device over another. Our device would have physical elements including electronics along with any software that would interact with it. The physical and electronic elements are commonly described through some form of computer aided design (CAD) diagram which provides instructions on what to build and this is combined with our software which is simply our code (Point 2).

The physical form would normally be manufactured by a factory which generally used common machinery involved in significant custom processes. However, this was starting to change with concepts such as digital factories and even 3D printers which were becoming less magical and more common (Point 3). This promised a future world of highly industrialised factories without extensive re-tooling for each product run. Also, since those first inkjet-printed transistors of Sirringhaus in 2001, a new field of plastic and printed electronics was rapidly growing (Point 4). Electronics manufacture was on the path to becoming industrialised and I would just print the electronics I needed rather than combine a mix of commodity and non-commodity components on my own circuit board created on some assembly line that changed with every product run.

For me, the interesting aspect of this was the combination of both physical and electronic forms. In 2005, I had become aware of several university-led efforts to create hybrid objects, including junction boxes where both the physical form and electrical components were printed (Point 5). This too would become industrialised, leading to a world in which I printed my entire device rather than used factories which assembled it. Now, along with the potential for creating novel materials and components, this also had the opportunity to fundamentally change the concept of design.

The function of a device is a combination of its physical form, its electronics and any software that interacts with these. As hybrid printers industrialise, this function is described by purely digital means — the CAD (an instruction set) which is then printed and the code (an instruction set) which is run. When we wish to change the function of a device, we need to change one of those two instruction sets, along with considering the interaction between the two. Normally, we try to make changes in software because it’s less costly, but as hardware becomes more malleable that equation changes. It also means we are now in a position to simply describe the function of the device that we want and allow a compiler to determine how that should be instantiated in the instruction sets.

My desire to add a sun dial to my phone could be achieved through software, electronic or physical means or a combination of all — a compiler could work out that decision tree for me. This opens up the possibility of an entirely new form of programming language that compiles down to physical, electronic and coding forms, where designers concentrate on describing the function of the thing and even use object inheritance in the physical world. I called this theoretical programming language SpimeScript (Point 6) in honour of Bruce Sterling’s marvellous book Shaping Things. This topic was the central theme of a talk I gave at Euro OSCON in 2006.
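
Purely as a thought experiment, here is a toy sketch of the SpimeScript idea: given a desired function, a compiler chooses the cheapest instantiation across software, printed electronics or physical form. Everything in it is invented for illustration and reflects no real language or system:

[source,javascript]
----
// Thought experiment only: a toy "compiler" that, given a desired
// function, picks how to instantiate it across software, printed
// electronics or physical form by cost. Nothing here reflects a real
// SpimeScript; it merely illustrates the decision tree idea.

const instantiations = {
  sundial: [
    { medium: "software",   cost: 1 },  // render a dial from clock and location
    { medium: "electronic", cost: 5 },  // printed light sensor plus display
    { medium: "physical",   cost: 9 },  // print an actual gnomon on the case
  ],
};

function compile(functionName) {
  const options = instantiations[functionName];
  if (!options) throw new Error(`no known instantiation for ${functionName}`);
  // Pick the cheapest medium; a real compiler would also weigh the
  // interaction between the two instruction sets (CAD versus code).
  return options.reduce((best, o) => (o.cost < best.cost ? o : best));
}

console.log(compile("sundial")); // { medium: "software", cost: 1 }
----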

However, I had previously raised these discussions within the parent company and had become aware that whilst we might be able to make far future anticipations of change, they were built on layers of uncertainty and were increasingly unfamiliar and uncomfortable to others. The further we went, the crazier the ideas sounded and the more concerned people became. This itself creates a problem if you intend to motivate a team towards a goal. Hence, if I was going to choose a course of action, it needed to push the boundary but not so far that it seemed like science fiction.

I was starting to feel uncomfortable with:

Where 1 — focus on the online photo service, for reasons of inertia and conflict.

Where 4 — build something novel and new based upon future industrialised services, for being too far reaching.
The question now became: given our choices, could we influence the market in any way to benefit us? Could that help us decide why here over there?

Learning context specific gameplay

Context specific play: Accelerators, decelerators and constraints

I understood that everything evolved due to competition and had plenty of evidence showing past examples, from electricity to nuts and bolts. The question was: could I somehow influence this? By coincidence, from the very early days of 2001 we had not only been users of open source but also contributors to it. We supported the Perl language and many other open source projects.

I had purposefully used these as fertile hunting grounds to recruit my amazing team during 2002–2005. But I had also observed how open source efforts, through collaboration with others, had produced stunning technology that surpassed proprietary efforts in many fields. In many cases, open source technology was becoming the de facto standard and even the commodity in a field. It seemed that the very act of open sourcing, if a strong enough community could be created, would drive a once magical wonder towards becoming a commodity. Open source seemed to accelerate competition for whatever activity it was applied to.

I had also witnessed how counter forces existed, such as fear, uncertainty and doubt. This was often applied by vendors to open source projects to dissuade others by reinforcing any inertia they had to change. Open source projects were invariably accused of being not secure, open to hackers (as though that’s some form of insult), of dubious pedigree and of being a risk. However, to us, and the millions of users who consumed our services, they were an essential piece of the jigsaw puzzle. By chance, the various battles around open source had increased my awareness of intellectual property. I became acutely conscious of how patents were regularly used for ring-fencing to prevent a competitor developing a product. This was the antithesis of competition and it was stifling. I started to form an opinion that certain actions would accelerate competition and drive a component towards a commodity whilst others could be used to slow its evolution. The landscape could be manipulated.

At the same time, I had noticed that as certain activities became more industrialised and therefore more widespread, it often became difficult to find people with the right skills, or there were shortages of underlying components. The evolution of a component could therefore be constrained by a component it depended upon, such as knowledge. I’ve summarised these points in figure 54 by applying them to our first map.

Figure 54: Accelerators, decelerators and constraints

Point 1 — the evolution of a component can be accelerated by an open approach, whether open source or open data.

Point 2 — the evolution of a component can be slowed down through the use of fear, uncertainty and doubt when crossing an inertia barrier or through the use of patents to ring-fence a technology.

Point 3 — the evolution of a component can be affected by constraints in underlying components e.g. converting compute to a utility would potentially cause a rapid increase in demand (due to new uncharted components that are built upon it or the long tail of unmet business needs) but this requires building data centres. Whilst the provision of virtual machines could be rapid, the building of data centres is not.
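
To make Point 3 concrete, here is a toy model with made-up numbers, showing demand for utility compute doubling each period while data centre capacity grows only linearly; the figures are illustrative assumptions, not forecasts:

[source,javascript]
----
// Toy model of Point 3: elastic demand versus constrained supply.
// All numbers are illustrative assumptions.

let demand = 100;        // units of compute demanded
let capacity = 150;      // units one provider can supply
const buildRate = 100;   // extra units per period (data centres are slow to build)

for (let period = 1; period <= 6; period++) {
  demand *= 2;           // rapid growth once compute becomes a utility
  capacity += buildRate; // provision of VMs is fast; building data centres is not
  const unmet = Math.max(0, demand - capacity);
  console.log(`period ${period}: demand ${demand}, capacity ${capacity}, unmet ${unmet}`);
}
// Unmet demand grows each period: room for competitors to enter, and a
// lever for counter plays such as a price war (see Point 7 later).
----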

I started to explore the map further, looking for other ways we could exploit it.

Context specific play: Innovate, Leverage and Commoditise

I have frequently been told that it is better to be a fast follower than a first mover. But is that true? Using the map told me a slightly more complex story. Certainly when exploring an uncharted space, there was lots of uncertainty and huge costs of R&D. It certainly seemed better to let others incur that risk and then somehow acquire that capability. But researchers and companies were constantly creating new things and so there was also a cost in discovering that new successful thing among all the noise. We wouldn’t be the only company trying to play that game and any acquisition cost would reflect this. If we wanted to play that game, then somehow we needed to be able to identify future success more effectively than others.

By comparison, when taking a product to a utility, the component was already quite well known. It was defined, there was an existing market, but yes, there would be inertia. I realised there was a connection between the two and we were sitting on the answer. Our pioneer — settler — town planner structure had enabled us to cope with evolution and connect the two extremes. The settlers’ role was simply to identify future successful patterns and learn about them by refining a product or library component. In 2005, we actually referred to our settlers as the framework team and their success came from understanding the patterns within what the pioneers — our development team — had built. The pioneers were our gamblers.

However, what if our pioneers weren’t us but instead other companies? Could our settlers discover successful patterns in all that noise? The problem of course was where would we look? Like any product vendor we could perform some marketing survey to find out how people were using our components but this seemed slow and cumbersome. Fortunately, our online photo service gave us the answer.

Between 2003 and 2005, we had exposed parts of the photo service through URL requests and APIs to others. It wasn’t much of a leap to realise that if we monitored consumption of our APIs then we could use this to identify in real time where other companies were being successful, without resorting to slow and expensive marketing surveys. This led to the innovate — leverage — commoditise (ILC) model. Originally, I called this innovate — transition — commoditise and I owe Mark Thompson a thank you for persuading me to change transition to something more meaningful. The ILC model is described in figure 55 and we will go through its operation.

Figure 55: ILC (innovate, leverage and commoditise)

Take an existing product that is relatively well defined and commonplace and turn it into an industrialised utility (Point A1 to A2). This utility should be exposed as an easy to use API. Then encourage and enable other companies to innovate by building on top of your utility (Point B1). You can do this by increasing their agility and reducing their cost of failure, both of which a utility will provide. These companies building on top of your utility are your “outside” pioneers or what we commonly call an “ecosystem”.

The more companies you have building on top of your utility (i.e. the larger your ecosystem), the more things your “outside” pioneers will be building and the wider the scope of new innovations. Your “outside” ecosystem is in fact your future sensing engine. By monitoring meta data such as the consumption of your utility services you can determine what is becoming successful. It’s important to note that you don’t need to examine the data of those “outside” companies but purely the meta data, hence you can balance security concerns with future sensing. You should use this meta data to identify new patterns that are suitable for provision as industrialised components (B1 to B2). Once you’ve identified a future pattern then you should industrialise it to a discrete component service (B3) provided as a utility and exposed through an API. You’re now providing multiple components (A2, B3) in an ever growing platform of component services for others to build upon (C1). You then repeat this virtuous circle.
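
As an illustration only, here is a minimal sketch of what such future sensing might look like, assuming a hypothetical metering log of API consumption per component; the data shape, names and thresholds are my assumptions, not a description of any real system:

[source,javascript]
----
// Sketch: flag components whose consumption is growing fast enough,
// across enough distinct consumers, to be candidates for
// industrialisation (B1 to B2 in figure 55). Note that only meta data
// (call counts, consumer counts) is examined, never customer data.

function industrialisationCandidates(monthlyUsage, growthThreshold = 2.0, minConsumers = 50) {
  // monthlyUsage: { component: { consumers: Set, callsThisMonth, callsLastMonth } }
  const candidates = [];
  for (const [component, stats] of Object.entries(monthlyUsage)) {
    const growth = stats.callsLastMonth > 0
      ? stats.callsThisMonth / stats.callsLastMonth
      : Infinity;
    // Wide adoption plus rapid growth suggests a pattern worth
    // providing as a discrete utility component service.
    if (growth >= growthThreshold && stats.consumers.size >= minConsumers) {
      candidates.push({ component, growth, consumers: stats.consumers.size });
    }
  }
  return candidates.sort((a, b) => b.growth - a.growth);
}
----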

Obviously, companies in any space that you’ve just industrialised (B2 to B3) might grumble — “they’ve eaten our business model” — so, you’ll have to carefully balance acquisition with implementation. On the upside, the more component services you provide in your platform then the more attractive it becomes to others. You’ll need to manage this ecosystem as a gardener encouraging new crops (“outside companies”) to grow and being careful not to harvest too much. Do note, this creates an ever expanding platform in the sense of a loose gathering of discrete component services (e.g. storage, compute, database) which is distinct from a code execution platform (i.e. a framework in which you write code).

There is some subtle beauty in the ILC model. If we take our ecosystem to be the companies building on top of our discrete component services, then the larger the ecosystem is:

  • the greater the economies of scale in our underlying components

  • the more meta data exists to identify future patterns

  • the broader the scope of innovative components built on top and hence the wider the future environment that we can scan

This translates to an increasing appearance of being highly efficient as we industrialise components to commodity forms with economies of scale but also highly customer focused due to leveraging meta data to find patterns others want. Finally, others will come to view us as highly innovative through the innovation of others. All of these desirable qualities will increase with the size of the ecosystem as long as we mine the meta data and act as an effective gardener.

Being constantly the first mover to industrialise a component provides a huge benefit in enabling us to effectively be a fast follower to future success and wealth generation. The larger the ecosystem we build, the more powerful the benefits become. There is a network effect here and this model stood in stark contrast to what I had been told — that you should be a fast follower and that you could only be one of highly innovative, efficient or customer focused. Looking at the map, I knew that with a bit of sleight of hand I could build the impression that I was achieving all three by being a first mover to industrialise and a fast follower to the uncharted. I normally represent this particular form of ecosystem model (there are many different forms) with a set of concentric circles. I’ve transposed figure 55 above into such a circular form and added some notes, see figure 56. In this world, you push your “pioneers” outside of the organisation by allowing other companies to be your pioneers.

Figure 56: Circular view of ILC

Using context specific gameplay: the play

It was at this point, with some context specific gameplay in hand, that I started to run through a few scenarios with James, my XO, and my Chief Scientist in our boardroom. Our plan started to coalesce and was enhanced by various experiments that the company had conducted. Not least of these was the head of my frameworks team walking in to tell me that they had just demonstrated we could develop entire applications (front end and back end) in JavaScript.

At the same time as refining our play, I had encouraged the group to develop component services under the moniker of LibApi, as in liberation API, i.e. our freedom from endlessly repeated tasks and our existing business model. To say I was rapturous over this experiment would be to understate my pure delight. This fortuitous event helped cement the plan which is summarised in figure 57. I’ll break it down and go through each point in detail.

Figure 57: The Plan

Point 1 — the focus of the company would be on providing a code execution platform as a utility service alongside an expanding range of industrialised component services for common tasks such as billing, messaging, an object store (a key-object store API), email etc. All components would be exposed through public APIs and the service would provide the ability to develop entire applications in a single language — JavaScript. The choice of JavaScript was because of its common use, the security of the JS engine and the removal of translation errors with both the front and back end code built in the same language. The entire environment would be charged on the basis of JavaScript operations, network usage and storage. There would be no concept of a physical or virtual machine.
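
As a minimal sketch of the model described in Point 1, the following shows how a function might be exposed as a metered web service; every name here is hypothetical and this is not Zimki’s actual API, merely an illustration of charging on JavaScript operations, network usage and storage with no concept of a machine:

[source,javascript]
----
// Hypothetical sketch in the spirit of the Zimki model described above.
// None of these names are real APIs; the metering is deliberately crude.

const meters = { jsOps: 0, networkBytes: 0, storedBytes: 0 };

const objectStore = new Map(); // stand-in for a key-object store API

function saveGreeting(name) {
  const greeting = { name, text: `Hello, ${name}!`, at: Date.now() };
  objectStore.set(name, greeting);
  meters.storedBytes += JSON.stringify(greeting).length; // metered storage
  return greeting;
}

// "Exposing" a function as a web service: the platform, not the
// developer, handles routing, scaling and billing.
function expose(fn) {
  return (request) => {
    meters.jsOps += 1;                  // metered JavaScript operations
    const result = fn(...request.args);
    const body = JSON.stringify(result);
    meters.networkBytes += body.length; // metered network usage
    return { status: 200, body };
  };
}

const handler = expose(saveGreeting);
console.log(handler({ args: ["Simon"] }), meters);
----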

Point 2 — to accelerate the development of the platform, the entire service would be open sourced. This would also enable other companies to set up competing services but this was planned for and desirable.

Point 3 — the goal was not to create one Zimki service (the name given to our platform) but instead a competitive marketplace of providers. We were aiming to grab a small but lucrative piece of a very large pie by seeding the market with our own utility service and then open sourcing the technology. To prevent companies from creating different product versions, the entire system needed to be open sourced under a licence which enabled competition on an operational level but minimised feature differentiation of a product set — GPL seemed to fit the bill.

We still had a problem in that service providers could differentiate and undermine the market. However, we also had a solution, as our development process used test driven development and the entire platform was exposed through APIs. In the process of developing we had created an extensive testing suite. This testing suite would be used to distinguish between community platform providers (those who had taken the code but modified it in a significant way) and certified Zimki providers (those who complied with the testing suite). Through the use of a trademarked image for Zimki providers we could enforce some level of portability between the providers.
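
A minimal sketch of how such a testing suite might separate certified from community providers, assuming a simple HTTP conformance harness; the endpoints and checks are invented for illustration and are not the actual Zimki suite:

[source,javascript]
----
// Sketch: run a shared conformance suite against a provider's API and
// report whether it may carry the certified trademark. Endpoints and
// expectations are illustrative assumptions.

async function checkConformance(baseUrl, tests) {
  const failures = [];
  for (const test of tests) {
    const response = await fetch(`${baseUrl}${test.path}`);
    const body = await response.text();
    if (response.status !== test.expectStatus || !body.includes(test.expectFragment)) {
      failures.push(test.name);
    }
  }
  return {
    certified: failures.length === 0, // certified providers pass every test
    failures,                         // community providers diverge somewhere
  };
}

// Example (hypothetical) portability checks shared by all providers.
const suite = [
  { name: "object store API", path: "/api/store/ping",   expectStatus: 200, expectFragment: "ok" },
  { name: "billing API",      path: "/api/billing/ping", expectStatus: 200, expectFragment: "ok" },
];

checkConformance("https://provider.example", suite).then(console.log);
----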

By creating this marketplace, backed by an Open Zimki Foundation, we could overcome one source of inertia (reliance on a single provider) whilst enabling companies to try their own platform in-house first, and it would create new opportunities for us in an application store, market reporting, switching services, brokerage capability, training, support and pre-built stand-alone Zimki clusters. Such an approach would also reduce our capital exposure given the constraints we existed under.

Point 4 — we needed to build an ecosystem to allow us to identify the future services we should create and hence we had to build an ILC model. Obviously we could only directly observe the consumption data for those who built on our service but what about other Zimki providers?

By providing common services such as GUBE (generic utility billing engine) along with an application store, a component library (a CPAN equivalent) and ultimately some form of brokerage capability, we intended to create multiple sources of meta data. We had a lot of discussion here over whether we could go it alone but I felt we didn’t have the brand name. We needed to create that marketplace and the potential was huge. I had estimated that the entire utility computing market (i.e. cloud computing) would be worth $200bn a decade later in 2016 and we would grab a small piece.

Our longer term prize was to be the market enabler and ultimately build some form of financial exchange. We would require outside help to make this happen given our constraints but we decided not to promote that message as it was “too far in the future and too crazy” for most.

Point 5 — we needed to make it easy, quick and cheap for people to build entire applications on our platform. We had to ruthlessly cut away all the yak shaving (pointless, unpleasant and repeated tasks) involved in developing. When one of the development team built an entirely new form of wiki with client side preview and went from idea to launching live on the web in under an hour, I knew we had something with potential. Pre-shaved Yaks became the catch-phrase to describe the service and something we plastered across our T-shirts in 2005 and 2006.

Point 6 — we anticipated that someone would provide a utility infrastructure service. We needed to exploit this by building on top of them. We had become pretty handy at building worth based services (i.e. ones we charged for on a percentage of the value they created) over the years and I knew we could balance our charging of the platform against any variable operational cost caused by a utility infrastructure provider.
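
The balance described in Point 6 is simple arithmetic; a sketch with assumed figures shows how a worth based platform charge can be set against the variable cost of an underlying utility infrastructure provider:

[source,javascript]
----
// Sketch of balancing a worth based platform charge against the
// variable cost of an underlying utility infrastructure provider.
// All figures are illustrative assumptions.

function monthlyMargin(valueCreated, worthShare, infraCostPerCall, calls) {
  const revenue = valueCreated * worthShare;  // we charge a % of the value created
  const infraCost = infraCostPerCall * calls; // variable cost scales with usage
  return revenue - infraCost;
}

// e.g. a customer generating 10,000 of value from 2M calls, with a
// 5% worth share and an assumed 0.00001 infrastructure cost per call:
console.log(monthlyMargin(10000, 0.05, 0.00001, 2_000_000)); // 500 - 20 = 480
----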

By building on top of any utility infrastructure service, we would also have the advantage of cutting that supplier off from any meta data other than the fact that our platform was growing. If I played the game well enough then maybe that would be an exit play for us through acquisition. If we were truly going to be successful, then I would need to break the anchor of the parent company at some point in the future.

Point 7 — we knew that building data centres would be a constraint in utility infrastructure and that compute demand was elastic. This gave options for counter play, such as creating a price war to force demand up beyond the ability of one supplier to provide. But in order to play one provider off against another we needed to give competitors a route into the market. Fortunately, we had our Borg system and though we had talked with one large well-known hardware provider (who had been resistant to the idea of utility compute) we could open source (Point 8) this space to encourage that market to form. I had counter plays I could use if I needed them and it was to our advantage if a fragmented market of utility infrastructure providers existed. We should aim for no one company to gain overall control of the space.

The option looked good based upon our capabilities. It was within the realm of possibilities and mindful of the constraints we had. This seemed to provide the best path forward. It would mean refocusing the company, removing services like our online photo site and putting other revenue services into some form of minimal state until the platform business grew enough that we could dispose of them. I was ready to pull the trigger but there was one last thing I needed.

Impacts on purpose

The decision to act can impact the very purpose of your company — the strategy cycle is not only iterative, it’s a cycle. In this case our purpose was going from a “creative solutions group”, a meaningless juxtaposition of words, to a “provider of utility platforms”. Just stating that purpose was not enough, it never is. If I wanted to win this battle, then I needed to bring everyone onboard and make the purpose meaningful. I had to create a moral imperative, a reason for doing this, a vision of the future, a rallying cry, a flag we could wave and our very own crusade.

For us this became embodied in the words “pre-shaved Yaks”. We intended to rid the world of the endless tasks which got in the way of coding. We would build that world where you just switched on your computer, opened up a browser and started coding. Everything from worrying about capacity planning to configuring packages and installing machines would be gone. Every function you wrote could be exposed as a web service. Libraries of routines written by others could be added with ease through a shared commons and you could write entire applications in hours, not days or weeks or months. This was our purpose. It was my purpose. And it felt good.

What happened next?

We built it.

I refocused the company, we cut away that which didn’t matter and we developed our platform. By 18th Feb 2006 we had the platform, core API services, billing system, portal and three basic applications for others to copy. We officially beta launched in March 2006 (our alpha had been many months earlier), a full two years before Google appeared on the scene with App Engine. The public launch was at dConstruct in September 2006.

By the 18th April 2006, we had 30 customers, 7 basic applications and a monthly rate of 600K API calls. By 19th June 2006, we were clocking a run rate of 2.8M API calls. We were growing at a phenomenal rate and by the first quarter of 2007 we had passed the 1,000 developer mark i.e. others building systems for their own users. After a slow start, our growth was now exceeding even my optimistic forecasts given the huge educational barriers I expected — see figure 58.

Figure 58: Growth in Zimki users (developers)

But during that time something exceptional had also happened. On August 25, 2006 it wasn’t Google but Amazon that launched, with EC2. I was rapturous once again. Amazon was a big player; they had provided instant credibility to the idea of utility computing and in response we immediately set about moving our platform onto EC2. Every time we presented at events our booths tended to be flooded with interest, with crowds of people often four, five or six layers deep. The company had embraced the new direction (there were still a few stragglers) and there was a growing buzz. We were still very small and had a huge mountain to climb but we had taken our first steps, announced the open sourcing, secured a top billing at OSCON in 2007 and the pumps were primed. But Houston, we had a problem.

What went wrong?

The problem was me. I had massively underestimated the intentions of the parent company. I should have known better given that I had spent over three years (2002–2005) trying to persuade the parent company that 3D printing would have a big future, along with my more recent attempts to persuade them that mobile phones would dominate the camera market. The parent company had become pre-occupied with SED televisions and focusing on its core market (cameras and printers). Despite the potential that I saw, we were becoming less core to them and they had already begun removing R&D efforts in a focus on efficiency. They had brought in an outside consultancy to look at our platform, which concluded that utility computing wasn’t the future and the potential for cloud computing (as it became known) was unrealistic. Remember, this was 2006. Amazon had barely launched. Even in 2009, big name consultancies were still telling companies that public cloud wasn’t the future or at least was a long way away.

The parent company’s future involved outsourcing our lines of business to a systems integrator (SI) and as I was told “the whole vision of Zimki was way beyond their scope”.

I had several problems here. First, they wouldn’t invest in our service because apparently a decision had been made higher up within the parent company on what was core. What they were concerned with was the smooth movement of our lines of business to the SI. That supported their core aims and their needs. When I raised the idea of external investment, the problem became that they couldn’t keep a stake in something which they said was not core.

When I raised the idea of a management buy-out, they would always go back to what they had described as an “unrealistic” $200bn market figure for 2016. Surely, I would be willing to pay a hefty sum based upon this future market as a given, for a fledgling startup in a fledgling market? No venture capital firm would take such an outrageous one-sided gamble. In any case, I was told the discussion could always be left until after the core revenue services were transferred to the SI. This was just shorthand for “go away”.

The nail in the coffin was when I was told by one of the board that the members had decided to postpone the open sourcing of our platform and that they wanted me to immediately sign contracts cancelling our revenue generating services at an unspecified date to be filled in later. As the person who normally chaired the board meeting, I was annoyed at being blindsided, at the choice and at myself. Somehow, in my zeal to create a future focused on user needs and a meaningful direction, I had forgotten to gain the political capital I needed to pull it off. I might have created a strong purpose and built a company capable of achieving it but I had messed up big time with the board. It wasn’t their fault; they were focusing on what was core to the parent company and their needs.

The members were all senior executives of the parent company and it should have been obvious that they were bound to take this position. I realised that I had never truly involved them in our journey and had become pre-occupied with building a future for others. I had not even fully explained our maps to them, relying instead on stories, but this was because I still hadn’t realised how useful maps really were. In my mind, maps were nothing more than my way of explaining strategy because I hadn’t yet found that magic tome that every other executive learnt at business school. This was a powerful group of users — my board and the parent company — that had needs I had not considered. Talk about a rookie mistake. I had finally been rumbled as that imposter CEO.

There was no coming back from this, they were adamant on their position and had all the power to enforce it. I was about to go on stage at OSCON (O’Reilly open source conference) in 2007 and rather than my carefully crafted message, I had to somehow announce the non-open sourcing of our platform and the non-creation of a future competitive utility market. I was expected to break a promise I had made to our customers and I was pretty clear that postpone was a quaint way of saying “never”. I couldn’t agree with the direction they had chosen and we were at loggerheads. My position was untenable and I resigned.

The company’s services were quickly placed on the path to being outsourced to the SI and the employees were put through a redundancy program, which all started a few days after I resigned. The platform was disbanded and closed by the end of the year. The concepts however weren’t lost, as a few of these ideas made their way through James Duncan into ReasonablySmart (acquired by Joyent) and through another good friend of mine, James Watters, into Cloud Foundry. I note that Pivotal and its platform play is now valued at over $2.5bn and serverless is a rapidly growing concept in 2016. As for SED televisions? Well, some you win, some you lose.

As for the consultancy, any frustration I might have is misdirected because I was the one who failed here. It was my job to lead the company and that didn’t just mean those who worked for me but also the board.

In these first chapters, I’ve hopefully shown you how to understand the landscape you’re competing in, anticipate the future, learn to apply doctrine, develop context specific gameplay, build the future and then finally blow it by ignoring one set of users. Would Zimki have realised its potential and become a huge success? We will never know but it had a chance. This was my first run through the strategy cycle and at least I felt as though I had a vague idea as to what I was doing rather than that naïve youth of “seems fine to me”. I was still far from the exalted position of that confident executive that I had met and I was determined to get better next time. Fortunately for me, there was a next time but that’s another part of the story.

Categorising Gameplay

Gameplay is context specific. You need to understand the landscape before you use it. Once you have determined the possible “wheres” that you could attack (which requires you to understand the landscape and anticipate change from common economic patterns), you then look at what actions you can take to create the most advantageous situation. As we go through this book, we will cover all sorts of gameplay and refine the concepts discussed above. To give you an idea of what we need to cover, I’ve put some basic forms in figure 59, marking off in orange some that we’ve already mentioned.

Figure 59: Gameplay

I’ve categorised the above forms of gameplay depending upon their main impact:

  • Alteration of user perception

  • Accelerators to evolution

  • Decelerators to evolution

  • Means of dealing with toxicity (i.e. legacy)

  • Market plays

  • Defensive plays

  • Attacking plays

  • Ecosystem models

  • Positional plays

  • Poison mechanisms (prevents a competitor using the space)

I have to reiterate that every time I’ve gone around the cycle, I’ve got better at playing the game. As we travel along the same path I’ll be adding in more economic patterns, more doctrine and more context specific gameplay, along with deep diving on some of the parts that I’ve glossed over or that were merely general concepts in those early days. But as with all journeys, let us stick to the path and take no short cuts. Every step is valuable; every landscape is an opportunity to learn from.

An exercise for the reader

Hopefully by now, you may have created a map or two. Using the concepts in this chapter, examine your map and first try to identify where you might attack. Now, using the gameplay in figure 59, have a go and see where you might use gameplay and whether one route or another stands out. It really does help to work with others on this; fortunately, maps provide you with a mechanism to communicate, collaborate and learn.