Yesterday my colleague Carl Haggerty (who collaborated with me on this blog post) and I attended a very intriguing session on business model analogies by the Gartner analyst Dave Aron. This may sound like quite an abstract topic in the current local government climate, but I think it could well turn out to be one of the more relevant ones once all the dust has settled. I could be off-beam here; only time will tell.

Anyway, the basic premise of the session was that there are relatively low-risk ways to innovate in business that regularly give good results. I think this could matter in the current financial climate, where some of our business units are facing very substantial budget cuts and will have to deliver the same or better results with much less money. It also chimes with an impression we have that innovation is not something that happens in an internal silo: it needs to be distributed and become everyone’s responsibility.

So what is this magic method? Simply strip the value provided by a business down to its core and look for other industries whose business model resembles yours. Examples of this kind of thinking abound – Henry Ford reportedly got the idea for his moving assembly line from a butcher’s “disassembly line”, and Gutenberg’s printing press was inspired by a wine press. In doing this we are looking for approaches that are relatively safe because they have been tried – and have succeeded – elsewhere. We are looking for ways to get superior performance with lower levels of effort and risk.

The process was illustrated with three examples that all showed a different facet of the use of business model analogies:

Whitbread – applied advances in airline IT (such as online check-in and revenue/yield management) to their Premier Inn business
Tokio Marine & Nichido – a large Japanese insurance company: they took ideas from 7-11 Japan around customer relationship management and started a channel management information and communications programme
APM Terminals – a large shipping company that builds and operates shipping terminal facilities: their CIO noticed that their risk profile was similar to that of pharmaceutical companies and implemented ideas from the pharmaceutical “pipeline”, a standard, simulation-based stack that enables facilities to be built faster and to return value sooner

According to Aron there are four times to look for analogies:
1) when there are unresolved problems
2) when the business model changes
3) when you have unexploited assets (informational, staff, or other resources)
4) when you need to step up IT’s contribution to innovation in the business (I suspect this one was in the presentation entirely because of the audience Aron was speaking to!)

Three types of analogy:
1) customer and product analogies – similar customers or product/service relationships (the Whitbread example)
2) structural – consider the value chain, market position and supplier relations (Tokio Marine)
3) strategic – who wins in the same way? who has the same strategic risks? (APM)

According to Aron there are three further types of strategic analogy: customer intimacy, operational excellence, and product leadership.

This is all quite theoretical so far. What about practical steps?

The first thing to do, according to Aron, is to define your enterprise: ask “what are we like?”
–    customer/product (who are they? which segments?)
–    what is the shape of the value chain? who has the power?
–    how do you win? (which processes generate most value? what are the key risks, assets, etc.?)

Continually scan for innovations: who are the heroes?
–    build a database of companies that have done interesting or exciting things
–    classify them
–    what is it that is innovative?

Look for matches: which are the best fit?
–    decide based on structural similarities

Create a team that uses analogies smartly:
–    increase staff diversity
–    horizontal networking
–    use of external advisors

What can go wrong? Avoid these pitfalls:
–    superficial analogies (be rigorous!)
–    taking the analogy too far (anchoring)
–    analogies that evoke unfavourable reactions (reframe them)
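
If it helps to make the “scan and match” steps less abstract, here is a minimal sketch (in Python) of the sort of database and crude matching you could start with. The company names, fields and scoring rule are all invented by me for illustration; they are not from Aron’s presentation.

```python
# A hypothetical structure for the "innovation heroes" database described above.
# Company names, classifications and the scoring rule are illustrative only.

ANALOGY_TYPES = {"customer_product", "structural", "strategic"}  # the three buckets above

innovators = [
    {"company": "ExampleAir", "analogy_type": "customer_product",
     "innovation": "online check-in and seasonal yield management"},
    {"company": "ExampleRetail", "analogy_type": "structural",
     "innovation": "loyalty-driven channel and supplier management"},
]

def best_matches(our_profile, candidates):
    """Rank candidate analogies by crude keyword overlap with our own profile."""
    ours = set(our_profile.lower().split())
    def score(candidate):
        theirs = set(candidate["innovation"].lower().split())
        return len(theirs & ours)
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    profile = "high-volume customer check-in and bookings with seasonal demand"
    for match in best_matches(profile, innovators):
        print(match["company"], "-", match["innovation"])
```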

#Gartnersym 2010: Social BPM

I’m at the Gartner Symposium again this year doing some work on the SOCITM Technology Challenge (I did the same last year). This is a very quick post while I’m waiting for my next session to start; more detail might or might not emerge later, depending on how my workload progresses!

A number of presentations here so far have touched on the subject of “context-aware computing” and what it might mean for applications in the future.

A number of smartphone applications, such as Foursquare or Gowalla, already make use of location-based information: they allow you to “check in” to a location and see, amongst other things, photos of that place taken by other users, or which of your circle of friends and contacts are there or have visited recently.

An intriguing set of future developments might see this linked to workflow. An application that allows you to check in somewhere and automatically does some or all of the following would be extremely useful:

  • see which other people you know are there
  • synchronise with your “to-do” list and prompt you to carry out tasks linked to that physical location (posting a letter at the post office, for example)
  • also from your “to-do” list, see if anyone is there that you need to speak to about something and prompt you to make contact, perhaps even initiating that contact automatically
  • at an enterprise level, allow tasks to be “pushed” out to you in priority order, depending on your location and capabilities (e.g. what sort of connectivity exists, what facilities are available, who else is there)
  • record the steps involved in activities that take place in that location, for future automation
  • observe and report other problems, i.e. interface with other workflow systems.

These capabilities are by no means out of reach; they just have not yet (to my knowledge) been put together in one application. We seem to be heading towards a sort of context-aware “digital assistant” that helps you carry out tasks efficiently and generally get the most out of your time. (It might also be useful to those with some kinds of mental impairment, such as memory loss.)
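
To make the “check in, then get prompted” idea a bit more concrete, here is a rough sketch of the matching logic. The task structure and rules are my own assumptions, not any particular vendor’s API.

```python
# A hypothetical sketch of a location-aware to-do prompt: on check-in, surface
# tasks tagged with that place and contacts you might catch while you're there.

from dataclasses import dataclass

@dataclass
class Task:
    description: str
    location: str          # e.g. "post office"
    contact: str = ""      # someone you need to see about this task, if anyone

todo = [
    Task("Post contract renewal letter", "post office"),
    Task("Agree next year's budget figures", "county hall", contact="Jane"),
]

def on_check_in(place, people_present):
    """Return prompts relevant to the place just checked in to."""
    prompts = []
    for task in todo:
        if task.location.lower() == place.lower():
            prompts.append(f"While you're here: {task.description}")
        if task.contact and task.contact in people_present:
            prompts.append(f"{task.contact} is nearby - talk about: {task.description}")
    return prompts

if __name__ == "__main__":
    for prompt in on_check_in("County Hall", ["Jane", "Bob"]):
        print(prompt)
```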

(Please note: no local authority budgets were harmed in the making of this blog post. My attendance is not being paid for by my employer and the views expressed are my own)

So cost-cutting is the order of the day in the UK public sector, and IT departments are no exception. We are expected to do more with less; to reduce our operational costs; to cut back on what we do. Fair enough.

Some of the traditional ways IT departments can do this are well documented and well rehearsed: virtualisation of servers (and, increasingly, desktops), rationalisation of the application portfolio, automation (where possible, and downgrading where not) of support functions, and cutting back on which projects get done. All of these can reduce the costs of the IT department, and from where I sit they appear to be being pursued vigorously.

Since IT is a support function, however, I have to ask what the impact of this cost-cutting is on our client services. If we reduce the application portfolio and force our Community Services department to use the same time-recording system as our Highways department, what does that do for them?

I see this as an issue of demand and supply-side behaviours. Cutting operational costs in a support function is a supply-side strategy. Enabling the client to cut costs, however, is a demand-side strategy – and while it might take actual IT investment to do it, the spend by these departments is so much higher than the overall IT spend in my organisation that we have the potential to realise much bigger cost savings.
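
To put some entirely made-up numbers on that point: a small percentage saving on the much larger departmental spend can dwarf a heroic percentage saving on the IT budget itself.

```python
# Illustrative figures only - the point is the ratio, not the amounts.
it_budget = 10_000_000            # annual IT spend
departmental_spend = 200_000_000  # annual spend of the services IT supports

supply_side_saving = 0.10 * it_budget            # cutting 10% of IT's own costs
demand_side_saving = 0.02 * departmental_spend   # IT investment enabling a 2% client saving

print(f"Supply-side saving: £{supply_side_saving:,.0f}")  # £1,000,000
print(f"Demand-side saving: £{demand_side_saving:,.0f}")  # £4,000,000
```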

Perhaps that “duplicate” time-recording system has a small feature that lets our client service streamline the way they work, delivering savings worth several time-recording systems every year? We’ll never know unless we properly understand the value that the IT infrastructure provides to the business.

I’ve been following a conversation over on Robin Dickinson’s blog recently called “Sharewords: the easiest way for us to recommend you”. Robin seems to have developed this idea to help small businesses and consultants market each other, understand themselves, and gain better focus and self-confidence. The conversation it inspired was nothing short of incredible: at the last count there were over 400 comments from people helping each other develop their sharewords. I strongly urge you to take some time to check it out, as the results have been excellent.

So what is it? Basically, it’s about uncovering the value that a person, team or business brings to the table and boiling it down to as few words as possible, as accurately and positively as possible. In Robin’s discussions there’s a strong marketing flavour to the outputs but also a commitment to get it as right as possible.

Participants need to bring a basic understanding of their own value proposition:

1) What are the *key* benefits that customers get from your service/product delivery (in your words) – best if you can prioritize these from most to least important?

2) How do your most satisfied customers describe the experience of dealing with you (their words)? Again, best if you can prioritize these from most to least important.

3) What is the *most* unique element of your business that truly differentiates it from anything else out there?

Having read a load of the comments, however, I began to wonder: wouldn’t this work brilliantly for personal training and internal team development? Everyone is coming away from the conversation with:

  • enhanced self-confidence
  • greater clarity of purpose
  • a strong positive sense of their own value
  • increased respect for those they work with.

It doesn’t appear to be easy, though. Much of the success of Robin’s workshops stems directly from Robin’s own humanity and expertise in getting to the core of people’s businesses, as well as from that of other members of the community, so to work the effort needs to be skilfully led.

I’ve taken part in a bit of team-building run by our HR department, and they are very good: it’s professionals like these who would need to lead these workshops and understand the people involved.

Other things that I think characterise Robin’s Sharewords discussion and make it successful:

  • supportive atmosphere. Everyone is obviously enjoying helping and being helped
  • self-motivated people. All the people contributing at the moment are entrepreneurial types, and I believe their self-motivation is being amplified by virtue of them all talking to each other in one place
  • everyone’s value being treated equally. Each person’s value is different, and there needs to be a strong commitment to uncover it.

It’s up to Robin, I think, to decide what direction to take the concept. Some have even suggested a book deal: for me, though, the value seems clearest when workshopping this method with expert facilitation.

Understanding the value of ICT

In my last post I argued that ICT understands the cost of everything and the value of nothing, that this is a fundamentally cynical position to take, and that “focusing on ways and metrics to calculate the value of the ICT we deliver has got to be the way forward”. Well, that’s easier said than done.

But hey, whatever, I’m going to have a crack at it. Maybe it can be developed further by people who know better than me.

The basic discipline that focuses on return on investment is management accounting. I have no qualifications in this (it’s going to show), but it seems to divide into two areas: the return on investment in individual projects (expressed as Net Present Value, a way of comparing a project’s return with its opportunity cost, baselined against a standard interest rate), and portfolio theory, a way of looking at a basket of investments and choosing the components of that basket so as to offset risk while maximising returns.

Both of these seem to work as advertised when the return on a particular project can be calculated. I invest £100 in the stock market and get £110 (or £90!) back six months later. The NPV (Net Present Value) calculation compares that return against what would have happened if I had simply put the £100 in a bank savings account.
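
Here is a rough worked version of that comparison; the 3% discount rate is an arbitrary stand-in for the bank savings rate.

```python
# Net Present Value of a single future cash flow, discounted at an assumed rate.
def npv(initial_outlay, cash_flow, periods, rate):
    """What the future cash flow is worth today, minus what we paid for it."""
    return cash_flow / (1 + rate) ** periods - initial_outlay

# £100 invested now, £110 back one period later, against an assumed 3% savings rate.
print(f"{npv(100, 110, periods=1, rate=0.03):.2f}")  # 6.80, i.e. better than the bank
```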

If I had lots of £100 investments, portfolio theory looks at maximising the return while minimising the risk. Perhaps I make one investment in an umbrella manufacturer and another in a company that makes sun cream: this minimises the risk that extreme weather will wipe out my entire portfolio.
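
And a toy version of the umbrella/sun cream point, with invented figures: because the two returns move in opposite directions, the blended portfolio swings far less than either holding on its own.

```python
# Toy diversification example: returns in a wet year vs a sunny year (invented figures).
umbrella_returns = {"wet": 0.20, "sunny": -0.05}
suncream_returns = {"wet": -0.05, "sunny": 0.20}

for weather in ("wet", "sunny"):
    blended = 0.5 * umbrella_returns[weather] + 0.5 * suncream_returns[weather]
    print(f"{weather}: umbrella {umbrella_returns[weather]:+.0%}, "
          f"sun cream {suncream_returns[weather]:+.0%}, blended {blended:+.1%}")
# Either stock alone swings between -5% and +20%; the 50/50 mix returns +7.5% whatever the weather.
```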

OK, so both of these descriptions are crude. Let’s try not to get too hung up on it. What’s it got to do with computers? Because that’s what I’m on about.

The problem here is that it’s complicated. If I have a set of computers in an office with a load of different applications running on them, how much value does that deliver? Does application A deliver more, or less, value than application B? If I spend £100 on an upgrade to the network so that everyone takes 5 minutes less to log on, what’s the return on that investment? Does everyone create (hourly rate x 1/12 x number of staff) more value as a result? Do we even know how much value a person creates in an hour, especially if they are in a support function rather than delivering a front-line service?
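
Just to show how quickly that back-of-the-envelope sum gets awkward, here is the log-on example with placeholder figures; every number below is made up.

```python
# The "5 minutes less to log on" sum from above, with placeholder figures.
staff = 500
hourly_rate = 20.0    # assumed value of an hour of staff time, in pounds
minutes_saved = 5     # per person, per working day
working_days = 220

annual_value = staff * (minutes_saved / 60) * hourly_rate * working_days
print(f"Nominal annual value of time saved: £{annual_value:,.0f}")  # £183,333
# ...which only counts as value if the reclaimed minutes are actually spent producing something.
```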

One common way of understanding the value a business process can deliver is to use a capability analysis (my colleague Carl Haggerty is thinking about this a lot right now). This is a measure of the ability of part of the business to fulfil some kind of value-creating function. That is very useful, but I want to deal with it another time and just make a more general point about measuring the value of something a business does.

I think “value” is really a bundle of different measures; that is, it has several interacting components. A person who works in a bank selling mortgages creates a number of different types of value:

  • profit (through the stuff they sell)
  • brand (through customer service)
  • support services to other team members (e.g. going to get coffee for them)
  • environment (because they are in the branch, the branch is open, and this gives an intangible network effect to other shops on the same high street)
  • marketing and market research (by talking to customers they can gain intelligence about the market)
  • and so on.

We can see that, even in a small, straightforward business, calculating value is going to be difficult and complex.

However, we need to have a go. So, I propose that:

  • in front-line services, “value” is calculated by measuring direct service outcomes
  • in support services, “value” is measured by aggregating the added value those services contribute to front line services.

ICT is almost always a support function. So its value is linked intimately to the value delivered by the front line services that it supports.

So to understand the value of ICT, you first need to understand the value of the front line services. The value ICT adds can only be expressed meaningfully in those terms.

And, for some added value of its own, the sooner the ICT department starts talking in the same value terms as the front line services, the better.
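
As a crude sketch of what “talking in the front line’s value terms” might look like in practice, here is one way the exercise could be laid out. The services, outcomes and contribution weightings are all hypothetical.

```python
# Hypothetical exercise: attribute a slice of each front line outcome to the
# ICT services that support it, rather than reporting ICT cost in isolation.

frontline_outcomes = {
    "adult social care assessments completed": 12_000,
    "highway defects fixed": 8_500,
}

# How much each ICT service is judged to contribute to each outcome (0..1 share).
ict_contribution = {
    "case management system": {"adult social care assessments completed": 0.30},
    "mobile works scheduling": {"highway defects fixed": 0.25},
}

for service, contributions in ict_contribution.items():
    for outcome, share in contributions.items():
        attributed = share * frontline_outcomes[outcome]
        print(f"{service}: supports ~{attributed:,.0f} {outcome} per year")
```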

My last post focused on an apparent problem: if the delivery of IT is a repeatable process, why are many corporates so bad at it? One commenter noted that many companies are still in business and therefore their ICT is probably good enough, which is a highly valid point.

A counterpoint dropped into my Google Reader this morning, courtesy of Bankervision (James Gardner):

But, as everyone who works in a large company knows, you never get such an experience at work. Everything is a few versions behind, and even when it’s up to date, it all works so slowly. This, in part, is because of all the management, and security, and monitoring, and stuff that IT people feel they have to do to protect their assets.

What are the key changes IT organisations would have to make if they really wanted to deliver a decent consumer computing experience?

James then outlines eight or so key things an IT department needs to do to reverse the trend. The one that stands out for me, though, is this:

4. IT can’t only be about cost. I am so exhausted by the amount of effort we go to reduce cost. I mean, I’m not advocating that everyone should just go and spend anything they like on shiny gadgets, but really, this laser sharp focus on cost containment prevents you from moving things forward at a decent pace. IT is an asset, a strategic enabler. Do you ask a starving athlete to run faster and longer with less food? Not really.

Someone (not sure who – clue me in, reader!) once said that a cynic is someone who knows the cost of everything and the value of nothing. This could be said to describe the modern IT department perfectly: in my experience we are totally clueless about the value that the IT we support provides to the business.

The word “cynical” also describes the attitude of many IT staff when faced with a new business initiative. Let’s face it, we’ve seen so many failures. So many things were supposed to be the next big thing and disappointed us (at best) or near-bankrupted us (at worst). Vendors are only partly to blame – promising us the earth and delivering the same old mediocre incremental updates – but IT doesn’t help itself by focusing only on the nuts and bolts and forgetting the business outputs it is meant to deliver.

So what should be our priority? I would argue that focusing on ways and metrics to calculate the value of the ICT we deliver has got to be the way forward. How can our partners in the business decide which ICT investments to prioritise in an era of extreme cost-cutting? How do we know if the value of a particular system is worth the cost it incurs? Only by focusing on the value returned by a system can we know if it delivers a return on investment. We do it for other projects but why not for our ICT infrastructure?

This is not to say that James’ other points aren’t valid – they are. But they reinforce a view that ICT decisions are somehow the preserve of the ICT department. They shouldn’t be exclusively so: we need a different set of competencies brought to bear on the ICT investment selection process.

We can’t solve our problems with the same thinking that created them. We need different competencies in the CIO role, and different supporting skills in those that advise the CIO.

It’s a truism of many IT consultancies that technology delivery is not hard: it’s the management and exploitation of it that is. Technology, so the theory goes, leverages known physical processes to realise predictable outputs – an engineering problem, in other words – whereas dealing with the random factors of managing people (including customers and employees) is unpredictable and therefore complex and difficult.

Many technical people might object to this gross oversimplification of their profession. If it’s so easy, why don’t the managers try it since they are so clever? And true enough, it takes specialist skills and knowledge to even scratch the surface of a modern corporate IT department and understand the work of a database administrator, application developer or SAN storage analyst.

One of the problems here is that these tasks require such different mindsets to complete them successfully. A technical analyst in any field requires levels of patience and conceptualisation unimaginable to most management professionals, but few who develop these skills also go on to acquire the empathy, diplomacy, tact and all the other “human” skills that a modern manager needs – especially since those with the higher-level technical skills are correspondingly more challenging to manage.

I believe that some of these problems may have been behind the shift towards outsourcing IT departments over the last two decades: managers simply find themselves unable to manage the technical workforce, so they package it up and make it someone else’s problem without really fixing the core problems inherent in their architectures. That usually means the outsourcing fails to realise any savings, or even to reduce the complexity of the management task.

A number of methods have been proposed to fix the problem. ITIL Service management, Enterprise Architecture, governance standards and project management have all become part of the standard toolkit for many corporate IT departments over recent years.

What interests me now is this: have any of these methods worked? Why is corporate IT so brain-meltingly mediocre?

Or do you disagree? Do you know of a corporate IT department that is genuinely successful, and what made it that way?

Yet Another LikeMinds Review

So I’ve now blogged on some of the thinking that Likeminds stimulated for me. But what of the event itself?

I’m now going to try and be critical. So let’s get some perspective before I do: this event is groundbreaking in lots of ways. It’s well-executed. The speaker line-up is fabulous. It’s well-attended (sold out several weeks early in fact). The conversation is online, offline, before, during and after. Lots of clever people go (and me). It’s in Exeter. It’s cheap as these things go. It’s “for the people” and a bit of “by the people” as well.

I also need to say up front that I know (and like) some of the people behind the event as well. And I know that this was only the second one they’d put on. So I guess they’ll be (after a short break, hopefully!) thinking about the next one and the tweaking they might do.

What went well:

  • Lunches. This was a genius idea. Only problem was that I couldn’t have 3 lunches with all the different people I wanted to.
  • Keynotes. All the speakers were excellent. Jon Akwue set the bar high with the first session, and the others rose to the challenge.
  • Venue and Location. I really like the conference centre, it’s a good size in a great location and the facilities were top-notch.
  • Endeavours. These calls to action were timely reminders that we could actually just blinking well do something about some problem, and were an antidote to the sometimes theoretical content (memo to organisers: do this next time, I’ve got one I want to plug!).

Improvements:

  • MORE! I want more variety of delivery models. More depth of topic. More insights. I want unconference sessions, breakout workshops and stuff to take away.
  • Different people there. Lots of social media gurus were there. I’ve blogged a bit about this in my last post, but I would like to see more operations managers and non-SM practitioners there: people who will get inspired to go off and make a difference in their own jobs. (Note: this is a problem at many conferences I’ve been to. A self-selecting audience drives an agenda firmly within its own comfort zone. LM is possibly better than many in this respect.)
  • I want analysis of the big stuff. Which industries will win and lose as a result of the shift to the networked economy? How will our lives change? What’s the carbon footprint? What are the emerging platforms (although Joanne Jacobs covered this angle to some extent)? What skills do we need the future workforce to have?
  • Panel discussions. For me, these might improve with a different mix of people. We had lots of writers, journalists, and social media experts. One way this might work is with a slightly more adversarial format: e.g. get the production manager of a company to grill the “experts” on how their strategies can help in a live case study. We might learn more this way than with social media experts in the audience asking questions of social media experts on the panel.
  • More focus on the individual. I know from my own experience that the biggest ROI I’ve had from social media has come about through locating and following the experts on Twitter and elsewhere. An additional focus I’d like to see is on personal strategies for making the most of the vast body of knowledge now open to us all, and how this affects knowledge management, HR and careers.
  • Structured networking. Lots of interesting people were there, but I never got to speak with most of them. Some speed networking sessions would go down a treat (although the lunch sessions were good in this respect).
  • Timings. The last panel session was one too many. My brain was already full. Make it a two day thing and bring the extra sessions outlined above. And run some of the workshops elsewhere if need be.
  • Mini-summits. Not the “summit” event itself, but smaller, shorter, vertical-industry events embedded into the parent event. There were at least 15-20 public sector people there, for example: this could have been a half-day as part of the conference and delivered a best-practice whitepaper back to the conference proper.

So that’s my selfish stuff done. The above would make a conference perfect for me. It’s all about me, with me!

🙂

What about you?

“Social Media Marketing people? They all deserve to die”

That was a paraphrase of a comment by someone I know well. And at the time, I agreed.

I was halfway through the afternoon of the Likeminds 10 conference and had just nipped out for a bathroom break. There was a panel discussion going on about blogging and journalism. I felt this was the one bit where I wasn’t really engaged.

Conferences are difficult, especially when you have a wide range of people in the audience. And focusing on the negatives wouldn’t do the rest of the event justice: we had some brilliant keynotes, the personal touch at lunch was inspired, and some of the panellists were genuinely insightful and engaging.

I’m not here to review the conference though. Lots of other people are doing that as I type this. It was great. That’s all you need to know.

But there were a lot of people in the audience who somehow “do” social media marketing for clients. And sitting there I was thinking: if I ran a business, would I need to hire any of these people?

I’m not sure I would. And not because they aren’t good people.

You see, I work in the IT department of a large organisation. But I don’t believe there is any future in it: I think that in 10 years’ time there won’t be internal IT departments. The technology itself will be cloud-sourced, and the expertise in procuring and exploiting it will be absorbed into the person-specs of front line staff. Are you a professional salesperson/trainer/finance person? If so, it’s YOUR duty to understand what technology is doing for you.

And I think the same is going to become true of marketing in general and social media marketing in particular. Marketing is about understanding customer needs and expectations: everyone in the organisation needs to do that. And that means that everyone in the organisation needs to be connected to the conversation, engaging with the customer and with each other.

In a true people-to-people business everyone is a marketer, and social software can enable that.

The Marketing department is dead. Social Media marketing gurus are so last year. We are all social media marketing gurus now.

Thinking about the G-Cloud

I was recently invited to a G-cloud “Quick Wins” session put together by RedPepper52 which, sadly, I couldn’t attend. The cloud is something that has been on its way for some time and is starting to gain traction.

(As an interesting aside, check out this article from 2005 in which Nicholas Carr predicts the future. Truly visionary)

I expect that Nicholas Carr’s vision will come true in the next decade and we will wonder why we ever had internal IT departments. This is interesting for two reasons: firstly, the move itself requires a different kind of technical architecture and a different IT provisioning mentality; and secondly, IT managers need to become more focused on exploiting the computing resources on offer than on provisioning them.

But a private cloud for Government? Is this really a likely scenario?

The agenda for the quick wins seminar suggested that commodity services requiring little configuration would be the first to move over to this new setup. I’m not sure I agree. Most IT departments in local government have email systems that work just fine: the capital is already invested and they run at reasonable cost. Moving them to the cloud would incur transition costs, there may be issues with local application integration (our apps send emails) and, perhaps most importantly, there is no added value. So we would pay a load of money for a small future cost reduction (if that even materialises) and an increased risk of losing local expertise.

No way.

To my mind, the cloud quick wins lie in providing new stuff. Stuff we haven’t got yet. XRMs. Master Data integration platforms. SOA middleware.

Actually, SOA middleware is the killer app for the cloud. Few councils make extensive use of it, and it provides a way to pool services and construct meaningful applications without having to invest in a lot of in-house expertise. We can build our G-cloud architecture one service at a time and use these back-end web services to provide apps on a multitude of existing and emerging end-user devices.
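
For a flavour of what “one service at a time” might look like, here is a hypothetical back-end service exposed over HTTP that any front-end device could consume. It uses only Python’s standard library and is a sketch, not a statement about what a G-cloud platform would actually provide.

```python
# A minimal back-end web service sketch using only the standard library:
# one "service" (a postcode-to-address lookup) exposed as JSON over HTTP
# so that any front-end device could consume it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ADDRESSES = {"EX1": "County Hall, Exeter"}  # placeholder data

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /address/EX1
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "address":
            body = json.dumps({"postcode": parts[1],
                               "address": ADDRESSES.get(parts[1], "not found")})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ServiceHandler).serve_forever()
```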

If the G-cloud is our future, we can’t do “quick wins”. We have to build from the back-end.