The Metaphors of IT – Machines and Beasts

I remember the first time I worked in IT at a bank. The language in the workplace was all about control and process and more process. My manager was a great people person, which somewhat ameliorated this mechanistic tendency.

Of course, having worked there for a while, I realised that talk and action were somewhat different. Undocumented changes occurred and leadership turned a blind eye. Prima donna technologists roamed like cowboys across the systems. GNU tools showed up in the oddest of directory locations.

Command and control was the edict, but it was like herding cats. IT was managed as one big machine (back in the 70s, when I was playing school ground tiggy, it probably was one machine) that could be controlled down to the smallest element. Very particular and focused.

The claw of the beast (John Christian Fjellestad via Flickr)

Of course, back then IT supported only simpler applications like ledger accounting (urgh). Now IT underpins every part of a business. Business and IT have become one big melange.

With firewall boundaries being torn down to support XaaS, mobile and third-party integration, the world and organisations are becoming one mega-melange.
Organisations that see IT as something they can command and control are setting themselves up for disappointment, or consigning themselves to the past.

In this excellent blog post, Venkatesh Rao at Ribbonfarm details eight metaphors for the modern organisation. The organisation-as-machine metaphor is where we were; the organisation as brain or organism is where we are heading.

You cannot control every detail of an operation. A better approach is to manage the risks instead.

Risk management entails making forward-looking decisions with backward-looking information. Assessing the likelihood of something bad happening (say, an AWS availability zone failing, or a VC-backed vendor going bankrupt) can be problematic. Assessing the MTTF of a hard drive is a little more scientific. We can apply standards, certifications and the like to technologies and providers, but hey… Sarbanes-Oxley didn't stop the GFC.
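To make the hard-drive point concrete, here's a minimal sketch (assuming a constant-hazard, i.e. exponential, failure model and an illustrative MTTF figure) of turning a quoted MTTF into an annualised failure probability:

```python
import math

# Illustrative figure only: a drive with a quoted MTTF of 1,000,000 hours.
mttf_hours = 1_000_000
hours_per_year = 24 * 365

# Under a constant-hazard (exponential) failure model, the probability of
# a drive failing within one year is 1 - exp(-t / MTTF).
annual_failure_probability = 1 - math.exp(-hours_per_year / mttf_hours)

print(f"Annualised failure probability: {annual_failure_probability:.2%}")
# ~0.87% per drive per year, i.e. eight or nine failures a year across a fleet of 1,000.
```

There's no equivalent tidy formula for an AWS zone outage or a vendor going under, which is the point.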

Another method could be to manage the risk through insurance, and possibly even state governments. (Socialist, eh? I saw that, "comrade".)

I like the idea of insurance companies and governments (ICAGs) providing incentives for better IT management. ICAGs will only cover risk if there is a level of transparency, disclosure and accountability from IT users, vendors and providers. Mandatory disclosure laws are a good example (looking at you, Australia). Another example: should an organisation's risks be covered if it doesn't perform regular platform lifecycle refreshes?

Technology standards (security, service levels, manageability and so on) will only improve with transparency and audited action. Open source development is a good example of heading in the right direction. We also need to report this stuff in annual reports, marketing briefs and government audits.

In the same way that airlines and aircraft makers disclose details of accidents and implement improvements to avoid recurrences, improving air travel safety every year, IT must organise itself so that it is biased towards improving without the need for "top-down" intervention. Some things will slip through the cracks, but the "rules" will adjust to stop the same event occurring again.

Technology users and providers that can’t adapt will die off. Those that can will thrive.

What are some open and transparent practices IT should have in place to bias it to improving over time?

(As a side note for wide-reading generalists, check out Francis Fukuyama's The Origins of Political Order. Nation states that had the right institutions and were accountable thrived. The same concepts came to mind while writing this post.)

Cloud Integration: Mission Impossible?

This is a belated follow-on from “The big 5 areas to nail when moving to the cloud”. It's been a long time between drinks. Nothing like a new consulting gig to disrupt one's writing habits.

I caught up with an ex-colleague for lunch recently. We'd both been working on integration projects and were wondering why good integration capability is so difficult to build and so rare.

Integration often manifests itself in a shared asset like a bus or a broker. One issue is that IT is run by projects (with their own selfish interests), so integration that is scalable, re-usable and loosely coupled often gets jettisoned just to deliver the project. The result is point-to-point integrations, managed file transfers and shared DB connections: whatever is easiest for the project to understand and implement. There's nothing necessarily wrong with using any of these integration options in isolation, but over time the environment becomes unmanageable and 'orrible.

If a mature integration capability has already been established you stand half a chance, but even then integration teams can be seen as slow, fussy and expensive pedants, to be worked around. (Not by me, of course. I love you, integration guys.)

As a business grows and becomes more complex, understanding how applications communicate becomes very difficult. At some stage an organisation invests in a discrete integration capability (or DIC? Sorry for that moment of immature hilarity). The rise of mobile, cloud and outsourcing (the last being the least sexy of the three) has made integration even harder. It's a cliché that information needs to be accessible anywhere, any time, on any device, and not just between systems housed in a single data centre. How do you meet these demands?

Integration, already complex, has become more complex still. Security and integration teams must be proactive, forward-thinking and nimble to respond. The diagram below shows common integration methods in today's organisation.

Integration Methods

Sharing a DB between applications (the red arrows) is great for speed of exchange, but your applications need to agree on the data format at all times (which never happens with commercial software) and change/release management needs to be in lockstep. I've never heard of an internal and an external application sharing a direct database connection.

Point-to-point integrations (purple) are fine in isolated subsystems where the likelihood of adding a third application to the mix is low. You could integrate an internal and an external application this way, possibly with some bespoke format translation, but it'll end up being a crude hole in the firewall.

File transfers (light blue) are quick to implement once you've agreed on the file format. Typically file transfers are done in batch and are therefore not real time. You can make your batch transfers more frequent, but after a while the files are moving so often and are so small that you might as well look at messaging or web services. Externally, file transfers can occur via a file transfer gateway device.
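To illustrate, here's a minimal sketch of a nightly batch extract, assuming a hypothetical agreed CSV format and drop directory (all the names are made up for the example):

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical agreed format and drop location; both sides must agree on these up front.
DROP_DIR = Path("/data/outbound/orders")
FIELDS = ["order_id", "customer_id", "status"]

def export_batch(rows):
    """Write tonight's batch file. The receiving system picks it up on its
    own schedule, which is exactly why this approach is batch, not real time."""
    DROP_DIR.mkdir(parents=True, exist_ok=True)
    outfile = DROP_DIR / f"orders_{date.today():%Y%m%d}.csv"
    with outfile.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
    return outfile

# Example: export_batch([{"order_id": 42, "customer_id": 7, "status": "shipped"}])
```

Run it hourly instead of nightly and you're already drifting towards the messaging options below.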

Messaging and queuing systems (light green) are great for moving data between systems where the exchange is not time-critical, delivery must be guaranteed, and the systems have different data formats and standards. If using this method externally, there are different messaging technologies and governance standards to manage. For example, an organisation might use MQSeries internally, but their mobile app partner only has experience with Amazon SQS. You could start with a handcrafted adaptor if the likelihood of re-use is low, but it's not going to scale.
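Here's a minimal sketch of that handcrafted adaptor, assuming the internal queue read is stubbed behind a hypothetical helper (a real adaptor would use the MQ client library) and a made-up partner queue URL:

```python
import boto3

# Hypothetical stand-in: a real adaptor would pull messages from the internal
# queue manager via its client library; stubbed here to keep the sketch short.
def read_from_internal_queue():
    return ["<order id='42' status='shipped'/>"]  # illustrative payloads only

# Made-up queue URL supplied by the mobile-app partner.
PARTNER_QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/partner-orders"

sqs = boto3.client("sqs", region_name="ap-southeast-2")

# Forward each internal message to the partner's SQS queue. Fine for a trickle
# of messages, but there's no transformation, retry policy or monitoring here,
# which is why this approach won't scale.
for body in read_from_internal_queue():
    sqs.send_message(QueueUrl=PARTNER_QUEUE_URL, MessageBody=body)
```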

Web services (dark blue) have a distinct advantage because they use HTTP(S) and can integrate with an API product such as Apigee. Web services have become central to the API economy ("…the design of APIs has become as important as the design of the user interface"). The cloud is in many respects a cloud of APIs. Request/response integration has its disadvantages (no guaranteed delivery, polling, etc.), but most things can integrate with it securely.
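As a small illustration of the request/response trade-off, here's a sketch (the endpoint URL and payload are hypothetical) of a web-service call that papers over the lack of guaranteed delivery with simple retries:

```python
import time
import requests

# Hypothetical endpoint, e.g. exposed through an API gateway; illustrative only.
ORDER_API = "https://api.example.com/v1/orders"

def submit_order(payload, attempts=3, backoff_seconds=2):
    """POST the order, retrying on failure, because HTTP request/response
    gives no guaranteed delivery on its own."""
    for attempt in range(1, attempts + 1):
        try:
            response = requests.post(ORDER_API, json=payload, timeout=5)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == attempts:
                raise  # out of retries; let the caller decide what to do next
            time.sleep(backoff_seconds * attempt)

# Example use with a hypothetical payload:
# submit_order({"orderId": 42, "status": "shipped"})
```

An API management product such as Apigee would typically sit in front of a call like this, handling keys, quotas and security policies.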

Real-time integration between your inner and outer worlds is the future. Managing things like data confidentiality, service level guarantees, access management and transaction traceability across multiple environments and organisations will become increasingly difficult. Contract management will have to play a part.

The tools required will be provided, at significant cost, by the big established integration vendors. They'll give you the tools, but you'll have to build and run the integration yourself. Same as it ever was.

Who are your best local cloud providers?

It's easy to find a cloud provider, right?

Until you're told you have to use a locally-hosted provider! And then you're told to find a locally-hosted and locally-owned provider?

I need you to share the names and URLs of locally available cloud IaaS providers in your country. I'm compiling a list for Canada, mainland Europe, New Zealand, the UK, India, China, South Africa and of course the US (although the US owns all the big providers). Feel free to share in the comments.

Here's what I found for the Australian market, in no particular order (feel free to add to this):

Hosted in Australia:

  • AWS
  • Dimension Data
  • Telstra

Hosted and Australian-owned (I think):

  • NEXTDC
  • Cloud Central
  • Decimal
  • Bulletproof
  • UltraServe
  • Brennan IT
  • Macquarie Telecom

Why IT Infrastructure sucks

There’s a young hipster making his way up through the ranks. He’s the “great hope” and is given an ambitious project to run. The project will make the company a bucket of cash. He’ll win awards etc.

He assembles his team. They draw wire diagrams, make project plans and hack code on a few old PCs. He puts together a "business case": a PowerPoint with impressively opaque "business language". He asks for money. Something like:

  • 10 people (project manager, developers, testers, designers etc.)
  • 6 months or 110 workdays (a year is typically 220 workdays)
  • $1000/person/day (damned consultants!)
  • Multiplication gives a $1.1 million budget (the arithmetic is sketched below)
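For the record, a minimal sketch of that back-of-envelope arithmetic, using the figures above:

```python
# Back-of-envelope project budget from the figures above.
people = 10
workdays = 110          # 6 months at roughly 220 workdays per year
daily_rate = 1_000      # dollars per person per day (damned consultants!)

budget = people * workdays * daily_rate
print(f"Project budget: ${budget:,}")   # Project budget: $1,100,000
```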

The budget expectation is set and he goes about getting approval (going for coffee) from management.

Imagine the sinking feeling when he gets to the IT Infrastructure team. The IT Infrastructure guy hits him with annoying questions like:

How many hits will your site get? What are the growth projections? What is the impact if the site goes down? Does it run our middleware? How important is the data?

He answers as best he can:

The site must never go down, of course. We'd better have a backup site. We'll need somewhere to develop and test. What do you mean I have to have a performance test site?

The IT Infrastructure team comes back in two weeks with the following high-level costing and design:

"IT Infrastructure" "virtualization" "cloud" "web application"

  • 16 servers, 5 databases
  • Resourcing: ~200-300 days – ~$350,000
  • Hardware & Software: $100,000
  • Total ~$450,000

The “great hope” flinches! An extra half a million dollars! But computers are so cheap on eBay! His mate runs a start-up that uses hosting costing a few bucks a month.

He flies professionally through the stages of grief: denial, anger, bargaining, depression and acceptance. He'll have to reset budget expectations.

"When can this be delivered?" he asks.

The IT Infrastructure guy:

We can't start your project for 2-4 weeks because no one is available. You should have told us about your project 6 months ago. It'll take 3 months to get you the platforms.

Is this true of your IT world? Any stories to share? Or is your IT shop squeaky clean? Leave a comment below: