If all the bits of your car lived in different countries, and you had to ship them to your house and assemble them every time you wanted to drive somewhere, it would get a bit annoying, right? You would question why anyone thought this was an acceptable way of building something so critical.

Well, this is how banks treat their data. It lives in lots of different places and it can’t easily be put together to make sense of it. Data that is not accessible and timely is what I call “crappy data”, and banks, insurers and other financial institutions have loads of the stuff.

But there’s more. Not only are you shipping all of the car parts, they’re also from different companies and not standardised. Now you have to get a left-hand-drive car to work with a dashboard that’s built for a right-hand-drive car…

AI is a Nice Thing™

(that’s not actually a trademark I just wanted to use ™)

AI is being sold as the next big thing for good reason. Take a look at the Google share price for an example of what mastery of data can do for an organisation that gets it right.

But there’s a difference. I can buy the same running shoes Usain Bolt has, but I can’t run as fast.

Every week I pass through an airport with some major consultancy extolling the virtues of “AI” and being an “AI-powered business”. But, like my hopes of winning a gold medal in the 100-metre sprint, using AI the way Google does may be beyond most incumbent financial institutions.

That doesn’t mean I should give up on the dream of being faster and better.

Getting the basics tight so we can have Nice Things™

(still not an actual trademark, it’s just an ASCII character that I like)

There’s often so much value in getting the basics right first before you get to machine learning (ML):

  1. Is your data accessible?
  2. Is that data accessible in real time?
  3. Do you have the right consents recorded?
  4. How is the data tagged?
  5. How strong is your metadata?
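
As a toy illustration, the checklist above could be turned into an actual readiness check over a data set’s metadata. This is a minimal sketch; the field names and thresholds (a five-minute freshness window, a minimum metadata count) are invented for the example, not any real bank’s standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Dataset:
    # Hypothetical metadata record for one data set.
    name: str
    accessible: bool            # can anyone actually query it?
    last_updated: datetime      # how fresh is it?
    consents_recorded: bool     # do we hold the right consents?
    tags: list = field(default_factory=list)
    metadata_fields: int = 0    # crude proxy for metadata strength

def readiness_issues(ds: Dataset, max_age: timedelta = timedelta(minutes=5)) -> list:
    """Return the 'basics' failures, mirroring the five questions above."""
    issues = []
    if not ds.accessible:
        issues.append("not accessible")
    if datetime.now(timezone.utc) - ds.last_updated > max_age:
        issues.append("not real time")
    if not ds.consents_recorded:
        issues.append("missing consents")
    if not ds.tags:
        issues.append("untagged")
    if ds.metadata_fields < 3:
        issues.append("weak metadata")
    return issues

# A month-old extract with no consents, tags or metadata fails four checks.
stale = Dataset("card_transactions", accessible=True,
                last_updated=datetime.now(timezone.utc) - timedelta(days=30),
                consents_recorded=False)
```

The point of a check like this is not the code; it’s that every one of those failures has to be fixed before any ML model downstream is worth running.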

Most banks can’t access their data in real time, never mind have it in a format that is easy to query. So the first task is not defaulting to building a data lake, but ensuring the data itself is high quality. A data lake without live, real and broad data sets is bone dry. You can buy all the IBM Watson you like, but you won’t get the results you desire, aside from a press release.

If you can isolate a great data set you can go after very targeted use cases. E.g. Goldman Sachs has a service that will ingest global satellite data to tell you which retailers have full or empty car parks, which helps you forecast their share price.

I suspect what banks are after at the moment is actually Robotic Process Automation (RPA), a subject I generally hate, because it’s what I was doing in 1999 for British Telecom. It essentially takes a paper form, scans it, has an algorithm figure out where the important bits are, then applies some logic (like check if this is our customer, check what number the customer has, check their account balance, start a payment process).
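
To show how unremarkable that logic is, here is a minimal sketch of that kind of rule pipeline. Everything below is invented for illustration (the customer record, the field extraction stand-in, the decision strings); real RPA would sit behind an OCR step:

```python
# Minimal RPA-style pipeline: extract fields from a "scanned" form,
# then run the same checks a back-office clerk would.
# All customer data below is invented for illustration.

CUSTOMERS = {
    "12345678": {"name": "A. Smith", "balance": 250.00},
}

def extract_fields(scanned_form: dict) -> dict:
    # Stand-in for OCR: in real RPA an algorithm finds the
    # "important bits" on the page. Here they arrive pre-parsed.
    return {
        "customer_number": scanned_form["customer_number"],
        "amount": float(scanned_form["amount"]),
    }

def process(scanned_form: dict) -> str:
    fields = extract_fields(scanned_form)
    customer = CUSTOMERS.get(fields["customer_number"])
    if customer is None:
        return "reject: not our customer"
    if customer["balance"] < fields["amount"]:
        return "reject: insufficient balance"
    return f"start payment of {fields['amount']:.2f} for {customer['name']}"

result = process({"customer_number": "12345678", "amount": "100.00"})
```

Note what the sketch does not do: it doesn’t make the customer record any more accurate, timely or connected than it was before. That is the point of the next paragraph.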

RPA is gaining momentum because it can successfully automate a process performed by humans, and reduce operational expense. While that looks good on the P&L it does not magically resolve data quality issues or existing platform limitations. If a process is fundamentally flawed or broken and converted to robot-owned, the underlying data quality or connectivity issues that existed in the old world will also exist in the new.

Don’t fall for shiny new toy robot syndrome, the fidget spinner for the C-suite. Focus on:

  1. Understanding your data situation
  2. Understanding your business goals
  3. Understanding what you actually want

Every billboard in every airport will tell you AI is going to save us money and make us more efficient. That may make sense in theory but, returning to Usain Bolt, two people running the same race doesn’t make it an equal one.

While Facebook, Amazon and Google can and do use AI, the ensuing hype is loaded with some meaty caveats.

First, they have the data: all of it, including the data they generate themselves and the data they collect from other sources.
Second, they have the skills to make decisions based on that data. Call it data science if you must, but it’s so much more: a culture that understands data must be collected, understood and worked with.
Third, they have the scale to deliver new services based on what this data has revealed.

Banks tend to focus on the technology itself because they are engineered towards that way of thinking. So AI, as it exists in banks today, is less about the potential for new operational infrastructure or truly personalised services, and has defaulted to offshoring and automation under the guise of ‘RPA as an enabler for AI’. Or worse, a chatbot labelled as improving customer journeys.

And frankly, it’s one of the crimes of our age. AI will play a significant role in delivering financial services, but it will be an enabling technology, not the answer in and of itself.

Let’s put the hype to one side and focus on fixing how data is collected, where and why it is stored, how it is analysed and, critically, acting on what the data shows to define and deliver new services.

Some call that being data-driven. It’s a mindset, it’s a toolset and it’s a leap of faith…

But actually, there’s so much you can do without having to be the Usain Bolt of data. Let’s get beautiful, timely and accurate data, in everything we do. Then the fun really starts.