2018 Cloud Trends in Financial Services

2018 is in full swing, and from our recent conversations and work with CxOs of global financial services firms, we note 10 major areas of focus in the delivery of their cloud programs. These are our forecast cloud trends in financial services for the remainder of 2018:

Shadow IT

Back in April 2015, I wrote about some of the risks of Shadow IT, suggesting that cloud usage (in particular, SaaS) may be highly prevalent across the enterprise. In 2017, several of our clients started using CASB tooling to run discovery reports, many showing on the order of 9,000-10,000 cloud services in use. Those levels seem unbelievable (and the reports do contain a significant amount of noise, with risk scores based on the opinions of the vendors), but they raise real questions about how well understood and managed the existing estate is.

Additionally, as financial institutions have been educating themselves on how to build out compliant public IaaS and PaaS environments, the complexities of managing risk in the SaaS model are becoming better understood, as we explain in this article.

As the first cloud trend in 2018, we expect financial services firms to undertake Cloud “Look Back” programs, revisiting already sanctioned usage as well as working out how to respond to CASB discovery reports.

Compliance & Security

We’ve been heavily focused on cloud governance and controls for several years now. Back in 2014, we wrote a whitepaper opining on how Monetary Authority of Singapore (MAS)-regulated entities could achieve compliance in AWS, and in 2015 we wrote about specific control areas to focus on with public cloud.

Since then, we have helped a wide range of leading financial services firms, across many jurisdictions, evolve their approaches to accommodate the nuances of using public cloud services. In the process, we have built an ever-maturing control library, with regulatory mappings and reference models for governance and security, which has directly enabled compliance across those jurisdictions.

“Cloud” regulations that have emerged in recent years have focused on managing outsourcing risk rather than giving specific guidance on technical controls, so both financial institutions and cloud service providers are still feeling their way through the compliance challenges. This is holding back meaningful adoption of public cloud, and several firms have scaled back their ambitions as a result.

This year, we hope to see the great work emerging from the European Banking Authority, the Association of Banks in Singapore and the FS-ISAC generate authoritative, consistent material around which the industry can coalesce.

We also anticipate more sophisticated automation of compliance to emerge this year in support of Continuous Delivery and Deployment efforts.
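To make the idea concrete, a compliance control expressed as code might look like the minimal sketch below, assuming AWS and the boto3 SDK. The control chosen here (flagging S3 buckets with a public ACL grant) is just one illustrative example of a check that could run continuously in a delivery pipeline:

```python
# A minimal "compliance as code" sketch, assuming AWS and boto3.
# It flags S3 buckets whose ACLs grant access to all users.
import boto3

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    """Return the names of S3 buckets with a public ACL grant."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            # Group grantees carry a URI; canonical users do not.
            if grant["Grantee"].get("URI") == PUBLIC_GRANTEE:
                public.append(bucket["Name"])
                break
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"non-compliant bucket: {name}")
```

Run on a schedule or on every deployment, checks like this turn a written control into an enforceable one.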

Public Cloud Business Case

Without a compelling data-center-level event, financial institutions have been struggling to make an infrastructure-wide business case for adopting public cloud. Most recently, capital avoidance for individual, massive-scale risk management use cases has been the sole compelling IaaS business case for several large institutions, driven by recent regulations such as CCAR, MiFID II and FRTB.

With many institutions looking to upskill in Machine Learning and Artificial Intelligence, the proprietary PaaS services from different vendors appear to offer the next immediately compelling business case in public cloud.

Taking more of a long-term view, several firms are building strategies to containerize applications – both to enable greater portability, should a compelling event support wider adoption of public cloud, and to support emerging enterprise-wide DevOps strategies.

Portability & Exit Management

Lock-in, exit management and portability are receiving increasing attention, particularly where the business case for public cloud is built around cloud providers’ proprietary services. Last year, we published ideas on approaching exit management with regulators, and we expect this cloud trend to garner more serious attention this year.

In the world of cloud abstraction, the only constant is change. Where historically firms have sought a single do-it-all abstraction platform, it is increasingly recognized that supporting hundreds of polyglot development teams requires multiple options for entry into the cloud, with different levels of built-in opinionation.

At the lowest, least opinionated level, Infrastructure as Code tools such as Ansible, Terraform and Salt are very mature and in broad use for provisioning VMs and storage.
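To illustrate what these tools manage, the sketch below shows create-if-absent provisioning in Python with boto3. The instance name and AMI are hypothetical; a declarative tool such as Terraform expresses the same intent as configuration, with dependency ordering and state tracking handled for you:

```python
# A minimal sketch of idempotent provisioning, assuming AWS and boto3.
import boto3

DESIRED = {
    "Name": "risk-grid-worker",   # illustrative name
    "ImageId": "ami-12345678",    # illustrative AMI
    "InstanceType": "t2.micro",
}

def ensure_instance(spec):
    """Create the named instance only if it does not already exist."""
    ec2 = boto3.resource("ec2")
    existing = list(ec2.instances.filter(Filters=[
        {"Name": "tag:Name", "Values": [spec["Name"]]},
        {"Name": "instance-state-name", "Values": ["pending", "running"]},
    ]))
    if existing:
        return existing[0]          # already converged; nothing to do
    return ec2.create_instances(
        ImageId=spec["ImageId"], InstanceType=spec["InstanceType"],
        MinCount=1, MaxCount=1,
        TagSpecifications=[{"ResourceType": "instance",
                            "Tags": [{"Key": "Name", "Value": spec["Name"]}]}],
    )[0]

print(ensure_instance(DESIRED).id)
```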

Kubernetes appears to have won the container management battle, with huge industry momentum behind the project (all of the prominent vendors have integrated Kubernetes into their platforms, including Red Hat, Pivotal, Mesosphere, Google, AWS and Microsoft Azure). Docker, synonymous with “containers” since 2013, is under threat as other OCI-compliant runtimes gain prominence, especially with Red Hat’s purchase of CoreOS, with its rkt runtime.
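One practical consequence of this consolidation is a common API surface across distributions. A minimal sketch using the official Kubernetes Python client, assuming a local kubeconfig from any conformant distribution:

```python
# List every pod in the cluster via the standard Kubernetes API.
from kubernetes import client, config

config.load_kube_config()   # read credentials from ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```

The same code runs unchanged against OpenShift, GKE, EKS or AKS, which is precisely the portability argument behind the Kubernetes momentum.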

Pivotal, Red Hat and Microsoft are by far the most prominent vendors in the “portable” PaaS market, through Cloud Foundry, OpenShift and Azure Stack respectively. Several vendors and open-source projects are going after the nascent serverless/FaaS space, including Pivotal (Spring Cloud Function), IBM (OpenWhisk), Oracle (Fn), Kubeless, OpenFaaS and Fission.
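Whichever platform wins, the programming model is broadly similar: a stateless handler that receives a dictionary of parameters and returns one. The sketch below uses the Apache OpenWhisk convention for Python actions; the other platforms listed above use closely analogous entry points:

```python
# The shape of a FaaS function (OpenWhisk-style Python action):
# stateless, event-in, result-out, with all infrastructure concerns
# (scaling, routing, retries) delegated to the platform.
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"hello, {name}"}
```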

Developer Workflow Integration

Many firms are starting to see early benefits from tactical efforts to couple cloud provisioning with CI/CD automation (a concept the team at Weaveworks has termed “GitOps”).
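At its core, GitOps treats the Git repository as the source of truth and continuously converges the runtime environment onto it. The sketch below is a deliberately naive reconciliation loop (the repository URL and path are hypothetical); production tools such as Weaveworks’ Flux add diffing, drift detection, access control and alerting:

```python
# A naive GitOps reconciliation loop: pull the desired state from Git,
# then apply it to the cluster. "kubectl apply" is idempotent, so
# repeated convergence is safe.
import subprocess
import time

REPO = "https://git.example.com/platform/app-manifests.git"  # hypothetical
CLONE = "/tmp/app-manifests"

subprocess.run(["git", "clone", REPO, CLONE], check=True)
while True:
    subprocess.run(["git", "-C", CLONE, "pull", "--ff-only"], check=True)
    subprocess.run(["kubectl", "apply", "-f", CLONE], check=True)
    time.sleep(60)
```

The key property is that no change reaches the environment except through a reviewed, audited Git commit, which is exactly the control regulated firms need over automated deployment.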

By integrating cloud service management into the developer’s workflow, we hope to see fewer development teams feel the need to circumvent shared cloud service offerings, allowing them to focus on evolving application transformation work in API management, (micro)service re-architecture, distributed system instrumentation and management (using service mesh tooling) and massive-scale processing.

As the concept is rolled out more broadly, the challenge becomes modelling complex, multi-cloud applications, a capability currently fragmented across multiple tools, each with its own proprietary modelling language. We believe the industry would benefit from momentum behind a common language for describing the overall topology of an application, to support cross-industry collaboration on topics such as compliance automation and a common approach to meeting the regulatory requirements associated with broad adoption of straight-through automation.

Data Gravity

Technology to enable the portability of code is maturing and becoming mainstream. However, there are still major migration challenges at the data layer. Legacy inter-system communication and data access patterns are often built on assumptions about the reliability and performance of the network, and about the proximity of systems, that don’t hold true in a hybrid cloud or multi-cloud model.

This complex web of dependencies requires deep analysis and close program coordination across hosting, connectivity, data services and application architecture to overcome the challenges that create centers of gravity around data.

We recently explored data gravity in more detail in this article.

Legacy Suitability & Migration Throughput

As we wrote in June, the prevailing approach coming into financial services from other industries has been the pattern-based migration factory. We’re seeing firms scale back their cloud ambitions, not just for the compliance and security reasons mentioned above, but also because the throughput of migration programs has tailed off dramatically once the small number of “easy” applications that fit a pattern have been moved. The result is either short-cuts creeping into the migration program to maintain throughput, or a questioning of the true value of moving legacy applications to the cloud, with a shift towards a longer-term strategy of application modernization, operating model transformation and data governance.

James Akerman recently provided advice on setting up migration projects for success.

Massive Scale Risk Management Environments

Recent and emerging regulations, such as CCAR, FRTB, MiFID II and CAT, are driving massive-scale processing requirements. These are a natural fit for public cloud, given their bursty, periodic usage, avoiding tying up capital in under-utilized hardware. Future demand is also hard to predict, with a close correlation between capacity and business volume, and variability in the volume of regulatory requests that must be responded to. Building on this, lines of business are using the same solutions to augment their own operations, with public cloud providing cost-benefit transparency – for example, a correlation between the frequency of calculations and a reduction in business risk.
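A back-of-the-envelope calculation illustrates the capital-avoidance argument. All of the figures below are illustrative assumptions, not client benchmarks:

```python
# Illustrative comparison: owning a fleet sized for peak risk runs
# versus renting equivalent capacity on demand for the peak hours only.
NODES_PEAK = 2000                 # nodes needed during month-end risk runs
HOURS_PEAK_PER_YEAR = 200         # hours per year at that peak
OWNED_COST_PER_NODE = 3000.0      # assumed annual all-in cost per owned node (USD)
CLOUD_COST_PER_NODE_HOUR = 0.50   # assumed on-demand cost per node-hour (USD)

owned = NODES_PEAK * OWNED_COST_PER_NODE   # paid whether used or not
burst = NODES_PEAK * HOURS_PEAK_PER_YEAR * CLOUD_COST_PER_NODE_HOUR
print(f"owned fleet: ${owned:,.0f}/yr, cloud burst: ${burst:,.0f}/yr")
# owned fleet: $6,000,000/yr, cloud burst: $200,000/yr
```

The steeper the peak relative to average utilization, the stronger the case, which is why these periodic regulatory workloads lead the way.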

The limits of the “infinite” capacity promised by public cloud providers have been reached on several occasions during our scale testing with clients, exposing significant differences in providers’ ability to scale elastically, even in mainstream US regions. Intervention by the public cloud providers has been required to meet these massive-scale needs, so testing and close provider relationships are a must.

These use cases are also prime candidates for the proprietary data analytics services available in the cloud, as well as test beds for alternative technology solutions based on GPGPUs and FPGAs.

Operating Cloud at Scale

As dynamic cloud use moves from a handful of niche use cases towards mainstream adoption, and DevOps transformation becomes an enterprise-level concern, the “Ops” aspects have largely been left as an afterthought, on the assumption that self-service and automation will organically shrink the operations burden.

Dynamic cloud use brings important operational challenges, requiring holistic investment across the Cloud/DevOps spectrum that goes well beyond offering self-service APIs and hiring a handful of Site Reliability Engineers (SREs) to write automation scripts, especially in highly regulated environments. Chris Allison wrote about several of these operational challenges last year.

Data Governance

Much of what we’ve mentioned above points to increasingly fragmented placement of data over time.

Enterprise data strategies, however, are generally pointing towards the ability to exploit advances in data analytics and AI technologies to interrogate data right across the organization to help improve the customer experience, identify business opportunities, manage risk, or even directly monetize the data itself.

SaaS usage represents a balkanization of data. Not only does the data sit outside the corporate network, it is often held in proprietary databases that cannot be easily interrogated, and extraction can be costly, slow or difficult (due to access rights, data volumes and custom extraction interfaces). Some firms are starting to consider backing out of SaaS-based CRM usage (e.g. Salesforce and Microsoft Dynamics), partly for these reasons.

Finally, while it’s commonplace to have processes and tooling in place to stop the unauthorized transfer of sensitive data to SaaS services, it’s less common to monitor the SaaS data repositories themselves for compliance and integrity – for example, checking for manually entered personally identifiable information and auditing who has access to it.
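As a simple illustration of the latter, the sketch below scans exported free-text records for PII patterns. The patterns and record source are illustrative; production tooling would more likely use a DLP service or CASB integration than bare regexes:

```python
# Scan exported SaaS records (e.g. CRM free-text notes) for PII patterns.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_record(record_id, text):
    """Yield (record_id, pii_type) for each suspicious match."""
    for pii_type, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            yield record_id, pii_type

# Illustrative usage against a hypothetical exported record.
for hit in scan_record("case-1042", "Customer SSN is 123-45-6789"):
    print(hit)   # ('case-1042', 'ssn')
```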

Summary

The industry shows a dichotomy between the use of SaaS and IaaS/PaaS platforms. SaaS is in wide use, with decentralized management making it relatively easy to consume, whereas application teams looking to exploit IaaS and PaaS platforms are largely directed to internally provisioned cloud (read: managed VMs or limited PaaS choices) or to centralized brokerage and control of public cloud services. Modern application teams would prefer to exploit PaaS or FaaS (“serverless”) services but are held back by compliance concerns, such as exit management. Most firms would like the ability to move workloads between multiple public cloud providers, but the investment involved in onboarding to a public cloud means they have active integration and viable contracts in place with only one provider at this time.

While every CIO we work with acknowledges the inevitability of public cloud for IaaS and PaaS, few firms are migrating aggressively to the public cloud yet and the budgets available are limited. The themes listed above illustrate the complexities of identifying suitable workloads, building viable business cases and providing highly secure and well risk-managed routes to public cloud services.

For the next couple of years, the focus for most large regulated financial services firms will continue to be on safe, centralized brokerage of public cloud capabilities, with adoption that remains limited relative to their overall footprint. The exceptions, with aggressive strategies, either have a board-driven agenda for modernization or a unique opportunity to exit existing data centers without lingering fixed-cost issues.

In our view, firms need to tie together application modernization, DevOps and cloud strategies with a focus not only on agility, but also on aggressive reduction of production services, testing and QA costs. When this aggregate, cross-divisional business case is well articulated, the drivers for change are compelling.


The author

Ian Tivey

Associate Partner, New York

Ian is a competent and dynamic technologist with over 10 years of experience across a diverse range of financial services technology, including global real-time data delivery networks, low-latency trading environments, market data systems, enterprise platforms and public cloud. Ian has a proven track record of working with and influencing global teams of technical and business stakeholders to deliver high-quality results within short timescales.

ian.tivey@citihub.com