To understand the future of Information Technology (IT), we must first understand where IT came from.
IT was originally the 'pipes' of the business world: a necessary expense that kept things working smoothly. In other words, IT was a utility that enabled analog businesses to function more efficiently, streamlining existing processes without creating any additional value. In this model, infrastructure is a cost center, and the host organization must conceive of, design, and deliver all value to the customer.
Initially, these systems comprised only a few key technologies and datasets. They expanded relatively uniformly, cloning functions for other departments and users.
Most corporate employees know what happened next: Systems needed to connect as technology became more important. Digital infrastructure became hard to manage, secure, and keep within budget. In response, technology leaders attempted to create modular systems of 'enterprise architecture' to control and bring order to the rising chaos, which meant consolidating systems into centralized IT departments.
Consequently, these systems were generally proprietary, closed to other systems, and difficult for end users to access.
Because of the cost and the learning curve for new technologies, early IT projects focused on specific functions (such as account reconciliation within a finance department or inventory tracking in a grocery store chain). This approach worked for predictable, low-change functions, but as more users, institutions, and datasets were added, their interdependencies became ever more complex. This resulted in infrastructure becoming a tangled mess, causing challenges when one company needed to connect to (or had acquired) another.
A classic example is the acquisition of Continental Airlines by United Airlines in the US—the merging of their two IT infrastructures took over a decade.
Individual companies, or even different departments within the same company, built custom solutions or combined several strategies to address this. It was (and still is) common to see a combination of:
When overwhelmed by the complexity, risk, and cost of those strategies, many companies postponed modernization altogether, creating 'technology debt', an ever-increasing backlog of necessary tech updates.
Most forms of IT focused on 'faster, better, or cheaper' versions of analog approaches to work.
In other words, the benefits were still limited to business models and offerings that would be recognizable to someone unfamiliar with computers.
Because of this, the development of basic functionality was prioritized to keep costs low, and less attention was paid to interoperability and user interface.
This complexity gave rise to the idea of 'technology as a platform.'
However, 'platform' had different meanings in different contexts. "Tech platforms" could mean a collection of software produced by the same developer (like Microsoft, Novell, or Oracle), a company-specific set of tools developed in-house, or a place where lots of different software mixed (where all the tech 'pipes' came together).
'Platform' connotes a shared infrastructure and, sometimes, rigidly hierarchical models of leadership and control. Many firms experience a constant tension between the alignment required for stability and the autonomy necessary for innovation—especially conglomerates and other complex firms growing to span many customer groups and geographies, or even moving into new industries.
It isn't enough to create an aligned set of technologies developed by one central power. The speed and scope of innovation required in an ever-accelerating world make it hard to develop central technologies that work for everyone. Startups with less tech debt and inertia gladly mix and match more modern software, outpacing large companies' slower innovation and entrenched systems.
Firms of all sizes need to have room to add or subtract new technologies gracefully, stably, and securely, and sometimes allow 'competing' approaches to co-exist, such as two different customer relationship management (CRM) tools for different parts of the business.
Conceiving of "platform" as only a technology strategy isn't enough to modernize in the face of the ever-changing needs of companies and users.
As organizations create or participate in digital economies, they need to expand their focus from digital as a utility—infrastructure such as networking, database management, and internet access, basically confined to the IT department—to digital as a capability in every part of the business.
IT doesn't stop supporting essential functions of the company—instead, it starts connecting components needed for new and exciting digital value propositions.
One of the most well-known examples of this shift from utility to capability is Amazon's elevation of its internal web hosting technologies into its own product, Amazon Web Services, which then expanded into being a multi-sided platform in its own right.
By shifting the mental model to digital as a capability, companies can capture data and organize it into meaningful information. As part of this shift, organizations, and the individuals within them, must ensure they can offer value in today's shifting digital landscape.
To do this, each person in the firm must have at least a conversational ability—if not full Digital Fluency—in the leading technologies which affect and power their business.
Digital as a utility (the 'pipes' model):
- Backend infrastructure for internal use only
- Siloed and inflexible set of closed systems and data, built for specific goal(s)
- Cost center: infrastructure streamlines existing processes but does not attract new customers or create new revenue opportunities
- The company creates, delivers, and maintains all value in the customer relationship via a single product or suite of products

Digital as a capability (the platform model):
- Backend infrastructure also interacts with users and partners
- Fully open and interoperable systems and data, built for third parties to 'plug and play'
- Revenue generator: infrastructure supports a marketplace where third parties can create independent offerings that attract new customers
- The company hosts a platform that enables customers, strategic partners, and other third parties to co-create and extract value as well
Sometimes the best way to close a digital resource gap is to sort your needs into "build, buy, or partner" categories. The way to decide between these options comes down to organizational DNA: the core characteristics your company is good at and known for.
The DNA of an insurance company, for example, usually centers on evaluating risk. Look to your DNA to help you decide between building, buying, and partnering.
A 'build' strategy can help you maintain a competitive advantage. Building requires creating infrastructure and hiring and retaining top talent, which can be costly. In short, if you are creating something core to your value as a company, and you have the resources (time, money, and skills), you should probably build it yourself to protect that advantage.
A 'buy' strategy can keep things simple. If the resource you need is not close to your DNA and is just a utility function needed to bring your company to parity with the market, consider buying and integrating existing technology to avoid distraction. While buying can leave you dependent on vendors, it can get new digital offerings to market much more quickly, and it often gives you many options to choose from, which helps mitigate risk.
A 'partner' strategy can help you grow quickly when you don't have all the pieces of a digital value proposition. For example, you might have a large customer base while another company has the financial infrastructure you lack. Even top tech companies do this, as when Apple partnered with Goldman Sachs to create the Apple Card. If the solution you need is part of your DNA but you don't have the internal resources, or you want to leap to the top of the market quickly, it may make sense to partner.
APIs allow organizations and even individuals to quickly close the gap between their current, analog way of doing things and digital opportunities—while still having the option to change strategies later through data portability.
There are effectively three approaches to creating value with APIs: build, buy, or partner.
For more guidance on how to make the decision between those strategies, read the 'Build, Buy or Partner' section of "The Exponential Journey" in our Digital Fluency Guide.
GPS (also known as satnav) software was one of the first areas where automakers needed to update software regularly. The way update strategies progressed shows how the role of digital technologies has shifted in conventional companies.
A traditional automaker might offer a built-in navigation system in its cars. The first versions of these were never intended to be updated once the cars left the factory. The data was hard-coded into chips that couldn't be modified.
This is a pipe mindset rather than a platform mindset: all of the value is conceived of, built, and delivered by engineers, or creators of some sort, within the four walls of a company.
The mental model of in-car navigation systems needed to be updated to align with how maps themselves had changed. It took a long time for the auto industry to understand and adopt this mindset for navigation systems, and even longer to extend it to other parts of the car.
One automaker that embraced this concept from the outset was Tesla. Every time a Tesla customer's vehicle is parked at home, the car's operating system runs software updates via the home wifi, ensuring it is constantly being updated and adjusted.
Users might start their car one morning to find that entire features have been reworked or, in one case, removed entirely.
Tesla allowed users to customize their vehicle's horn with any sound or music they chose, with the intent of giving drivers an unprecedented level of individuality in their driving experience.
However, the US National Highway Traffic Safety Administration considered the horn to be a critical safety feature and compelled Tesla to disable these modifications and restore the horn's standard sound. So, while Tesla vehicles are all capable of individually-customized horn noises, the software that makes this possible has been disabled.
What Tesla was doing here was managing something that we call software-defined value, where a lot of the value of the car is defined by the way that the software runs. Tesla's software engineers redefined how the physical car's value is perceived by the user, through the use of updated software.
That's still not the mindset at many auto manufacturers. The majority of a car's functions are built in the factory and never changed once it ships; when a major update is needed, drivers must bring the car to the dealership, where technicians physically replace a component or manually update a piece of software.
This shift fundamentally changes the role of IT from a utility to a software factory capability across an entire organization.
This shift in thinking from monolithic infrastructure into a platform model affects companies in the following key ways:
Platforms (and especially application programming interfaces [APIs]) enable all of these changes:
API standardization radically lowers the cost and risk of innovating. In this way, network effects can flourish—the value of a network of people, machines, datasets, and developers increases exponentially as more members or 'nodes' are added. APIs are the connective tissue that allows people and organizations to quickly and securely create value with data.
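As a concrete (and deliberately simplified) illustration of that connective tissue, the sketch below enriches an internal order record with data pulled from an external API over HTTPS. The endpoint, field names, and weather service are hypothetical; the point is how little code is needed once a standard API exists.

```python
# A minimal sketch of creating value by combining data over an API.
# The endpoint and field names below are hypothetical, for illustration only.
import requests  # third-party HTTP library: pip install requests

def enrich_order_with_weather(order: dict) -> dict:
    """Attach a delivery-day forecast to an order using an external API."""
    response = requests.get(
        "https://api.example-weather.com/v1/forecast",   # hypothetical endpoint
        params={"city": order["ship_city"], "date": order["ship_date"]},
        timeout=10,
    )
    response.raise_for_status()        # fail loudly on HTTP errors
    forecast = response.json()         # standard JSON payload
    order["expected_weather"] = forecast.get("summary", "unknown")
    return order

if __name__ == "__main__":
    sample = {"ship_city": "Chicago", "ship_date": "2024-06-01"}
    print(enrich_order_with_weather(sample))
```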
In an infrastructure mindset, technology enables parts of the business that are already functioning. For example, the job of the IT department in a widget factory is to make all the processes run more smoothly, fulfill orders digitally, send invoices, etc.
In this mindset, the monolithic architecture approach is to have a big, seemingly cohesive collection of technologies that streamline existing operations. The IT department owns this technology. It's not visible or accessible to others, so it doesn't allow the people in the factory to influence what it does or how it works. It is often static and difficult to modify or adapt to changing conditions or circumstances.
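To make the contrast concrete, here is a toy sketch of that monolithic widget-factory system: inventory, fulfillment, and invoicing all live in one codebase and one deployment, so changing any part usually means touching (and redeploying) the whole. All class and function names here are invented for illustration.

```python
# A toy sketch of a monolithic design: ordering, invoicing, and inventory
# all live in one codebase, owned and deployed by a single IT team.

class WidgetFactorySystem:
    def __init__(self):
        self.inventory = {"widget": 100}
        self.invoices = []

    def fulfill_order(self, customer: str, quantity: int) -> dict:
        # Inventory, fulfillment, and invoicing are tightly coupled:
        # changing one usually means redeploying the whole system.
        if self.inventory["widget"] < quantity:
            raise ValueError("not enough stock")
        self.inventory["widget"] -= quantity
        invoice = {"customer": customer, "quantity": quantity, "amount": quantity * 9.99}
        self.invoices.append(invoice)
        return invoice

if __name__ == "__main__":
    system = WidgetFactorySystem()
    print(system.fulfill_order("Acme Corp", 3))
```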
The 'shared services' mindset suggests that large firms benefit from resources being consolidated in a central organization that then serves 'internal clients' within a company. For example, a single customer relationship management (CRM) tool & database might be deployed throughout a company.
This model sets out to more fairly attribute costs between various users of the shared services, achieve economies of scale, and control decisions to avoid duplicative or incompatible work. Effective shared services strategies seek input and feedback from many parts of the organization.
As an evolution of monolithic architecture, where very little communication occurred between IT and other parts of the organization, this can be useful. However, such a model will still struggle to understand needs at the 'edge' of the organization and can be slow, cumbersome, and costly if not consistently well-managed.
As thinking started to evolve, 'service-oriented architecture' began to emerge. This mindset broke the monolith into smaller pieces and offered some limited potential for customization. (Imagine a cafeteria with two mains and a few sides. There are options, but not many.)
The four types of service that service-oriented architecture tends to offer are:
This technology, much like monolithic architecture, mostly serves technologists. There is no accessibility for the end-user to mix and match these different services. There's often not enough Digital Fluency on both sides for the technologist and the business user to meaningfully converse about what additional options may be needed.
Service-oriented architecture is better than one size fits all, but it's not perfect. What is needed at this point is a shift away from the infrastructure mindset entirely. With an assembly mindset, people can mix and match different off-the-shelf components to come up with the perfect fit for themselves.
If service-oriented architecture is a cafeteria, then microservices are food carts where users can find all the best versions of specific products. There might be five coffee vendors at one collection of food carts, and you pick the one that's perfect for you.
With microservices, people can choose best-in-class options or the most well-fitting option and connect them to lots of other things so that they don't have to be locked into one specific way of doing things. Generally speaking, one-size-fits-all doesn't fit anyone well, and this is true for most services and technology contexts.
Microservices arose in the IT space to describe very specific services that do one or two key things, like processing payments or editing a photo.
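A minimal sketch of what such a single-purpose service might look like is shown below: a tiny image-resizing service that does one narrow job and exposes it over an API. It assumes the Flask and Pillow libraries; the route and payload shape are illustrative, not a standard.

```python
# A minimal sketch of a single-purpose microservice: it resizes images and
# does nothing else. The route and payload shape are invented for illustration.
import io

from flask import Flask, request, send_file   # pip install flask
from PIL import Image                          # pip install pillow

app = Flask(__name__)

@app.route("/resize", methods=["POST"])
def resize():
    """Accept an uploaded image and return a 200x200 thumbnail."""
    image = Image.open(request.files["image"].stream)
    image.thumbnail((200, 200))
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    buffer.seek(0)
    return send_file(buffer, mimetype="image/png")

if __name__ == "__main__":
    app.run(port=5001)   # one small service, one narrow job
```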
Working with many small services requires a different way of thinking about how they connect, such as the concept of a service mesh: a web of different services that are all interconnected.
Often microservices are bundled together by their creators to create services. Typically a service is a cohesive solution, designed to solve an overarching business problem or set of business problems, whereas a microservice tends to do one very specific thing that must be assembled with other services for it to be useful.
For example, PayPal is a service for payments. It manages the entire payment cycle for both the user and the developer implementing it.
Stripe, by comparison, breaks that down into much smaller services. Stripe also manages payments but, unlike PayPal, it doesn't bundle them with a credit card, an account to hold money, or any of the other services PayPal offers.
Over time Stripe has expanded its microservices to include other functionalities, but each remains a self-contained, very specific capability.
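For example, creating a payment with Stripe is a single, narrow call to its REST API. The sketch below reflects Stripe's publicly documented payment-intents endpoint at the time of writing, with a placeholder key; treat the details as illustrative rather than a reference.

```python
# A sketch of calling one narrow Stripe capability (creating a payment intent)
# directly over its REST API. The key below is a placeholder; never hard-code
# real secrets in source code.
import requests  # pip install requests

STRIPE_SECRET_KEY = "sk_test_placeholder"

def create_payment_intent(amount_cents: int, currency: str = "usd") -> dict:
    response = requests.post(
        "https://api.stripe.com/v1/payment_intents",
        auth=(STRIPE_SECRET_KEY, ""),               # Stripe uses the key as the username
        data={"amount": amount_cents, "currency": currency},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    intent = create_payment_intent(2000)   # $20.00, expressed in cents
    print(intent["id"], intent["status"])
```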
Monolithic architecture and shared-service-oriented architecture tend to be pretty top-down or center-out, and thus are tightly controlled. The microservices approach can, at first glance, seem chaotic to those not well-versed in it.
Time and again researchers have found that when users don't get their needs met, they often find another way. This is known as 'shadow IT'—IT that is not really managed by the IT organization but which is used by employees to do their work or which otherwise touches company data.
For example, if colleagues can't collaborate on a document inside proprietary enterprise software, they might create a Google doc in a personal Gmail account and share it with their coworkers, no matter how many times they're told not to, because it's the only way to effectively get their job done. Shadow IT can be brought into the fold by using microservices and providing more ways for people to integrate or "plug in" new software.
Modern IT departments can manage access to microservices and opt into certain common standards that enable users to 'assemble' solutions that work for them. For example, microservices typically connect to Salesforce via APIs, and Salesforce enforces very rigid security standards for passing data to and from its systems. The same is true of Microsoft. Therefore, data passed between Salesforce and Microsoft via API can be relied upon to comply with the security standards of both organizations.
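A hedged sketch of what that looks like in practice: reading a contact record from Salesforce's REST API and recreating it in Microsoft 365 via the Microsoft Graph API, with each platform's own OAuth token enforcing its security standards. Endpoints and field names follow the vendors' public documentation, but the org URL, tokens, and record ID are placeholders.

```python
# A hedged sketch of moving a record between two platforms over their
# authenticated REST APIs. Tokens would come from each platform's OAuth flow;
# all values below are placeholders for illustration.
import requests  # pip install requests

SALESFORCE_TOKEN = "placeholder-oauth-token"
GRAPH_TOKEN = "placeholder-oauth-token"
SALESFORCE_BASE = "https://yourInstance.my.salesforce.com"  # placeholder org URL

def fetch_salesforce_contact(contact_id: str) -> dict:
    response = requests.get(
        f"{SALESFORCE_BASE}/services/data/v58.0/sobjects/Contact/{contact_id}",
        headers={"Authorization": f"Bearer {SALESFORCE_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def create_outlook_contact(contact: dict) -> dict:
    payload = {
        "givenName": contact.get("FirstName", ""),
        "surname": contact.get("LastName", ""),
        "emailAddresses": [{"address": contact.get("Email", "")}],
    }
    response = requests.post(
        "https://graph.microsoft.com/v1.0/me/contacts",
        headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
        json=payload,
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = fetch_salesforce_contact("003XXXXXXXXXXXX")  # placeholder record ID
    print(create_outlook_contact(record))
```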
Amazon Web Services, or AWS, is a great example of microservices as a product.
AWS started as a move by Amazon to break their monolithic approach to websites into several services, such as data storage and data processing, then further and further down into microservices to do very specific things. In short, they analyzed the source code for the existing amazon.com site, pulled out different chunks of code that served a specific purpose, and then wrapped them into a web service interface that made access possible via API. Each of those different functions was deployed on Amazon Web Services so that other companies could pick and choose which they needed for their offering's individual functionality.
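For instance, a company can consume just one of those building blocks, such as object storage, through a few API calls. The sketch below uses the boto3 SDK with a placeholder bucket name and assumes AWS credentials are already configured in the environment.

```python
# A minimal sketch of consuming one AWS building block (object storage)
# through its API, using the boto3 SDK. The bucket name is a placeholder.
import boto3  # pip install boto3

s3 = boto3.client("s3")
BUCKET = "example-company-assets"   # placeholder bucket name

def save_report(key: str, body: bytes) -> None:
    """Store a report without running any storage infrastructure yourself."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)

def load_report(key: str) -> bytes:
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

if __name__ == "__main__":
    save_report("reports/q1.txt", b"Quarterly numbers go here")
    print(load_report("reports/q1.txt"))
```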
Netflix, for example, deconstructed its proprietary, monolithic architecture, identified the microservices it needed, and bought those functionalities from AWS. This allowed them to scale without building all of the functionality from scratch and, crucially, without interrupting service to customers.
Microservices allow a company to focus on what it does best, rather than trying to recreate many individual technical functions itself or using an imperfect one-size-fits-all solution from a third party.
(For more on the mindset Amazon and Netflix used in this process, explore the Guidebook section on Computational Thinking.)
For more on microservices, including a much more detailed look at the architecture behind them and the detailed options available, visit microservices.io.