Over the last couple of years, companies like Uber, Amazon, Netflix, and Twitter have publicly declared their adoption of microservices application architectures and have detailed their decisions to refactor existing applications to implement this approach. For these companies, rewriting their core business applications was no small task and required broad collaboration and orchestration of their entire engineering organizations in a shared effort. In the end, they all concluded that a microservices application architecture would be worth the trouble. So what is so appealing about microservices, and why should more IT organizations take note?
Simply put, event-driven microservices applications are characterized by a series of small, functional processes that run continuously and independently of one another and that communicate via an event-based message passing framework. Each pair of communicating microservices can cooperate without any knowledge of the other's implementation; all that is required is that each supports a well-defined API. In addition, each microservice runs in isolation and asynchronously from all the other microservices.
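This decoupling can be sketched with an in-process stand-in for the message bus (a real deployment would use a broker such as Kafka; the service names and event fields below are illustrative, not from the article). The "billing" service knows only the event schema on the stream, nothing about the "order" service that produces it:

```python
import queue
import threading

# In-process stand-ins for two event streams on a message bus.
orders = queue.Queue()    # stream published by the hypothetical "order" service
invoices = queue.Queue()  # stream published by the hypothetical "billing" service

def billing_service():
    """Consumes order events; depends only on the event schema, not the producer."""
    while True:
        event = orders.get()
        if event is None:  # sentinel signals shutdown
            break
        invoices.put({"order_id": event["order_id"],
                      "amount": event["quantity"] * event["unit_price"]})

def order_service():
    """Publishes order events without knowing who consumes them."""
    orders.put({"order_id": 1, "quantity": 3, "unit_price": 10})
    orders.put({"order_id": 2, "quantity": 1, "unit_price": 42})
    orders.put(None)

consumer = threading.Thread(target=billing_service)
consumer.start()
order_service()
consumer.join()

results = []
while not invoices.empty():
    results.append(invoices.get())
print(results)  # invoice events derived from the order stream
```

Either side could be rewritten, or replaced with a different runtime entirely, as long as the event schema on the stream stays the same.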
The overall logic of a microservices application, therefore, can be represented by a directed graph, like a flow chart, in which each node represents a microservice and a directed edge represents a stream of messages that are published by one microservice and consumed by another. In other words, a large monolithic application that encodes complex logic could be refactored into a microservices application by decomposing the monolith into discrete functional units and then connecting them via message streams in a topology that resembles the control flow graph of the original monolithic application.
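The directed-graph view above can be made concrete with an adjacency structure. In this sketch the service names are hypothetical, and each entry maps a microservice to the set of services whose output streams it consumes; a topological order of the graph then mirrors the control flow of the original monolith, with upstream services appearing before their consumers:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical application topology: node -> set of upstream services
# (i.e., services whose message streams this node consumes).
topology = {
    "ingest":   set(),
    "validate": {"ingest"},
    "enrich":   {"validate"},
    "billing":  {"validate"},
    "report":   {"enrich", "billing"},
}

# static_order() yields every service after all of its upstream dependencies,
# mirroring the control flow of the decomposed monolith.
order = list(TopologicalSorter(topology).static_order())
print(order)
```

Note that "enrich" and "billing" share an upstream but not an edge between them, so they can run concurrently; the graph encodes only the genuine data dependencies.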
A Logical Representation of a Microservices Application
In many ways, the principles of microservices application design are the fundamental principles taught in every introductory programming class. Instead of being written as a monolith, a microservices implementation exhibits abstraction (each microservice implements an API, but the implementation does not affect other microservices) and modularization (the program is broken down into small, independent logical components), and promotes reusability (each microservice can be reused multiple times in the same application or across multiple applications). These properties make the program easier to reason about and, as a result, easier to modify and to test for correctness.
In addition to being good coding practice, implementing a microservices architecture has profound impacts on application performance and resource utilization. The benefits include:
- Process isolation: Each microservice can be architected, implemented, and deployed without affecting other microservices.
- Continuous high availability: If one service fails, it will not bring down the other services, so the application can keep running.
- Scalability and extensibility: Adding new microservices to an existing application can be done seamlessly in order to scale out the capacity of the application or to add new features. A microservice that needs to be scaled up can be duplicated independent of the other microservices.
- Event-based processing: Microservices communicate via asynchronous, event-based data streams and each service can start processing as soon as data arrives. It is an ideal paradigm for implementing continuous, always-on applications.
- Non-blocking functions: As microservices communicate via asynchronous message passing, there are no artificial execution dependencies between microservices, so services do not waste time waiting for responses.
- Application development agility: The application can be modified continuously. A microservice’s implementation can be updated by adding the new service implementation to the application and retiring the old one, without affecting the other services, thus enabling rapid development and innovation.
- Better hardware utilization: Each microservice can be run on hardware that is appropriate for its requirements (e.g., high CPU requirements or high I/O requirements), providing granular control over hardware usage.
- Easy development collaboration: Each microservice can be implemented independently, enabling the development of a large application through very loose collaboration of a large number of small teams.
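Two of the benefits above, non-blocking functions and scaling by duplication, can be illustrated together in a short asynchronous sketch. Here two replicas of the same hypothetical "resize" service pull from one shared event stream; each replica suspends at `await` without blocking the other, and capacity is scaled simply by adding replicas (the names and events are illustrative):

```python
import asyncio

async def resize_replica(name, stream, done):
    """One replica of a hypothetical service; suspends on get() without blocking others."""
    while True:
        event = await stream.get()
        if event is None:
            await stream.put(None)  # pass the shutdown sentinel to the other replica
            break
        done.append((name, event))  # stand-in for real event processing

async def main():
    stream, done = asyncio.Queue(), []
    # Scale out by duplicating the service: two replicas share one stream.
    replicas = [asyncio.create_task(resize_replica(f"replica-{i}", stream, done))
                for i in range(2)]
    for event in ["img-1", "img-2", "img-3", "img-4"]:
        await stream.put(event)  # events begin processing as soon as they arrive
    await stream.put(None)       # signal shutdown
    await asyncio.gather(*replicas)
    return done

processed = asyncio.run(main())
print(processed)
```

Each event is handled exactly once, by whichever replica is free, which is the essence of scaling a single microservice independently of the rest of the application.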
Microservices applications can be architected with a myriad of technologies, and many different tools and platforms can be used to develop and deploy them. Now that microservices are starting to become mainstream, we can learn from the lessons of application development leaders, like Uber, to define the key requirements of a platform that can best support them:
- Support for an integrated, persistent, event-based, publish/subscribe message passing framework with unlimited data stream storage and stream replication with strong consistency for disaster recovery and cross-data center processing
- Support for multiple different types of processing services, including multiple different tools and compute engines
- Integrated data layer with native support for files, tables, and data streams, all in a single global namespace so that the microservices can perform analytical or operational processing as necessary and can pass messages to each other all within the same platform
- Scale-out hardware with massive storage capacity so the application can grow as needed
- Support for cloud processing with infrastructure-agnostic deployment, including hybrid cloud or multi-data center architectures
The flexibility and extensibility of microservices applications mean that teams adopting the paradigm have shorter time-to-production, can innovate faster, and can improve efficiencies in their data centers and their engineering organizations. The paradigm shift to microservices, much like the growth of cloud computing, will continue to gain adoption and prominence within enterprise IT organizations for the foreseeable future. In the end, the success of a team that implements microservices will rely greatly on its ability to embrace this new, more agile mode of application design and development and, equally, on its choice of a broad, comprehensive platform: one that can simplify the application architecture and the development process and that can provide efficiencies at runtime by converging multiple capabilities and data structures in a single platform.
Crystal Valentine is Vice President of Technology Strategy at MapR. She has an extensive background in big data research and practice. Before joining MapR, she was a professor of computer science at Amherst College. She is the author of several academic publications in the areas of algorithms, high-performance computing, and computational biology and holds a patent for Extreme Virtual Memory. As a former consultant at Ab Initio Software working with Fortune 500 companies to design and implement high-throughput, mission-critical applications and as a tech expert consulting for equity investors focused on technology, she has developed significant business experience in the enterprise computing industry. Crystal received her doctorate in Computer Science from Brown University and was a Fulbright Scholar to Italy.