Architectural Design Patterns 1 – Layered (or Tiered) Pattern

When you’re building a software project, you need a plan for organizing your code. That’s where architectural design patterns come in. Architectural design patterns are a huge topic attracting more and more interest from software developers and businesses, and I want to write about the most popular ones as a series of blog posts. In this first post of the series, we will be talking about the Layered (or Tiered) Pattern.

What is the Layered Pattern? Imagine a cake with several layers. Each layer has its own flavour and role in making the whole cake delicious. In the same way, the Layered Pattern splits a software application into different layers. Each layer has its own job, making sure the whole app runs smoothly. The layers are as follows:

Presentation Layer: This is the layer the users see and interact with. It’s your app’s user interface (UI). For instance, this could be the web pages with buttons, text boxes, and images on a website. In a mobile app, it’s the screens where users tap and swipe.

Business Logic Layer (or Service Layer): This layer processes data and makes decisions. It’s like the brain of the application. Let’s say you have an online store. When a user places an order, this layer checks if items are in stock, calculates the total price, and decides whether the order can be completed.

Data Access Layer: This is where your app talks to databases or other places where data is stored. For example, when a user signs up for your app, their username and password might be saved in a database. This layer is responsible for saving and retrieving that information.

Infrastructure Layer: This layer helps the other layers do their jobs by offering common services like logging, error handling, or communication tools. For example, if your app needs to send emails (maybe order confirmations for an online store), this layer could provide the tools to do that.

The cool thing about the Layered Pattern is that each layer only talks to the one below it. So the Presentation Layer talks to the Business Logic Layer, but it doesn’t directly talk to the Data Access Layer. This keeps things neat and organized. When a user interacts with your app, their request moves down through the layers. Using our online store example: a user clicks ‘Buy Now’ (Presentation Layer) -> the order gets processed (Business Logic Layer) -> the order details are saved in the database (Data Access Layer).
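
To make this flow concrete, here is a minimal Python sketch of the online-store example, one class per layer. The class and method names are hypothetical, chosen only for illustration, and the data store is just an in-memory dictionary.

```python
# Minimal sketch of the Layered Pattern for the online-store example.
# Class and method names are illustrative, not from any real framework.

class DataAccessLayer:
    """Talks to the data store; here just an in-memory dict."""
    def __init__(self):
        self._stock = {"book-101": 3}
        self._orders = []

    def get_stock(self, item_id):
        return self._stock.get(item_id, 0)

    def save_order(self, order):
        self._orders.append(order)


class BusinessLogicLayer:
    """Checks stock, calculates the total, and decides if the order can go ahead."""
    def __init__(self, data_access):
        self._data = data_access

    def place_order(self, item_id, quantity, unit_price):
        if self._data.get_stock(item_id) < quantity:
            raise ValueError("Item out of stock")
        order = {"item": item_id, "qty": quantity, "total": quantity * unit_price}
        self._data.save_order(order)          # only the layer below is called
        return order


class PresentationLayer:
    """The 'Buy Now' button; it only ever talks to the business layer."""
    def __init__(self, business):
        self._business = business

    def buy_now_clicked(self, item_id):
        order = self._business.place_order(item_id, quantity=1, unit_price=9.99)
        print(f"Order confirmed, total: {order['total']}")


if __name__ == "__main__":
    ui = PresentationLayer(BusinessLogicLayer(DataAccessLayer()))
    ui.buy_now_clicked("book-101")
```

Suleyman Cabir Ataman, PhD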

Architectural Design Patterns 2 – Model-View-Controller (MVC)

When you’re diving into the world of software design, you’ll often hear about the Model-View-Controller, or MVC for short. It’s one of those blueprints, or design patterns, that helps developers keep their code tidy and well organized. In this piece, we’ll take a close look at what MVC is and how its parts play together.

Model: The Model is the brain behind the data. It’s where you keep everything related to the data you’re working with: fetching it, updating it, or even deleting it. Imagine you have an app that keeps track of books in a library. The Model will have all the information about these books, like the title, author, and published date. More than just storing, the Model sets the rules on how data can change or be presented. If a book goes out of stock, for instance, it’s the Model’s responsibility to know this and update accordingly.

View: The View is all about presentation. It’s what the user sees and interacts with. In our library app, the View would display the list of books, show pictures of the book covers, and maybe even have a search bar to find a specific title. It’s like a window into your app. But remember, the View doesn’t decide what to show on its own. It gets that info from the Model. So if a book is out of stock, the Model tells the View, and the View might show that book as “unavailable”.

Controller: Now, the Controller is the mediator between the Model and the View. You can think of it as the manager of a store. When you, as a customer (or the View), want to buy a book, you tell the manager (the Controller). The manager then checks the stock (asks the Model) and handles the sale. If the book is out of stock, the manager informs you (updates the View). In software terms, if you click on a book title wanting to buy it, the Controller handles this request. It asks the Model for the book details and updates the View to show the book’s info.

All in all, MVC is a cool way to split an application, making sure everything has its place and does its job efficiently. It helps keep the user interface clean, the data correct, and the actions or logic clear and separate.
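
To see how the three parts talk to each other, here is a minimal, hypothetical Python sketch of the library example. It is not tied to any particular MVC framework; the class and method names are made up for illustration.

```python
# Minimal MVC sketch for the library example; names are illustrative only.

class BookModel:
    """Holds the data and the rules about it."""
    def __init__(self):
        self._books = {"Dune": {"author": "Frank Herbert", "in_stock": True}}

    def get_book(self, title):
        return self._books.get(title)

    def mark_out_of_stock(self, title):
        self._books[title]["in_stock"] = False


class BookView:
    """Only presents what it is given; it makes no decisions of its own."""
    def show_book(self, title, details):
        status = "available" if details["in_stock"] else "unavailable"
        print(f"{title} by {details['author']} - {status}")

    def show_error(self, message):
        print(f"Error: {message}")


class BookController:
    """Mediates between the View (user actions) and the Model (data)."""
    def __init__(self, model, view):
        self._model = model
        self._view = view

    def book_clicked(self, title):
        details = self._model.get_book(title)
        if details is None:
            self._view.show_error(f"No book called {title}")
        else:
            self._view.show_book(title, details)


if __name__ == "__main__":
    controller = BookController(BookModel(), BookView())
    controller.book_clicked("Dune")
```

Suleyman Cabir Ataman, PhD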

Architectural Design Patterns 3 – Microservices

In this article, we will be talking about Microservices, a design pattern that has gained traction for its approach to building large and complex software applications. At its core, Microservices is about breaking an app down into smaller parts, where each part does its own job. Instead of one big chunk of software that does everything, you have multiple little services working together. Each microservice is responsible for its own task but collaborates with the others so the whole system functions properly.

This approach shines in many situations, especially in cloud-based systems. The cloud is like a massive plot of land ready for construction. Microservices allow developers to build or upgrade one part of their software (say, a single shop in the city) without disturbing the others. This flexibility means faster updates, easier scaling, and better fault isolation. If one service fails, it doesn’t bring down the whole application.

However, like everything in tech, Microservices isn’t a magic wand. Critics often point out that this pattern can lead to complicated setups. Each service might need its own database or server, leading to more things to manage and monitor. Also, while it’s easy to start with a few services, as the number grows, the complexity can become a headache.

Is Microservices architecture possible in on-premise systems? Yes. While the pattern made waves in the cloud world, businesses with on-premise infrastructure can also benefit from it. It’s about rethinking and restructuring the software rather than where it’s hosted. The principles remain the same.

One of the cool things about Microservices is how it gels with modern containerization technologies like Docker and Kubernetes. These tools offer ways to package, deploy, and manage software. When each microservice is placed inside its own container, it becomes portable, isolated, and easy to scale. It’s like giving each shop in our city analogy its own protective bubble, making it resilient and self-sufficient.

Now, some folks might say, “Isn’t this just like SOA (Service-Oriented Architecture)?” Well, they’re not wrong, since the two are cousins, but not twins. While both focus on breaking software into services, Microservices takes it further by ensuring each service is fully independent, often having its own database and environment. SOA, on the other hand, might have shared databases and be more tightly coupled.

To wrap up our chat about Microservices: it’s clear that while it offers many advantages, it is essential to understand its challenges. As with any architectural pattern, it is about using the right tool for the job, keeping both the advantages and the potential pitfalls in mind.
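
To give a feel for how small and self-contained a single service can be, here is a hypothetical “inventory” microservice written with nothing but Python’s standard library. The endpoint, port, and data are made up; in a real system this service would live in its own repository, own its own data store, and typically run inside its own container.

```python
# A single, self-contained "inventory" microservice using only the standard
# library. In a real system each service like this would have its own codebase,
# data store, and container image; the endpoint and port here are made up.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"book-101": 3, "book-102": 0}  # this service's own private data


class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /stock/book-101
        if self.path.startswith("/stock/"):
            item_id = self.path.split("/")[-1]
            body = json.dumps({"item": item_id, "stock": STOCK.get(item_id, 0)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Other services (ordering, payments, ...) would call this one over HTTP.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

Suleyman Cabir Ataman, PhD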

Architectural Design Patterns 4 – Event-Driven Architecture

Event-Driven Architecture is currently enjoying a lot of attention from businesses around the world due to the enhanced capabilities offered by cloud-based systems. Imagine a digital ecosystem like a busy marketplace. In this environment, various systems act as vendors announcing their services and updates. Rather than each customer (or system) having to visit every vendor to check for new items, they simply wait and respond whenever a vendor announces a product they’re interested in. That’s precisely how Event-Driven Architecture operates. Systems no longer continuously scan for updates or changes; they react upon the occurrence of a significant event. For instance, in a shopping app, the moment an item is purchased, an event is broadcast signifying a stock reduction. Other parts of the system, like inventory management, react to this event to update the available stock count.

The domains where Event-Driven Architecture truly creates value are those requiring instantaneous actions. Whether it is real-time stock updates on e-commerce platforms, or the rapid notifications users receive on social media platforms after a new post by someone they follow, Event-Driven Architecture is the unsung hero making all these millisecond-level updates possible.

The advantages of Event-Driven Architecture are numerous. By removing the need for continuous polling, system performance gets a significant boost. Scalability is another benefit. As your user base grows, you can simply introduce more listeners or event consumers without having to reconfigure the event producers. However, it’s not without challenges. With larger systems, managing a large number of events can become painful. Ensuring that every single event is processed reliably, especially when the volume is massive, can pose challenges. Tools like Kafka, RabbitMQ, SQS/SNS, and Azure Service Bus have eased the implementation of Event-Driven Architecture, particularly within cloud environments. Yet, it’s essential to note that it’s not a one-size-fits-all solution. Some critics of Event-Driven Architecture point towards the cost of event management and the complexity introduced when debugging or tracing specific issues.

In comparison to the Microservices architecture, while both are designed to increase responsiveness and scalability in systems, their approaches differ. Microservices involve decomposing an application into smaller, manageable services that operate independently. In contrast, Event-Driven Architecture is more about the interaction between these services, emphasizing reaction-based communication instead of continuous polling.

Implementing Event-Driven Architecture in AWS and Azure

In the world of cloud-based solutions, AWS has positioned itself with tools tailored for Event-Driven Architecture. SQS, or Simple Queue Service, acts much like a post office, holding onto messages (or events) until a service is ready to process them. Paired with SNS (Simple Notification Service), which ensures these events are broadcast to all subscribers, AWS provides a suitable environment for event-driven solutions. Here’s where AWS Lambda comes into play. Lambda lets you run code in response to specific events. Think of it as an automatic door: when someone approaches (an event), the door opens (Lambda runs a function). For instance, when a new file is uploaded to AWS’s S3 storage service, Lambda can trigger a function to process that file immediately.
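
As a rough sketch of that S3-to-Lambda flow, a Python handler could look like the snippet below. It assumes the S3 upload notification has already been wired to the Lambda function in AWS, and the “processing” step is just a placeholder.

```python
# Minimal sketch of an AWS Lambda handler reacting to S3 upload events.
# Assumes the S3 bucket notification -> Lambda trigger is configured in AWS;
# the processing step below is just a placeholder.

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # React to the event: process the newly uploaded file.
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"processed": len(event.get("Records", []))}
```
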
On the Azure front, the platform hasn’t been left behind. Azure Service Bus serves as a dependable message broker, ensuring messages find their way to the right recipients. Moreover, Azure also offers a unique approach with Table Storage. While not a direct tool for event-driven designs, this NoSQL store for semi-structured data has been leveraged by ingenious developers as an event ledger: it logs events, allowing services to react as necessary. Contrasting the two, Service Bus emphasizes real-time event communication, while Table Storage leans towards a log-and-react mechanism. I might provide sample code on this subject in a future blog post.

Azure Functions is Azure’s counterpart to AWS Lambda. It allows developers to execute specific pieces of code in response to a vast array of events. For instance, when a new record is added to an Azure database, an Azure Function could be set up to process or analyse that data instantly, integrating seamlessly with the event-driven model.

In summary, the world of Event-Driven Architecture, with its capabilities and potential pitfalls, presents a promising tool for developers. As cloud technologies continue to grow, tools and services tailored for event-driven designs are sure to play a significant role in the next wave of digital innovations.
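
For comparison, a Service Bus-triggered Azure Function in Python might look roughly like the sketch below. This assumes the function.json-based Python programming model with a Service Bus queue trigger already configured; the queue binding and the event payload are hypothetical.

```python
# Sketch of an Azure Function reacting to a Service Bus message (Python,
# function.json-based programming model). The queue binding is assumed to be
# configured separately; the event shape is made up for illustration.
import json
import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage) -> None:
    event = json.loads(msg.get_body().decode("utf-8"))
    # React to the event, e.g. update stock counts after a purchase.
    logging.info("Handling event %s for item %s",
                 event.get("type"), event.get("item_id"))
```

Suleyman Cabir Ataman, PhD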

Architectural Design Patterns 5 – Monolithic

I am aware that in our present-day world, it is a sin to talk about the monolith and not to curse it. However, I will neither sing its praises nor curse it. I will just try to explain it as an architectural design pattern and expose both its positive and negative sides.

The Monolithic architecture stands as a testament to the earlier days of software development. A monolithic architecture is like a single, tightly packed unit where all the software components are bundled together. Think of it as a large factory where every production stage, from raw materials to the final product, is handled under one roof. This design pattern was especially popular when software applications weren’t as complex as they are today. Its straightforward nature makes it a solid choice for simpler applications. Everything is in one place, making it easy to develop, test, and deploy. You don’t have to juggle multiple services or databases. This made it the preferred choice for many software projects in the earlier days of development.

However, nothing is perfect. As software became more sophisticated, the Monolithic architecture began showing its limits. Making even a small tweak could mean you have to rebuild and redeploy the whole application. Imagine having to shut down the entire factory just to change one machine. Scaling specific parts becomes a challenge too. Plus, if something goes wrong, it could bring down the whole system. Many developers moved away from this design due to these challenges. The need to quickly update features, maintain large code-bases, and ensure uptime has made many lean towards more modular architectures like Microservices or Event-Driven architectures.

Nevertheless, we cannot dismiss the Monolith entirely. There are cases where it’s not only useful but needed. For small teams working on less complex projects, or applications where performance is a top concern, the Monolithic approach might be ideal. Everything is together, there are fewer moving parts, and with modern tools, many of its traditional problems can be reduced. Furthermore, in a Microservices architecture, we can consider each service a simple, self-contained, micro-sized Monolith in itself.

In short, nothing is absolutely good or absolutely bad. Every tool and technique in software development has a specific purpose. Understanding the Monolithic pattern and its place in software design helps developers make informed decisions. While it might not be the go-to for every project in the market today, it has its merits, proving invaluable in specific scenarios.

Suleyman Cabir Ataman, PhD

Architectural Design Patterns 6 – Service-Oriented Architecture

Service-Oriented Architecture, often known simply as SOA, can be thought of as a city of services. In this city, each service is like a shop. Every shop provides a unique product or service but doesn’t worry about the other shops around it. Instead, it focuses on doing its own job really well. So, SOA is about creating independent services that work together in a large system.

Now, where would you find SOA being used? Imagine big companies with different departments, like finance, human resources, or sales. Each department uses different software. With SOA, these different pieces of software can communicate and share data more easily. So, if the sales department makes a sale, the finance software can quickly know about it and do its thing.

One of the great things about SOA is that each service can be updated or changed without messing up the whole system. This means companies can make improvements without shutting everything down. Also, since services are independent, it’s easier to test them separately. But it’s not all sunshine and rainbows. One big challenge is that when many services constantly chat with each other, things can slow down. And if one service breaks, it might cause problems for the services that depend on it. There have been some strong opinions about SOA. Some folks say it’s too complex, especially when you have a ton of services talking to each other. Others argue that it’s tricky to manage and keep track of all these services.

Both SOA and Microservices emphasize decentralization. They both push for a structure where individual components or services operate in a loosely coupled manner. However, while SOA often sees services as larger units encompassing multiple operations, Microservices aim for single functionality, making them finer-grained. This means that in the world of Microservices, every small function gets its own service. Further, while SOA services might share databases, leading to potential interdependencies, every microservice manages its own database, ensuring full autonomy.

But why is SOA not the go-to choice these days? Here’s the deal: there’s a newer kid on the block called REST (Representational State Transfer). Unlike SOA, which is like calling a friend and waiting for them to pick up, REST is more like texting. You send a message (or request) and get a reply when the other side is ready. REST is simple and works really well on the web, making it a favourite for many modern apps and websites. Back in the day, SOA was super popular because it allowed big, old software systems to talk to each other. But as the tech world moved more towards the internet and web apps, the simpler and more web-friendly REST started taking the spotlight.

In the end, while SOA had its golden days and served its purpose well, the evolving needs of the tech world have seen newer approaches, like REST, take the lead. Still, understanding SOA is like appreciating a classic movie – it gives you an idea of how things were and how far we’ve come.

Suleyman Cabir Ataman, PhD

Architectural Design Patterns 7 – Domain-Driven Design

Domain-Driven Design, or DDD for short, isn’t about code at first. It’s about understanding the business inside out, and then designing software that speaks the business’s language. Think about a hospital system. Before we write code, we would talk about patients, doctors, treatments, and appointments. By diving deep into the “domain”, the core of the business, we create software that feels like it was tailor-made for it.

DDD is super useful for businesses with unique rules and ways of doing things. Let’s take insurance as an example. Insurers have tons of rules about who gets covered, how much they get, and under what conditions. DDD makes sense of all this and gives us a map to build the software. When you use DDD, you get software that can change and grow with the business. It’s like a tree planted in the right soil.

But nothing’s perfect. DDD means spending a lot of time up front just understanding the business. This can feel slow for teams who are eager to start coding. Another thing some folks point out is that DDD has its own jargon. Words like “aggregates” and “entities” might sound fancy. But once everyone knows what they mean, it helps the team speak the same language and avoid mix-ups.

Now, why do many teams go for DDD? It’s simple. By understanding the problem first, they save time later. Less going back and fixing things. DDD is also like a bridge between the tech team and the business folks. Both sides understand each other better.

A real-life example might make this clearer. Let’s say we’re building a system for a library. The main challenge is keeping track of books, especially when they’re borrowed or returned late. With DDD, we start by talking to librarians. We learn how they classify books, how they handle late fees, and how they track which member borrowed which book. From this chat, we gather terms like “book”, “member”, “due date”, and “late fee”. These terms become the building blocks of our software. So, when a member borrows a book, our system knows to set a “due date”. If the book comes back after this date, the system calculates a “late fee”. By focusing on the library’s real challenges, our software feels like a natural extension of the library.

The strength of DDD is its focus on real-world problems. It’s like a tailor-made suit for the business. The weakness is that it needs a lot of chatting and understanding before any code gets written. It’s not one-size-fits-all, but for many projects, it fits just right.
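
A minimal, hypothetical Python sketch of that library domain might look like this. The entities and the late-fee rule mirror the terms gathered from the librarians; the field names and the daily rate are made-up assumptions for illustration.

```python
# Minimal domain-model sketch for the library example. The entities ("Book",
# "Member", "Loan") and the late-fee rule mirror the ubiquitous language of the
# domain; the daily rate and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class Book:
    title: str


@dataclass
class Member:
    name: str


@dataclass
class Loan:
    book: Book
    member: Member
    due_date: date
    daily_late_fee: float = 0.50  # assumed rate, purely for illustration

    def late_fee(self, returned_on: date) -> float:
        days_late = (returned_on - self.due_date).days
        return max(days_late, 0) * self.daily_late_fee


if __name__ == "__main__":
    loan = Loan(Book("Dune"), Member("Alice"), due_date=date(2024, 1, 10))
    print(loan.late_fee(returned_on=date(2024, 1, 15)))  # 5 days late -> 2.5
```

Suleyman Cabir Ataman, PhD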

Architectural Design Patterns 8 – Serverless (Function as a Service – FaaS)

Today’s topic is quite a popular one: a buzzword that keeps gaining popularity, even though it used to be only a niche architectural pattern. When we think about building software, a lot of time and energy is spent thinking about where and how it will run. Serverless architecture, also known as Function as a Service or FaaS, changes this. Instead of planning the environment their code will run in, developers mainly write functions and let cloud providers like AWS or Azure handle where they run.

Briefly put, Serverless is an architectural model where cloud providers fully manage code execution. Developers write the code, and the chosen cloud platform executes and even scales the application depending on the need. This allows developers to concentrate on what they are good at, in other words code development, while operational concerns like scaling and infrastructure management are “outsourced” to the cloud platform.

Serverless has a lot of benefits:

Cost-Efficiency: The most visible advantage of Serverless is the pay-as-you-go pricing model. This means that you are charged only for the resources you actually consume, that is, the compute time your application uses. If your application is idle for some time, there are no charges for server time. This can lead to significant cost savings, especially for applications that don’t have a consistent load pattern. For instance, an application broadcasting football matches will be extremely busy during a live event for a few hours, and quiet the rest of the time.

Rapid Deployment and Updates: Serverless architectures allow for faster deployment and updates. Because the infrastructure is managed by the cloud provider itself, the time and effort spent setting up and managing servers are reduced significantly. This enables quicker releases of applications and features and speeds up the development cycle.

Reduced Management Overhead: To avoid repeating myself I will keep this point short: with Serverless, the cloud provider takes care of infrastructure maintenance tasks. This removes a significant operational burden from the development team.

Automatic Scalability: Serverless platforms automatically scale your application depending on the incoming traffic. Whether your app experiences sudden spikes in usage or steady increases over time, the Serverless infrastructure adjusts automatically and ensures that the application remains available without manual intervention.

Built-in High Availability: Many Serverless platforms offer high availability and fault tolerance out of the box. Applications are deployed across multiple data centres, which ensures that they continue to operate even if one or more servers fail.

Simplified Backend Code: In a Serverless environment, developers can simplify backend code by relying on the cloud provider to manage complex infrastructure tasks. This can lead to cleaner, more focused application code that is easier to maintain and update.

Enhanced Flexibility: Serverless architecture offers the flexibility to build applications using a variety of programming languages. This allows teams to use the best tools for their specific requirements without being constrained by infrastructure considerations.

However, it’s not all sunshine. Serverless also has the following disadvantages and potential pitfalls:
Dependence on Cloud Providers: One of the primary disadvantages of Serverless is the reliance on the service provider. If the provider experiences downtime, technical issues, or changes in their pricing and service policies, it directly impacts your application’s performance and availability. This dependency also raises concerns about vendor lock-in.

Complexity in Managing Functions: As applications grow and the number of functions increases, managing these individual functions becomes more and more complex. Organizing, monitoring, and ensuring the harmony of numerous functions requires a well-planned architecture and maintenance. It can also lead to increased overhead in both development and maintenance cycles.

Cold Start Issues: Serverless functions may suffer from what is known as a ‘cold start’. This happens when a function is invoked after being idle for some time: it needs time to wake up, which causes a delay while the cloud provider allocates resources. While this may not matter for some applications, it can be a critical issue for applications requiring real-time responsiveness.

Limited Control and Customization: Since the cloud provider manages the infrastructure, there is limited control over the underlying servers and environment. This can be an obstacle for applications requiring specific configurations or customization at the server level.

Security Concerns: Security in Serverless architecture has its challenges due to a widened attack surface. While cloud providers secure the infrastructure, developers must ensure the security of application code and configurations. This requires careful attention to function-specific permissions, updates to dependencies, and securing API gateways. Implementing best practices like strict access controls, continuous monitoring, code reviews, and data encryption is essential to minimize the risks in a Serverless environment.

Testing and Debugging Challenges: Testing and debugging Serverless applications can be more difficult compared to on-premise systems. The distributed nature of these applications requires special tools and techniques for effective debugging and testing.

Performance Constraints: Serverless platforms often come with limitations on resources such as memory, execution time, and concurrent executions. These constraints can affect the performance of applications, or push costs over budget, particularly for tasks requiring heavy resource usage.

Networking and Integration Issues: Integrating Serverless functions with existing applications or third-party services can sometimes be complex due to networking and communication constraints. This integration complexity can impact the overall architecture and design of the system.

It’s easy to see why Serverless is gaining fans. It’s fast, cost-effective, and great for businesses that don’t want to deal with the nuts and bolts of server management. No doubt, AWS and Azure are the two frontrunners in Serverless computing, and both offer strong platforms for developers. AWS Lambda is a very popular Serverless service which integrates easily with other AWS services. Azure Functions is Microsoft’s answer to Serverless computing. Similar to AWS Lambda, Azure Functions allows developers to execute code triggered by certain events, but within the Azure ecosystem, integrating tightly with Azure’s other services. And it is not only these two: other cloud providers like Google Cloud, DigitalOcean, IBM Cloud, and Oracle Cloud offer their own Serverless platforms.
Google Cloud Functions provides a highly scalable and event-driven service, ideal for applications already using Google Cloud’s infrastructure. DigitalOcean, a cloud platform that has caught my attention recently, offers a straightforward and cost-effective Serverless platform called DigitalOcean Functions, which supports a variety of languages. Among the more traditional vendors, IBM Cloud Functions supports various programming languages and integrates well with IBM’s cloud services, while Oracle Cloud Functions is a suitable choice for enterprises looking to leverage the Oracle ecosystem. In short, while each platform provides similar features, they all have their pros and cons for different needs and different customers.

On the other hand, what if you are not on the cloud? Can you do Serverless on-premise? This is a question I was asked in a recent job interview. Even though it is challenging, it is possible. There are several tools and platforms which make the Serverless experience possible in on-premise setups. For example, OpenFaaS (Functions as a Service) is an open-source project that provides a framework for building Serverless functions on top of containers, allowing you to run and scale those functions inside your own infrastructure. Similarly, Kubeless (unfortunately no longer maintained by VMware) runs on top of your Kubernetes cluster and leverages its resources, bringing the Serverless experience without requiring an external cloud provider. Nevertheless, while these tools provide the basic Serverless functionality, they might not offer the same features and ease of use as the cloud providers. It’s essential to consider the trade-offs between flexibility, cost, and feature set when deciding on an on-premise Serverless solution.
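
To show how little code a single function actually needs, here is a hypothetical handler written in the style of OpenFaaS’s classic python3 template (a handler.py exposing a handle function). The JSON payload is made up for illustration, and the same logic would map onto AWS Lambda or Azure Functions with only the entry-point signature changing.

```python
# handler.py - sketch of a function for OpenFaaS's classic python3 template.
# The platform invokes handle() with the raw request body; the JSON payload
# used here is an assumption for illustration.
import json


def handle(req):
    try:
        payload = json.loads(req) if req else {}
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid JSON"})

    name = payload.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})
```

Suleyman Cabir Ataman, PhD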

Architectural Design Patterns 9 – Circuit Breaker

When designing systems, especially distributed systems, the flow of data and service requests must remain continuous. This brings us to an architectural design pattern called the Circuit Breaker. The name comes from electronics: simply put, it is designed to stop the flow when something goes wrong, preventing potential cascading failures in a system.

Understanding the Circuit Breaker Pattern

Imagine a scenario where a service relies heavily on a third-party service. However, that third-party service becomes slow or starts to fail at some point. Without any preventive measures, our service will keep making requests, waiting for timeouts, and possibly getting stuck. This is where the Circuit Breaker provides an advantage. It monitors the number of failed requests, and once a threshold is reached, no further requests are made to the failing service for a certain period. During this state, the third-party service might have the time and resources to recover. A real-world example is an online booking system: if a payment gateway is experiencing delays, the booking service can use the circuit breaker pattern to stop requests temporarily and provide a better response to users, rather than crashing or delaying responses.

Implementation Insights

A typical circuit breaker keeps track of requests and their success or failure rates. Depending on the rate of failure, it moves between three states: Closed, Open, and Half-Open. In the closed state, requests flow through normally. However, if failures cross a threshold, the breaker moves to the open state and blocks all requests. After a certain time, it moves to the half-open state, allowing a limited number of requests to pass. If those requests succeed, the breaker closes; otherwise it remains open.

Pros and Cons

A primary advantage of the circuit breaker pattern is its ability to prevent failures from spreading across the system. Instead of continuously sending requests to a failing service, it gives that service room to recover. Additionally, by using this pattern, systems can fail gracefully and give meaningful feedback to the users or calling services. However, a misconfigured circuit breaker can harm the system. If it’s too sensitive, its state might change too often, even for minor issues, which can leave services unavailable when they don’t need to be. Conversely, if it’s not sensitive enough, it might not change state when it should.

The circuit breaker is great for handling faults, but it doesn’t address the root causes. It’s a preventive mechanism, a beta blocker; it is not a solution to the underlying problem. So, while circuit breakers bring resilience, they shouldn’t be seen as a replacement for a proper error handling mechanism.
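
To make the three states concrete, here is a small, self-contained Python sketch of a circuit breaker. The thresholds and timings are arbitrary assumptions, and in production you would normally reach for a battle-tested library rather than rolling your own.

```python
# Minimal circuit-breaker sketch with the Closed / Open / Half-Open states.
# Thresholds and timings are arbitrary; real systems would use a proven library.
import time


class CircuitBreaker:
    def __init__(self, failure_threshold=3, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failure_count = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "half-open"      # let a trial request through
            else:
                raise RuntimeError("Circuit is open; request blocked")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self._record_failure()
            raise
        self._record_success()
        return result

    def _record_failure(self):
        self.failure_count += 1
        if self.state == "half-open" or self.failure_count >= self.failure_threshold:
            self.state = "open"
            self.opened_at = time.monotonic()

    def _record_success(self):
        self.failure_count = 0
        self.state = "closed"


# Usage sketch: wrap calls to the flaky payment gateway.
# breaker = CircuitBreaker()
# breaker.call(payment_gateway.charge, order_id)
```

Suleyman Cabir Ataman, PhD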