Object-Oriented Architecture, Functional Design

Every developer in every paradigm is taught the evils of globals. We know we don't want to expose data to global access, because it becomes too difficult to predict how that data will behave. When globals are necessary, the danger is usually mitigated by making them read-only: environment variables and other global application state that is either extracted from the system or set once at application start is generally considered relatively safe.

It is by this reasoning that, under OO designs, we justify the proliferation of globally accessible classes. We don't often think of it this way, but a class is really just a globally accessible construct. Yes, in many environments a class must be explicitly imported into client code (although, critically in the current development ecosystem, not in Rails). But we have still designed the class to be an entity that is fundamentally shared: a class is useless if it has no clients.

In large enterprise applications with a quarter-million lines of code, this can give us hundreds of entities that can, in principle, be accessed anywhere in the codebase. It's practically impossible to keep track of them all, which is why we spend our time organizing our code under higher-order abstractions. In a Rails application, we are given the basic abstractions of Model, View, and Controller for free. High-quality teams will usually introduce other categories of classes, such as Presenters or Operations.

But we still maintain lexical exposure: a Rails model remains accessible from any other point in the codebase, and you have zero guarantee that a modification of that model won't break other code. We can grep, we can test, we can put all kinds of checks in place, but that's all just a stopgap. The only way to guarantee that code won't break its clients is to put strong lexical limits on what code can be a client.

This, combined with the ease of access to small, independent computing environments (by which I mean AWS instances, honestly), is what has driven me and my teams toward microservice and microservice-like architectures. I can guarantee fairly easily that there is no lexical scope bleed between services running on completely different virtual machines--Meltdown notwithstanding.

It is this switch to microservices, in particular, that has made functional programming appealing to me. I do not need large, robust frameworks if I'm using small, well-encapsulated services, and in these small codebases a strict separation between code with side effects and code without them makes the whole easy to reason across, giving me and my team strong assurances of correctness before delivery.
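A minimal sketch of what that separation can look like inside one small service, in Ruby. The names here (`Pricing`, `InvoiceService`) are hypothetical, invented for illustration:

```ruby
# Pure core: no I/O, no external state; output depends only on input.
module Pricing
  TAX_RATE = 0.08

  def self.total(line_items)
    subtotal = line_items.sum { |item| item[:price] * item[:quantity] }
    (subtotal * (1 + TAX_RATE)).round(2)
  end
end

# Impure shell: the only place that touches the outside world.
class InvoiceService
  def initialize(io: $stdout)
    @io = io
  end

  def deliver(line_items)
    amount = Pricing.total(line_items)  # all the reasoning lives in the pure core
    @io.puts("Invoicing $#{amount}")    # the side effect is isolated here
    amount
  end
end
```

Everything worth testing in detail sits in `Pricing`, which can be exercised without any environment at all; the shell stays thin enough to verify by inspection.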

But then we face the question of how to design the relationships between microservices. A common model is for the microservices to reflect the underlying ERD. This is a patently ridiculous model, because the abstraction needs of behavior and data are just plain different. Code reflects behavior, and so our code-level abstractions should be behavioral. When we're hell-bent on making our code, at the highest level, reflect the ERD, we are not hiding secrets; we are exposing the lowest level of application structure at the highest level, and we have completely negated the reasons for our abstractions.

Instead, I like to return to object-model thinking in my microservices architecture, and I bring certain principles of object-oriented design and reasoning into my architectural plans. Microservices should be conceived of as entities that accept messages, and they alone are responsible for the behavior those messages trigger. This is the old-school "message-passing" object model. So a client microservice that needs to interact with another service fires a message into whatever the intercommunication channel is--I'm a big fan of Rabbit, but gRPC or even plain HTTP can give you the same expressivity.
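To make the message-passing model concrete, here is a hypothetical inventory service sketched in Ruby, with the transport (Rabbit, gRPC, HTTP) abstracted away entirely. All the names are illustrative; the point is that the only way in is a message:

```ruby
# A service modeled as a message receiver: one entry point, no exposed state.
class InventoryService
  def initialize
    @stock = Hash.new(0)  # internal state, never reachable from outside
  end

  # Every interaction is a message with a type and a payload.
  def receive(message)
    case message[:type]
    when "restock"
      @stock[message[:sku]] += message[:quantity]
      { type: "restocked", sku: message[:sku] }
    when "check"
      { type: "stock_level", sku: message[:sku], quantity: @stock[message[:sku]] }
    else
      { type: "error", reason: "unknown message type" }
    end
  end
end
```

In a real deployment, `receive` would be the handler bound to a queue consumer or an RPC endpoint, but the shape of the contract--messages in, messages out, no reachable internals--is the same regardless of transport.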

When we do this, we have lexical guarantees that don't exist in single-service object-oriented software. I know for a fact that my client services cannot directly manipulate the internal state of host services. I know for a fact that a host service cannot breach caller-callee boundaries and affect the client except through an explicit return message. Depending on how I do interservice communication, I can very easily create explicit command/query separation.
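One way that command/query separation can fall out of the boundary itself is a client-side facade that refuses to blur the two. This is a hypothetical sketch; `transport` stands in for whatever carries the message and is just a callable here:

```ruby
# A client facade that enforces command/query separation at the service boundary.
class ServiceClient
  def initialize(transport)
    @transport = transport
  end

  # Commands mutate the remote service and deliberately return nothing.
  def command(type, payload = {})
    @transport.call({ type: type, mode: :command }.merge(payload))
    nil
  end

  # Queries return data and carry no mutation intent.
  def query(type, payload = {})
    @transport.call({ type: type, mode: :query }.merge(payload))
  end
end
```

Because commands return `nil` by construction, no caller can quietly grow a dependency on a command's side-channel data; anything it needs to read has to arrive as a query or an explicit reply message.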

Walking into a microservices-oriented architecture with a plan to rely on our old principles of OO design also gives us a mental framework for the long-term growth of the application suite. We can leverage the modes of thinking we've all spent years developing to inform our architecture as requirements change over the software lifecycle, and we can reason and converse intelligently about when to extract new services.

This dovetails well with functional design at the lower level: if we build our microservices out of lightweight, functional bodies of code, we can refactor across multiple services more reliably than we can with object-oriented microservices. When a microservice has grown to the point where it should be split in two, pulling out the pure functions, along with the impure functions that invoke them, can be reasoned about with far more certainty than ripping whole interacting objects out of an object-oriented microservice. Extracting an entire suite of functionality from a service becomes as easy as extracting methods from an object.
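A sketch of why the split is mechanical, with hypothetical names: a checkout service has grown two concerns, and because each pure module shares no state with the other, one module plus its thin impure wrapper can be lifted out as a unit.

```ruby
# Pure functions for concern A (stays behind in this service).
module TaxRules
  def self.tax_for(amount, rate)
    (amount * rate).round(2)
  end
end

# Pure functions for concern B (the extraction candidate).
module ShippingRules
  def self.cost_for(weight_kg)
    weight_kg > 10 ? 20.0 : 8.0
  end
end

# Impure shell: each concern has its own narrow entry point, so ShippingRules
# plus quote_shipping can move to a new service without touching TaxRules.
class CheckoutService
  def quote_tax(amount, rate)
    TaxRules.tax_for(amount, rate)
  end

  def quote_shipping(weight_kg)
    ShippingRules.cost_for(weight_kg)
  end
end
```

The extraction is a cut along an existing seam: move `ShippingRules` and `quote_shipping` into the new service, replace `quote_shipping` here with a message to that service, and nothing about `TaxRules` needs to be re-verified.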

We live and code in an age where the average working developer is dealing with enormous codebases. Our software has gotten more complicated and requires new abstractions. The old ideas still work, but the scale has exceeded their initial description. So let's abstract on that paradigm.