No. Microprocesses are the opposite: they don't make RPC calls to other functions, they can only output data, and they don't own data; they talk to a database system.
each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API
have their own stack, inclusive of the database and data model
are organized by business capability, with the line separating services often referred to as a bounded context
Microservices are service functions that are executed by a request. This is what leads to a mud-ball of dependencies and tight coupling. Microservice call chains are unterminated, leading to call-chain complexity and increased technical debt. Microprocesses, on the other hand, have terminated call chains: they don't call other microprocesses; they only output to one or more databases.
Domain complexity is inescapable, but software coupling need not follow from it. Whether it's an older monolith with services in inversion-of-control containers, or microservices with individually deployable services, function call chains are complex to manage.
The complexity is limited to the data/domain by isolating each logic function as a process of Input-Process-Output: data comes in from the domain data, state is changed, then the output goes back into the domain data.
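The Input-Process-Output shape can be sketched minimally. This uses SQLite for a self-contained illustration (the document recommends PostgreSQL in practice); the `orders` and `order_emails` tables, column names, and message text are all illustrative assumptions, not part of the original.

```python
import sqlite3

# Illustrative schema: an input collection and an output collection.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, processed INTEGER DEFAULT 0);
    CREATE TABLE order_emails (order_id INTEGER, body TEXT);
""")
db.execute("INSERT INTO orders (customer) VALUES ('alice')")

def microprocess_step(conn):
    """Input -> Process -> Output: one step of state change, no RPC."""
    # Input: pull unprocessed rows from the domain data.
    rows = conn.execute("SELECT id, customer FROM orders WHERE processed = 0").fetchall()
    for order_id, customer in rows:
        # Process: compute the new state (pure logic, no calls to other services).
        body = f"Thanks for your order, {customer}!"
        # Output: write the result back into the domain data, and terminate here.
        conn.execute("INSERT INTO order_emails (order_id, body) VALUES (?, ?)", (order_id, body))
        conn.execute("UPDATE orders SET processed = 1 WHERE id = ?", (order_id,))
    conn.commit()

microprocess_step(db)
```

Note that the function's only dependencies are its input and output collections; there is no service contract to consume or define.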
Microprocesses mutate data
Microservices "own" data
Owning data means that a service becomes a gateway for data that must not be bypassed. Data can then no longer be queried in the data layer, leading to complexity that is duplicated in both the domain model and the logic elements.
Some microservices act as a RESTful web gateway into a bounded context; the most atomic ones perform database insert/update/select/delete on owned entities; others check authorisation. Often data access, authorisation, and validation logic are combined into a single microservice.
With a lot of (unnecessary) effort, different types of "things" can be separated into individual microservices.
Microprocesses, altogether, do only one kind of thing: logic. Each microprocess performs a different step of change of state. Without extra effort, the developer is constrained toward simplicity.
What does the rest? General-purpose protocols in the data-layer:
A database web gateway - serves as the web gateway
The database - does Authorisation
The database - does CRUD
Microprocess: individual terminating steps.
Microservice: propagation of service function calls.

Microprocess: one thing, one step of changing logical system state.
Microservice: multiple things: authorisation, logic processing, auditing, data storage.

Microprocess: does not "own" domain data; talks to a database of some sort.
Microservice: "owns data"; has its own isolated database, one per microservice.

Microprocess: CRUD is not handled by the microprocess.
Microservice: one layer of microservices per data collection/table (CRUD).

Microprocess: on the boundary between two microdata entities; the transform between two states.
Microservice: inline with one data collection/table.

Microprocess: only the immediate data schema matters; no dependencies on other microprocesses.
Microservice: "maintains own contracts"; defines an interface to be called, and calls other microservices. Tight coupling.

Microprocess: only immediate microdata dependencies, no others.
Microservice: a hierarchy of other microservice dependencies to "call".

Microprocess: microdata contracts (e.g. VIEWs).
Microservice: service contracts (RPC parameters, VIEW models).

Microprocess: never called as a web service; activated only by signals from subscribed database changes.
Microservice: a web service URL is typical; also event messaging.

Microprocess: data being processed is generally stored and repeatable; easy to troubleshoot and retry.
Microservice: data being processed is transient; difficult to debug.
Overall benefits of Microprocess over Microservice:
A microprocess has a clearer definition and purpose - one modification of system state
Easier to design, document, and replace
Simpler local dependencies (input collection(s), and output collection(s)) - they don't "call" other microprocesses
Removes the need for redundant gateway code
(Makes it obvious what "information" is vs data)
Microservices repeat mechanisms for enforcing authentication and roles (given that they "own" the data collection). This is where most security bugs in software arise, and it is what must be code-reviewed for security problems. Microprocesses have no security responsibility at all: the database already provides the needed mechanisms, and its role configuration doesn't require code review; anyone with a basic understanding of database roles can review it for themselves.
RPC Call Tree Dependencies vs Independent Dataflows with Distributed Data Buses
Microservices are "called" to perform a process by another process (i.e. orchestration or UI). This leads to a complex tree of RPC call dependencies and requires stable RPC contracts. Microprocesses only ever find and process unprocessed data in a collection, triggered by database change. They are not "called", but may participate in a pipeline of changes. Dataflows are only ever localised to a part of the broader database.
URI vs Data Hub
Microservices are eventually bound to a URI endpoint so that the frontend can "call" them. Microprocesses are never directly "called": they are standalone OS processes, triggered by changes in their input data collection(s), that produce output(s). They are pure logic only.
TODO: Diagram - Data/Comms - Function - Data/Comms - Function - etc.... Showing service dependency chains. There are Data (View Model) contracts; Data (Storage Schema) contracts; Service API contracts; and the services themselves must define a service API contract as well as consume one. As opposed to microprocesses: there are only Data (Storage Schema) contracts. A microprocess pulls data in knowing the contracts it needs, and pushes data out knowing the contracts it needs.
An eCommerce system consists of quite a few parts, but here we'll focus on just one flow: from an order placed in the browser to an email being sent out.
Although the data is already stored, services require two more contracts: the function endpoint address itself is a contract, and so is the POST data schema.
The flow structure is complex, with a call to a function and a return path. Sometimes responses are just a completion; sometimes they contain data, which necessitates yet another view model to document (not pictured in this diagram for lack of space).
The /api/orders service is not able to do just "one" thing; it is doing:
network communication and authorisation (see diamond shape)
storing the order data in the database [data dependency]
calling the email service [service dependency]
Only the data contracts matter now. There's no need for view models, because the "functions" are not called; they are notified by the data collection (table).
The data layer directly takes care of the network communication and authorisation.
The only "custom" coded part of the system is the function, and other configurable responsibilities have been shifted to off-the-shelf components.
Without service contracts and service calls, functions only depend on Input/Output data collections.
There are fewer decisions that coders need to make. They don't need to consider:
return-path vs web-hook
[POST] vs [PUT]
should there be a separate /api/createOrderEmail
should we use an internal message bus so we can call /email service internally to bypass a second security check?
This is a multifaceted problem. Good tooling can overcome these issues for any project, but for microservices it's worse than it seems on the surface. If a developer wanted to work on a single microservice unit, they would need its subscriber, which needs other subscribers, and so on; the developer would need to build and run the whole system on their own workstation.
This is mostly a result of object-oriented thinking. Microservices distribute that thinking, and probably force the designer to think more critically about sub-system boundaries. But this doesn't solve the underlying problem: the object-oriented approach naturally leads to highly coupled systems unless you are an expert putting a lot of effort into resisting that force.
Without microservices, inversion of control is already a suitable solution within an object-oriented code base. Microservice brokers are like a network-distributed form of inversion of control, only more complex. There is no reduction in spaghetti relative to best practice without microservices.
Proponents claim that large applications have a larger resource footprint and consume more memory and computing power. But distributed, clustered, and other solutions existed long before microservices. Each microservice actually has overhead that cannot be shared, which accumulates into a larger overall overhead, leaving fewer resources for actual data processing.
This is more a problem with particular runtime environments. Those with an intermediate language, like C# and Java, need to JIT; Go and Rust don't have this issue. Microservices hosted on FaaS famously have cold-start problems: if they haven't been used for a while, the provider turns them off, and when they are needed again they have to warm up (including the container/host).
Microprocess Architecture principles can be applied to your existing microservices project to help make clear decisions. Microservices is a broad collection of ideas, some quite extreme; Microprocess Architecture highlights which ones are too extreme.
Proponents say that microservices should "own" their own data. This can be accommodated the Microprocess Architecture way by colocating the data in the same database, but having microservices "own" the data in terms of permissions. The data can be structured into schemas that are "owned" by a bounded context of microservices.
Colocating the data makes direct querying with JOINs much easier, via VIEWs that are owned by the VIEW user.
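A sketch of such a VIEW over colocated data (SQLite here for self-containment; the table and view names are illustrative, and in PostgreSQL the two contexts would be separate schemas with permissions granted on the VIEW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Two bounded contexts colocated in one database.
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
    -- A VIEW joins across contexts without any service-to-service call.
    CREATE VIEW order_contacts AS
        SELECT o.id AS order_id, c.email, o.total
        FROM orders o JOIN customers c ON c.id = o.customer_id;
""")
conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 9.5)")

# A consumer reads the joined result directly in the data layer.
row = conn.execute("SELECT order_id, email, total FROM order_contacts").fetchone()
```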
A full Microprocess Architecture would go much further than this. Data would not be owned by any microservice; it would be owned by the database engineers, modified and managed separately from microservices, with entities clustered according to real-world metrics, replicated to read-only databases, and partitioned geographically and per customer.
Event queues and brokers can be retained, but simplified to only signal that there has been a change. After being activated, the microservice should query the database directly through a VIEW to get the data it needs. Existing complex queues can be simplified to only indicate that a table, or a partition of a table, has been modified.
A full Microprocess Architecture would go much further than this. The database itself should only be used for signalling, eliminating the infrastructure for events (Broker, Queues).
For those that have tried everything to make their Microservices project work well, but it isn't working.
Slow - because of a long chain of microservices
Unreliable because microservices fail and break a call chain
Unreproducible errors in production that don't seem to happen in development - and you have already thrown everything you have at the problem
Project cannot make money because it's incomplete - held up by quality issues
Migrate toward Microprocess Architecture, going only as far as needed to reach the stability you require
Start again from scratch - not a good option
How to gradually migrate away from microservices to microprocess architecture:
Target a critical subsection of your system that needs to be stabilised, and apply the following gradually until you reach the level of stability you need.
Microdata is the most important - it terminates coupling
(This can begin in the critical areas that are most problematic)
Ensure you have a single database that all microservices will be able to access - PostgreSQL is recommended.
Move domain entities from microservices to this central database
Tuning: Creation of read replica(s) if necessary for scalability, optimisation of queries and indexes, and increasing VPS resources.
Bypass microservices for basic data reads
Instead of calling other microservices, query the database with SQL directly.
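The before/after of this step might look like the sketch below. SQLite stands in for the central database, and both the table and the commented-out HTTP endpoint are hypothetical names invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.execute("INSERT INTO products (name, price) VALUES ('widget', 4.0)")

# Before: an HTTP round-trip to a product-owning microservice, e.g.
#   requests.get("http://products/api/products/1")   (hypothetical endpoint)
# After: read the now-colocated data directly in the data layer.
price = conn.execute("SELECT price FROM products WHERE id = 1").fetchone()[0]
```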
Replace event queues with task tables
Create task tables to hold event queue data
Upon publishing an event, also duplicate the event data into the new task table(s)
Upon start of a microprocess, query the task table for data (with additional filter clauses) and ignore the event message
Change event queue signal to use a database signalling mechanism instead
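The task-table steps above might be sketched like this, again using SQLite for self-containment; the `events_raw` and `email_tasks` tables and the order id are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events_raw (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT);
    -- Task table: a durable, queryable replacement for a queue message.
    CREATE TABLE email_tasks (id INTEGER PRIMARY KEY, order_id INTEGER, done INTEGER DEFAULT 0);
""")

def publish_order_event(conn, order_id):
    # Step 1: keep publishing the event as before...
    conn.execute("INSERT INTO events_raw (topic, payload) VALUES ('order', ?)", (str(order_id),))
    # Step 2: ...but also duplicate the event data into the task table.
    conn.execute("INSERT INTO email_tasks (order_id) VALUES (?)", (order_id,))

def run_email_microprocess(conn):
    # Step 3: the consumer ignores the event payload and queries the
    # task table instead, with whatever extra filter clauses it needs.
    tasks = conn.execute("SELECT id, order_id FROM email_tasks WHERE done = 0").fetchall()
    for task_id, _ in tasks:
        conn.execute("UPDATE email_tasks SET done = 1 WHERE id = ?", (task_id,))
    return [order_id for _, order_id in tasks]

publish_order_event(conn, 42)
handled = run_email_microprocess(conn)
```

Because the task rows are stored, a failed run can simply be retried: the `done = 0` filter finds whatever was not completed, which is the "stored and repeatable" property claimed earlier.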
Further modification of the output state, and driving view of each target process