
The Coherent Business to SCS Model

3 min read

At first glance the self-contained system (SCS) approach [1] is a purely software architecture topic. However, there is a tight correlation with the organisational, i.e. the business, side of the story. Here's the hook: "Each SCS is owned by one team." [2] Further: "The manageable domain specific scope enables the development, operation and maintenance of an SCS by a single team." [3]

What's the consequence? Let's change perspective and look at it from the business side. A plausible correlation between an SCS architecture and the business layer is constituted by a mapping from the core business processes to their enabling (or maybe just facilitating) software systems. The mapping works as follows:

  1. Identify a value creating business process P.
  2. Divide P into logical steps P1 to Px, where "logical" means that each step represents disjoint, well-defined business logic.
  3. Each step is cast as a business domain or, for short, a domain.
  4. Finally, each domain is mapped to a self-contained system. Shared business objects such as customer or order are exchanged via RESTful HTTP or lightweight messaging as defined in the SCS approach.
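The four steps above can be sketched as a minimal data model. This is an illustrative sketch only; the process, domain, team and URL names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SelfContainedSystem:
    """An SCS: owned, developed, operated and maintained by a single team."""
    name: str
    team: str
    api_base_url: str  # the only dependency other systems may take

@dataclass
class Domain:
    """A business domain cast from one logical process step."""
    name: str
    system: SelfContainedSystem

@dataclass
class BusinessProcess:
    """A value-creating process P, divided into logical steps P1..Px."""
    name: str
    steps: list = field(default_factory=list)  # list of Domain

# Hypothetical example: an order-to-cash process
checkout = SelfContainedSystem("checkout-scs", "team-checkout",
                               "https://checkout.example.com/api")
fulfilment = SelfContainedSystem("fulfilment-scs", "team-fulfilment",
                                 "https://fulfilment.example.com/api")

p = BusinessProcess("order-to-cash", steps=[
    Domain("Checkout", checkout),
    Domain("Fulfilment", fulfilment),
])

for i, d in enumerate(p.steps, start=1):
    print(f"P{i}: {d.name} -> {d.system.name} (owned by {d.system.team})")
```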

Figure: The Coherent Business to SCS Model

The mapping of the business part to its technical counterpart is coherent. Therefore, this is called the "Coherent Business to SCS Model" (CBSM).

A product organization has product managers who manage their domain products powered by SCSs. There's a good chance that they do it the agile way, with teams that develop, operate and maintain their SCSs.

On the business layer the domains are tied together by the company vision, which enables the product managers to derive their product visions.

The core value of the CBSM is that it enables the domains to develop at maximum speed due to minimal dependencies at the system level. To be precise, the only dependency is the API. If a domain does not change its API, there is (theoretically) no limit to development within this domain. This even allows replacing the underlying SCS once it reaches the end of its lifecycle. Equally lightweight communication on the business layer makes this a success story.
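The "only dependency is the API" point can be sketched in a few lines: a consuming domain is coded against the published contract, so the SCS behind it can be replaced without the consumer noticing. The class and method names below are hypothetical:

```python
from typing import Protocol

class CustomerApi(Protocol):
    """The published API contract -- the only dependency between domains."""
    def get_customer(self, customer_id: str) -> dict: ...

class LegacyCustomerScs:
    """Original SCS implementation (hypothetical)."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "legacy"}

class RewrittenCustomerScs:
    """Replacement SCS at the end of the old system's lifecycle."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "rewrite"}

def consuming_domain(api: CustomerApi) -> str:
    # The consumer only knows the API, not the system behind it.
    return api.get_customer("42")["id"]

# Both systems satisfy the same contract, so the consumer is unaffected:
result_old = consuming_domain(LegacyCustomerScs())
result_new = consuming_domain(RewrittenCustomerScs())
```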

A congruent approach has been implemented at GALERIA Kaufhof (a member of Hudson's Bay Company) [4]. It further shows an evolution of the model: A domain may be powered by more than one SCS for technical reasons.

Well, I think this kind of interaction between business and technology is not really a new idea. Have a look at The Open Group's definition of a service in SOA, which embraces some of the core ideas [5]. My argument is that the Coherent Business to SCS Model is a more lightweight approach (buzzword!). I just wonder whether SCS is a consequence or the driver.

[4] (in German)

Just realized: My smartphone acts as an aggregator for e.g. email and calendar (business and private) edited on different computers. Isn't that economical and ecological nonsense? Having only one "computing unit" should be enough for common people like me.

REST vs. Message Queue

2 min read

I was asked whether to use REST or a Message Queue to realize data replication between two systems. Well, this depends on your requirements, doesn't it?

So, Carsten Blüm and I started a collection of arguments and observations:

Implementing trans-system data replication

REST (via data feed e.g. using Atom)

  • rather for asynchronous, delay-tolerant data replication
  • data replication is triggered client-side; the client decides when to fetch data from the server
  • i.e. temporary inconsistencies are acceptable
  • the client manages a resync after client-side data loss autonomously
  • the client is responsible for handling errors
  • the client needs a navigable data history
  • it is generally irrelevant who and how many clients are fetching (the same) data
  • data replication runs over HTTP and therefore benefits from HTTP features such as caching
  • data replication does not need to be reliable from the server's perspective; no acknowledgements of receipt are needed, i.e. no need to ensure a trans-system transaction
  • temporary downtime of the server, i.e. temporary unavailability of data, is acceptable
  • no technology beyond HTTP wanted

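The client-triggered style above can be sketched as a feed poller: the client keeps track of which entries it has already processed and fetches on its own schedule. This is a simplified sketch; the entry IDs are hypothetical, and a real client would fetch the pages over HTTP (ideally with ETag / If-None-Match so unchanged pages are served from the cache):

```python
# Client-side polling of a feed: the client decides when to fetch,
# tracks which entries it has already processed, and resyncs on its own.

def new_entries(feed_entries, seen_ids):
    """Return entries not yet processed and mark them as seen."""
    fresh = [e for e in feed_entries if e["id"] not in seen_ids]
    seen_ids.update(e["id"] for e in fresh)
    return fresh

# Simulated feed pages, as a stand-in for two successive HTTP fetches
# of an Atom feed; note the overlap between the pages:
page1 = [{"id": "urn:1", "data": "a"}, {"id": "urn:2", "data": "b"}]
page2 = [{"id": "urn:2", "data": "b"}, {"id": "urn:3", "data": "c"}]

seen = set()
first = new_entries(page1, seen)   # both entries are new
second = new_entries(page2, seen)  # only urn:3 is new; urn:2 is skipped
```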
Message Queuing

  • rather for virtually synchronous, delay-intolerant data replication (assuming clients'/consumers' uptime is 24/7 and they are reading constantly)
  • i.e. temporary inconsistencies are rather unacceptable
  • data replication is triggered server-side (assuming clients'/consumers' uptime is 24/7 and they are reading constantly)
  • the server is responsible for resending data after client-/consumer-side data loss
  • clients/consumers don't need a server-side data history
  • observation: the number of clients/consumers tends to be smaller and the clients/consumers are probably known; a need for routing and/or filtering and/or even a time to live may exist
  • no data replication over HTTP (assuming there is no RESTful API to the message queue)
  • data replication needs to be reliable; acknowledgements of receipt are needed
  • published data must be available even when the server is down
  • it's OK to extend the technology stack with a message queue

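The reliability and acknowledgement points can be illustrated with a toy in-memory broker: it delivers messages, waits for acknowledgements, and redelivers anything unconfirmed. This is purely illustrative; a real broker (RabbitMQ, ActiveMQ, ...) handles all of this for you:

```python
import queue

class TinyBroker:
    """Toy broker: delivers messages and redelivers any message the
    consumer did not acknowledge (e.g. after a consumer crash)."""
    def __init__(self):
        self._q = queue.Queue()
        self._unacked = {}   # delivery tag -> message body
        self._next_tag = 0

    def publish(self, body):
        self._q.put(body)

    def deliver(self):
        """Server-side push: hand the next message to the consumer."""
        body = self._q.get_nowait()
        self._next_tag += 1
        self._unacked[self._next_tag] = body
        return self._next_tag, body

    def ack(self, tag):
        """Consumer confirms receipt; the broker may now forget it."""
        del self._unacked[tag]

    def redeliver_unacked(self):
        """Requeue everything the consumer never confirmed."""
        for body in self._unacked.values():
            self._q.put(body)
        self._unacked.clear()

broker = TinyBroker()
broker.publish({"customer": "42"})
tag, msg = broker.deliver()
# Consumer crashes before acking -> the broker resends:
broker.redeliver_unacked()
tag2, msg2 = broker.deliver()
broker.ack(tag2)  # replication is now reliably confirmed
```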

Two closing observations:

  • You can implement virtually synchronous data replication perfectly well with REST (e.g. Atom feeds polled constantly), just as you can implement asynchronous data replication using a message queue.
  • You can, of course, implement acknowledgements in REST. In [1], Atom is used for "reliable data distribution and consistent data replication".

[1] Firat Kart, L. E. Moser, P. M. Melliar-Smith: Reliable Data Distribution and Consistent Data Replication Using the Atom Syndication Technology.