
Just realized: my smartphone acts as an aggregator for e.g. email and calendar (business and private) edited on different computers. Isn't that economical and ecological nonsense? One "computing unit" should be enough for ordinary people like me.

REST vs. Message Queue

2 min read

I was asked whether to use REST or a Message Queue to implement data replication between two systems. Well, that depends on your requirements, doesn't it?

So, Carsten Blüm and I started a collection of arguments and observations:

Implementing trans-system data replication

REST (via data feed e.g. using Atom)

  • rather for asynchronous, delay-tolerant data replication
  • data replication is triggered client-side; a client decides when to fetch data from a server
  • i.e. temporary inconsistencies are acceptable
  • client manages a resync after client-side data loss autonomously
  • client is responsible for error handling
  • client needs a navigable data history
  • generally irrelevant who fetches the (same) data, and how many clients do
  • data replication runs over HTTP and therefore benefits from HTTP features like caching
  • data replication does not need to be reliable from the server's perspective, no receipt acknowledgements needed, i.e. no need to ensure a trans-system transaction
  • temporary downtime of the server, i.e. temporary unavailability of data, is acceptable
  • no further technology than HTTP wanted
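The client-driven style above can be sketched in a few lines. This is a minimal, self-contained sketch: the `Feed` class and its entry IDs are hypothetical stand-ins for a real Atom feed served over HTTP, where the client would issue GET requests and benefit from caching headers.

```python
# Sketch of client-driven replication against a navigable data history.
# Feed stands in for an Atom feed; entries form an append-only history.

class Feed:
    """Server-side data history: an append-only list of (id, data) entries."""
    def __init__(self):
        self.entries = []

    def publish(self, data):
        self.entries.append((len(self.entries) + 1, data))

    def entries_after(self, last_seen_id):
        # The client navigates the history itself and asks only for news.
        return [e for e in self.entries if e[0] > last_seen_id]


class PollingClient:
    """The client decides when to fetch and tracks its own position."""
    def __init__(self, feed):
        self.feed = feed
        self.last_seen_id = 0
        self.replica = []

    def poll(self):
        for entry_id, data in self.feed.entries_after(self.last_seen_id):
            self.replica.append(data)
            self.last_seen_id = entry_id

    def resync(self):
        # On client-side data loss, the client re-reads the whole history
        # autonomously; the server neither knows nor cares.
        self.last_seen_id = 0
        self.replica = []
        self.poll()


feed = Feed()
feed.publish("a")
feed.publish("b")
client = PollingClient(feed)
client.poll()
feed.publish("c")     # temporary inconsistency until the next poll
client.poll()
print(client.replica)  # ['a', 'b', 'c']
```

Note that the server never acknowledges anything and never tracks clients; any number of clients can replay the same history.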

Message Queuing

  • rather for virtually synchronous, delay-intolerant data replication (assuming clients/consumers are up 24/7 and reading constantly)
  • i.e. temporary inconsistencies are rather unacceptable
  • data replication is triggered server-side (assuming clients/consumers are up 24/7 and reading constantly)
  • server is responsible for resending data after client-/consumer-side data loss
  • clients/consumers don’t need a server-side data history
  • observation: the number of clients/consumers tends to be smaller and they are probably known; a need for routing, filtering or even a time to live may exist
  • no data replication over HTTP (assuming there is no RESTful API to the message queue)
  • data replication needs to be reliable; receipt acknowledgements are needed
  • published data must be available even when the server is down
  • it's OK to extend the technology stack with a Message Queue
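The server-driven, acknowledged style can be sketched the same way. This is a toy in-memory broker, not a real message queue such as RabbitMQ; the `Broker`/`Consumer` names and the redelivery loop are illustrative assumptions.

```python
# Sketch of server-driven, acknowledged delivery: the broker keeps a
# message until the consumer acknowledges receipt, and resends otherwise.

class Broker:
    def __init__(self):
        self.unacked = {}    # message id -> payload, awaiting acknowledgement
        self.next_id = 1
        self.consumers = []

    def subscribe(self, consumer):
        self.consumers.append(consumer)

    def publish(self, payload):
        msg_id = self.next_id
        self.next_id += 1
        self.unacked[msg_id] = payload
        self.deliver(msg_id)

    def deliver(self, msg_id):
        # The broker pushes; consumers are assumed to be up and reading.
        for consumer in self.consumers:
            consumer.receive(msg_id, self.unacked[msg_id])

    def ack(self, msg_id):
        # Only an acknowledgement removes the message, which is what makes
        # the trans-system handover reliable.
        self.unacked.pop(msg_id, None)

    def redeliver(self):
        for msg_id in list(self.unacked):
            self.deliver(msg_id)


class Consumer:
    def __init__(self, broker, lossy=False):
        self.broker = broker
        self.lossy = lossy
        self.replica = []

    def receive(self, msg_id, payload):
        if self.lossy:
            return  # simulated data loss: no ack, so the broker keeps the message
        if payload not in self.replica:
            self.replica.append(payload)
        self.broker.ack(msg_id)


broker = Broker()
consumer = Consumer(broker, lossy=True)
broker.subscribe(consumer)
broker.publish("x")       # lost: the consumer drops it and never acks
consumer.lossy = False
broker.redeliver()        # server-side resend until acknowledged
print(consumer.replica)   # ['x']
```

Note the inversion compared to the REST sketch: here the server tracks outstanding messages and drives the resend, so the consumer needs no navigable history.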


  • You can implement virtually synchronous data replication perfectly well in REST (e.g. Atom feeds polled constantly), as well as asynchronous data replication using a message queue.
  • You can, of course, implement acknowledgements in REST, too. In [1], Atom is used for "reliable data distribution and consistent data replication".

[1] Firat Kart, L. E. Moser, P. M. Melliar-Smith: Reliable Data Distribution and Consistent Data Replication Using the Atom Syndication Technology.

"Let Technology Inspire You" Series

1 min read

Last Monday we started our "Let Technology Inspire You" series at DI UNTERNEHMER. Offering a forum to data people and enabling vital discussions about "data" is part of our transformation towards a (digital) data company. The first guest speaker was Tim Strehle, explaining "How the Semantic Web can change Digital Asset Management" (slides in German).

My inspirations are as follows:

  1. "Using HTML as the Media Type for your API" including RDFa markup
  2. The power of Mediatypes to describe a system's domain in an SCS environment

Thoughts on #1:

  • Is it efficient to have a hybrid resource representation for both humans and machines? Why not use content negotiation to provide a human-readable and a machine-readable representation? If a resource is data-driven, this separation should not take much effort.
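The content negotiation idea can be sketched as one resource with two representations, selected by the Accept header. A minimal sketch; the asset data and the `represent` function are hypothetical, and a real service would plug this dispatch into its HTTP framework.

```python
# Sketch of content negotiation: the same underlying resource is served
# either as HTML (for humans) or as JSON (for machines), depending on
# the client's Accept header.

import json

ASSET = {"title": "Sunset", "format": "JPEG"}

def represent(accept_header):
    """Return (content_type, body) for the same underlying resource."""
    if "text/html" in accept_header:
        body = "<h1>{title}</h1><p>Format: {format}</p>".format(**ASSET)
        return "text/html", body
    # Default to a machine-readable representation.
    return "application/json", json.dumps(ASSET)


content_type, body = represent("text/html")
print(content_type)   # text/html
content_type, body = represent("application/json")
print(content_type)   # application/json
```

If the resource is data-driven, both branches render from the same data structure, which is why the separation costs little.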

Beside these inspirations it was interesting to see how the idea of Self-Contained Systems is spreading. (Tilkov, the Messiah.)

If you are interested in sharing your thoughts on data with us, please drop me a line.