
Musings about Home JSON-LD


The initial idea of a Home JSON-LD was described in http://blog.joschmidt.net/2016/resources-and-subjects-json-home-and-json-ld in the context of JSON Home (which rather justifies the term “JSON-LD Home”).
The following list of basic requirements represents the kick-off of deeper musings about Home JSON-LD.

  1. It is not really a standard but more a formalised agreement.
  2. It is in the public domain.
  3. It is hybrid.
  4. It requires changes neither in JSON Home nor in JSON-LD.
  5. It does not define any new format, language, syntax, rel(ationship) semantics, etc. Consequently, it does not define a new media type.
  6. It does not violate REST.
  7. It is agnostic to implementations.
  8. It is open to evolution.


Further thoughts on these requirements

  1. "Not really a standard" means that nothing substantial new is created like in a typical RFC but a collection of agreements about specific known practices. "Formalised" means that a specification exists.
  2. "Public domain" is the most radical copyleft; as usual, no warranty on anything is granted.
  3. "Hybrid" simply means interconnecting JSON Home and JSON-LD. However, I think that no explicit but just a conceptual connection is needed.
  4. Enough said.
  5. Enough said. No new code has to be written.
  6. Enough said.
  7. It is all within the scope of REST and therefore within the scope of HTTP. The server-side implementation of the protocol is totally irrelevant.
  8. It is an initial draft, embracing change and evolution.

Resources and subjects, JSON Home and JSON-LD


This blog post traces the evolution of my ideas rather than just presenting the final result. The latest draft of JSON Home, published on 24 November, is not covered. Note: I am still on my way to understanding REST.

As HATEOAS is a constraint in REST and not an option [1], designers and implementors of RESTful APIs should have a look at JSON Home [2]. JSON Home - in its primary design goal - is a resource-centric draft standard.

In a current project we have established a RESTful API for trans-system data replication and, consequently, implementing JSON Home as a lookup mechanism for resources was on our roadmap.

So far, so good. However, Topic Mappers like me, and the Semantic Web people in general, are always asking for the subject(s) indicated / represented / identified by a web resource's representation (see the difference between resource and representation in RFC 7231). I will use "represented" in the following to keep things simple.
A generic definition of subject can be found in the Topic Maps Data Model:

"A subject can be anything whatsoever, regardless of whether it exists or has any other specific characteristics, about which anything whatsoever may be asserted by any means whatsoever. In particular, it is anything about which the creator of a topic map chooses to discourse." [3]

The basic idea I would like to sketch out in the following is how to use JSON Home to enable clients to get an idea about the subjects represented by the resources’ representations in question.

In my opinion, this would add substantial benefit to JSON Home, as "follow your nose" [2] could be improved or even replaced by an "I know why I follow" approach.

JSON-LD is a serialization of RDF. RDF enables authors to identify subjects (using resources and representations as intermediates!) and make statements about them using subject-predicate-object triples. Note that by subject I still refer to the definition given above.
With RDF at hand, a connection to the Semantic Web stack is established.

The simple and obvious question is: how to connect JSON Home and JSON-LD?

The latest draft of JSON Home provides two hints sections to add further information:

  1. Resource Hints
  2. Representation Hints

In order to be able to disclose information about the represented subjects we could add a third section:

  3. Subject Hints


The internet media type identifier tells clients which model is used to disclose subject semantics. Example:
{
   "...": "...",
   "subjects-hints": {
      "application/ld+json": {
          "...": "..."
      }
   }
}

A more sophisticated approach could provide more information about the chosen representation. Example:
{
   "...": "...",
   "subject-hints": {
      "models": [{
          "media-type": "application/ld+json",
          "docs": "http://json-ld.org",
          "body": {
              "...": "..."
          }
      }]
   }
}

The key body holds the actual statements / data.

Both approaches allow the integration of n models.

But wait, let's examine what has happened so far: I have mixed two media types, i.e. application/json-home encloses application/ld+json. The resource's representation in question is in a media type dilemma. Declaring both media types is not an option, as RFC 7231 simply does not allow the assignment of more than one media type in the Content-Type header. Goodbye, RESTfulness.
Last but not least, the bridge property - subject-hints - introduces vocabulary which is not fixed in any global registry and / or in the standard (yet).

As a quick thought, I wondered whether JSON namespaces would help (and stumbled upon https://www.mnot.net/blog/2011/10/12/thinking_about_namespaces_in_json). But this does not solve the media type dilemma, an aspect I miss in the discussion in Mnot's blog post (or perhaps I missed the point).

The new simple and obvious task is to separate JSON Home and JSON-LD and make the JSON-LD accessible via JSON Home.

Basically, what I really want to do is to establish a lookup mechanism for subjects while JSON Home is a lookup mechanism for resources. Consequently, the JSON-LD should reside at the base URI - as a "Home JSON-LD" - side by side with the JSON Home document. I.e. I want the resource identified by the base URI to represent (at least) these two desired states [4].

This is easily done by content negotiation which realizes the separation:

GET / HTTP/1.1
Host: example.org
Accept: application/json-home

GET / HTTP/1.1
Host: example.org
Accept: application/ld+json
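
The two responses would then carry the respective media types. A minimal sketch, with status lines kept and bodies abbreviated:

HTTP/1.1 200 OK
Content-Type: application/json-home

{ "resources": { "...": "..." } }

HTTP/1.1 200 OK
Content-Type: application/ld+json

{ "@context": "...", "...": "..." }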

The second part of the task is to make the JSON-LD accessible via JSON Home as the lookup of resources is the starting point for clients.

The initial approach sketched out above introduces new vocabulary in JSON Home: subject-hints.
The idea was to attach pieces of JSON-LD to the particular resources, which - in turn - would require JSON-LD to specify JSON Pointer [5] as its fragment identifier syntax.
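
For illustration only, such an attachment might have looked roughly like this; note that the subject-hints member and the link relation name are hypothetical and not part of the JSON Home draft:

{
   "resources": {
      "tag:api@example.org,2016:people": {
         "href": "/people/John-Doe",
         "hints": {
            "...": "..."
         },
         "subject-hints": {
            "media-type": "application/ld+json",
            "body": {
               "...": "..."
            }
         }
      }
   }
}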

Extending the standard this way feels somewhat cumbersome. Looking deeper at the conceptual basis, the attempt is to

  •  separate the subjects space from the resources space,
  •  as a meta layer,
  •  and make the subjects space accessible from the resources space.


I.e. the separation is still more conceptual than technical. As the subjects represented in the JSON-LD are about the domain space, the resources published in JSON Home will also occur in the JSON-LD, but with a specific purpose related to the subject in question.
E.g. in Topic Maps there is a precise classification / distinction of purposes:
"subject identifier
locator that refers to a subject indicator

subject indicator
information resource that is referred to from a topic map in an attempt to unambiguously identify the subject represented by a topic to a human being

subject locator
locator that refers to the information resource that is the subject of a topic" [6]

Given the URI http://example.org/people/John-Doe, this is just a resource in a JSON Home document, enriched by technically motivated hints. However, from the subjects' perspective, a certain real Mr. John Doe may be a subject of interest, and http://example.org/people/John-Doe may act as a subject identifier referring to a subject indicator. This is all well defined in Topic Maps, which serves as a reference implementation for subject identification here. The JSON-LD document then contains further statements about the subject Mr. John Doe using RDF triples, which constitute the subjects space. This is semantically motivated information.
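
A minimal JSON-LD sketch of such statements, using schema.org purely for illustration (the chosen properties are assumptions, not prescribed by anything above):

{
   "@context": "http://schema.org",
   "@type": "Person",
   "@id": "http://example.org/people/John-Doe",
   "name": "John Doe",
   "jobTitle": "Professor"
}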

Let’s talk about an example.
A very popular vocabulary, widespread on the web and therefore - for short - hypothetically attributed with a certain quality of common understanding, is schema.org. Please refer to http://schema.org for further information.
We find some inspiring examples in the schemas section, e.g. for the type “Person”:
Jane Doe
Photo of Jane Joe
Professor
20341 Whitworth Institute
405 Whitworth
Seattle WA 98052
(425) 123-4567
jane-doe@illinois.edu
Jane's home page:
janedoe.com
Graduate students:
Alice Jones
Bob Smith [7]

Let’s imagine we want to build a RESTful API to access public data about the staff and students of a university where Prof. Jane Doe is a guest lecturer. The basic design of the API consists of indexes (of the university people, i.e. staff / employees and students) represented as lists of hyperlinks to the people’s homepages, and items which are the people’s homepages.
Since REST is not about nice URIs [8], parsing the URIs and / or the URI patterns in our API's JSON Home for meaningful strings fails; we have chosen arbitrarily generated URIs. E.g. the URI http://example.org/indexes/foobarbaz identifies the index of the employees.

I guess you are getting the idea. We would establish a JSON-LD representation which discloses information about the involved subjects, the Whitworth Institute and its employees:

{
   "@context": "http://schema.org",
   "@type": "CollegeOrUniversity",
   "name": "Whitworth Institute",
   "url": "http://example.org/about",
   "address": {
      "@type": "PostalAddress",
      "addressLocality": "Seattle",
      "addressRegion": "WA",
      "postalCode": "98052",
      "streetAddress": "20341 Whitworth Institute 405 N. Whitworth"
   },
   "employees": {
      "@type": "Person",
      "url": "http://example.org/indexes/foobarbaz"
   }
}

The property url is specified as "URL of the item" [9]. Unfortunately, this is just vague semantics.

There is no guarantee that all resources from JSON Home reappear in the "Home JSON-LD". It is the author's choice to determine the resource coverage as well as the richness and number of statements about the subjects. However, "Home JSON-LD" would require authors to set the url property for each subject, which acts as a wormhole between JSON Home and "Home JSON-LD".

This is the subjects space. Based on shared semantics (in / via the established properties), clients can pick resources of interest from "Home JSON-LD" and retrieve access information from JSON Home afterwards (or vice versa). Little effort is required to map the string property url to the corresponding resource object.
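
To illustrate the mapping, the JSON Home document might contain a resource object like the following (the link relation name is made up); a client simply matches the url string http://example.org/indexes/foobarbaz from the "Home JSON-LD" against this href:

{
   "resources": {
      "tag:api@example.org,2016:employees": {
         "href": "http://example.org/indexes/foobarbaz",
         "hints": {
            "allow": ["GET"],
            "formats": {
               "application/json": {}
            }
         }
      }
   }
}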

I know that this is a naive approach; the connection between "Home JSON-LD" and JSON Home is not a sound one. URIs could change, and authors might forget to update the JSON-LD. Using the link relation types in JSON Home which allow custom relations like “tag:me@example.com,2016:widgets” [2] would be a far more stable mechanism.

Finally, I like the idea of a "Home JSON-LD" which is retrieved via the base URI using content negotiation. Using a special resource in JSON Home requires a corresponding special link relation which discloses the specific "Home JSON-LD" semantics. This relation has to be registered and established (or the other way round) first. However, we should exhaust the full range of existing standards before extending them. This is always a useful design goal.
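
For completeness, the special-resource alternative might look roughly like this in JSON Home; the link relation name and the path are purely hypothetical and would need to be registered and established as described above:

{
   "resources": {
      "tag:api@example.org,2016:home-json-ld": {
         "href": "/subjects",
         "hints": {
            "formats": {
               "application/ld+json": {}
            }
         }
      }
   }
}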

Musing about “Home JSON-LD” is another story.

References

[1] https://www.infoq.com/articles/roy-fielding-on-versioning
[2] https://mnot.github.io/I-D/json-home/
[3] http://www.isotopicmaps.org/sam/sam-model/#d0e746
[4] https://tools.ietf.org/html/rfc7231#section-3
[5] https://tools.ietf.org/html/rfc6901
[6] http://www.isotopicmaps.org/sam/sam-model/#terms-and-definitions
[7] https://schema.org/Person, Example 8
[8] https://speakerdeck.com/stilkov/rest-not-an-intro-1?slide=14
[9] https://schema.org/CollegeOrUniversity

Migration of passwords from Revelation on Linux to KeePassX on OS X


I have managed my passwords on my Ubuntu machine using Revelation, which simply does not run on OS X.

After doing some research about password managers for OS X, I finally decided to give KeePassX a try (sorry, no LastPass) and, consequently, was faced with a data conversion and migration task.

After some trial and error the following steps led to success:

  1. Export your password data from Revelation as (Revelation) XML.
  2. Install KeePass on Ubuntu.
  3. Create a new database in KeePass and import the Revelation XML (you will then see all your data in KeePass).
  4. Save the KeePass database in the default kdbx format.
  5. Transfer the kdbx file to your Mac.
  6. Open the kdbx file in KeePassX using the same master password defined in KeePass.

Done.

Collective product ownership


Often, (locally) new concepts* are born when a more or less complex task has to be accomplished. The same applies here.

The story

Right at the halfway point to an MVP, I am taking a two-month paternity leave. Good news for my little son, bad news for the project? Nobody likes bad news...

The task

Eliminate the single point of failure in product ownership and establish collective product ownership. This concept is inspired by collective (code) ownership in XP [1], which, however, only affects the development team. Collective product ownership, on the other hand, is a holistic approach which, consequently, affects the whole team. Both are democratic, though. The concept looks so promising that I tend to introduce it as a common agile practice rather than use collective product ownership only in exceptional situations.

Definition

Generally speaking, collective product ownership means masterless product ownership. I.e. there is not a single team member (traditionally the product owner) who alone feels strong empathy for the product's success - this empathy is spread across the whole team. Consequently, the product vision as well as the product roadmap are virtually joint artifacts.
Product ownership has different facets, e.g. financial, legal, organisational, empathic / emotional ones. Collective product ownership targets the empathic / emotional level.

Benefits

There are three main benefits:

  1. Be more efficient and effective by using swarm / collective intelligence. The more complex the task, the bigger the benefit of swarm / collective intelligence.
  2. Be more efficient and effective by enforcing universal motivation. The more complex the task, the bigger the benefit of universal motivation.
  3. Be more secure by eliminating a single point of failure in a crucial job (like you do in software architecture and often neglect in organisations).

The role of the product owner

Collective product ownership doesn't exclude the product owner (as a role assigned to a specific person). In my opinion, it is (still) naive to believe that the large majority of medium and large enterprises allows fully self-sufficient product teams. From my experience, enterprises still need accountables (according to RACI [2]) to meet the requirements of their organisational structure. The concept of primus inter pares [3] seems to be a solution, a bridge, to connect collectivity with the requirement of having one accountable. I.e. from the external perspective the product owner is accountable for his product and responsible for the product vision, backlog and roadmap, while internally creating and refining the vision, backlog and roadmap are collective tasks. As the primus inter pares, the product owner has the right to a double vote on these topics, e.g. if discussions are stuck. The product owner puts (creating) the product vision on the agenda, provides drafts of the product roadmap and backlog, and leads the backlog refinements in Scrum. He is the driving force behind the product mission.

How to establish collective product ownership?

The crucial task is that every team member internalizes what the product to be built is about and what the value of the product is (from the customer / user perspective). It's about the big picture, not the details, which are constantly discussed during the journey. The concept of Self-contained Systems (SCS) [4] is beneficial here, as a system in the SCS approach is reduced to only a few, or even a single, domain-specific job(s).

From my experience I consider the following actions effective:

  1. Schedule a minimum of one week before starting the iteration 0 (sprint 0 in Scrum) to draft, discuss, refine, and sharpen the product vision, the initial version of the roadmap as well as the architecture of the product to build. You may find the mind map for digital products inspiring and useful [5].
  2. Formalize and record the results together in iteration 0.
    1. E.g. use Roman Pichler’s “The Product Vision Board” [6] for the product vision.
    2. Create the initial prioritized set of epics and backlog items (stories in Scrum) which are the subject of the subsequent backlog refinements.
  3. Create the initial set of metrics and KPIs together: How do you objectively check whether your product is moving in the right direction with respect to the stakeholders' objectives? You may find my template for application metrics helpful [7] (find a further read in [8]).
  4. Create the DoR and DoD as joint artifacts in iteration 0.
  5. Use the backlog refinements to check periodically how you are proceeding towards the product vision. Does the knowledge generated during implementation require corrections? Always keep the big picture in mind and don't lose yourself in the details.

Remember: It's plausible and perfectly compliant with the idea of collective product ownership that the product owner creates the initial drafts and drives the discussions.

The action part is a field for discussion and learning. The verification of the hypothesis is still running. I’m really curious about the result when I return in the middle of September.

* After doing some research on the web I found an “experience report” [9] from 2006 which is about “Ript: Innovation and Collective Product Ownership” [9]. Some slides are available for free [10].

References, links

[1] http://www.extremeprogramming.org/rules/collective.html
[2] https://en.wikipedia.org/wiki/Responsibility_assignment_matrix
[3] https://en.wikipedia.org/wiki/Primus_inter_pares
[4] http://scs-architecture.org/
[5] http://blog.joschmidt.net/2016/digital-product-owners-beware-of-the-iceberg
[6] http://www.romanpichler.com/tools/vision-board/
[7] http://blog.joschmidt.net/2016/a-template-for-application-metrics
[8] http://blog.joschmidt.net/2016/a-classification-of-application-metrics
[9] https://www.computer.org/csdl/proceedings/agile/2007/2872/00/28720316.pdf
[10] https://www.scrumalliance.org/resource_download/289

A template for application metrics


I love patterns and templates. I love them even more if they are proven in practice. In the context of metrics, my template for application metrics is a really useful little helper. IMO it is not only a product owner's artefact but also a valuable medium when discussing the overall nature of a digital product with the development team. Consequently, it is a plus when a product owner has created a sufficient collection of application metrics, and provides the further information given in the template, before any code is written and while the whole team is shaping the vision and the architecture of their digital product.

An important component of the template is the second part, which covers business value, targets, and the business impact of target violations. In my observation, this aspect is often neglected.

Find the template including an example metric on Google Docs: https://docs.google.com/spreadsheets/d/1hIDWP787DNQNJrVFixUoCrl3I6nJ2ZEKNtWEO5OBraM/edit. Various formats are provided for download. Any feedback is appreciated!

The template is licensed under Creative Commons BY 4.0.