Distributed objects – how to cope with objects scattered across multiple Fedoras

Posted on April 18, 2008


I was asked this question by Chris Wilper a couple of weeks ago:

From http://blogs.sun.com/georg/entry/open_respository_2008_day_3
“Scalability : objects can be placed in any object store on the
network, and located via their object meta data. This means that
scaling over multiple Fedora instances is a no brainer.”

Sounds cool. How’d you do resolution? (e.g. a request comes in,
which repository is it in?)

At the time, I had tried a good number of solutions, with one waiting on something to become more stable before it would work. They all had their plus sides and their minuses, and they generally followed the tried and tested pattern of either a centralised source or a DNS-style system.

What I didn’t write to him about was an idea that has been fermenting in my mind for a little while now, but which wasn’t thought through enough to explain at the time.

De-centralised database for Fedora object URIs

The base premise is to use something called a distributed hash table, or DHT, to hold the link between a URI and a base Fedora URL. Now, as DHTs are kinda new and tend to be found in 2 main fields – research projects and trackerless BitTorrent – I’ll just write a brief summary of what they are.

Distributed Hash Tables (DHT)

From Wikipedia:

Distributed hash tables (DHTs) are a class of decentralized distributed systems that provide a lookup service similar to a hash table: (name, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given name. Responsibility for maintaining the mapping from names to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows DHTs to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.

DHTs form an infrastructure that can be used to build more complex services, such as distributed file systems, peer-to-peer file sharing and content distribution systems, cooperative web caching, multicast, anycast, domain name services, and instant messaging. Notable distributed networks that use DHTs include BitTorrent (with extensions), eDonkey network, YaCy, and the Coral Content Distribution Network.

(Emphasis my own)

As other, much more qualified people have written about the underlying algorithms that power DHTs, I will simply link to the articles that I have found illuminating. (DHTs can also be treated as a black-box technology if you wish to.)

Let’s illustrate how having a DHT of URI-to-Fedora-server pairs will help us by skipping ahead and imagining that we already have a situation in which each Fedora server has a DHT node service of its own – just like a peer-to-peer filesharing program, it maintains a list of key-value pairs for the items it holds.

Let’s now imagine 3 commands that a Fedora instance can use to work with its own DHT node – get, put and remove:

  • get key – gets the values associated with a given key
  • put key value – puts the given key-value pair into the DHT
  • remove key value – removes the given key-value pair from the DHT
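As a toy illustration of those three operations, here is a minimal in-process sketch of the semantics only – a plain multi-valued table, not a real networked node. The `LocalDHTNode` name and its shape are mine, not part of any DHT library:

```python
from collections import defaultdict


class LocalDHTNode:
    """Toy stand-in for a DHT: a multi-valued hash table held in one
    process. A real DHT spreads this table across many nodes by key."""

    def __init__(self):
        self._table = defaultdict(set)

    def put(self, key, value):
        # A key may hold several values, e.g. an object replicated
        # on more than one Fedora server.
        self._table[key].add(value)

    def get(self, key):
        # Returns all values stored under the key (empty list if none).
        return sorted(self._table[key])

    def remove(self, key, value):
        self._table[key].discard(value)


node = LocalDHTNode()
node.put("info:fedora/ns:1", "http://host:8080/fedora")
print(node.get("info:fedora/ns:1"))  # ['http://host:8080/fedora']
```

The multi-value behaviour matters: it is what lets the same URI point at more than one Fedora instance later on.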

So, as a new item is allocated to a given Fedora, it can add a pair to the DHT, the key being the item’s URI <info:fedora/ns:id>, and the value being the base URL for that Fedora, e.g. <http://host:8080/fedora>

$ ./put.py info:fedora/uuid:00e41229-1c9f-4c1d-ac3a-b51d34bbbe8f http://archive.sers.ox.ac.uk:8080/fedora

$ ./get.py info:fedora/uuid:00e41229-1c9f-4c1d-ac3a-b51d34bbbe8f

Specific implementation notes

The Bamboo DHT is a very useful implementation of a DHT system, and has a good amount of helpful documentation. It is also the basis for a good public test-bed, the OpenDHT service.

In fact, the three commands illustrated above have real, live Python implementations presented on the OpenDHT site – get.py, put.py, and rm.py. The underlying protocol is simple XML-RPC, so most languages have solid libraries for interfacing with the API.
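For a flavour of what such a client looks like, here is a hedged sketch of a put call over XML-RPC. The gateway URL, the 20-byte SHA-1 keys, and the `put(key, value, ttl, application)` argument order are my reading of the OpenDHT gateway documentation of the time – treat all of them as assumptions, not gospel:

```python
import hashlib
import xmlrpc.client  # the 2008-era scripts used the older xmlrpclib module

# Assumed public OpenDHT gateway address (from the OpenDHT docs of the time).
GATEWAY = "http://opendht.nyuld.net:5851/"


def dht_key(uri):
    # OpenDHT keys are 20-byte values; the sample scripts derive them
    # by SHA-1 hashing the application-level key string.
    return hashlib.sha1(uri.encode("utf-8")).digest()


def put(uri, base_url, ttl=3600):
    # Method name and argument order are an assumption from the gateway docs.
    proxy = xmlrpc.client.ServerProxy(GATEWAY)
    return proxy.put(xmlrpc.client.Binary(dht_key(uri)),
                     xmlrpc.client.Binary(base_url.encode("utf-8")),
                     ttl,
                     "fedora-resolver")


print(dht_key("info:fedora/uuid:00e41229-1c9f-4c1d-ac3a-b51d34bbbe8f").hex())
```

Note the TTL: OpenDHT entries expire, so a live service would need to re-put its pairs periodically.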

I see this service hooking into Fedora via a listener on the ActiveMQ message queue – put’ing the URI/URL pair when items are added, and rm’ing it when they are purged. The Fedora and the Bamboo service should be booted and shut down together, adding or removing whole sets of pairs to or from the DHT, so that the DHT accurately reflects the accessibility of each Fedora item.
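Sketched in Python, such a listener boils down to a dispatcher like the one below. The event names (`ingest`, `purgeObject`) and the idea that a pid can be pulled straight out of the message are assumptions about what Fedora’s ActiveMQ notifications carry – the real messages would need parsing first – and `DictDHT` is just a stand-in for the real DHT client:

```python
# This server's base URL - the value we store against every URI we hold.
BASE_URL = "http://archive.sers.ox.ac.uk:8080/fedora"


class DictDHT:
    """Stand-in for the real DHT client, so the dispatcher can be
    shown end to end without a network."""

    def __init__(self):
        self.table = {}

    def put(self, key, value):
        self.table.setdefault(key, set()).add(value)

    def remove(self, key, value):
        self.table.get(key, set()).discard(value)


def on_message(method, pid, dht):
    """method: the API call that fired the notification (assumed names);
    pid: the object id, e.g. 'uuid:00e4...'."""
    uri = "info:fedora/" + pid
    if method == "ingest":
        dht.put(uri, BASE_URL)
    elif method == "purgeObject":
        dht.remove(uri, BASE_URL)


dht = DictDHT()
on_message("ingest", "uuid:1234", dht)
print(dht.table)  # {'info:fedora/uuid:1234': {'http://archive.sers.ox.ac.uk:8080/fedora'}}
```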

Now that we have a DHT, why not…

Another beneficial use of the hash table may be to encode extra information, such as the template URL for an HTML splash page (should one exist) – perhaps in the template style used by OpenSearch 1.1 – e.g. “http://archive.sers.ox.ac.uk:5000/resolve/{uri}”. (Note: I have added this resolving service to handle both info:fedora/ns:id and info:fedora/ns:id/dsid style URIs – the first leads to the splash page, and the second redirects to the download for the given datastream.)
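A resolver along those lines could branch on the URI shape roughly like this. The `/get/pid/dsid` datastream URL is my recollection of Fedora’s dissemination interface, and the substitution just fills the OpenSearch-style `{uri}` placeholder – treat both as assumptions:

```python
# Splash-page template of the kind that might be stored in the DHT.
TEMPLATE = "http://archive.sers.ox.ac.uk:5000/resolve/{uri}"


def resolve(uri, base_url):
    """Map an info:fedora URI to the URL the resolver should serve or
    redirect to: splash page for a bare id, download for id/dsid."""
    prefix = "info:fedora/"
    assert uri.startswith(prefix)
    rest = uri[len(prefix):]
    if "/" in rest:
        # Datastream form: redirect to the (assumed) dissemination URL.
        pid, dsid = rest.split("/", 1)
        return "%s/get/%s/%s" % (base_url, pid, dsid)
    # Bare id: hand back the splash-page URL from the stored template.
    return TEMPLATE.replace("{uri}", uri)


print(resolve("info:fedora/ns:id/DS1", "http://host:8080/fedora"))
# -> http://host:8080/fedora/get/ns:id/DS1
```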

Important final note

It is very important in distributed environments that you have unique identifiers (which may or may not involve the numerical UUIDs I promote the use of) – the important part is that each Fedora identifier is unique across the whole set of Fedora instances. As the only issue with joining these URI lookup tables across institutional boundaries is political, it would be a good thing to adopt a consistent and bullet-proof mechanism for ensuring that your id system is not going to collide with someone else’s.
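For completeness, minting such an identifier is a one-liner in most languages; a sketch using Python’s standard uuid module (random, version-4 UUIDs need no central registry, which is exactly the point):

```python
import uuid


def new_fedora_uri():
    # A version-4 (random) UUID makes cross-institution collisions
    # vanishingly unlikely without any coordination.
    return "info:fedora/uuid:%s" % uuid.uuid4()


print(new_fedora_uri())
```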

Posted in: Uncategorized