Project Clearwater over RINA

Last weekend, I attended the TADHack Global hackathon in London.  I’d attended a few of the previous hackathons, in particular hacking on Matrix a couple of times, but this time it was the “RINA Rumble” challenge that most appealed.

RINA is the Recursive InterNetwork Architecture, a modern replacement for both TCP and IP.  It’s still fairly early days, and I expect it will be at least several years before it’s deployed at all widely.  However, it’s interesting to understand where technology might be going, and this was a great opportunity to play with it.

So my team-mate Yin Yee and I set out to get Project Clearwater (the open-source, cloud-native IMS core that I work on for Metaswitch) running over RINA.  Specifically, we set out to get the internal HTTP communication running over RINA – Clearwater has many interfaces (as you can see from the architecture diagram below), but the HTTP ones were the easiest because they only involved changing Clearwater code; if, for example, we’d picked SIP, we’d also have had to get a SIP phone running over RINA.

Clearwater Architecture

RINA is a standard, and there are multiple implementations – on the recommendation of the Arcfire team, we chose the rlite implementation.  While I understand there’s ongoing research into integrating with the standard “BSD Sockets” networking API, rlite currently requires you to integrate with a “librina-api” library to open RINA connections.  Once RINA connections are open, though, they appear (as is standard in POSIX) as just another file descriptor.
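
As a rough sketch of what that looks like (assuming rlite’s rina-api header and the rina_flow_alloc() call it provides – the exact signatures may vary between rlite versions, and the DIF and application names below are made up):

    #include <rina/api.h>   /* rlite's librina-api header */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Allocate a flow to a remote application.  The DIF and
         * application names here are purely illustrative. */
        int fd = rina_flow_alloc("n.DIF",        /* DIF to allocate the flow in */
                                 "client.app",   /* local application name */
                                 "server.app",   /* remote application name */
                                 NULL,           /* default flow spec (QoS) */
                                 0);             /* blocking allocation */
        if (fd < 0) {
            perror("rina_flow_alloc");
            return 1;
        }

        /* From here on it behaves like any other file descriptor:
         * plain POSIX read()/write() (and poll()/epoll()) just work. */
        const char msg[] = "hello over RINA\n";
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }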

Clearwater’s HTTP communications use the standard libcurl and libevhtp libraries.  Fortunately, both have some support for plugging new transports in – libcurl allows you to define a callback function for connecting to the peer and returning a file descriptor, and libevhtp is built on the libevent framework, which is agnostic to the type of file descriptors it’s operating on.  So, I took on the libevhtp changes, while Yin Yee took on the libcurl changes.  I also took on getting virtual machines turned up to run the software on (deployed in Amazon AWS EC2) and she took on setting up the RINA configuration on them.
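
For the libcurl side, the hook points are the socket-creation and socket-option callbacks.  A minimal sketch of the idea (not the actual Clearwater change; the RINA names are invented, and rina_flow_alloc() is as in rlite’s rina-api):

    #include <curl/curl.h>
    #include <rina/api.h>

    /* Hand libcurl an already-connected RINA flow instead of letting it
     * open and connect a TCP socket.  Names are illustrative. */
    static curl_socket_t open_rina_socket(void *clientp, curlsocktype purpose,
                                          struct curl_sockaddr *address)
    {
        int fd = rina_flow_alloc("n.DIF", "sprout.rina", "homestead.rina",
                                 NULL, 0);
        return (fd < 0) ? CURL_SOCKET_BAD : fd;
    }

    /* Tell libcurl not to call connect() on the descriptor we gave it. */
    static int already_connected(void *clientp, curl_socket_t fd,
                                 curlsocktype purpose)
    {
        return CURL_SOCKOPT_ALREADY_CONNECTED;
    }

    void use_rina_transport(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_rina_socket);
        curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, already_connected);
    }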

RINA expects to replace both TCP and IP, running directly over Ethernet.  Since EC2 does not allow direct Ethernet connectivity between instances, we had to work around this.  Fortunately, RINA can be tunnelled over UDP, so we tried to set that up.  We had a bit of a problem with this – in particular, it wasn’t clear which branch of the rlite code we should be running, or which DNS records we should be setting up (rlite complained about missing DNS records, but that turned out to be a red herring).  Marco from Arcfire helped us get going, and late on Saturday we had the UDP tunnel up and running.

Unfortunately, when we came in on Sunday morning (with our code all written and ready to test), we found that the tunnel was no longer operational and despite all our (and Marco’s) attempts, we couldn’t get it back.  We successfully got the Homestead process (running on the Dime node shown in the diagram above) listening for RINA connections, and the Sprout node trying to connect over RINA, but without the tunnel it wasn’t possible to progress any further.

While Yin Yee was working with Marco on this, I realized that porting each library in turn to use RINA was going to be quite slow and laborious, and started prototyping a new approach: writing an “interposer” that intercepts BSD Sockets API calls from existing (unmodified) programs and translates them into RINA API calls.  As shown in the diagram below, an unmodified process (on the left) uses the standard BSD Sockets/POSIX APIs to talk to the interposer (rather than talking directly to libc), and the interposer translates these calls into TCP/IP-related calls to libc or RINA-related calls to librina-api according to its configuration.

Proposed interposer architecture
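
A heavily simplified sketch of the idea (nothing like the full interposer, which has to cover much more of the sockets API; the RINA_DIF/RINA_REMOTE_APPL environment variables and the build/usage lines are just illustrative): the library is LD_PRELOADed into the unmodified process and overrides connect(), either passing the call through to the real libc or replacing the socket with a RINA flow.

    /* Build e.g.: gcc -shared -fPIC -o rina-interposer.so interposer.c -ldl -lrina-api
     * Use   e.g.: LD_PRELOAD=./rina-interposer.so RINA_REMOTE_APPL=server.app <unmodified program> */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <rina/api.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    typedef int (*connect_fn)(int, const struct sockaddr *, socklen_t);

    int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen)
    {
        static connect_fn real_connect;
        if (!real_connect) {
            /* Look up the next (real) connect() in the library search order. */
            real_connect = (connect_fn)dlsym(RTLD_NEXT, "connect");
        }

        const char *remote = getenv("RINA_REMOTE_APPL");    /* illustrative */
        if (!remote) {
            /* Not configured for RINA: behave exactly like libc. */
            return real_connect(sockfd, addr, addrlen);
        }

        /* Configured for RINA: allocate a flow instead of connecting TCP. */
        int flow = rina_flow_alloc(getenv("RINA_DIF"), "client.app",
                                   remote, NULL, 0);
        if (flow < 0) {
            return -1;
        }

        /* Point the caller's descriptor at the RINA flow, so the unmodified
         * application keeps reading and writing the fd it already holds. */
        dup2(flow, sockfd);
        close(flow);
        return 0;
    }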

This appeared to work, but actual communication again failed due to the lack of a tunnel.

We presented, but with no demo.  🙁

Fortunately, the Arcfire team recognized our efforts anyway and we won the “RINA Rumble” challenge!

This weekend (with some advice from the author of rlite), I’ve set up the UDP tunnel again and successfully registered and made a call using Clearwater with its internal HTTP communication over RINA – network traffic captured from the instances shows the HTTP running over RINA over the UDP tunnel, rather than over TCP.

Successful HTTP over RINA

I also got the interposer working – allowing an unmodified netcat client to talk to an unmodified netcat server over RINA.

If you’re interested in reproducing any of these, the Clearwater code is in the rina branches of my sprout, homestead and ralf repositories on GitHub, and the interposer is in my rina-interposer repository.  For Clearwater, after setting up a deployment as normal and establishing RINA connectivity between the instances, you need to set the new “rina_local_appl” and “rina_remote_appl” configuration options for Sprout, and the “homestead_rina_dif_name” and “homestead_rina_appl_name” configuration options for Homestead, to make these processes use RINA.
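
For illustration (the application and DIF names are made up, and the exact config file depends on how you’ve deployed Clearwater – e.g. /etc/clearwater/shared_config), the extra options look something like:

    # On the Sprout node
    rina_local_appl=sprout.rina
    rina_remote_appl=homestead.rina

    # On the Dime node, for the Homestead process
    homestead_rina_dif_name=n.DIF
    homestead_rina_appl_name=homestead.rina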

Thanks to Alan for organizing TADHack, and to Miguel and Marco from Arcfire for their support!

(From Alan – well done Matt, this is a great example of the power of TADHack, with your and Yin’s expertise helping to bring the whole of the industry forward.)

3 thoughts on “Project Clearwater over RINA”

  1. Hi Matt,
    Thanks a lot for your work and for posting your experience!
    I’ll definitely give your RINA interposer a try.
    The reason we have not implemented a socket family for RINA (which would give complete integration with the sockets API) is that we didn’t want to be stuck with the “limitations” of the socket API.  For example, with “connect()” there is no way to specify QoS.  Also, with “bind()” you can’t register multiple names to the same listening file descriptor.

    Regarding the UDP tunnel, that was unfortunate, especially because of the misleading error message.
    If not due to misconfiguration, the UDP tunnel may also have gone down because of the keepalive mechanism.
    Once an IPC Process decides the neighbor is down (e.g. not responding for more than 10 seconds), it closes the connection (and does not try to reconnect autonomously).  But in that case you would have seen in the daemon logs that “a neighbor has been pruned”.
    In any case, your experience definitely helped us to improve the software, so thanks for that!
