The idea for RTCEmergency started the Friday before TADHack. We are a UK-based RTC application development company and have been working with WebRTC, which was the core of our hack, for two and a half years now. For TADHack, we were looking for an interesting application which demonstrated the value of combining the strengths of the traditional telephone infrastructure with imaginative features implemented using over-the-top service enablers like WebRTC.
We are based at Bletchley Park, just south of Milton Keynes, and the idea took shape as we walked to the pub for a “technical brainstorming” session one Friday lunchtime. As we walked down the road, somewhere between the Fire Station and Police Station, the inspiration for RTCEmergency was born. The logic went something like this:
In all of our pockets, we have a device which knows where it is, can capture front and back video streams of its immediate surroundings, and send these to any other device that has an IP connection. If, however, we suffer some kind of emergency, we can only get help by using it to place a limited-bandwidth audio call to an operator. In the heat of a possibly noisy, confusing situation, we will have to verbally describe where we are and what we can see. The operator will take these details and pass them on to the crew of a response vehicle, who will then set off for where we said we were, to deal with what we said the problem was. On the way, for example in a medical emergency, the paramedic will simply be a passenger and won't be able to do anything constructive to help anyone until they arrive (assuming our verbal description of where we were was sufficient to find us).
RTCEmergency is a proof of concept which solves these problems by allowing a normal call to the emergency services operator to be “upgraded” by adding over the top video and the ability to transmit GPS location data to the operator. Further, it allows a responder en-route to the emergency to be part of the conversation and interact with those on scene so that triage and first aid advice can commence from the moment that a responder is tasked.
How it works
- The caller initiates a normal Emergency Services call which flows across the PSTN; we terminate this call via an ipcortex WebRTC audio gateway onto the RTCEmergency web console application.
- The operator speaks to the caller over the PSTN to determine the nature of their emergency in the normal way. At this point WebRTC is just playing a bit-part as a way of delivering this conversation in the operator web app.
- During the course of the conversation, the operator decides that location and video are needed, clicks the button to send an SMS, and talks the caller through clicking on the link contained in it.
- A caller-specific mobile web page, launched from the SMS message, sends location information to the operator via the RTCEmergency server.
- The caller web page also connects the smartphone camera to the operator over a peerConnection, sending live video. Audio continues via GSM for resilience.
- The operator tasks a responder via a tablet-optimised, responder-specific RTCEmergency web page, which transmits a location map to the responder.
- Caller video is relayed via the operator to the responder, while the three-way call (caller, operator and responder) continues through the normal GSM/PSTN emergency network.
We used a few different technologies to hang this together in such a short period of time:
- A WebRTC enabled development copy of the IPCortex.PBX API (accessed from the operator web client to present the PSTN audio call and provide CLI information)
- Node.js – used to build the core of the application and serve pages for the three views (operator, caller, responder)
- express.js with serve-static and body-parser – rapid development framework
- socket.io – real time communication between server and client web pages (e.g. location info and WebRTC signalling O/A)
- simple-nexmo – used to send the SMS messages to the caller
The source code for the hack is on GitHub.
Taking part in TADHack
I flew out to Madrid and participated onsite. Due to other commitments, Matt, Steve and Jamie stayed behind and developed most of the hack remotely. This turned out to help us quite a bit, as the WiFi at both my hotel and TADHack itself wasn't that great, probably due to the large number of developers onsite. Moving packages and code around was fairly painful, and hosting RTCEmergency on a webserver back at our own site, close to three quarters of the team, ended up being the best way to develop it all on a tight timescale.
That did mean that the other three folks missed out on the speaker sessions and unique atmosphere at TADHack, so I had quite a lot to share with them when I got back!
Well, as I pointed out at TADHack, RTCEmergency isn't, for us, a production-quality bid to change the way that the world does emergency services calls. It is, however, a good illustration of a class of solutions where new capabilities can be delivered quickly by applying new technology APIs to old communications problems. That said, we have been invited to participate in various emergency services initiatives in the UK, so maybe there will be further life in the idea.
On a more solid commercial note, all of the technology used for the hack was developed, and is being used, to deliver tools which give similar new capabilities in everyday communication systems. In particular, our next-generation web-based desktop phone replacement, Open Communications Manager, will shortly be shipping with a WebRTC telephony gateway, video, desktop sharing and file transfer, using the same underlying infrastructure components that were used in RTCEmergency!