CoffeeStrap: a great purpose for a great technology

Jan 29, 2015

The following post has been written by the team of CoffeeStrap. We would like to thank them for their participation and willingness to collaborate.

CoffeeStrap is an educational technology startup that lets people learn simply by meeting new people. How can you do this? Well, CoffeeStrap is The First Platform for Adaptive Conversation Enhancement, which means that people get matched up with other people in order to increase the quality of their conversations. Not on the basis of their faces, bodies, or professions, but according to the features of the conversation itself. You have fun meeting interesting people who are compatible with you, you get to talk about meaningful things, and you learn stuff. You can join engaging peer-to-peer text, voice, and video social interactions, and you learn collaterally.
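To make the matching idea concrete, here is a toy sketch (not CoffeeStrap's actual, proprietary algorithm; all names and features here are hypothetical): represent each user's conversational behaviour as a feature vector, score pairs by cosine similarity, and pick the most compatible candidate.

```java
// Toy illustration of conversation-based matching: score two users by the
// cosine similarity of hypothetical conversational feature vectors (e.g.
// average message length, question rate, vocabulary richness).
// This is an illustrative sketch, not CoffeeStrap's real algorithm.
public class CompatibilitySketch {

    // Cosine similarity between two equally sized feature vectors.
    public static double compatibility(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Index of the most compatible candidate for a given user.
    public static int bestMatch(double[] user, double[][] candidates) {
        int best = 0;
        for (int i = 1; i < candidates.length; i++) {
            if (compatibility(user, candidates[i]) > compatibility(user, candidates[best])) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] alice = {0.9, 0.4, 0.7};   // her conversational features
        double[][] candidates = {
            {0.1, 0.9, 0.2},                // very different style
            {0.8, 0.5, 0.6},                // similar style
        };
        System.out.println("best match index: " + bestMatch(alice, candidates));
    }
}
```

In a real system the feature vectors would of course be extracted automatically from the conversation streams rather than hand-written.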

Well, 2014 turned out to be the most awesome adventure for CoffeeStrap. We got together as a smokin' hot team, we found a nest of great people to work with at our Rockstart Amsterdam headquarters, and we have just been selected as one of the 20 most disruptive European super-early-stage mobile companies (out of more than 846 applications!) by the European Union and the incredible guys at ISDI, TU Aps, and Seaya Ventures. Steve, Nacho, Simona, Rodrigo, Henrik and many others are now also on our side, with invaluable advice and strategic help. You can find the press release here.

Right now CoffeeStrap is focusing on language learning. We decided to start from the language domain because it is relatively easy for us to provide a compatible and interesting partner while you are learning, practicing, or improving a new language. This lets us test our basic assumption: that people learn a new language simply by improving the quality of their conversations with native-speaking partners. And this does seem to be the case, since the community of CoffeeStrappers is growing steadily: more than 4,000 people are already learning languages on the CoffeeStrap Beta, and more than 200 CoffeeStrappers improve their languages every day simply by meeting up with compatible conversational partners. This is important proof that people can learn by being intelligently partnered up for smart conversations with relevant, compatible matches.

Language is also a good entry point because language proficiency is more straightforward for an intelligent machine to index and track than other sorts of expertise in domains such as coding, science, or entrepreneurship. We plan to extend CoffeeStrap to multiple knowledge domains in the future, but focusing on the language vertical for now allows us to improve and tune our algorithms. As our analyses become more and more targeted, we can improve CoffeeStrap's accuracy in matching you up with a conversational partner who is compatible in terms of personality, interests, and proficiency level.

We are therefore devoting our efforts to building the technology needed to automate the analysis of conversational sessions. This task is especially complicated because we need the realtime analysis of the conversational streams to be completely automated: this is the only way to effectively keep the information from your conversations safe and anonymous. It means that you text, talk, or video-chat with a conversational partner, and CoffeeStrap's secure architecture extracts information on the fly from your conversations in a secure and automated fashion.

This unique experimental infrastructure, which we are developing in collaboration with FIWARE Kurento, then uses the information from your conversations to keep identifying new, intriguing conversational partners for you. In turn, you are in charge of your learning progress, and you always keep your information under your control. This means not only that we have no physical access to your conversations, but also that you can edit or delete your profile and the related information at any time.

As data scientists, we praise the importance of complete personal control over private information, and we understand the importance of privacy. We also believe these are the only correct premises for building The First Platform for Adaptive Conversation Enhancement. Time will tell if we were right. :-)


How can I contribute to CoffeeStrap?

CoffeeStrap is based at Herengracht 182, 1016 BR, Amsterdam, The Netherlands. We would love to hear from you, really. If you feel like grabbing a coffee, or simply reaching out with suggestions or feedback, this is us. We can also grab a beer. We consume a lot of it.

If you live far away and would like to contribute to CoffeeStrap, the best thing you can do is join CoffeeStrap here and start learning new languages for free today by meeting super interesting people. You can also share CoffeeStrap on Twitter or Facebook.


Happy Conversations.

The CoffeeStrap Crew

Kurento received 2014 WebRTC Conference & Expo V Demo Award

Dec 19, 2014

The following post is an adaptation of a press release delivered by Kurento. We would like to thank them for their collaboration and to congratulate them on receiving these awards.


FIWARE received the "Best of Show" award at WebRTC Conference & Expo Paris 2014, the most important conference in Europe for WebRTC, a new wave of real-time multimedia technologies that brings video-conferences and video-communications directly to the browser in a standard way and without requiring any kind of installation. FIWARE won the award with an application combining Kurento (the Stream-Oriented Generic Enabler implementation) and Orion (the Context Broker) to demonstrate the viability of using advanced computer vision technologies and augmented reality to improve the security of cities and the safety of citizens. This award reinforces FIWARE's approach: combining latest-generation multimedia technologies with the Internet of Things, thus providing innovative ways to exploit machine-to-machine and person-to-machine video communications for Smart Cities.

In November 2014, Kurento announced that it had received the “Best WOW Factor” and “Audience Choice” awards from TMC, Systemwide Media and PKE Consulting at the 2014 WebRTC Conference & Expo V. Kurento is an open source software project providing a WebRTC media server and a set of client APIs that simplify the development of advanced video applications for web and smartphone platforms. Kurento Media Server features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows. This makes it possible for developers to create applications that go beyond the plain call model.

“These awards, obtained at the heart of Silicon Valley, at the most important WebRTC conference in the world and competing against the most relevant companies in the WebRTC arena, are a very significant milestone for our project and reinforce our vision and our energy for continuing to contribute to creating an ecosystem of really open technologies around WebRTC”, said Dr. Lopez, the project coordinator. These awards are very important for the project, which is still maturing and needs to gain the critical mass required to become a successful open source community.

“On behalf of WebRTC Conference & Expo, I am very pleased to recognize the innovation demonstrated by Kurento at the 2014 Demo Awards,” said Carl Ford, Event Co-Producer, WebRTC Conference & Expo V. “Companies like Kurento are the driving force behind the growth of WebRTC. Kurento truly deserves this award and I look forward to more innovative solutions from them in the future.” “It is my pleasure to grant a 2014 WebRTC Conference & Expo Demo Award to Kurento for their innovative media server solution,” said Event Producer and TMC President Dave Rodriguez. “It is our pleasure to be able to honor Kurento for their inspiring work.”

Dr. Luis Lopez explains the project in these terms: “If I see an X-ray or if I listen to a heart beating through a video-conference, the information I obtain is quite different from that perceived by an expert cardiologist. However, the current state of the art makes it possible to use computer vision, speech analysis and augmented reality to capture the expert’s knowledge and depict the appropriate diagnosis information to me in a comprehensible way on my video-conference. In this way, my knowledge and my senses could be extended by the communication system in ways I cannot even imagine today. Kurento provides the appropriate framework for integrating this type of capability with WebRTC communication systems in a seamless and direct way.” Kurento can also be used in segments beyond person-to-person communications, making possible new business models for advertising, smart cities and other person-to-machine and machine-to-machine scenarios. Dr. Lopez presents it this way: “Our demos have shown that, by using Kurento, it is possible to improve the security of cities by integrating computer vision algorithms with street IP cameras, which can be visualized by the appropriate personnel using WebRTC when an interesting event occurs. We have also shown that it is possible to create new advertising models based on embedding users, through their WebRTC-enabled browsers, into virtual environments where they can interact with celebrities or other virtual objects in impressive ways.”

Kurento: the Stream-oriented Generic Enabler

Jul 4, 2014

Kurento, FI-WARE's stream-oriented Generic Enabler, was chosen last month as one of the most innovative WebRTC technologies in the world! Want to know more about Kurento? Read our guest post by Luis López Fernández, Kurento's Coordinator:

Humans don’t like bits. The friendlier an information representation format is for computers, the harder it is for humans to manipulate. If you try to read information represented in formats such as XML or JSON, you will feel your neurons crunching for a while. Our brains have been shaped by evolution to process audiovisual information efficiently, not to understand and manage complex data formats.

Perhaps for this reason, multimedia services and technologies are pervasive on today’s Internet. People prefer watching a video to reading an exhaustive document describing a concept or a situation. The FI-WARE platform could not ignore this, and for this reason a specific Stream-oriented Generic Enabler has been created for dealing with multimedia information: Kurento.

Kurento is the Esperanto word for the English term “stream”. Hence, Kurento’s name is a declaration of intent about its objectives, which can be summarized in two words: universality and openness. Universality means that Kurento is a “Swiss Army knife” for multimedia, exposing pluggable capabilities that can be used independently of the application or user scenario. Using Kurento you can create person-to-person services (e.g. video conferencing), person-to-machine services (e.g. video recording, video on demand), and machine-to-machine services (e.g. computerized video-surveillance, video-sensors).

Kurento provides support for most standard multimedia protocols and codecs, including RTP, SRTP, RTSP, HTTP, H.264, VP8, AMR and many others. In addition, Kurento is compatible with the latest web technologies, including WebRTC and the HTML5 <video> tag.

Openness, on the other hand, means that Kurento has been designed as open source software based on open standards. All pieces of the Kurento architecture have been released under the LGPL v2.1 license and made available in a public repository where anyone can freely access the code and the knowledge. This open vision is reinforced through an open source software community which, in coordination with other FI-WARE instruments, supports, enriches and promotes the technology and vision of the project.

If you have ever developed a multimedia-capable application, you might have noticed that most frameworks offer limited capabilities such as media transcoding, recording or routing. Kurento's flexible programming model makes it possible to go beyond these, introducing features such as augmented reality, computer vision, blending and mixing. These kinds of capabilities can provide differentiation and added value to applications in many verticals, including e-Health, e-Learning, security, entertainment, games, advertising or CRMs, just to cite a few.

For example, we have used Kurento for blurring faces on videoconferences where participants want to hold anonymous video interviews with doctors or other medical professionals. We have also used it for replacing backgrounds or adding costumes on a videoconference so that participants feel “inside” a virtual world in an advertisement. We can also detect and track users’ movements in front of their webcams to create interactive, Kinect-like WebRTC games, or use it for reporting incidents (e.g. specific faces, violence, crowds) from security video streams.

Developing applications with Kurento is quite simple: you can launch your own Kurento instance at the FI-LAB through recipes or through any of our pre-built images. After that, you just need to develop your application using a quite simple API (Application Programming Interface) that we call the Media API. The Media API is based on an abstraction called Media Elements. Each Media Element holds a specific media capability, whose details are fully hidden from application developers. Media Elements are not restricted to a specific kind of processing or to a given fixed format or media codec. There are Media Elements capable of, for example, recording streams, mixing streams, applying computer vision to streams, augmenting or blending streams, etc. Of course, developers can also create and plug in their very own Media Elements. From the application developer's perspective, Media Elements are like Lego pieces: one just needs to take the elements needed for an application and connect them following the desired topology. In Kurento jargon, a graph of connected Media Elements is called a Media Pipeline.
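The Lego analogy can be sketched in plain Java. This is a hypothetical toy model, not the real Media API: each "element" transforms a frame and forwards it to whatever it is connected to, and a chain of connected elements forms a pipeline. The element names mirror real Kurento concepts, but the classes and methods below are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Toy model of the Media Element / Media Pipeline idea: each element
// transforms a frame (here, just a String) and forwards it to the elements
// it is connected to. Hypothetical sketch; the real Media API hides these
// details from developers.
public class PipelineSketch {

    static class MediaElement {
        final String name;
        final UnaryOperator<String> transform;       // what this element does to a frame
        final List<MediaElement> sinks = new ArrayList<>();

        MediaElement(String name, UnaryOperator<String> transform) {
            this.name = name;
            this.transform = transform;
        }

        // Connect this element's output to another element, Lego-style.
        MediaElement connect(MediaElement next) { sinks.add(next); return next; }

        // Push a frame through this element and onwards through the graph.
        String process(String frame) {
            String out = transform.apply(frame);
            for (MediaElement sink : sinks) out = sink.process(out);
            return out;
        }
    }

    // Build a small pipeline and push one frame through it.
    public static String run(String frame) {
        MediaElement source = new MediaElement("WebRtcEndpoint", f -> f);
        MediaElement filter = new MediaElement("FaceOverlayFilter", f -> f + "+hat");
        MediaElement recorder = new MediaElement("RecorderEndpoint", f -> "[rec]" + f);
        source.connect(filter).connect(recorder);    // the media pipeline topology
        return source.process(frame);
    }

    public static void main(String[] args) {
        System.out.println(run("frame0"));   // prints [rec]frame0+hat
    }
}
```

The point of the analogy is that changing the application means changing the topology (which elements are connected to which), not reimplementing the media processing itself.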

Media Pipeline

Figure 1. Architecture of the WebRTC loopback example. The media pipeline is composed of a single media element (WebRtcEndpoint), which receives media and sends it back to the client.

To get familiar with the Media API, let's create an example. One of the simplest multimedia applications we can imagine is a WebRTC loopback (i.e. an application where a browser sends a WebRTC stream to Kurento and the server sends it back to the client). The source code in Table 1 implements that functionality using the Java version of the Media API. You can implement it in JavaScript (both in the browser and in Node.js) following the same principles:

Source code for the WebRTC loopback example

Table 1. Source code for the WebRTC loopback example.

Note that, for simplicity, we have omitted most of the signaling code. To have a working example, you should add your very own signaling mechanism. At Kurento, we have implemented a very simple API providing basic signaling capabilities, which we call the Content API. You can take a look at the Kurento developer's guide if you want a clearer picture of the Content API.

Below you can find links to a fully functional WebRTC loopback example based on Content API signaling:

Full source code and video showing the result of executing the WebRTC loopback example:

Browser source code of the WebRTC loopback example: 

Java (Application Server) source code of the WebRTC loopback example: 

Video-clip showing the WebRTC loopback example working: 

To make the example more interesting, let’s add some interactivity. Look at the code in Table 3 and try to figure out what it’s doing:

WebRTC + PointerDetectorFilter + FaceOverlayFilter

Table 3. Source code for the WebRTC + PointerDetectorFilter + FaceOverlayFilter example.

The PointerDetectorFilter is a specific type of computer vision capability that detects the position of a pointer (an object with a configurable shape or color). The filter makes it possible to define configurable square regions called Windows. If the pointer enters a window, the filter generates a WindowInEvent. Of course, a program can subscribe to that event just by adding listeners to the filter. Upon reception of the event, we can execute whatever actions we want, including changing the pipeline topology or modifying Media Element behavior. In the example above, the event triggers the setting of the overlay image on the FaceOverlayFilter. As a result, the loopback application now gives back the user's image, and only when the pointer enters the Window region (a virtual button depicted at a fixed position on the user's screen) is a hat depicted on top of her face.
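The window/event mechanism described above can be mimicked with a minimal toy in plain Java. All class and method names below are hypothetical and chosen to echo the real concepts; the actual filter's API differs. A window is a rectangle, listeners subscribe to it, and they fire when the tracked pointer first moves inside it.

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the PointerDetectorFilter's window/event mechanism:
// listeners subscribed to a rectangular window fire when the tracked
// pointer enters it. Hypothetical sketch, not the real filter API.
public class WindowEventSketch {

    interface WindowInListener { void onWindowIn(String windowId); }

    static class PointerWindow {
        final String id;
        final int x, y, width, height;
        final List<WindowInListener> listeners = new ArrayList<>();
        boolean inside = false;

        PointerWindow(String id, int x, int y, int width, int height) {
            this.id = id; this.x = x; this.y = y; this.width = width; this.height = height;
        }

        void addWindowInListener(WindowInListener l) { listeners.add(l); }

        // Called for every new pointer position; fires WindowIn on entry.
        void pointerMoved(int px, int py) {
            boolean nowInside = px >= x && px < x + width && py >= y && py < y + height;
            if (nowInside && !inside) {
                for (WindowInListener l : listeners) l.onWindowIn(id);
            }
            inside = nowInside;
        }
    }

    // Simulates the example: entering the virtual button enables the hat overlay.
    public static boolean hatShownAfter(int px, int py) {
        boolean[] hatOn = {false};
        PointerWindow button = new PointerWindow("hatButton", 0, 0, 100, 100);
        button.addWindowInListener(id -> hatOn[0] = true);   // like setting the overlay image
        button.pointerMoved(px, py);
        return hatOn[0];
    }

    public static void main(String[] args) {
        System.out.println(hatShownAfter(50, 50));    // pointer inside the window
        System.out.println(hatShownAfter(300, 300));  // pointer outside
    }
}
```

In the real pipeline, the listener body would call into the FaceOverlayFilter rather than flip a flag, but the subscription pattern is the same.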

Full source code and video showing the result of executing the WebRTC + PointerDetectorFilter + FaceOverlayFilter:

Browser source code of the application WebRTC+PointerDetectorFilter+FaceOverlayFilter: 
Handler source code of the application WebRTC+PointerDetectorFilter+FaceOverlayFilter: 

Video-clip showing the application WebRTC+PointerDetectorFilter+FaceOverlayFilter: 

If you want to see more complex applications, you can take a look at the Kurento GitHub repository, where we have made available the source code of services involving advanced features such as media processing chains, group communications, media interoperability, smart-city applications, etc.

WebRTC Architecture

Figure 2. Architecture of the WebRTC + PointerDetectorFilter + FaceOverlayFilter example

Luis Lopez Fernandez
Coordinator of FI-WARE Stream-oriented GE
Universidad Rey Juan Carlos 

Attend our webinars!

Mar 27, 2014

On Monday (March 31) and Tuesday (April 1) we will hold 7 webinars open to anyone who wants to participate. These webinars focus on the Generic Enablers most used at our previous Challenges and Hackathons. This way, all participants in our FI-WARE Challenges can learn more about FI-WARE and FI-LAB.

Below you can find the agenda for each day and a description of each webinar. 

Next Monday, next Tuesday… go to to attend! Hope to see you there! 

(*) Chrome browser + Java 7u51 on a Windows machine is recommended.


* All times are CEST.

Monday, March 31st 

10:00 – 10:55 (CEST) – Identity Management and Access Control – KeyRock
11:00 – 11:55 (CEST) – Advanced Cloud capabilities
12:00 – 12:55 (CEST) – Mashup technologies – Wirecloud

Tuesday, April 1st

12:00 – 12:55 (CEST) – Real-time Multimedia Stream Processing – Kurento
15:30 – 16:25 (CEST) – Connection to the Internet of Things: DCA and Figway
16:30 – 17:25 (CEST) – Context Awareness: Orion Context Broker
17:30 – 18:25 (CEST) – Map/Reduce – Cosmos Big Data


Advanced Cloud Capabilities
This webinar will be a practical session on the FI-LAB Cloud. We will show how to use the FI-LAB Cloud portal so that you will be able to deploy and access virtual machines, create containers and objects, and instantiate blueprints (VMs along with software).

Identity Management and Access Control – KeyRock
In this webinar we will explain how to secure your applications and GEs using FI-WARE Identity Management. We will explain how to create a FI-WARE account and register an application in the platform, managing organizations, roles and permissions. We will also describe OAuth2, the protocol your application uses to allow your users to access it with their FI-WARE accounts. And we will have a live demo in which you will learn how to implement that protocol in your application in a few easy steps.

Mashup technologies – Wirecloud
WireCloud builds on cutting-edge end-user development, RIA and semantic technologies to offer a next-generation, end-user-centred web application mashup platform. This webinar will teach you how to develop those mashups, including the development of the mashable application components used as building blocks.

Real-time Multimedia Stream Processing – Kurento
This webinar introduces Kurento, a framework for building multimedia and streaming applications from predefined blocks: RTP, WebRTC, HTTP and RTSP senders/receivers, face detection, plate recognition, object tracking, augmented reality, group communications, and media mixing and blending, among others. During the webinar, we will use the Kurento APIs to show how to create media applications for videoconferencing or video streaming in a simple and seamless manner. We will also demonstrate how these applications can be enriched with Kurento's advanced processing capabilities.

Connection to the Internet of Things
This webinar will explain how to connect Internet-of-Things settings to FI-WARE through the DCA and Figway enablers and how to exploit those resources in your Future Internet apps. Two different scenarios are considered: large IoT settings such as smart cities, and scope-limited deployments, typically smart spaces.

Context Awareness: Orion Context Broker
This webinar will be a practical session on the Orion Context Broker. We will start by describing where to find the Orion information in the FI-WARE Catalogue, then show how a FI-LAB user can create her/his out-of-the-box, ready-to-use Orion instance. Finally, we will walk through the main operations for managing context in the Orion Context Broker.

Map/Reduce – Cosmos Big Data
This webinar will explain Cosmos, the Big Data and Open Data platform, focusing on how to manage your data and how you can obtain value-added information from it. Several demos will be shown, including basic WebHDFS usage and HiveQL and MapReduce examples.