
IoT Tutorial – Orion Context Broker & Arduino

Jun 10, 2015
 
Orion Context Broker

By Telefónica R&D Chile.

This tutorial is divided into five parts for easier reading and practical purposes.

1. Introduction to the technologies used

2. Hardware configuration

3. Arduino, software and communications

4. Orion Context Broker (FIWARE)

5. Action from a remote web

The idea of this tutorial is to learn some theoretical concepts and quickly apply them in practice. By the time you finish this tutorial, you will probably have imagined hundreds of variations on the original concept for creating a sensor-based solution connected to the Internet (the basic principle of the Internet of Things).

Come on, let's get started!

Introduction to the technologies used

Before we start, it’s important to briefly review and understand some key concepts. If you are familiar with these concepts, you can proceed directly to the next section.

First of all, the Internet of Things corresponds to the third great wave in computer technology. The first wave was the development of computers in the 60s; the second was the use and exploitation of the Internet, with mass penetration starting in the nineties; and now comes the IoT. But what is the Internet of Things?

To answer the question above, we must understand the concept of 'connected things', whereby electronic devices can send and receive information via the Internet. Examples include home thermostats, smart cars, entry controls and a thousand other devices. But the reader might be wondering why these devices should be connected to the Internet.

Primarily because the data obtained from these devices can later be combined with other data to obtain more advanced functionalities. Imagine you set your smartphone alarm to wake you up in the morning. It's winter so there is not much light when you wake up. At the sound of the alarm, soft lighting is activated, the toaster starts to warm your bread and the coffee begins to heat up. This may be a rather elementary example, but it helps us to understand that the more information we have and can interrelate, the more devices we can actually create to help improve our quality of life.

One interesting point to discuss is the use of standards. For the specific case we will review later in the Orion Context Broker section, an adaptation based on the OMA (Open Mobile Alliance) NGSI (Next Generation Service Interface) specification is used. In simple terms, this means that the Context Broker is driven by the same HTTP requests currently employed by browsers: GET, POST, DELETE and PUT.

We have reached the end of part one of this tutorial, so let’s get started with the practical information.

Hardware configuration

This is now the practical part of the tutorial, so let's get started!

The components we will be using are:

• An Arduino Board (there are many alternatives, but a version with WiFi is essential)

• A breadboard

• LEDs

• Connecting cables

• A router or a cellular device that can deliver WiFi (tethering)

As a brief introduction to Arduino, it should be highlighted that this hardware is based on an Open-Source specification that, using a board with a microcontroller, allows interaction with numerous devices such as sensors, lights, switches, etc.

Arduino has its own development environment using the C++ language and integrates a number of libraries to facilitate the implementation of prototypes. This does not mean that Arduino cannot be used in industrial or high-demand environments; however, in these scenarios cost issues usually lead to the use of ad-hoc components.

Looking at the board's layout, you can recognize the digital pins on the top and the analogue pins at the bottom. Also at the bottom is the row of connectors used to power a testing board, or breadboard. The board also has a power connector and a mini-USB connector, among other components, depending on the version of the board and whether you use add-on 'shields'.

We can connect an LED directly to the board, between digital pin 13 and GND, as seen below. Note that digital pin 13 comes with a built-in resistor, so an external one is unnecessary in the image below (on other pins a resistor must be installed).

 

[Image 2: an LED connected directly to digital pin 13 and GND]

Lastly, the same result can be obtained using a breadboard. This is a good idea if you want to add more LEDs or sensors to the Arduino board for extra functionality. Remember that on a breadboard power runs horizontally along the outer rows and vertically along the inner columns. So the result is:

 

[Image 3: the same LED circuit wired through a breadboard]

Take note: Specifically, the Intel Edison Arduino board requires 2 Micro USB cables and a power connection.

Hard to understand? I hope not. This concludes part two of our tutorial.

Arduino, software and communications

In this part we will learn how to program the Arduino board to turn the LED we installed in part two on and off. Then we'll get an Internet connection using the board's WiFi.

As a prerequisite, we must have already configured the Arduino software for our operating system. Also, we must keep the board's USB connected to our computer to load the program onto the board. Look here to see how to install the (Intel Edison) software.

You must select the version of the software that corresponds to your operating system.

Once the software is configured and installed, we open our IDE and start coding.

This is an example of the Arduino IDE. This is specifically the IDE for Intel boards, although the concepts are the same.

 

[Image 4: the Arduino IDE with the Blink example open]

In the toolbar (where the check icon is), you'll find the commands to compile and upload our developments to the board.

Looking at the code, we have two functions: setup(), where variables are normally initialized, and loop(), where the operations are executed repeatedly.

In the File menu we have the option Examples > 01.Basics > Blink. This will display a new window with the code needed to test our LED:

/*
  Blink
  Turns an LED on for one second, then off for one second, repeatedly.

  This example code is in the public domain.
 */

// Pin 13 has an LED connected on most Arduino boards.
// give it a name:
int led = 13;

// the setup routine runs once when you press reset:
void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
  delay(1000);             // wait for a second
  digitalWrite(led, LOW);  // turn the LED off by making the voltage LOW
  delay(1000);             // wait for a second
}

The example that Arduino generates is quite simple. In line 10 a variable is set with the corresponding pin number on the board. Then, in setup(), that pin is initialized as an output. And in the loop, the LED is turned on and off, separated by a delay of one second. Before loading the code onto the board, the IDE must be configured so it knows which board and which port we're using:

Select Tools > Board > Intel Edison

Select Tools > Port > /dev/ttyACM0

Now, if the board is properly plugged into the USB port, we can 'Upload' the code to the board (Ctrl + U) and we should see our LED turning on and off every second. Amazing! Right?

Now, to use the WiFi, we need to work a little harder. Luckily, in the Arduino examples there is a WiFi section with different networking alternatives. Among them are Telnet servers and clients, Web servers and clients, and even a Twitter client.

TIP: In our case, for purposes of simplicity, we can use a Web client since we will subsequently send requests to the Orion Context Broker using the HTTP protocol. Note that there are better solutions, but for educational purposes we’ll try to minimize the code as much as possible.

#include <SPI.h>
#include <WiFi.h>

/***************************/
/*   Setup configuration   */
/***************************/
char ssid[] = "YourWifiSSID";     // Name of the network
char pass[] = "WifiPassword";     // Network password
char server[] = "130.206.80.47";  // ORION IP address -> Create in /lab/
int status = WL_IDLE_STATUS;      // we predefine the status as on but not connected
int led = 13;                     // the pin number to which the LED is connected

/**
 * Arduino setup configuration
 * (executed only once)
 **/
void setup() {
  // Initialization of the Arduino serial port
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for Leonardo only
  }

  // Verify that the board has a WiFi shield
  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield is not available");
    // Do not continue with setup; in other words, stay here forever
    while (true);
  }
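The setup() routine then typically continues by joining the network and configuring the LED pin. Here is a minimal sketch of that continuation (the actual logic is in the repository linked below):

  // Attempt to connect to the WiFi network until successful
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);  // returns the new connection status
    delay(10000);                     // wait 10 seconds between attempts
  }
  pinMode(led, OUTPUT);  // configure the LED pin as an output
}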

The complete code is available in:

https://bitbucket.org/tidchile/ecosystem/src/

FIWARE and Orion Context Broker

As discussed earlier in this tutorial, the Orion Context Broker is a service, based on the OMA NGSI 9/10 standard, that handles sending and receiving context information. What does this mean? Primarily, that it can handle a large number of messages from entities and manage updates, queries and data subscriptions for those entities. Remember that, according to the NGSI 9 and 10 standards, entities are an abstraction of the physical nodes or devices used in IoT solutions.

In the example above, we made an update request to an entity already created. But first let's review how to work with Orion. A simple way to test the OCB service is to create an account at https://account.lab.fiware.org/ and create a virtual machine with Orion preconfigured in the Cloud section. Alternatively, access Orion's GitHub site and download a virtual machine to run in our local environment.

Another useful tool is a REST client, although we can use cURL if that seems simpler. RESTClient is a Firefox add-on that is fairly easy to use.

The configuration aspects of the OCB are outside the scope of this tutorial, as they would require too much detail. Regarding the FIWARE Lab, it is important to note that FIWARE provides virtual machines in the Cloud for free to test FIWARE components. You only need to create an account to access the services. One quick caveat: as of today (19-03-2015) and temporarily, Spain has no resources available, but there are other regions where VMs can be created.

When we have the necessary tools, the most basic way to interact with the OCB is:

1. Create an entity:

To do this you must take several factors into consideration. Firstly, the call is sent as an HTTP POST request, for example to http://myhost.com:1026/v1/updateContext. This means that we are using version 1 of the API with the updateContext operation.

We also have to define several variables in the header of the request:

Accept: application/json

Content-Type: application/json

X-Auth-Token: [TOKEN AUTHENTICATION]

Regarding token generation, the simplest way is to use the Python script created by Carlos Ralli on GitHub. A FIWARE account is required; then run the 'get_token.py' script.

After setting the header of the request, configure the 'body' of the request using the following JSON code:

{
  "contextElements": [
    {
      "type": "LED",
      "isPattern": "false",
      "id": "LED001",
      "attributes": [
        {
          "name": "switch",
          "type": "bool",
          "value": "false"
        }
      ]
    }
  ],
  "updateAction": "APPEND"
}
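Putting it all together, the request can be sent with cURL; a minimal sketch, assuming the JSON body above is saved as led001.json (a hypothetical file name) and your token is stored in the TOKEN environment variable:

curl http://myhost.com:1026/v1/updateContext -s -S \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "X-Auth-Token: $TOKEN" \
  -d @led001.json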

Here we see the structure of "contextElements", which is a group of entities with certain attributes such as "type", "isPattern" and "id". "type" refers to a defined type and allows searching for entities of a particular type. "id" is an attribute that must be unique for each entity, so searches can be executed based on this ID. "isPattern" will be explained later in point No. 2.

You can also add a number of attributes to the entity in the "attributes" property, where each attribute is defined by "name", "type" and "value". Finally, "updateAction" defines whether we will perform an "APPEND" or an "UPDATE".

If all goes well, we will receive a 200 OK response from the server with the details of the entity created:

{
  "contextResponses": [
    {
      "contextElement": {
        "type": "LED",
        "isPattern": "false",
        "id": "LED001",
        "attributes": [
          {
            "name": "switch",
            "type": "bool",
            "value": ""
          }
        ]
      },
      "statusCode": {
        "code": "200",
        "reasonPhrase": "OK"
      }
    }
  ]
}

2. Query an entity:

To query an entity, the standard operation is 'queryContext', i.e. http://myhost.com:1026/v1/queryContext. We also apply the headers described in point No. 1 and use POST.

The JSON used in the body of the request would be as follows:

{
  "entities": [
    {
      "type": "LED",
      "isPattern": "false",
      "id": "LED001"
    }
  ]
}

This is where you can set "isPattern" to "true" and use regular expressions in the "type" or "id" fields if you want to execute a slightly more complex search. In the example above we simply look up the entity we created by its "id".

There is also a simpler way to do the same query, using the following request: GET http://myhost.com:1026/v1/contextEntities/LED001, where LED001 is the "id" of the entity to search for.
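With cURL, that convenience query looks like this (same assumptions as in the sketch above):

curl http://myhost.com:1026/v1/contextEntities/LED001 -s -S \
  --header 'Accept: application/json' \
  --header "X-Auth-Token: $TOKEN"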

3. Update an entity:

This is identical to point No. 1 but changing the "updateAction" attribute from "APPEND" to "UPDATE".
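For example, to switch the LED on, the request from point No. 1 would carry a body like this sketch (only "value" and "updateAction" differ):

{
  "contextElements": [
    {
      "type": "LED",
      "isPattern": "false",
      "id": "LED001",
      "attributes": [
        {
          "name": "switch",
          "type": "bool",
          "value": "true"
        }
      ]
    }
  ],
  "updateAction": "UPDATE"
}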

Finally, integrating everything we have reviewed, we can trigger an action from a simple Web page deployed on a remote server and verify that the LED is actually activated remotely through the OCB.

To do this we will use the LED001 entity we just created, toggling its 'switch' attribute between true and false to check the action.

Our web page would look like this:

 

[Image 5: the demo web page used to toggle the LED]

The HTML, CSS and JS code is shared at:

https://bitbucket.org/tidchile/ecosystem/src/

CORS Support for Orion Context Broker

May 27, 2015
 

The latest release of Orion Context Broker (0.22) includes CORS support for GET requests. What does that mean in practice? It means that you can query context data from your Web App without having to develop a server-side component to act as a proxy for Orion.

How to activate it?

First you need to start Orion with the option '-corsOrigin'. That option allows you to specify which origins are allowed to query your Orion Context Broker. So if your application origin is http://www.example.com, then you should run contextBroker -corsOrigin http://www.example.com. If you do not know in advance what the origin of your app will be, you can specify the special value '__ALL', which means that any Web origin will be allowed to get data from your Orion Context Broker.

This code snippet shows how you would query context using XHR (AJAX).
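As a quick check from the command line, you can also verify that Orion adds the CORS header to GET responses; a minimal sketch, assuming Orion runs on localhost:1026, was started with '-corsOrigin http://www.example.com' and holds an entity LED001:

# Simulate a cross-origin GET and inspect the response headers
curl -i http://localhost:1026/v1/contextEntities/LED001 \
  --header 'Accept: application/json' \
  --header 'Origin: http://www.example.com'
# The response should include: Access-Control-Allow-Origin: http://www.example.com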

Future Work

What about other operations? Future releases of Orion Context Broker will allow POST or DELETE operations following the same scheme. Nonetheless, Web Apps typically act as context data consumers, and that is why we have given priority to GET requests.

Orion Context Broker: introduction to context management (I)

Feb 19, 2015
 

The following post has been written by Fermín Galán Márquez, part of the FIWARE technical team in Telefonica R+D. We would like to thank him for his collaboration and his willingness to participate.

Your application dwells in a universe of information. This information is what we call the context of that application.

More specifically, context information is represented by values assigned to attributes that characterize the entities relevant to your application. So, context is about entities and their attributes. For example, if your application is about weather measurement based on sensors deployed in the streets and parks of your city, the entities are the sensors themselves and the attributes are the different weather parameters a sensor may measure (such as temperature, humidity, luminance, atmospheric pressure, etc.), along with other operational parameters (such as the sensor location, battery level, etc.).

Let’s consider another example: a traffic and route planning application. That application may use two different kinds of entities. First, vehicles: cars, buses, trucks, etc., whose basic attributes are speed and location. The second kind of entities could be city places, such as streets, roads, junctions, etc., whose attributes may be location, traffic intensity (e.g. number of vehicles per minute), congestion level, etc.

Taking into account these two cases, we can see two of the main values of context management as a design paradigm for applications. First, its simplicity. Everything is about entities and attributes. No complex modeling is needed, and no complex data relationships or complicated SQL statements are required to get your data. Modeling your application in terms of entities and attributes is generally easy, as these concepts naturally arise from your application design. Second, its flexibility. Context is a rather generic concept, so it is suited for many applications, no matter whether the application is related to weather measurement, traffic or any other domain. As part of this flexibility, take into account that an entity doesn’t necessarily model things in the real world (such as sensors or cars). It can also model things in the virtual world, such as an “alarm” in a trouble ticket system (which doesn’t have any physical representation and only exists within the IT system which manages alarms).

FIWARE provides you with means to produce, gather, publish and consume context information at large scale. Context management as provided by FIWARE introduces two basic actors: context producers and context consumers. Context producers are the sources of context, the ones that create or update context information. A typical case of context producer is a sensor measuring some metrics. On the other side, context consumers are the sinks for context, the ones that receive context information and do something interesting with it. Of course, the particular actions to do depend on the application. For example, it could draw the temperature evolution over time in a chart or provide dress tips to users (“don’t forget your coat, it’s cold out there!”) depending on the weather context in the case of a weather application. Another example could be recommending alternative routes to a driver based on the overall traffic context of the city, in the case of a traffic application.

It is important to note that producer and consumer are independent roles. Although a given application may play both roles at the same time (for example, a smartphone application running in a smartphone that at the same time produces some context information measured by the phone and consumes context coming from other sources), context consumers don’t need to know about producers and vice versa. Some big applications have some parts playing the producer role and others playing the consumer role to provide a global service. For example, a weather application could have two parts: the first one runs in the sensors (context producers) and the second in the users’ smartphones (context consumers) to provide real-time weather information and recommendations.

In the next post we will talk about the Orion Context Broker, which is the piece of software within FIWARE that materializes context management. The Orion Context Broker allows context producers and consumers to interact in a decoupled way, using different communication paradigms (synchronous or asynchronous) and implementing some advanced features (such as geolocation).

Attend our FIWARE Webinars: FIWARE LAB Cloud and Blueprint Capabilities & Orion Context Broker

Jan 22, 2014
 

Today, January 22nd, we will have two webinars open to anyone who would like to learn how to use FIWARE LAB's Cloud and Blueprint capabilities and the Orion Context Broker, two essential tools that FIWARE offers to developers.

Webinar

FIWARE LAB Cloud and Blueprint Capabilities webinar

Wed 22, 1:30pm CET

This webinar is a practical session on the FI-LAB Cloud. We will show how to use the FI-LAB Cloud portal so that you will be able to deploy and access virtual machines, create containers and objects, and instantiate blueprints (VMs together with software). The webinar is taught by Henar Muñoz.

Here you can find the Slides that will be used in this webinar: Setting up your virtual infrastructure using FI-LAB Cloud

Orion Context Broker webinar

Wed 22, 6:30pm CET

This webinar is a practical session on the Orion Context Broker. We start by describing where to find the Orion information in the FI-WARE catalogue, then how a FI-LAB user can create her/his out-of-the-box, ready-to-use Orion instance. Finally, we will walk through the main operations to manage context in the Orion Context Broker. The webinar is taught by Fermín Galán.

Enter this URL: http://www.mashme.tv/M/2qvRiO at the given times and be sure to use Firefox or Chrome (Internet Explorer and Safari are not supported).

 

Big Data analysis of historic context information

 

Similarly to what has been described in the section Publication of context information as Open Data, the Cygnus software allows storing all the selected data published in the Context Broker in an HDFS-based storage. This provides a long-term historic database of context information that can be used for later analysis, for instance by implementing MapReduce algorithms or performing queries over big data through Hive.

As noted, the Cygnus component can be configured to gather data from the Context Broker and store it in HDFS. The configuration in this case should include the Cosmos Namenode endpoints, the service port, the user's credentials, the Cosmos API used (webhdfs, infinity, httpfs), the type of attribute and the endpoint of the Hive server.

Once the context data has been stored, it is possible to use the Big Data GE to process it, either with a MapReduce application or with Hive. It is of course also possible to process other big datasets in the Big Data GE, either by themselves or in combination with context information.

A typical example would be to analyse massive information gathered from sensors in a city over a long period of time. All the data would have been gathered through the Context Broker and Cygnus and stored in the Big Data GE. In order to analyse the data, a few steps should be followed (the examples in this whitepaper are based on a global and shared instance of the Cosmos Big Data GE):

Browse to the Cosmos portal (http://cosmos.lab.fiware.org/cosmos-gui/) and use an already registered FI-LAB user to create a Cosmos account. The details of your account will be given once registered, typically:

  • Cosmos username: if your FI-LAB username is <my_user>@mailprovider.com, your Cosmos username will be <my_user>. This gives you a Unix-like account in the Head Node of the global instance, your user space being /home/<my_user>/.
  • Cosmos HDFS space: apart from your Unix-like user space in the Head Node, you will have an HDFS space spanning the entire cluster; it will be /user/<my_user>/.

Now you should be ready to login into the Head Node of the global instance of Cosmos in FI-LAB, simply using your FI-LAB credentials:

[remote-vm]$ export COSMOS_USER=  # set to your Cosmos username; not strictly necessary, just so the example commands can be copied & pasted
 [remote-vm]$ ssh $COSMOS_USER@cosmos.lab.fiware.org
 

Once logged in, you can access your HDFS space using the Hadoop file system commands:

[head-node]$ export COSMOS_USER=  # set to your Cosmos username; not strictly necessary, just so the example commands can be copied & pasted
 [head-node]$ hadoop fs -ls /user/$COSMOS_USER  # lists your HDFS space
 [head-node]$ hadoop fs -mkdir /user/$COSMOS_USER/new_folder  # creates a new directory called "new_folder" under your HDFS space
 

 …

Apart from using the stored context data, you can upload your own data to your HDFS space using the Hadoop file system commands. This can only be done after logging into the Head Node, and it allows uploading local files placed in the Head Node:

[head-node]$ echo "long time ago, in a galaxy far far away…" > unstructured_data.txt
 [head-node]$ hadoop fs -mkdir /user/$COSMOS_USER/input/unstructured/
 [head-node]$ hadoop fs -put unstructured_data.txt /user/$COSMOS_USER/input/unstructured/
 

However, the WebHDFS/HttpFS RESTful API will allow you to upload files that exist outside the global instance of Cosmos in FI-LAB. The following example uses HttpFS instead of WebHDFS (TCP port 14000 instead of TCP port 50070), and curl is used as the HTTP client (your applications should implement their own HTTP client):

[remote-vm]$ curl -i -X PUT "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/$COSMOS_USER/input_data?op=MKDIRS&user.name=$COSMOS_USER"
 [remote-vm]$ curl -i -X PUT "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/$COSMOS_USER/input_data/unstructured_data.txt?op=CREATE&user.name=$COSMOS_USER"
 [remote-vm]$ curl -i -X PUT -T unstructured_data.txt --header "content-type: application/octet-stream" "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/$COSMOS_USER/input_data/unstructured_data.txt?op=CREATE&user.name=$COSMOS_USER&data=true"
 

As you can see, the data upload is a two-step operation, as stated in the WebHDFS specification: the first invocation of the API talks directly to the Head Node, specifying the new file creation and its name; the Head Node then sends a temporary redirection response, specifying the Data Node (among all the existing ones in the cluster) where the data has to be stored, which is the endpoint of the second step. The HttpFS gateway implements the same API with a different internal behaviour: the redirection points back to the Head Node itself.

If the data you have uploaded to your HDFS space is a CSV-like file, i.e. a structured file containing lines of data fields separated by a common character, then you can use Hive to query the data:

[head-node]$ echo "luke,tatooine,jedi,25" >> structured_data.txt
 [head-node]$ echo "leia,alderaan,politician,25" >> structured_data.txt
 [head-node]$ echo "solo,corellia,pilot,32" >> structured_data.txt
 [head-node]$ echo "yoda,dagobah,jedi,275" >> structured_data.txt
 [head-node]$ echo "vader,tatooine,sith,50" >> structured_data.txt
 [head-node]$ hadoop fs -mkdir /user/$COSMOS_USER/input/structured/
 [head-node]$ hadoop fs -put structured_data.txt /user/$COSMOS_USER/input/structured/
 

A Hive table, which is similar to an SQL table, can now be created. Log into the Head Node, invoke the Hive CLI and type the following to create the table:

[head-node]$ hive
 hive> create external table <my_user>_star_wars (name string, planet string, profession string, age int) row format delimited fields terminated by ',' location '/user/<my_user>/input/structured/';
  

These Hive tables can be queried locally, by using the Hive CLI as well:

[head-node]$ hive
 hive> select * from <my_user>_star_wars;  -- or any other SQL-like sentence, properly called HiveQL
 

Or remotely, by developing a Hive client (typically using JDBC, but there are other options for non-Java programming languages) connecting to cosmos.lab.fi-ware.org:10000.

Several pre-loaded MapReduce examples can be found in every Hadoop distribution. You can list them by ssh'ing into the Head Node and running Hadoop:

[head-node]$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar

For instance, you can run the word count example (this is also known as the "hello world" of Hadoop) by typing:

[head-node]$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount /user/$COSMOS_USER/input/unstructured/unstructured_data.txt /user/$COSMOS_USER/output/

Note that the output HDFS folder is created automatically.

The MapReduce results are stored in HDFS. You can download them to your Unix user space within the Head Node by doing:

[head-node]$ hadoop fs -getmerge /user/$COSMOS_USER/output /home/$COSMOS_USER/count_result.txt

You can also download any HDFS file to your home directory in the Head Node by doing:

[head-node]$ hadoop fs -get /user/$COSMOS_USER/structured/structured_data.txt /home/$COSMOS_USER/

If you want to download the HDFS file directly to a remote machine, you must use the WebHDFS/HttpFS RESTful API:

[remote-vm]$ curl -i -L "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/$COSMOS_USER/structured/structured_data.txt?op=OPEN&user.name=$COSMOS_USER"

How to Publish Context Information as (Open) Data in CKAN

 

Publishing and consuming open data is a cornerstone for the development of applications and the creation of an innovation ecosystem. Through the mechanisms described in the section Development of context-aware applications, the Context Broker can be used to publish and consume context information. In particular, this information can indeed be open data, consumed through the query and subscription APIs (NGSI10). This way, it is possible to publish real-time or dynamic data, typically well structured, and offer it as open data for reuse by developers. For instance, it is possible to offer real-time data from sensors or systems to leverage the creation of new applications.

However, the Context Broker only provides the latest snapshot of the context information at any given moment, and there are many cases where it is also required to store and publish the historical context data generated over time. This is one of the usages of the Open Data publication GE, CKAN, in FIWARE.

CKAN is an open-source solution for the publication, management and consumption of open data, usually, but not only, through static datasets. CKAN allows you to catalogue, upload and manage open datasets and data sources, while supporting searching, browsing, visualizing and accessing open data. CKAN is the Open Data publication platform most widely used by cities, public authorities and organizations.

You may take advantage of the connectors supported by the Context Broker, which automatically generate historic records each time there is a change in the context information and make those records available for upload to the Open Data publication GE. The data is then stored in a Datastore, and can be downloaded and queried through REST APIs.

In order to achieve this behaviour it is necessary to deploy and configure Cygnus, a piece of software complementary to the Context Broker GE. The instructions to install Cygnus can be found here.

Once Cygnus has been installed, it is required to configure it. In a nutshell, there are three steps: configure CKAN storage, create the desired subscriptions in the Context Broker and run the process.

This sink persists the data in a datastore in CKAN. Datastores are associated with CKAN resources, and as CKAN resource names we use the entityId-entityType string concatenation (for instance, an entity Room1 of type Room would map to a resource named Room1-Room). All these CKAN resources belong to the same dataset (also referred to as a package in CKAN terms), whose name is specified with the default_dataset property (prefixed by the organization name) in the CKAN sink configuration.

In order to configure the CKAN storage, the file cygnus.conf has to be edited, specifying the CKAN sink, the sink channel (where to read the notifications from), the CKAN user's API key, the CKAN instance details (IP, port, etc.) and the Context Broker instance endpoint. All the details can be found at:

https://github.com/telefonicaid/fiware-connectors/blob/master/flume/README.md

Once the storage has been configured, it is required to run the process with, for instance, the following command:

$ APACHE_FLUME_HOME/bin/flume-ng agent --conf APACHE_FLUME_HOME/conf -f APACHE_FLUME_HOME/conf/cygnus.conf -n cygnusagent -Dflume.root.logger=INFO,console

Once the connector is running, it is necessary to tell the Orion Context Broker about it, so that Orion can send context data notifications to the connector. This can be done on behalf of the connector by running the following curl command (specifying the endpoint where Cygnus is listening):

(curl localhost:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "entities": [
        {
            "type": "Room",
            "isPattern": "false",
            "id": "Room1"
        }
    ],
    "attributes": [
        "temperature"
    ],
    "reference": "http://host_running_cygnus:5050/notify",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONCHANGE",
            "condValues": [
                "pressure"
            ]
        }
    ],
    "throttling": "PT5S"
}
EOF

Once the process starts storing data, the dataset and resource will appear in CKAN and it will be possible to browse and download the data from the CKAN portal, or query it through the Datastore API. More information at:

http://docs.ckan.org/en/ckan-2.2/datastore.html#the-datastore-api 

For instance, the following query would return the first 5 results of a dataset:
GET CKAN_HOST/api/action/datastore_search?resource_id=5a2ed5ca-2024-48d7-b198-cf9d95c7374d&limit=5
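From the command line, the same query can be issued with curl (a sketch; replace CKAN_HOST and the resource_id with the values of your own CKAN instance):

curl "http://CKAN_HOST/api/action/datastore_search?resource_id=5a2ed5ca-2024-48d7-b198-cf9d95c7374d&limit=5"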

Real-time processing of context events

 

You may want to perform some processing on the available context information. As an example, you may want to automatically detect patterns that require triggering some action or raising some alarm. You can use the FIWARE Complex Event Processing (CEP) GE as part of the architecture of your applications for this purpose. The Complex Event Processing GE allows you to detect patterns over context. This way, instead of reacting to a single piece of context information, you can identify and react to patterns over the contexts of several entities, or over a context that changed over time. The Complex Event Processing GE receives context information as input events and generates observations, sometimes called situations, as output events. The CEP GE analyses input event data in real time, generates immediate insight and enables instant response to changing conditions. The technology and implementations of the CEP provide means to expressively and flexibly define and maintain the event processing logic of the application.

Applications connected to the CEP GE (external applications or other GEs like the Context Broker GE) can play two different roles: the role of Event Producer or the role of Event Consumer. Note that nothing precludes a given application from playing both roles. Event Producers are the source of events for event processing. Some examples of event producers are:

  • External applications reporting events on user activities such as "user placed a new order", and on operation activities such as "delivery has been shipped".
  • Sensors reporting on a new measurement. Events generated by such sensors can be consumed directly by the CEP GE. Another alternative is that the sensor events are gathered and processed through the IoT GEs, which publish context events to the Context Broker GE, having the CEP acting as a context consumer of the Context Broker GE.

Event Producers can provide events in two modes:

  • "Push" mode: the Event Producers push events into the CEP by means of invoking a REST API.
  • "Pull" mode: the Event Producer exposes a REST API that the CEP can invoke to retrieve events.

Event Consumers are the destination point of events. Following are some examples of event consumers:

  • Dashboard: a type of event consumer that displays alarms raised when defined conditions hold on events related to some entities or user community, or produced by a number of devices.
  • Handling process: a type of event consumer that consumes meaningful events (such as opportunities or threats) and performs a concrete action.
  • The Context Broker GE which can connect as an event consumer to the CEP and forward the events it consumes to all interested applications based on a subscription model.

The CEP sends output events to the event consumers in "push" mode by invoking their REST API.

The CEP allows you to define patterns over selected events occurring in event processing contexts (such as a time window or segmentation) with optional additional conditions. Those patterns are defined using a Web based authoring tool without the need to write any code. This makes it easier to write the event processing logic and to maintain and change it over time. Examples for supported patterns are:

  • Sequence, meaning events need to occur in a specified order for the pattern to be detected. E.g., follow a sensor context, and detect if the sensor status was “fixed” and later was “failed” within a time window.
  • Aggregate, compute some aggregation functions on a set of incoming events. E.g., compute the percentage of the sensors events that arrived with a fail status out of all the sensors events arrived in the time window. Alert if the percentage of the failed sensors is higher than 10 percent.
  • Absent, meaning no event holding some condition arrived within the time window for the pattern to match. E.g., alert if within the time window no sensor events arriving from specific source have arrived. This may indicate that the source is down.
  • All, meaning that all the events specified should arrive for the pattern to match. E.g., wait to get status events from all the 4 locations, where each status event arrives with the quantity of reservations. Alert if the total reservations are higher than some threshold.

Every pattern is associated with an Event processing context. Event processing context groups event instances so that they can be processed in a related way. It assigns each event instance to one or more processing context partitions. Event processing context can be a temporal processing context, a segmentation processing context, or a composite context that is to say one made up of other processing context specifications.

  • Temporal processing context defines a time window. If a pattern is associated with a temporal processing context, it will process only the events arriving within this time window. E.g., “within 5 seconds after sensor events with high value”, “A time window bounded by Order-Placed and Order-Delivered events”
  • Segmentation processing context defines matching criteria based on the attribute values of the events, e.g. "Sensor's ID", "Shipment's ID", "Building's ID". Events that match the same criteria (e.g., have the same sensor ID) will be processed together within the same processing context partition.

If you are interested in more details check out:

Geolocated context queries

 

One very powerful feature of the Context Broker GE is the ability to perform geo-located queries. You can query entities located inside (or outside) a region defined by a circle or a polygon.
For example, to query for all the restaurants within 13 km of the Madrid city center (identified by GPS coordinates 40.418889, -3.691944), a Context Consumer application would use the following query:

POST <cb_host>:<cb_port>/v1/queryContext
{
    "entities": [
        {
            "type": "Restaurant",
            "isPattern": "true",
            "id": ".*"
        }
    ],
    "restriction": {
        "scopes": [
            {
                "type": "FIWARE::Location",
                "value": {
                    "circle": {
                        "centerLatitude": "40.418889",
                        "centerLongitude": "-3.691944",
                        "radius": "13000"
                    }
                }
            }
        ]
    }
}
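As with the earlier curl examples, this query can be sent from the command line; a minimal sketch, assuming the JSON body above is saved as restaurants.json (a hypothetical file name):

curl <cb_host>:<cb_port>/v1/queryContext -s -S \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  -d @restaurants.json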