Saturday, 23 April 2016

Nervousnet Hackathon, Zürich, 2016




This was a hackathon of the Nervousnet project (http://www.nervousnet.ethz.ch/), a hacking event for developers, entrepreneurs, data specialists, early adopters, end users and everyone else interested in building Digital Democracy together.


Nervousnet is a decentralized Internet of Things platform for privacy-preserving social sensing services provided as a public good. It is implemented as a mobile app and is open source under the GPL v3 license. Nervousnet collects and manages sensor data from Android and iOS smartphones, letting users self-determine which data they preserve locally and which data they share remotely. This forms the main privacy-by-design functionality of the Nervousnet backend. A lightweight local analytics engine residing in the backend provides a high-level API for developers to build data-driven applications. Analytics can also be performed across devices with an implementation of a truly decentralized and privacy-preserving Big Data paradigm: the global analytics engine.



This hackathon offered three types of challenges for on-site and remote participants to contribute to, with cash prizes ranging from 400 CHF (3rd place) to 1,000 CHF (1st place) per category. The challenges were as follows:


1 - Design and Development of the nervousnet Backend:

Here you find opportunities such as extending the API of the local analytics engine, implementing communication security, or integrating web views and an application store.


2 - Design and Development of nervousnet apps:

Here you can build your own data-driven applications. Some examples: earthquake detection, localization and navigation, ambient assisted living, smart homes, IoT games and more. There will be new LoRa sensor nodes to play with!


3 - The nervousnet Privacy / Accuracy Challenge:

Come up with your own data summarization algorithm that guarantees the highest level of privacy protection while at the same time performing accurate data analytics.


I took on the first challenge. Rather than using only smartphones to interact with this privacy-preserving social system, I decided to extend it to smart devices such as microcontrollers sensing our homes or public buildings. In fact, looking at the envisioned Nervousnet architecture at this point, IoT is meant to be accounted for, and that is what I did:



With the code I developed (available here on GitHub), you can use a specific microcontroller (an Arduino) with a WiFi shield on top (an Ethernet shield or anything else would also work, with proper adaptation) and a sensor of your choosing (I chose to mimic an accelerometer) to save sensor data to the backbone of Nervousnet (called Router in the code). Along with the code, I provide a MySQL database, which is a preliminary (still centralized) approach to the Nervousnet backbone and supports a dozen different sensor types (in the GitHub link I provided, this is the file), namely:

  • Accelerometer
  • Battery
  • Gyroscope
  • Humidity
  • Light
  • Magnetic
  • Proximity
  • Temperature
  • Noise
  • Pressure
  • Connectivity
  • BLEBeacon

Of these sensors, I used the Arduino to publish dummy sensor data for the accelerometer (because I didn't actually have the hardware sensor with me, but that's changeable). The code consists of a file to be flashed onto the microcontroller (here's the code for Arduino), a Java server (built with NetBeans and the GlassFish application server, here) providing the web interface between the microcontroller and the Nervousnet database, and the Nervousnet MySQL DB (here).
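As a sketch of what the Java server does with an incoming reading, the snippet below shows one way it could persist a sample into the MySQL DB. The table and column names are my illustrative assumptions, not necessarily the schema in the repository:

```java
// Hypothetical sketch: persisting one accelerometer reading into the
// Nervousnet MySQL database. Table and column names are assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AccelerometerStore {

    // The parameterized INSERT used below, split out so it is easy to inspect.
    static String insertSql(String table) {
        return "INSERT INTO " + table + " (recorded_at, x, y, z) VALUES (?, ?, ?, ?)";
    }

    // Persists one X/Y/Z sample; the boolean result mirrors the
    // true/false answer the backbone server sends back to the Arduino.
    static boolean store(String jdbcUrl, long timestamp, float x, float y, float z) {
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = con.prepareStatement(insertSql("accelerometer"))) {
            ps.setLong(1, timestamp);
            ps.setFloat(2, x);
            ps.setFloat(3, y);
            ps.setFloat(4, z);
            return ps.executeUpdate() == 1;
        } catch (Exception e) {
            return false; // the "No, I didn't understand what you wanted" path
        }
    }
}
```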


Considering we had less than 24 hours of hands-on coding, you will not find neat, fully bulletproof code here. I'm OK with that, and so should you be.

The integration with the microcontroller was also done through web services. Previous integrations, with both the smartphones and the backbone servers, had been done with Google Protocol Buffers, and I wanted to challenge this as well (not so much challenging it as enabling other protocols to be used, stretching Nervousnet's compatibility). As such, I used SOAP web services:



In the image above we see the communication between the microcontroller and the Nervousnet backbone server I created, through SOAP envelopes and XML (JSON can be used as well :P). As I emulated an accelerometer sensor, the request I make here (coming from the Arduino) carries three values (the X, Y and Z components of the acceleration). The answer from the backbone server is either "true" or "false", meaning "yes, I recorded your data in the database" or "no, I didn't understand what you wanted, so I failed to act".
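For illustration, a request of that shape could be assembled, and its boolean answer checked, as below. The element names and namespace here are assumptions, not the exact wire format used at the hackathon:

```java
// Illustrative only: assembling the kind of SOAP envelope the Arduino could
// send with the three acceleration values, and checking the backbone's answer.
public class SoapEnvelope {

    // Builds a minimal SOAP 1.1 envelope carrying the X/Y/Z sample.
    static String accelerometerRequest(float x, float y, float z) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
             + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body><saveSensorData>"
             + "<x>" + x + "</x><y>" + y + "</y><z>" + z + "</z>"
             + "</saveSensorData></soap:Body></soap:Envelope>";
    }

    // The backbone answers with a bare boolean inside the response body.
    static boolean parseAnswer(String response) {
        return response.contains(">true<");
    }
}
```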

Putting the pieces together, the behavior of the Arduino is as follows:



And the result of this communication initiated by the Arduino is the following, registered in the database:



From record 51 onwards you can see the data I saved from the Arduino (I didn't care much about the actual dummy values recorded) ;)

This idea of mine was an award-winning solution, earning me 3rd place in this challenge category. Thank you, ETH Zürich.



Published by fxsf at 21:23
Wednesday, 23 September 2015

VoIP videoconference IOS App, Open WebRTC


You can download my application HERE: https://www.dropbox.com/sh/f74br70w7ziwtbn/AAAZ9FHakrZT9FeSb01MyrZ1a?dl=0




In this post I will show my work on WebRTC, an open project that provides browsers, embedded and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. With OpenWebRTC you can build native WebRTC apps that communicate with browsers that support the WebRTC standard, such as Chrome, Firefox and Bowser. OpenWebRTC is especially focused on mobile platforms, with powerful features such as hardware-accelerated video coding and OpenGL-based video rendering. Sounds fancy, right? Yes, it does. But why choose WebRTC for VoIP over, let's say, other existing signalling technologies (like SIP, H.323 or Skype) and codecs (H.264, VP8, etc.)? For me, the answer was simple: copyright issues. I wanted to develop an iOS app for my iPhone 6, on a brand new iOS 9 (no longer the preview version), and that meant I could not use code covered by GPL licensing.


That is right. The GPL license allows you to freely use and redistribute code, but it does not allow you to impose further restrictions while doing so. Sadly, the Apple App Store does, REGARDLESS of whether the app is paid or free. WebRTC is distributed under a BSD license, meaning you can do whatever you like with it, as long as you include the due disclaimer in your code. So, currently you can choose between two implementations of WebRTC: WebRTC from Google, and OpenWebRTC from Ericsson (Sweden). Both do almost the same thing, but I prefer OpenWebRTC because I want to develop for mobile. A quick comparison can be found at (https://bloggeek.me/ericssons-openwebrtc-project/):

WebRTC vs. OpenWebRTC


Therefore, I chose OpenWebRTC. I followed the official tutorials to build a native iOS application (you can also build a hybrid one, meaning a browser view inside an app), and there are tutorials for other platforms such as Android:




The app I developed is based on the original native app from OpenWebRTC, available here (https://github.com/EricssonResearch/openwebrtc-examples/tree/master/ios/NativeDemo). What I did was create an Xcode project with all the dependencies solved (you do not need CocoaPods to run it), and I solved most of the problems to run it on iOS 9 (it does compile for previous versions like iOS 8, naturally with extra warnings).


So, the buggy Portrait view from the NativeDemo app of OpenWebRTC is:




Portrait, after fixing the error, as well as Landscape:





As you can see, Portrait is not perfect, but the video is stretched to fit the box. Since the pull I did through CocoaPods grabbed the compiled binaries of OpenWebRTC and the OpenWebRTC-SDK, I could not edit the video feed itself. For that, I would have had to build the application from scratch and compile the OpenWebRTC source files (wasting almost 10 hours of my life! Trust me, I did it). So the solution I found was manipulating the view itself, through rotations, translations and scaling. In Landscape mode, though, it works better.
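The view manipulation boils down to simple math: to stretch the video into the box you scale each axis independently, which is why the aspect ratio distorts in Portrait. A hypothetical sketch of that computation, independent of the actual iOS view classes:

```java
// Pure math behind the view-level fix: the video frame must be mapped onto
// the box on screen. stretchScale distorts the aspect ratio (what the
// Portrait view does); fitScale is the uniform alternative that letterboxes.
public class VideoScaling {

    // Independent per-axis factors: fills the box but may distort the image.
    static double[] stretchScale(double videoW, double videoH, double boxW, double boxH) {
        return new double[] { boxW / videoW, boxH / videoH };
    }

    // Single uniform factor: preserves aspect ratio, leaving empty bars.
    static double fitScale(double videoW, double videoH, double boxW, double boxH) {
        return Math.min(boxW / videoW, boxH / videoH);
    }
}
```

For a 640x480 feed in a 320x480 portrait box, stretching scales X by 0.5 and Y by 1.0, which is exactly the distortion visible in the screenshots.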

The application was tested against the following demo: http://demo.openwebrtc.org:38080





Then you have to select the same ID in the application and in the demo in the browser.



On the server side, once the connection is established in the same room, you can see the following:





And in Landscape as well; you can see that the server updated the orientation of the video properly:


Published by fxsf at 14:21
Monday, 26 April 2010

Kannel SMS Gateway Center






Author: Francisco Xavier
Date: 2010-04-22
Scope: Service Engineering course, taught at the Universidade de Aveiro







The goal was to send and receive SMS messages with Kannel, a gateway for delivering content over both WAP and SMS. Since our project did not intend to use WAP, I focused solely on sending SMS.
The diagram of how Kannel works for sending and receiving SMS, available in the user guide on the Kannel website http://www.kannel.org/, is the following:

To explain: the mobile phone communicates via SMS with the Kannel message center. For example, it sends a message requesting some content; the center communicates with the Kannel server and, once that server has a response to the message, the requested content is forwarded back to it (if all goes well). The SMSC then sends the response, with the content provided by Kannel, to the device.
There are several ways to get the SMSC communicating via messages, among which the following stand out:

  • Fake, thereby simulating communication with a real device

  • HTTP, forwarding the messages to, for example, e-mail

  • GSM, using a modem to REALLY send messages over the GSM network

I opted to reuse a Vodafone dongle lying around my house, specifically the Huawei K3520 model, and inserted my Vodafone SIM card into it, to take advantage of my free-SMS tariff and thus test the service without financial "hurdles".



Step one: configure the virtual machine with Kannel and install the Huawei modem

Well, after installing the USB modem and resolving its dependencies, I launched the Vodafone software just to be sure which device node it was installed on in Linux; the following image shows it is installed at /dev/ttyUSB2. This step is crucial for configuring Kannel.
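With the device node known, the SMSC group of the Kannel configuration can point at it. A minimal sketch of that group, assuming an AT-compatible modem (every value except the device path is illustrative):

```
group = smsc
smsc = at
modemtype = auto
device = /dev/ttyUSB2
```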





Step two: get the Kannel & SMSC servers running


Although Kannel has a daemon that starts it whenever Linux boots, to show the message exchange explicitly I decided to start Kannel by hand:


The result, for the Kannel server, is:


For the SMSC, it is:



Step three: create web services to handle the HTTP requests
At the topological level, the exchange of messages between the web services, the Kannel servers and the mobile device is processed as follows:
Sending messages using the sendsms web service:



Receiving messages using the receive web service:



Note that the Kannel server does not reply automatically to an incoming SMS. When a message arrives, it is sent to GlassFish, which takes care of forwarding it to our main application.

Although I am not an expert in deploying web services, I created two: one to send SMSes and another to handle the Kannel server's requests when an SMS arrives.
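The core of the sending service is just an HTTP GET against Kannel's sendsms interface. A hypothetical sketch of how that URL could be assembled (13013 is Kannel's default smsbox port; the username and password would come from a sendsms-user group in the Kannel configuration):

```java
// Sketch of the sendsms web service's core: forwarding a destination number
// and text to Kannel's HTTP sendsms interface. Host, user and password are
// placeholders for values from the actual Kannel configuration.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class SendSms {

    private static String enc(String s) {
        try { return URLEncoder.encode(s, "UTF-8"); }
        catch (UnsupportedEncodingException e) { throw new IllegalStateException(e); }
    }

    // Builds the GET URL understood by Kannel's smsbox.
    static String sendSmsUrl(String host, String user, String pass, String to, String text) {
        return "http://" + host + ":13013/cgi-bin/sendsms"
             + "?username=" + enc(user) + "&password=" + enc(pass)
             + "&to=" + enc(to) + "&text=" + enc(text);
    }
}
```

Opening the resulting URL (for example with `new URL(url).openStream()`) is what actually submits the SMS, which is why calling the web service and pasting the URL into a browser produce the same result.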



Layout of the message-sending web service: (destnum, text)



Layout of the message-receiving web service: (to, text)




Last part: testing the service

Using this web service in this way, or using the URL directly in the browser, produces the same result:






On the GlassFish server, the sendsms request from the web service is detected and duly logged:



On the SMSC server, the sending of the message is duly logged:



The test message was sent to the same number, so the corresponding reply was received on the SIM card:




Two things happened here:
1. Kannel's incoming-message service was called, performing a get-url to the address of the GlassFish receive web service:
http://localhost:27838/KannelServer/resources/receivesmsport/receive? with "to" and "text" filled in automatically with the contents of the received message.
2. It warns that an automatic reply to the origin of the message (the mobile terminal) was refused.

Finally, the receive web service printed the content of the message, and whom it is addressed to, on the GlassFish server. Later, when the web service is created in the Java application, this receive web service will communicate with the Java application instead of just showing the content of the received message:
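As a self-contained sketch of the receive side, the snippet below uses the JDK's built-in HTTP server instead of GlassFish to show the shape of Kannel's get-url callback; the context path mirrors the URL above, and the logging line stands in for the future hand-off to the main Java application:

```java
// Minimal stand-in for the GlassFish receive web service (assumption: the real
// one is a GlassFish-deployed service; this only illustrates the request shape
// Kannel produces with its "to" and "text" query parameters).
import com.sun.net.httpserver.HttpServer;
import java.io.UnsupportedEncodingException;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class ReceiveSms {

    // Parses "to=...&text=..." as sent by Kannel's get-url callback.
    static Map<String, String> parseQuery(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            try {
                params.put(kv[0], URLDecoder.decode(kv.length > 1 ? kv[1] : "", "UTF-8"));
            } catch (UnsupportedEncodingException e) {
                // UTF-8 is always supported
            }
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(27838), 0);
        server.createContext("/KannelServer/resources/receivesmsport/receive", exchange -> {
            String q = exchange.getRequestURI().getRawQuery();
            Map<String, String> p = parseQuery(q == null ? "" : q);
            // For now just log the message; later this forwards it to the main app.
            System.out.println("incoming sms for " + p.get("to") + ": " + p.get("text"));
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
    }
}
```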



Published by fxsf at 16:14
