Project Descriptions

Project Objectives

The main objectives of EmoSpaces, centred on developing an IoT platform for effective services, are to:

  • Detect the state of individuals through smart sensors (sensors + data analytics): localization, activity, emotion, …
  • Analyse sentiments and abnormal behaviours
  • Adapt the environment and develop effective services (EmoServices)
  • Develop a dedicated IoT platform
  • Build relevant and innovative use cases
  • Study societal relevance and ethical issues
  • Transfer and exploit the resulting technologies

Project Innovations

The development of smart environments is a hot topic: considerable effort in academia and industry has gone into bringing the vision of the smart space, and in particular the smart home, to market by merging technologies from a wide range of research domains: pervasive computing, the Internet of Things (IoT), home automation, situation awareness, decision support, etc. In such a smart environment, the user is assisted and supported in the best possible way during his/her daily (professional and private) activities.

Despite the effort spent on research, innovation, and piloting, the smart space has remained a challenging topic for RTDI (research, technical development and innovation) and an unrealised concept in mainstream building provision. EmoSpaces aims to innovate in a variety of disciplines, including multimedia computing, multimodal data analysis for information extraction, sensing-based affect recognition, Big Data handling, and semantic fusion of data from co-operating sensors. The partitioning and compartmentalisation of R&D is another obstacle to achieving the smart space: video analysis techniques, for instance, are used both for detecting human activity and for video content classification. Human activity detection from video data usually focuses on elementary gesture detection, for instance for human-robot interaction, while video classification relies on indexing and retrieval techniques. Real-time detection of real daily-life activities, which requires simultaneous segmentation and classification, is however poorly investigated in the literature. The contribution of the EmoSpaces project in this area will provide a significant improvement over the state of the art by detecting activities at a semantic level such as "eating" or "cooking" connected to grocery shopping.

IoT services that consider emotions are still a largely unexplored innovation, and EmoSpaces aims to go a step further and advance IoT automation based on affective and persuasive technology. The major expected technical outcomes of EmoSpaces are: i) technologies for multimedia affect recognition based on sensing, smart devices, and eCoaching; ii) a Big Data platform for semantic sensor fusion; iii) context-aware adaptation and automation of the IoT environment; and iv) a social simulation and testing tool for developing cost-effective affective services. These outcomes are shown in the following figure.

[Figure: Innovations-updated.png — expected technical outcomes of EmoSpaces]

EmoSpaces takes advantage of advances in the miniaturisation of computer technology, which lead to the integration of processors and sensors into everyday objects and to the disappearance of the frontier between desktop computers and the physical world. The project's innovations fall into two main categories. In the first, EmoSpaces develops a smart environment upon an intelligent IoT platform in which advanced processing tools are integrated to gather and analyse heterogeneous sensor data (video, image, text, daily-life and physiological data). These data are used to sense the space surrounding users and to characterise their behaviour and affective state. The second category involves building, upon the proposed platform, affective services that respond to the established profiles of the inhabitants, in accordance with their expectations and desires. The EmoSpaces project therefore aims to gather all available IoT data from various vertical sectors (security, health, family, disaster management, etc.) by fusing information from fixed home sensors with wearable quantified-self devices, in order to determine context awareness, with an emphasis on emotion detection, as a means to improve the quality of many IoT vertical services (contextual well-being, gaming, caregiving, etc.).
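To make this data-gathering idea concrete, the following minimal Python sketch fuses readings from a fixed home sensor and a wearable device into a single context record. All names here (SensorReading, ContextRecord, the heart-rate-to-arousal mapping) are illustrative assumptions, not part of the actual EmoSpaces platform.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    """One observation from any source (fixed home sensor or wearable)."""
    source: str        # e.g. "living_room_camera", "wristband"
    modality: str      # e.g. "video", "heart_rate", "presence"
    timestamp: float   # seconds since epoch
    value: object      # raw payload, modality-specific

@dataclass
class ContextRecord:
    """Fused snapshot of the user's situation at a point in time."""
    timestamp: float
    location: str
    activity: str
    arousal_hint: float  # crude proxy derived from physiological data

def fuse(readings: List[SensorReading]) -> ContextRecord:
    """Naively combine heterogeneous readings into one context record.

    A real platform would align timestamps, resolve conflicts and use
    learned models; here we just keep the latest reading per modality.
    """
    latest = {}
    for r in sorted(readings, key=lambda r: r.timestamp):
        latest[r.modality] = r
    heart_rate = latest.get("heart_rate")
    arousal = 0.0
    if heart_rate is not None:
        # Map resting..elevated heart rate (60..120 bpm) onto [0, 1].
        arousal = min(max((heart_rate.value - 60) / 60.0, 0.0), 1.0)
    unknown = SensorReading("", "", 0, "unknown")
    return ContextRecord(
        timestamp=max(r.timestamp for r in readings),
        location=latest.get("presence", unknown).value,
        activity=latest.get("video", unknown).value,
        arousal_hint=arousal,
    )

if __name__ == "__main__":
    now = 1_700_000_000.0
    record = fuse([
        SensorReading("living_room_pir", "presence", now - 5, "living_room"),
        SensorReading("living_room_camera", "video", now - 2, "cooking"),
        SensorReading("wristband", "heart_rate", now, 95),
    ])
    print(record)
```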

The social impact and acceptability of these technologies is a cornerstone requirement. The EmoSpaces project will address this by studying and adapting the different components of the architecture to the requirements of the targeted populations. The elderly, who form the primary audience, are prepared and motivated to stay socially active while living independently and managing their health and wellbeing. The project will go beyond the current state of the art, which in most cases offers technologies that are not user-friendly, are insensitive to users' mental and emotional state, or fail to keep users engaged because of poor interfaces, inefficient context and behaviour sensing and analysis, or a limitation to physiological/medical monitoring.

Besides developing content analysis tools that provide relevant contextual information, the EmoSpaces project will adopt the Semantic Sensor Network concept as a basis and extend it with the additions required to cover two fundamental areas: the IoT and the Internet of Services. Social simulation is another innovative aspect of the project: it reduces the cost of the development phase for IoT networks at scale, since flaws can be detected and resolved before the system is deployed in living labs or real scenarios.
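As a concrete picture of the social-simulation idea, the sketch below generates a synthetic day of activity events of the kind a testing tool could replay through the platform before any living-lab deployment; the routine, activity names, and jitter values are invented for illustration.

```python
import random

# Hypothetical daily routine: (activity, typical start hour, duration in hours).
ROUTINE = [
    ("sleeping", 0, 7),
    ("breakfast", 7, 1),
    ("grocery_shopping", 10, 1),
    ("cooking", 12, 1),
    ("eating", 13, 1),
    ("resting", 14, 4),
    ("dinner", 19, 1),
    ("watching_tv", 20, 3),
]

def simulate_day(seed: int = 0, jitter_hours: float = 0.5):
    """Yield (hour, activity) events with random jitter on start times.

    A flaw-detection run would replay such streams through the real
    pipeline and check that activities like "cooking" are recognised.
    """
    rng = random.Random(seed)
    for activity, start, duration in ROUTINE:
        jittered = start + rng.uniform(-jitter_hours, jitter_hours)
        yield round(max(jittered, 0.0), 2), activity

if __name__ == "__main__":
    for hour, activity in simulate_day(seed=42):
        print(f"{hour:5.2f}h  {activity}")
```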

Thanks to the complementary expertise and skills of the consortium, we aim to go beyond existing solutions and propose a flexible, open framework with real added-value services for users, by developing composite applications that involve sensing, actuation, and adaptation of the physical world in real time through co-operating devices. To illustrate the outcomes of the proposed smart framework, we will develop innovative services based on adapting the environment to the user profile, such as recommending multimedia content, suggesting physical activities to improve users' wellbeing and resilience, and assisting the elderly to live independently and cope with crises.
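A composite sensing-actuation loop of the kind described above could be sketched as follows; the device names, the stand-in emotion detector, and the emotion-to-action mapping are purely hypothetical placeholders, not the project's components.

```python
import time

# Hypothetical mapping from detected affective state to an environment action.
ADAPTATIONS = {
    "stressed": ("lights", "dim_warm"),
    "bored": ("media_player", "suggest_upbeat_playlist"),
    "calm": ("lights", "keep_current"),
}

def detect_emotion(sensor_snapshot: dict) -> str:
    """Stand-in for the platform's emotion recogniser (assumed, not real)."""
    return "stressed" if sensor_snapshot.get("heart_rate", 70) > 100 else "calm"

def adaptation_loop(read_sensors, actuate, cycles: int = 3, period_s: float = 1.0):
    """Sense -> analyse -> adapt, repeated in (near) real time."""
    for _ in range(cycles):
        snapshot = read_sensors()
        emotion = detect_emotion(snapshot)
        device, action = ADAPTATIONS.get(emotion, ("lights", "keep_current"))
        actuate(device, action)
        time.sleep(period_s)

if __name__ == "__main__":
    fake_readings = iter([{"heart_rate": 110}, {"heart_rate": 85}, {"heart_rate": 72}])
    adaptation_loop(
        read_sensors=lambda: next(fake_readings),
        actuate=lambda device, action: print(f"{device}: {action}"),
        cycles=3,
        period_s=0.0,
    )
```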

Project structure

The work in EmoSpaces is divided into seven work packages. The detailed relationship between the inputs and outputs of the WPs is shown in the following figure.

[Figure: Workpackage2.png — relationships between the EmoSpaces work packages]

The project is structured into three iterations, following an agile approach, with one iteration per year. Every iteration starts with a refinement of the use cases (WP1), a detailed definition of the business demonstrators (WP5), and data collection for the remaining WPs. This phase lasts three months in the first iteration and one month in the other two, as shown in the figure above. In the first iteration, WP1 will define an architecture specification based on the identified requirements; this specification will be refined in the subsequent iterations.

In the second phase of every iteration, the three main technical work packages (WP2, WP3, and WP4) run in parallel to improve their coordination. WP2 will leverage state-of-the-art intention and emotion analysis algorithms on the data of the IoT platform, covering visual (facial recognition), behavioural (activity monitoring), biometric, textual (social networks and speech transcription), and speech modalities. These results will inform WP3, where fusion algorithms will be selected to improve emotion recognition based on multimodal features. WP2 and WP3 will use a Big Data platform for all these analyses. Based on the emotion context models defined in WP2 and WP3, WP4 will address the automation of the environment in the different scenarios defined in WP5. WP5 provides simulation and testing facilities, including the generation of analytic data for WP2, WP3, and WP4.
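As a deliberately simplified picture of what a WP3-style fusion step could look like, the sketch below combines per-modality emotion probability distributions with a weighted average. The modality weights and emotion labels are illustrative assumptions, not the project's actual algorithms.

```python
# Minimal late-fusion sketch: each modality outputs a probability
# distribution over the same emotion labels, and the fused result is
# their weighted average. Labels and weights are illustrative only.
EMOTIONS = ["happy", "neutral", "stressed"]

def fuse_modalities(predictions: dict, weights: dict) -> dict:
    """Weighted average of per-modality distributions over EMOTIONS."""
    total_weight = sum(weights[m] for m in predictions)
    fused = {}
    for i, emotion in enumerate(EMOTIONS):
        score = sum(
            weights[modality] * dist[i]
            for modality, dist in predictions.items()
        )
        fused[emotion] = score / total_weight
    return fused

if __name__ == "__main__":
    per_modality = {
        "facial": [0.2, 0.3, 0.5],       # from video analysis
        "speech": [0.1, 0.4, 0.5],       # from audio prosody
        "biometrics": [0.05, 0.25, 0.7], # from wearable signals
    }
    weights = {"facial": 0.4, "speech": 0.3, "biometrics": 0.3}
    fused = fuse_modalities(per_modality, weights)
    print(max(fused, key=fused.get), fused)
```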