WP4. Massive and heterogeneous data
This work package aims to contribute to the major challenge of handling multimodal and heterogeneous data coming from various databases and/or from systems of interconnected objects. These challenges are highly relevant to the three transverse work packages, and they should offer opportunities for collaboration with the three other methodological work packages. Meeting these challenges clearly requires a multidisciplinary approach, which we aim to implement via two tasks.
T4.1 Heterogeneous and Big Data
Christophe JEGO, IMS
T4.1 aims to develop a methodology involving, in particular, statistics and data mining. This approach should enable the large-scale use of smart, autonomous sensors and of distributed algorithms for data analysis. The ultimate goal is to carry out a large part of the data analysis on the smart sensors themselves and to exchange only a limited amount of data between the different nodes of the network. The resulting methodological tools could serve many applications, such as massive sensor systems for marine navigation, urban supervision networks, or massive sensor networks for medical diagnosis.
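The idea of pushing analysis onto the sensors and exchanging only reduced data can be sketched as follows. This is a minimal illustration, not a task deliverable: each (hypothetical) sensor compresses its raw samples into a three-number summary, and a fusion node merges the summaries to recover global statistics without ever receiving the raw data. All function names are illustrative.

```python
import math

def summarize(samples):
    """On-sensor reduction: raw samples -> (count, sum, sum of squares)."""
    return (len(samples), sum(samples), sum(x * x for x in samples))

def merge(summaries):
    """Node-side fusion: combine per-sensor summaries into global mean/std."""
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    sq = sum(s[2] for s in summaries)
    mean = total / n
    var = sq / n - mean * mean          # population variance
    return mean, math.sqrt(max(var, 0.0))

# Three sensors each transmit 3 numbers instead of their full sample streams.
sensor_data = [[1.0, 2.0, 3.0], [2.0, 4.0], [0.0, 1.0, 2.0, 3.0]]
mean, std = merge([summarize(d) for d in sensor_data])
```

The communication cost is constant per sensor regardless of how many samples were acquired, which is the point of distributing the analysis.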
T4.2 Multimodal Imaging
Jean-François AUJOL, IMB
This task addresses challenges in image processing raised by the heterogeneous nature and sheer volume of the data. To achieve this goal, it is crucial to process multi-modal images (2D, 3D, 2D+t, …) simultaneously and efficiently, and to integrate complementary information obtained via various acquisition modalities (multi-spectral, NMR, LiDAR, infra-red, optical, radar, X band, …). The underlying questions are those of image registration, super-resolution, texture analysis, fusion, multi-modal classification, etc. These are of course related to the computational speed of the proposed algorithms.
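As a concrete instance of the registration question, a minimal sketch of phase-correlation registration (a standard FFT-based method, not necessarily the one the task will adopt) estimates the translation between two images of the same scene; the images and shifts below are synthetic.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the (row, col) translation to apply to `moving`
    (via np.roll) so that it aligns with `ref`."""
    # Normalized cross-power spectrum of the two images.
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around convention).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

ref = np.zeros((64, 64))
ref[20:30, 12:22] = 1.0
moving = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)  # shifted copy
dy, dx = phase_correlation_shift(ref, moving)          # -> (-5, 3)
```

Real multi-modal registration is harder (the modalities do not share intensities, so mutual-information or feature-based criteria replace correlation), but the structure of the problem is the same.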
The members of WP4 intend to address two major bottlenecks:
- Homogenization of several modalities: a major problem in multi-modal imaging lies in image homogenization (size, resolution, heterogeneous constraints, …). One needs to register, reconstruct (super-resolution, inpainting, …), and normalize all these images so that all the complementary multi-modal information can be used.
- Multi-modal feature fusion: another major challenge lies in proposing efficient methods to fuse the information contained in the different modalities so that new information can be recovered. Such fusion can enrich the content extracted from each individual modality.
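The two bottlenecks above can be sketched together in a toy pipeline: a coarse modality is first brought onto the fine modality's grid (homogenization, here by nearest-neighbour upsampling as a stand-in for real registration/super-resolution), and the co-registered modalities are then stacked into one feature vector per pixel (an elementary form of fusion). The modality names and sizes are invented for illustration.

```python
import numpy as np

def upsample_nearest(img, shape):
    """Homogenization step: bring a coarse modality onto a finer grid
    by nearest-neighbour resampling (a placeholder for true registration
    and super-resolution)."""
    rows = np.arange(shape[0]) * img.shape[0] // shape[0]
    cols = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(rows, cols)]

def fuse(modalities):
    """Fusion step: stack co-registered modalities into one
    feature vector per pixel."""
    return np.stack(modalities, axis=-1)

optical = np.random.rand(8, 8)   # fine-resolution modality
thermal = np.random.rand(4, 4)   # coarse-resolution modality
fused = fuse([optical, upsample_nearest(thermal, optical.shape)])
# fused has shape (8, 8, 2): each pixel carries both modalities
```

Downstream tasks (multi-modal classification, change detection, …) would then operate on these per-pixel feature vectors rather than on any single modality.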