Keynote Speakers

Prof. Mounir Ghogho,
University of Leeds, UK

Mounir Ghogho received the M.S. degree in 1993 and the Ph.D. degree in 1997 from the National Polytechnic Institute of Toulouse, France. He was an EPSRC Research Fellow with the University of Strathclyde (Scotland) from September 1997 to November 2001. Since December 2001, he has been a faculty member with the School of Electronic and Electrical Engineering at the University of Leeds (England), where he is currently a Professor. Since 2010, he has also been affiliated with the International University of Rabat, where he is currently a Scientific Advisor to the President and ICT Research Director. He was awarded a UK Royal Academy of Engineering Research Fellowship in September 2000 and is a recipient of the 2013 IBM Faculty Award. He is currently an Associate Editor of the IEEE Signal Processing Magazine and a steering committee member of the IEEE Transactions on Signal and Information Processing over Networks. He served as an Associate Editor of the IEEE Transactions on Signal Processing from 2005 to 2008, the IEEE Signal Processing Letters from 2001 to 2004, and the Elsevier Digital Signal Processing journal from 2011 to 2012. He was a member of the IEEE Signal Processing Society SPCOM Technical Committee from 2005 to 2010 and of the SPTM Technical Committee from 2006 to 2011, and is currently a member of the SAM Technical Committee. He was the General Chair of the European Signal Processing Conference (EUSIPCO 2013) and of the IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2010). He has published over 300 journal and conference papers in the areas of signal processing, machine learning and wireless communication. He has held invited scientist/professor positions at Telecom ParisTech (France), NII (Japan), BUPT (China), University Carlos III of Madrid (Spain), ENSICA (Toulouse), Darmstadt Technical University (Germany), and the University of Minnesota (USA).

Speech Title: Cognitive sensing and communication for the Internet of Things
Abstract: In this talk, I will briefly introduce the Internet of Things (IoT) and its enabling technologies. Then, I will give a brief overview of the different wireless solutions (cellular and non-cellular) that compete for a share of the IoT market. Fundamental limits on coverage/throughput and their relation with network density will be presented and used to explain the advantages and disadvantages of the different wireless solutions. I will then present the concept of cognitive IoT, its components and its challenges. Finally, I will focus on the role that signal processing can play in cognitive IoT through cognitive sensing, communication and energy harvesting. Particular attention will be given to the problem of spectrum sensing for machine-to-machine (M2M) communication in IoT applications. Stochastic geometry will be used to evaluate the performance of the proposed cognitive IoT system.
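To make the spectrum-sensing step concrete, the sketch below implements a minimal energy detector on synthetic samples. This is an illustrative simplification, not the stochastic-geometry analysis of the talk; the threshold uses a large-sample Gaussian approximation and all parameters are invented for the example.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def energy_detector(samples, noise_power, p_fa=0.01):
    """Declare the channel occupied when the average sample energy
    exceeds a threshold set for a target false-alarm probability
    (central-limit approximation, valid for large sample counts)."""
    n = len(samples)
    q = NormalDist().inv_cdf(1 - p_fa)
    threshold = noise_power * (1 + q * np.sqrt(2.0 / n))
    return float(np.mean(samples ** 2)) > threshold

n = 10_000
idle = rng.normal(0.0, 1.0, n)          # noise only (unit noise power)
busy = idle + rng.normal(0.0, 1.0, n)   # signal present (about 0 dB SNR)

print(energy_detector(idle, 1.0))   # almost always False
print(energy_detector(busy, 1.0))   # True
```

In a dense M2M deployment, each device would run such a test before transmitting; the talk's analysis characterizes how detection performance scales with node density.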

Prof. Jacques Blanc-Talon, Université Paris XI, France
Jacques Blanc-Talon received the Ph.D. degree from Paris XI (Orsay) University in 1991. After a postdoc during 1991-1992 at CSIRO in Canberra, Australia, he joined the French Ministry of Defence procurement agency (DGA). He served as a department Scientific Manager and Head of the "Information Engineering and Robotics" scientific domain at DGA/MRIS, and is currently with the Integrated Navigation Systems department.
Over the years, he has conducted and supervised more than 50 industrial and research contracts. He was the French Delegate of several NATO Groups and of the Horizon 2020 Security Research Programme Committee.
J. Blanc-Talon has reviewed around 400 Ph.D. and postdoc grant applications, has served on 80 defence juries and has supervised some 40 Ph.D. students. He has authored or co-authored about 90 scientific papers and edited or co-edited 13 books and special issues of international journals. He served as Associate Editor for IOS Integrated Computer-Aided Engineering from 2000 to 2006 and for the IEEE Transactions on Image Processing from 2005 to 2008; he has been a reviewer for the IEEE Transactions on Pattern Analysis and Machine Intelligence, IEE Electronics Letters, the SIAM Journal on Applied Mathematics and IAPR Pattern Recognition. He has been involved in the organization of more than 90 international conferences.
J. Blanc-Talon has been a member of the SEE (Société des Electriciens et Electroniciens) and the Australian Computer Society (ACS). He received the Outstanding Paper Award from the SCS in 1993, was promoted "Officier de l'ordre des Palmes Académiques" in 2017, and was elevated to IEEE Senior Member in 2015. He is currently the Chair of the French IEEE Signal Processing Chapter.

Speech Title: Advances in hyperspectral image processing, multilinear methods and related problems
Abstract: Hyperspectral imaging (HSI) has been a growing field over the last decade, leading to a large variety of civilian and military applications today. Unlike multispectral "images" (which consist of a few images in different spectral bands), hyperspectral "images" gather a collection of several hundred spatially registered images, taken at different wavelengths but with a narrow bandwidth and as regular a spectral sampling step as possible. Processing such a huge collection of "monospectral" images (often called a "spectral cube") can be done, as in multispectral image processing, by processing every image separately and then fusing the results; however, processing the data cube as a whole outperforms mere 2D approaches. Techniques based on multilinear (tensorial) algebra provide a fine and unified framework for dealing with such complex data.
After a general picture of what can (and cannot!) be achieved in the field of hyperspectral image processing, with applications ranging from target and smoke detection to agriculture and land characterization, we introduce the basic tools of multilinear algebra for image processing.
In a second part, we discuss recent results in the field of anomaly detection, unmixing and target detection.
Anomaly detection is a widely studied approach in HSI when the spectral and spatial information related to the target is unknown. In anomaly detection, every pixel is tested separately and is declared an anomaly when its statistics differ significantly from those of the background. Among other improvements, one can estimate the target shape, leading to the design of adaptive detectors.
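The per-pixel statistical test just described is typified by the classic (global) RX detector, which scores each pixel by its Mahalanobis distance to the background statistics. A minimal sketch on a synthetic cube (not the adaptive variants discussed in the talk):

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum to the background mean/covariance estimated over the scene."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(b))  # mild regularization
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # per-pixel distance
    return scores.reshape(h, w)

rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, (32, 32, 10))   # synthetic Gaussian background
cube[5, 5] += 8.0                           # implant one anomalous pixel

scores = rx_detector(cube)
r, c = np.unravel_index(scores.argmax(), scores.shape)
print(int(r), int(c))   # 5 5
```

Estimating the covariance locally (e.g. over a sliding window excluding a guard band) turns this into the adaptive detectors mentioned above.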
Despite its simple definition, the unmixing process is an ill-posed problem; however, it can be solved under a few strong assumptions. The first is that images in contiguous spectral bands are similar, excluding any "strong" variation; the second is that the spectral components (the so-called "endmembers") are mutually independent. This leads to the uniqueness of the solution and to equal numbers of independent components and endmembers. We recall some recent techniques related to linear unmixing and generalize the approach in several directions.
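A minimal sketch of fully constrained linear unmixing of a single pixel: the abundance non-negativity and sum-to-one constraints are enforced through non-negative least squares with a heavily weighted extra equation. The endmember spectra here are synthetic; a real pipeline would first extract endmembers from the data.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, delta=1e3):
    """Solve pixel ~= endmembers @ a with a >= 0 and sum(a) = 1,
    enforcing the sum-to-one constraint as an extra weighted row."""
    b, p = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, p))])
    y = np.append(pixel, delta)
    a, _ = nnls(A, y)
    return a

# Two synthetic endmember spectra over 5 bands
E = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.0, 1.0]])
true_abundances = np.array([0.3, 0.7])
pixel = E @ true_abundances

print(np.round(unmix(pixel, E), 3))   # recovers [0.3, 0.7]
```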
Statistical detection theory provides a strong theoretical background for defining target detectors; in particular, the Likelihood Ratio, defined as the ratio of conditional probability density functions, leads to different detection functions under particular assumptions: the Matched Filter (MF), the Adaptive Matched Filter (AMF), the Adaptive Coherence/Cosine Estimator (ACE), and so on. We introduce recent filters and present results on real data highlighting the application of these detectors to hyperspectral cubes.
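The AMF and ACE scores mentioned above can be sketched per pixel as follows, using synthetic background statistics (a simplified illustration of the standard formulas, not the recent filters of the talk):

```python
import numpy as np

def matched_filter(x, target, mu, cov_inv):
    """Adaptive Matched Filter score for one pixel spectrum x."""
    s, d = target - mu, x - mu
    return float(s @ cov_inv @ d) / float(s @ cov_inv @ s)

def ace(x, target, mu, cov_inv):
    """Adaptive Coherence/Cosine Estimator: squared cosine of the angle
    between pixel and target in the whitened space."""
    s, d = target - mu, x - mu
    num = float(s @ cov_inv @ d) ** 2
    return num / (float(s @ cov_inv @ s) * float(d @ cov_inv @ d))

rng = np.random.default_rng(2)
bg = rng.normal(0.0, 1.0, (5000, 8))     # synthetic background spectra
mu = bg.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(bg, rowvar=False))
target = mu + np.array([3.0, 0, 0, 0, 0, 0, 0, 0])

hit = target + rng.normal(0.0, 0.1, 8)   # pixel containing the target
miss = rng.normal(0.0, 1.0, 8)           # plain background pixel

print(ace(hit, target, mu, cov_inv) > ace(miss, target, mu, cov_inv))  # True
```

Note how ACE normalizes by the pixel's own whitened energy, which makes it invariant to target scaling, while the MF score responds linearly to target strength.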
The conclusion of our talk presents some new ideas about the application of tensorial methods to fractals, as well as some trends on the compression of hyperspectral data.

Prof. Gouenou Coatrieux,
Institut Mines-Telecom, Telecom Bretagne, INSERM UMR1101 LaTIM, France


Gouenou Coatrieux received the Ph.D. degree in signal processing and telecommunication from the University of Rennes I, Rennes, France, in collaboration with the Ecole Nationale Supérieure des Télécommunications, Paris, France, in 2002. He is currently a full professor with the Information and Image Processing Department, Institut Mines-Telecom, Telecom Bretagne, Brest, France. He conducts his research in the Laboratory of Medical Information Processing (LaTIM, INSERM UMR1101), Brest, and is the head of the joint laboratory SePEMeD (Security and Processing of Externalized Medical Image Data). His primary research interests concern watermarking (of images and databases), crypto-watermarking, secure processing of outsourced data, information system security, and digital forensics, with a special interest in the medical field. Prof. Coatrieux is an Associate Editor of the IEEE Journal of Biomedical and Health Informatics, Digital Signal Processing, and Innovation and Research in BioMedical Engineering. He is a member of the International Federation for Medical and Biological Engineering "Global Citizen Safety and Security Working Group" and of the European Federation for Medical Informatics "Security, Safety, and Ethics Working Group," and has contributed to the "Information Technology for Health" Technical Committee of the IEEE Engineering in Medicine and Biology Society.

Speech Title: Watermarking in the medical field
Abstract: Advances in information and communication technologies provide new means to access, share, duplicate and manipulate medical data. While daily medical practice takes advantage of this evolution, the same ease of handling data also compromises its security. In this talk, we will focus on the security of multimedia digital content in medical information systems and on how it can be secured by means of watermarking technology. Watermarking can advantageously complement existing measures for protecting medical data (e.g. encryption and digital signatures) through the insertion of messages, or watermarks, into the data (e.g. images, medical record databases). Depending on the relationship between the watermark and the data, watermarking can serve different security services within a medical information system (authenticity, integrity, traceability and so on). However, the deployment of such a solution is not free of constraints. For signals, embedding is performed by modulating the values of the signal samples in order to encode the message bits; the same holds when watermarking medical databases. As a consequence, preserving the signal/data quality for diagnosis is of major concern and is a necessary requirement for such a system to be accepted by medical staff. In this lecture, we will present: i) the complementary role of watermarking with respect to existing security systems and policies; ii) different watermarking modulations for medical data (e.g. reversible watermarking); iii) their combination with encryption mechanisms.
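To make the sample-modulation idea concrete, here is a deliberately minimal least-significant-bit embedding sketch on a toy image. Note that this toy scheme is not reversible; clinical deployments would favour reversible modulations (e.g. difference expansion), which allow exact recovery of the original image.

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit string into the least significant bits of the first
    len(bits) pixels. Minimal, NON-reversible illustration of modulating
    sample values to encode message bits."""
    wm = image.flatten().copy()
    for i, bit in enumerate(bits):
        wm[i] = (wm[i] & 0xFE) | int(bit)   # clear LSB, then set it to the bit
    return wm.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read the message back from the pixel LSBs."""
    return ''.join(str(int(p) & 1) for p in image.flatten()[:n_bits])

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
msg = '1011001110001111'                            # e.g. an integrity-check fragment

marked = embed_lsb(img, msg)
print(extract_lsb(marked, len(msg)) == msg)                        # True
print(int(np.max(np.abs(marked.astype(int) - img.astype(int)))))  # 1 (distortion <= 1 grey level)
```

The bounded per-pixel distortion is exactly the quality concern raised above: any modulation must stay compatible with the diagnostic use of the image.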


Assoc. Prof. Dr. Huseyin Seker
The University of Northumbria at Newcastle, UK


Dr Huseyin Seker is a multi-disciplinary researcher and data scientist with a particular interest in big data mining, machine learning, and bio-medical and industrial applications. He has published over 100 peer-reviewed papers, led a number of projects, delivered keynote and invited talks at several events, and organised a number of conferences and special sessions. He is currently a Reader in the Department of Computer and Information Sciences at Northumbria University in Newcastle-upon-Tyne (UK). He is also the Director of Enterprise and Engagement, and leads the Bio-Health Informatics Research Team and the Big Data Analytics Lab within the department. In addition to his academic duties, he is an Advisory Board Member of the North East Satellite Applications Centre of Excellence, a Steering Group Member of Digital Catapult North East and Tees Valley, and a member of the CyberNorth Initiative in the UK. Further information about his projects and publications can be found at http://computing.unn.ac.uk/staff/yqqd6/home.htm

Speech Title: Signal processing and feature selection-based methods for the analysis of protein sequences in the age of big data
Abstract:
Biological data, generated at ever-increasing speed in this digital age, is revolutionizing almost every aspect of the life sciences and of humanity. Given the scale, complexity and diversity of these data sets, they need to be analysed effectively in order to derive life-saving and actionable knowledge. Among them, proteomics data sets are of particular interest, as proteins play an important role in understanding the molecular mechanisms of living organisms and in effective drug design. Although there have been a number of attempts to develop computational methods for analysing such big data sets over the last decade or so, there is still a lack of computationally efficient and robust methods, and in particular of techniques that exploit the natural characteristics of protein sequences. This talk therefore focuses on recent progress in computationally efficient methods for the analysis of proteomic data sets using signal processing and feature selection-based methods as well as machine learning techniques. It will cover how proteins are numerically encoded and represented to form signals and high-dimensional feature spaces using the natural characteristics of proteins based on amino acid scales. In addition, the signal processing algorithms, supervised and unsupervised feature selection methods, and machine learning methods (e.g., support vector regression and fuzzy support vector regression) developed for this work will be explained to show how they help analyse protein sequences. The robustness of the proposed approaches is then demonstrated on a number of case studies, including the prediction of peptide binding affinity and the analysis of the evolution of influenza sub-types, along with the SProteomics webserver (http://sproteomics.com/).
The talk will also explore possible ways and suggestions that lead to the efficient analysis of protein sequences and to the computational discovery of new peptides with desired binding affinities as a result of these promising hybrid intelligent predictive models.
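As an illustration of how a protein sequence can be numerated into a signal via an amino-acid scale, the sketch below uses the Kyte-Doolittle hydrophobicity scale and takes a magnitude spectrum of the resulting 1-D signal. The scale choice and toy sequence are assumptions for the example; the talk's exact encoding may differ.

```python
import numpy as np

# Kyte-Doolittle hydrophobicity scale: one common amino-acid property scale
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def protein_to_signal(seq):
    """Numerate a protein sequence: each residue becomes one sample
    of a 1-D signal via an amino-acid property scale."""
    return np.array([KD[a] for a in seq])

def spectrum(seq):
    """Magnitude spectrum of the zero-mean property signal; spectral
    peaks can reveal periodicities (e.g. helical repeats) in the sequence."""
    x = protein_to_signal(seq)
    x = x - x.mean()
    return np.abs(np.fft.rfft(x))

sig = protein_to_signal("MKVLAT")   # toy sequence
print(sig)                          # [ 1.9 -3.9  4.2  3.8  1.8 -0.7]
print(spectrum("MKVLAT").shape)     # (4,)
```

Either the raw signal, its spectrum, or features derived from many such scales can then feed the feature selection and regression methods described above.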


Jędrzej Bieniasz
Warsaw University of Technology, Poland


Speech Title: Can your OSN profile be a part of the biggest data leakage in history?
Abstract: Steganography seems to be a very promising technology for sharing information, especially in the era "before" post-quantum cryptography, when there is still a need for tools to communicate securely and no certainty that most of contemporary cryptography will survive. Recently, major attention has been paid to constructing image and network steganography methods, while less effort has been devoted to text steganography; this presentation revisits that attractive research area in combination with social media. The idea of StegHash was applied to a novel type of mass storage called SocialStegDisc. It is based on the use of hashtags on various social networks to connect multimedia files, such as images, movies or songs, with embedded hidden messages. The concept is characterized by an unlimited data space but a limited address space. Three versions of SocialStegDisc were developed:

1) with direct application of StegHash
2) with elimination of dictionary of used permutations of hashtags
3) with fully dynamic generation and reproduction of multimedia object chains

The third version is the most significant output of our work and the one we would like to introduce. First, the concept was implemented as an interactive console application that replicates the well-known interface of filesystems in Unix terminals. Next, the application of SocialStegDisc as a submodule of a malware bootloader was investigated. In this scenario, the malware bootloader remotely accesses a storage based on SocialStegDisc, from which it downloads the execution code. As an example, we used the code to establish a reverse connection, mimicking a real cyber-attack situation.
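The core addressing trick behind StegHash/SocialStegDisc, mapping an integer address to a unique ordering of hashtags and back, can be sketched as follows. The tag set and address are hypothetical; real deployments derive the permutation from far larger tag vocabularies.

```python
from math import factorial

TAGS = ['#sunset', '#travel', '#coffee', '#music', '#art']  # hypothetical tag set

def index_to_hashtags(index, tags=TAGS):
    """Decode an integer address into a unique hashtag ordering using the
    factorial number system (Lehmer code); the ordering attached to one
    multimedia object points to the next object in the chain."""
    remaining, perm = list(tags), []
    for i in range(len(remaining) - 1, -1, -1):
        pos, index = divmod(index, factorial(i))
        perm.append(remaining.pop(pos))
    return perm

def hashtags_to_index(perm, tags=TAGS):
    """Inverse mapping: recover the integer address from a tag ordering."""
    remaining, index = list(tags), 0
    for t in perm:
        pos = remaining.index(t)
        index += pos * factorial(len(remaining) - 1)
        remaining.pop(pos)
    return index

addr = 42                              # hypothetical address of the next object
tags_in_post = index_to_hashtags(addr)
print(tags_in_post)
print(hashtags_to_index(tags_in_post))  # 42
```

With n distinct tags the scheme addresses n! objects, which is exactly why the data space is effectively unlimited while the address space stays bounded.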