New publication “A Novel System for Nighttime Vehicle Detection Based on …”

We are pleased to announce our new publication “A Novel System for Nighttime Vehicle Detection Based on Foveal Classifiers with Real-Time Performance” in IEEE Transactions on Intelligent Transportation Systems.

See the videos in [Link].

Check the database in [Link].


Multi-UAV online mission planning

We have developed an online mission planning system based on deep reinforcement learning and graph neural networks. It manages the routes of a fleet of UAVs in real time and in an optimal way, so as to visit as many locations as possible using a region-sharing strategy. The cooperation strategy is specifically designed for environments without communications among the UAVs (radio silence) during mission execution. [More info]
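As a minimal illustration of how a communication-free region-sharing strategy can work (the function below is a hypothetical sketch, not the actual system, which relies on deep reinforcement learning and graph neural networks): if every UAV derives the same deterministic partition of the target locations from the shared mission data, all of them agree on the assignment without exchanging a single message during the flight.

```python
import math

def share_regions(locations, num_uavs):
    """Deterministically split target locations among UAVs.

    Every UAV runs this same function on the same mission data,
    so they all agree on the assignment without any communication
    during mission execution (radio silence).
    """
    # Centroid of all target locations.
    cx = sum(x for x, y in locations) / len(locations)
    cy = sum(y for x, y in locations) / len(locations)
    # Sort targets by angle around the centroid: contiguous
    # angular sectors become the per-UAV "regions".
    ordered = sorted(locations,
                     key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    per_uav = math.ceil(len(ordered) / num_uavs)
    return [ordered[i * per_uav:(i + 1) * per_uav]
            for i in range(num_uavs)]

targets = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5), (2, -2)]
routes = share_regions(targets, 3)  # one list of targets per UAV
```

Because the partition depends only on data known before take-off, it remains valid even if the radio link is lost for the whole mission.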


New publication “Robust people indoor localization with omnidirectional…”

Our paper “Robust people indoor localization with omnidirectional cameras using a Grid of Spatial-Aware Classifiers” has been accepted in Signal Processing: Image Communication.

See also our database PIROPO.


New database for parking lot occupancy

See [here] our newly published database for estimating parking lot occupancy using color cameras in a complex environment (day and night, perspective occlusions, background occlusions, etc.). Using this database, we have trained our system ParkingNet. See the video demo:


New publication: Robust Nighttime Vehicle Detection Based on Foveal Classifiers

We have just published a new article, “Robust Nighttime Vehicle Detection Based on Foveal Classifiers”. You can find it in [Link], where you can download the paper and watch a video about it. See our presentation below.

Abstract: Visual vehicle surveillance has become an important research field due to its wide range of traffic applications. The task is especially relevant at nighttime, when accidents increase considerably. Typically, this problem is addressed by segmenting the bright image regions produced by vehicle lights, assuming they are well defined. Often, however, there are only flashes that occupy large image regions, invalidating that strategy. We therefore present a real-time nighttime vehicle detection algorithm that addresses this challenge. First, the whole image is characterized by a single descriptor. Then, a grid of foveal classifiers that share this image descriptor (unlike the traditional sliding-window scheme) estimates the vehicle positions. Each classifier is trained to detect vehicles in a specific image region by analyzing the complex light patterns of the night. Furthermore, a new nighttime database has also been created to assess the effectiveness of the proposed method.
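The key idea in the abstract, computing one whole-image descriptor and feeding it to a grid of region-specific classifiers instead of re-describing every sliding window, can be sketched as follows. The descriptor, grid size and linear classifiers here are illustrative placeholders, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

GRID_H, GRID_W = 4, 6   # grid of foveal classifiers
DESC_DIM = 256          # dimensionality of the single image descriptor

# One linear classifier per grid cell; each cell learns to spot
# vehicle light patterns appearing in its own image region.
weights = rng.normal(size=(GRID_H * GRID_W, DESC_DIM))
biases = rng.normal(size=GRID_H * GRID_W)

def describe(image):
    """Placeholder for the single whole-image descriptor."""
    return image.reshape(-1)[:DESC_DIM]

def detect(image):
    # The descriptor is computed ONCE and shared by every
    # classifier, unlike a sliding window, which would
    # re-describe each candidate region.
    d = describe(image)
    scores = weights @ d + biases          # one score per grid cell
    return scores.reshape(GRID_H, GRID_W)  # vehicle likelihood map

night_frame = rng.random((32, 32))
score_map = detect(night_frame)
```

The cost of description is paid once per frame rather than once per window, which is what makes real-time operation plausible.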

Video demos


Image annotation tool for machine learning

VGG Image Annotator (VIA) is an image annotation tool that can be used to define regions in an image and create textual descriptions of those regions. VIA is an open-source project developed at the Visual Geometry Group and released under the BSD-2-Clause license.

Here is a list of some salient features of VIA:

  • based solely on HTML, CSS and JavaScript (no external JavaScript libraries)
  • can be used offline (full application in a single HTML file of size < 400 KB)
  • requires nothing more than a modern web browser (tested on Firefox, Chrome and Safari)
  • supported region shapes: rectangle, circle, ellipse, polygon, point and polyline
  • import/export of region data in CSV and JSON file formats
  • supports bulk update of annotations in image grid view
  • quick update of annotations using the on-image annotation editor
  • keyboard shortcuts to speed up annotation
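Since VIA exports region data as JSON, annotations are easy to consume programmatically. The snippet below parses an export in the VIA 2 layout (one entry per image, each with a `regions` list of `shape_attributes`); the embedded sample data is illustrative, not taken from a real project file:

```python
import json

# Minimal example of a VIA 2 JSON export (illustrative sample data).
via_export = json.loads("""
{
  "car.jpg123456": {
    "filename": "car.jpg",
    "size": 123456,
    "regions": [
      {"shape_attributes": {"name": "rect", "x": 10, "y": 20,
                            "width": 50, "height": 30},
       "region_attributes": {"label": "vehicle"}}
    ],
    "file_attributes": {}
  }
}
""")

def rectangles(export):
    """Yield (filename, x, y, w, h, label) for every rectangular region."""
    for entry in export.values():
        for region in entry.get("regions", []):
            shape = region["shape_attributes"]
            if shape.get("name") == "rect":
                yield (entry["filename"], shape["x"], shape["y"],
                       shape["width"], shape["height"],
                       region["region_attributes"].get("label", ""))

boxes = list(rectangles(via_export))
```

A loader like this is a convenient bridge between VIA and a machine learning training pipeline.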



Winners of the best business idea award at actúaupm

We are proud to be one of the 10 winners of the Actúaupm entrepreneurship and business competition. Our winning idea is called:

Spaces 2.0 for commerce and business: algorithms based on computer vision and deep learning for image processing, increasing the productivity and security of commercial and business spaces.

More information at: 


Visit our website: Machine learning at GTI-UPM

Machine Learning at GTI-UPM:

This website is focused on our advances in the field of Machine Learning and Deep Learning. For other research areas, please check:


Renewing the UPM Matlab license

The license-expiration message that appears in Matlab (Figure 0) occurs because UPM renews the Matlab license year by year, and Matlab issues a reminder as the expiration date approaches. UPM continues to renew Matlab to this day, so this expiration message should not alarm us. This behaviour is probably a mechanism to revoke the right to use Matlab from former students or employees no longer affiliated with UPM. It is important to note that, in order to renew the Matlab license, you must wait until the license period has fully expired. If the methods proposed below are carried out in advance, they simply will not work (the expiration message will keep appearing). Three methods are proposed to activate the Matlab license: immediate, fast and slow.

Figure 0

Immediate method
The immediate method consists of running the program Activate Matlab R20XXz (replace R20XXz with the specific version of the installed Matlab), which will guide us through the Matlab activation (or re-activation) process. Specific steps:

  1. Open Activate Matlab R20XXz by pressing the Windows key (typically the key with a window icon). Instead of pressing the Windows key, you can also click the Start button.
  2. Type the first characters of the program name (as shown in Figure 1).
  3. Click on the program name/icon.

Figure 1

  4. The first of a series of windows that will help us activate Matlab again appears. In this first window, select the option marked in Figure 2 and click Next.

Figure 2

  5. In the next window, select the option marked in Figure 3, enter your UPM email address in Email Address (it matches your Matlab user account) and your Matlab user account password in Password. Remember that the Matlab account is the one you had to create in order to access the Matlab software from its official website. Finally, click Next.

Figure 3

  6. In the next window, select the UPM license. If you are teaching or research staff, select the license labeled Campus (Figure 4). If you are a student, select the license labeled Student (Figure 5). Finally, click Next.

Figure 4

Figure 5

  7. In the next window (Figure 6), click Confirm.

Figure 6

  8. Finally, in the next window (Figure 7), click Finish.

Figure 7

Fast method
The fast method consists of activating Matlab from within the Matlab application itself. Specific steps:

  1. Open Matlab.
  2. In the Matlab interface (Figure 8), click Help, then Licensing, and finally Activate Software.

Figure 8

  3. Follow steps 4 to 8 of the immediate method.

Slow method
The slow method consists of uninstalling Matlab and installing it again. At the beginning and end of the installation process, steps similar to those of the previous methods for activating Matlab will appear.


2 PhD Studentships in audio-visual intelligent sensing at Queen Mary University of London

Applications are invited for two PhD Studentships to undertake research in computer vision and audio processing for people monitoring in multi-camera, multi-microphone environments, as part of an interdisciplinary project on mobile audio-visual monitoring for smart interactive and reactive environments. The Studentships (to start in or after January 2017) are a collaboration between the Centre for Intelligent Sensing at Queen Mary University of London (QMUL) and the Centre for Information Technology at the Fondazione Bruno Kessler (FBK), Trento, Italy.

The project will focus on methods for people tracking, activity recognition, acoustic scene analysis, behaviour analysis, and distant-speech recognition and understanding, applied to individuals as well as groups. Such information will enable learning 'patterns of usage' of the environment, which can in turn be used to adapt and optimise the sensing accordingly.

Each PhD student will spend approximately 50% of their time in London and 50% in Trento, and will have access to state-of-the-art audio-visual laboratories, including robotic sensors, a multi-camera multi-microphone installation in a large open hallway, and a smart home facility equipped with cameras and microphones.

Candidates should have a first-class honours degree or equivalent, or a good MSc degree, in Computer Science, Physics, Mathematics or Electronic Engineering. Candidates must be confident in applied mathematics and should have good programming experience, in particular with the C/C++ language and the MATLAB environment. Previous knowledge of signal processing is a requirement. Previous knowledge of computer vision, deep learning/machine learning, robotic sensing, audio signal processing and/or speech recognition is desired, but not required.

The studentships will be based at the Centre for Intelligent Sensing in the School of Electronic Engineering and Computer Science at Queen Mary University of London and will be supervised by Professor Andrea Cavallaro and Dr Oswald Lanz or, depending on the type of PhD project chosen by the candidate, Dr Maurizio Omologo. To apply, please follow the online process by selecting Electronic Engineering or Computer Science in the A-Z list of research opportunities and following the instructions on the right-hand side of the web page.

Please note that instead of the 'Research Proposal' we request a 'Statement of Research Interests'. Your Statement of Research Interests (no more than 500 words or one side of A4 paper) should state whether you are interested in a computer vision PhD project, an audio processing PhD project, or an audio-visual processing PhD project. Moreover, your Statement of Research Interests should answer two questions: Why are you interested in the proposed area? What is your experience in the proposed area? In addition, we would also like you to send a sample of your written work. This might be a chapter of your final-year dissertation, or a published conference or journal paper. More details can be found [here].

Informal enquiries can be made by email to Professor Andrea Cavallaro.

The closing date for the applications is 15 November 2016.

Interviews are expected to take place during the week commencing 28 November 2016.