2 PhD Studentships in audio-visual intelligent sensing at Queen Mary University of London


Applications are invited for 2 (two) PhD Studentships to undertake research in computer vision and audio processing for people monitoring in multi-camera multi-microphone environments. The successful candidates will be part of an interdisciplinary project on mobile audio-visual monitoring for smart interactive and reactive environments. The Studentships (starting in or after January 2017) are part of a collaboration between the Centre for Intelligent Sensing (http://cis.eecs.qmul.ac.uk) at Queen Mary University of London (QMUL) and the Centre for Information Technology (http://ict.fbk.eu) at the Fondazione Bruno Kessler (FBK), Trento, Italy.

The project will focus on methods for people tracking, activity recognition, acoustic scene analysis, behaviour analysis, and distant-speech recognition and understanding, applied to individuals as well as groups. This information will enable learning 'patterns of usage' of the environment, which can in turn be used to adapt and optimise the sensing accordingly.

Each PhD student will spend approximately 50% of their time in London and 50% in Trento, and will have access to state-of-the-art audio-visual laboratories, including robotic sensors, a multi-camera multi-microphone installation in a large open hallway, and a smart-home facility equipped with cameras and microphones.

Candidates should have a first-class honours degree or equivalent, or a good MSc degree, in Computer Science, Physics, Mathematics or Electronic Engineering. Candidates must be confident in applied mathematics and should have good programming experience, in particular with C/C++ and the MATLAB environment. Prior knowledge of signal processing is a requirement. Prior knowledge of computer vision, deep learning/machine learning, robotic sensing, audio signal processing and/or speech recognition is desirable but not required.

The studentships will be based at the Centre for Intelligent Sensing in the School of Electronic Engineering and Computer Science at Queen Mary University of London and will be supervised by Professor Andrea Cavallaro (http://www.eecs.qmul.ac.uk/~andrea/) and, depending on the type of PhD project chosen by the candidate, Dr Oswald Lanz (https://tev.fbk.eu/people/profile/lanz) or Dr Maurizio Omologo (http://shine.fbk.eu/people/omologo). To apply, please follow the online process at http://www.qmul.ac.uk/postgraduate/applyresearchdegrees/index.html by selecting Electronic Engineering or Computer Science in the A-Z list of research opportunities and following the instructions on the right-hand side of the web page.

Please note that instead of the 'Research Proposal' we request a 'Statement of Research Interests'. Your Statement of Research Interests (no more than 500 words or one side of A4 paper) should state whether you are interested in a computer vision, an audio processing, or an audio-visual processing PhD project. It should also answer two questions: Why are you interested in the proposed area? What is your experience in the proposed area? In addition, we would like you to send a sample of your written work, such as a chapter of your final-year dissertation or a published conference or journal paper. More details can be found at: www.eecs.qmul.ac.uk/phd/apply.php

Informal enquiries can be made by email to Professor Andrea Cavallaro (a.cavallaro@qmul.ac.uk).

The closing date for the applications is 15 November 2016.

Interviews are expected to take place during the week commencing 28 November 2016.


Edinburgh PhD places in vision, robotics, data science, parallel processing

The University of Edinburgh has 3 Centres for Doctoral Training (CDTs)
that may be of interest to computer vision and robotics students.
(See: http://www.ed.ac.uk/schools-departments/informatics/postgraduate/cdts/informatics-cdts)

EPSRC CDT in Robotics and Autonomous Systems –
      including computer vision:
      10 new PhD places

EPSRC CDT in Data Science –
      includes analysis of datasets (including 'big data') that might arise
      from computer vision applications:
      10 new PhD places
      First deadline: 9 December 2016

EPSRC CDT in Pervasive Parallelism –
      investigating approaches to parallelism that could be used
      with big datasets, video, etc:
      10 new PhD places

Funding is primarily for UK & EU students, but there is a small amount of
funding for a few outstanding overseas students.

Office in education for students, teachers and schools

If you are a student or a teacher, don't miss this opportunity. Get the online versions of Office and 1 TB of online storage completely free of charge.

Source: Office en el ámbito educativo para alumnos, profesores y escuelas

The 2016 Top Programming Languages


C is No. 1, but big data is still the big winner

Source: The 2016 Top Programming Languages

Talking with your hands: How Microsoft researchers are moving beyond keyboard and mouse

Andrew Fitzgibbon showing detailed hand tracking.

Kfir Karmon imagines a world in which a person putting together a presentation can add a quote or move an image with a flick of the wrist instead of a click of a mouse.

Jamie Shotton envisions a future in which we can easily interact in virtual reality much like we do in actual reality, using our hands for small, sophisticated movements like picking up a tool, pushing a button or squeezing a soft object in front of us.

And Hrvoje Benko sees a way in which those types of advances could be combined with simple physical objects, such as a few buttons on a piece of wood, to recreate complex, immersive simulators – replacing expensive hardware that people use today for those purposes.

Microsoft researchers are looking at a number of ways in which technology can start to recognize detailed hand motion — and engineers can put those breakthroughs to use in a wide variety of fields.

The ultimate goal: Allowing us to interact with technology in more natural ways than ever before.

Read more at [https://blogs.microsoft.com/next/2016/06/26/talking-hands-microsoft-researchers-moving-beyond-keyboard-mouse/#sm.00000c9skobz7uf1xvhwqzxn5juqd]

MSc Track in Signal Processing and Machine Learning for Big Data

Master In Signal Theory and Communications

The MSTC program, offered by the Signals, Systems and Radiocommunications department, aims to equip highly motivated students with up-to-date skills in some of the topics most in demand worldwide in industry, research centres and academia.

Track: Signal Processing and Machine Learning for Big Data

The MSTC track on Signal Processing and Machine Learning for Big Data extends Big Data and Analytics instruction to new scientific careers, training professionals and researchers in the principles and technologies for extracting knowledge from the growing number of real-world signals: speech, images, movies, music, biological and sensor readings, robotic sensors, financial series, etc. Students will learn through practical application projects and real case studies. The program provides fundamental courses on statistical analysis, time-series analysis and optimization; machine learning courses (predictive and descriptive learning, reinforcement learning and biologically inspired models); and courses on advanced signal processing techniques for large-scale data and massive processing.

In the MSTC, students can choose among three tracks to meet their academic needs and achieve their personal and professional objectives. For more information, please follow the Full Track Program link below.

Full Track Program | Pre-register now

© 2016 Department of Signals Systems and Radiocommunications, UPM. All Rights Reserved. 
Our mailing address is: info-mstc@ssr.upm.es

Sweep Is a $250 LIDAR With Range of 40 Meters That Works Outdoors

A San Leandro, Calif.-based startup called Scanse has developed a 2D LIDAR system that promises to be simultaneously much cheaper and much better than what's out there. For $250, you get a spinning LIDAR sensor with a range of 40 meters, even outdoors.

Continue reading here: [Link]

Xiaomi unveils its own drone

The Chinese electronics manufacturer has created a device capable of flying for 27 minutes while recording 4K video or taking 16-megapixel photos.

Read the full story at [http://tecnologia.elpais.com/tecnologia/2016/05/25/actualidad/1464188379_614912.html]

Paper accepted at the 2016 XXXI Simposium de la Unión Científica Internacional de Radio

Paper title: "An extended Volumegrams of Local Binary sub-Patterns Descriptor for Hand-Gesture Recognition"

Authors: Ana I. Maqueda, Carlos R. del-Blanco, Fernando Jaureguizar, Narciso García

Paper accepted at the 2016 3DTV Conference

Paper title: "Improved 2D-to-3D video conversion by fusing optical flow analysis and scene depth learning"

Authors: J.L. Herrera, C.R. del-Blanco, N. García