Saturday, March 16, 2013

Detecting the invisible: Software that can see invisible motion

ORIGINAL: H+ Magazine
By: Lochlan Bloom
Published: March 16, 2013


New software from MIT can now reveal details in videos previously hidden to the human eye. The technique, known as Eulerian Video Magnification, was developed by graduate student Michael Rubinstein, recent alumni Hao-Yu Wu ’12, MNG ’12 and Eugene Shih SM ’01, PhD ’10, and professors William Freeman, Fredo Durand and John Guttag, and was presented this past summer at SIGGRAPH 2012.

The ground-breaking computer code analyses each frame of a video to detect variations too subtle for the human eye, and offers some truly exciting possibilities for machine interaction. It also raises an interesting question: if machines can look back over our recorded lives and pull out previously hidden behaviour, will that change the way we relate to our past?

The software in question has been developed by researchers at MIT and works with any existing video footage. By amplifying minute changes in pixel shading, the software is able to reveal fluctuations over time that are otherwise imperceptible. As a result it is already able to infer fairly complex facts about humans or animals appearing in a video.
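The released code is MATLAB, but the core idea can be sketched compactly. The fragment below is an illustrative Python sketch, not the authors' implementation: it simply band-pass filters each pixel's intensity over time and adds an amplified copy of those variations back into the frames. The function name, the chosen frequency band and gain, and the omission of the paper's spatial pyramid decomposition are all simplifications made for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_temporal_changes(frames, fps, low_hz=0.8, high_hz=3.0, gain=50.0):
    """Band-pass filter each pixel's intensity over time and amplify the result.

    frames: float array of shape (num_frames, height, width), values in [0, 1].
    low_hz/high_hz: temporal frequency band to magnify (roughly 1-3 Hz for a pulse).
    gain: how strongly the filtered variations are boosted before being added back.
    """
    # Design a temporal band-pass filter (Butterworth) for the chosen band.
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")

    # Filter along the time axis independently for every pixel.
    filtered = filtfilt(b, a, frames, axis=0)

    # Add the amplified temporal variations back onto the original video.
    return np.clip(frames + gain * filtered, 0.0, 1.0)
```

Fed a greyscale clip of a face, a sketch like this exaggerates the faint flushing that accompanies each heartbeat until it becomes visible to the naked eye.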


The researchers demonstrated the power of the program by analysing a video of a newborn baby and extracting its heart rate. In this case, imperceptible changes in blood flow to the baby’s face provided a hidden measure of its heartbeat. By comparing the result with data from a heart monitor recorded at the same time as the video, they were able to confirm that their readings were correct.
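To give a sense of how a pulse could be read out of such footage, the sketch below (again Python, and again an assumption-laden illustration rather than the researchers' pipeline) averages a cropped face region frame by frame and picks the dominant temporal frequency within a plausible pulse band; the function name and frequency limits are made up for the example.

```python
import numpy as np

def estimate_heart_rate(face_frames, fps, low_hz=0.8, high_hz=3.0):
    """Estimate a pulse rate in beats per minute from a cropped face region.

    face_frames: float array of shape (num_frames, height, width) containing
                 only the region of interest (for example, the baby's face).
    """
    # Average each frame to get one brightness value per time step.
    signal = face_frames.mean(axis=(1, 2))
    signal = signal - signal.mean()  # remove the constant (DC) component

    # Locate the dominant temporal frequency within a plausible pulse band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]

    return peak_hz * 60.0  # convert Hz to beats per minute
```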

While the researchers are currently touting the medical benefits of such a system – to remotely monitor at-risk patients – there are undoubtedly huge implications for Artificial Intelligence and computer interfaces in general. The retrospective aspect of this is ably demonstrated in the video below, where the researchers are able to pull Christian Bale’s heartbeat from the recent Batman film.

The idea that a computer can see things which are invisible to a human is not new. With the wide array of sensors and interfaces already on the market today, a machine is able to detect phenomena far beyond the five meagre human senses. However, the ability to go back and reassess existing footage with newly developed software and new technologies is something that has so far been little explored.

Consider recent history. Could a machine detect anything invisible to the human eye by analysing a video of an assassination? Or a politician’s speech? There is a correlation between blood flow and lying, so a machine could be used as an aid to help humans determine whether to believe a rival in a business or diplomatic negotiation. Or could this type of machine become a standard device for job interviews?

The open-source software released by MIT is already a clear step towards a future where machines are indispensable in uncovering the hidden truths around us, and it is only one of many such new techniques. When a computer can predict what someone is feeling more accurately than a human can, at what stage do we stop trusting our instincts and rely instead on machines to guide our social interactions?

###

Lochlan Bloom is a writer of fiction and non-fiction. His novella Trade, focused on the collision of technology and the sex industry, is out now.


@lochlanbloom

Endnote: The Eulerian Video Magnification (EVM) software can be downloaded and run locally, or used via a web-based interface. There are also plans for a smartphone app, although no timeline has been announced.

Get the code: Matlab (2 MB, v1.1 2013-03-02) – reproduces all the results in the paper (see README.txt for details).

This code is provided for non-commercial research purposes only. By downloading and using the code, you are consenting to be bound by all terms of this software release agreement. Contact the authors if you wish to use the code commercially. This work is patent pending.
