Sunday, March 29, 2015

Applications are invited for a full-time Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University

Now Hiring Researchers

Applications are invited for a full-time Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. This post is fixed-term for 2 years from the date of appointment.

The application deadline is April 27th, 2015.

The post-holder will occupy an office at the Future of Humanity Institute in central Oxford and will work closely with Professor Nick Bostrom, other members of the FHI, and external collaborators. The post-holder will conduct independent research related to the long-term safety of machine intelligence, including technical issues in AI control. The balance between the theoretical and practical aspects of the post is flexible and will be tailored to the research interests of the successful applicant. The Research Fellow will be expected to produce a number of publications, single-authored and/or co-authored.

The successful candidate must demonstrate strong evidence of relevant research potential in the indicated area. Outstanding analytical skills and the ability to engage with the results and methods of computer science are essential. A Bachelor's degree (2.1 or above, or international equivalent) in mathematics, computer science, statistics, or another relevant subject is essential. A PhD and expertise in machine learning are desirable but not required. For further details, see here.

The Cambridge Centre for the Study of Existential Risk (CSER) is also hiring research fellows to work on the ethics and evaluation of extreme technological risk, horizon scanning, and/or issues in responsible innovation. For more information on these posts, please see here.


ORIGINAL: FHI Oxford

March 29, 2015 
